Wednesday, 16 January 2013

Matrix Functions in Parallel

Last year I wrote a blog post about NAG’s work on parallelising the computation of the matrix square root. More recently, as part of our Matrix Functions Knowledge Transfer Partnership with the University of Manchester, we’ve been investigating parallel implementations of the Schur-Parlett algorithm [1].
Most algorithms for computing functions of matrices are tailored to a specific function, such as the matrix exponential or the matrix square root. The Schur-Parlett algorithm is much more general; it works for any “well behaved” function (an informal term that can be given a mathematically precise meaning). For a function built from several such components, for example a sum like f(A) = sin(A) + cos(A), using the Schur-Parlett algorithm should be quicker than computing the individual components separately. This algorithm made its first appearance in Mark 23 of the NAG C Library; our parallel improvements will appear in Mark 24 of the C Library next year.
At the heart of the algorithm lies a Taylor series, which is truncated once it has converged sufficiently. For the matrix exponential, for example, this is e^A = I + A + A²/2! + A³/3! + ⋯.
Before this, however, the matrix A is reduced to upper triangular Schur form. This serves two purposes: it cuts the number of arithmetic operations required to evaluate the Taylor series, and it gives us the eigenvalues of A. The Taylor series converges fastest when the eigenvalues lie close together. The clever part of the algorithm is to reorder the Schur form of A so that similar eigenvalues are grouped together. The diagonal part of the matrix is then split into sub-matrices of varying sizes according to this grouping, and a Taylor series is evaluated for each of these diagonal blocks. The off-diagonal blocks are then obtained by solving the ‘Parlett recurrence’. This is easiest to see in the diagram below:
Here we can see the original matrix (in black), the triangular Schur form (in red), the blocking strategy (in blue) and, finally, the computation of the off-diagonal blocks (in green). The exact distribution of the eigenvalues, and therefore the number and size of the diagonal blocks, depends entirely on the input matrix.
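To make the steps above concrete, here is a simplified sketch in Python (using NumPy and SciPy, not NAG's implementation). It evaluates f = exp via the Schur form, a truncated Taylor series on each diagonal block, and the Parlett recurrence (a Sylvester equation) for the off-diagonal block. For simplicity it skips the eigenvalue-reordering step and just splits the Schur form into a fixed 2×2 block structure; the block sizes and the split point are arbitrary choices for illustration.

```python
# Sketch of the Schur-Parlett idea for f = exp with a fixed 2x2 block split.
# Illustrative only: the real algorithm reorders eigenvalues into clusters first.
import numpy as np
from scipy.linalg import schur, solve_sylvester, expm

def taylor_exp(T, tol=1e-14, max_terms=100):
    """Truncated Taylor series I + T + T^2/2! + ... for a small triangular block."""
    F = np.eye(T.shape[0], dtype=T.dtype)
    term = np.eye(T.shape[0], dtype=T.dtype)
    for k in range(1, max_terms):
        term = term @ T / k          # term is now T^k / k!
        F = F + term
        if np.linalg.norm(term) < tol * np.linalg.norm(F):
            break                    # series has converged sufficiently
    return F

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
T, Q = schur(A, output='complex')    # A = Q T Q*, T upper triangular

m = 3                                # arbitrary split into a 2x2 block structure
T11, T12, T22 = T[:m, :m], T[:m, m:], T[m:, m:]
F11, F22 = taylor_exp(T11), taylor_exp(T22)

# Parlett recurrence for the off-diagonal block, from F T = T F:
#   T11 F12 - F12 T22 = F11 T12 - T12 F22   (a Sylvester equation)
F12 = solve_sylvester(T11, -T22, F11 @ T12 - T12 @ F22)

F = np.block([[F11, F12],
              [np.zeros((T.shape[0] - m, m)), F22]])
fA = Q @ F @ Q.conj().T              # transform back from Schur form
print(np.allclose(fA, expm(A), atol=1e-8))
```

The Sylvester equation is solvable only when the two diagonal blocks share no eigenvalues, which is exactly why the full algorithm groups similar eigenvalues into the same block before splitting.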

Wednesday, 2 January 2013

Our own Hall of Fame – the NAG Life Service Recognition Award

A bit of background…

NAG was established as a not-for-profit organisation and remains so, which means it has no shareholders or investors to answer to. All surpluses are ploughed back into R&D. NAG is ‘owned’ by its members, who include the people who founded NAG, current and past employees, collaborators and friends of the company. All this makes NAG a pretty unique place to work, which might be one reason for the longevity of its serving staff (NAG also happens to be an Investors in People Gold company).

It’s certainly not rare for those of us working here to see our colleagues reach the milestone that could be considered ‘lifetime service’. Many staff work at NAG for the majority of their careers and continue to do so well into their ‘golden years’.

Rewarding greatness…

During NAG’s 40th anniversary celebrations in 2011, a NAG ‘Hall of Fame’ was established to recognise outstanding efforts and commitment to the company by a member or collaborator of NAG. The Life Service Recognition Award has now been bestowed twice.