Tuesday, 24 July 2012

A cure for the common code: NAG at The Trading Show Chicago

A few weeks ago, my colleagues and I attended The Trading Show Chicago, which looks at the world of quantitative financial analysis, trading strategy and technology.  The conference was preceded by two technology workshops; I attended one of these, on the use of independent component analysis (ICA) and so-called second-order statistics for hedge fund data.  After the session, I had an interesting conversation with the presenters (Prof Andrew Kumiega from IIT and Dr Greg Sterijevski from UBS O’Connor) about ICA, NAG’s functionality – with which they were already familiar, and complimentary about – and the way in which it could be used in ICA.

John Morrissey, Paul Hipes and Brian Spector offering the benefit of their numerical expertise in the NAG Clinic at The Trading Show Chicago

Following a plenary session on the opening day of the conference, the proceedings were split into four streams: Trading Congress, HFT World, Quant Invest and Exchange Technology.  There was a very nice talk by Professor Emanuel Derman on the use of metaphor, models and theory by quants; I had a brief chat with Derman afterwards, during which he referred to NAG in positive terms.  After the conference, my colleague kindly lent me Derman’s book, "My Life As A Quant", which provides some valuable insights into his evolving views about the applicability of models to quantitative analysis, as well as interesting background on his personal history (he started his academic career as a particle physicist, switching to finance when he joined Goldman Sachs in 1985 to become one of the first so-called POWs – i.e. Physicists On Wall Street).

Wednesday, 11 July 2012

The Matrix Square Root, Blocking and Parallelism

NAG recently embarked on a ‘Knowledge Transfer Partnership’ with the University of Manchester to introduce matrix function capabilities into the NAG Library.  As part of this collaboration, Nick Higham (University of Manchester), Rui Ralha (University of Minho, Portugal) and I have been investigating how blocking can be used to speed up the computation of matrix square roots.

There is plenty of interesting mathematical theory concerning matrix square roots, but for now we’ll just use the definition that a matrix X is a square root of A if X^2 = A. Matrix roots have applications in finance and population modelling, where transition matrices are used to describe the evolution of a system over a certain time interval, t. The square root of a transition matrix can be used to describe the evolution over the interval t/2. The matrix square root also forms a key part of the algorithms used to compute other matrix functions.
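To make the transition-matrix application concrete, here is a small sketch using SciPy’s general-purpose sqrtm routine. The 2×2 matrix below is a made-up example, not real data:

```python
import numpy as np
from scipy.linalg import sqrtm

# Hypothetical transition matrix describing evolution over an interval t
# (each row sums to 1).
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])

# Its square root describes the evolution over t/2:
# applying X twice recovers the original one-step transition.
X = sqrtm(A)
print(np.allclose(X @ X, A))  # True
```

Applying X to a population vector advances it half a time step; applying it twice is equivalent to one application of A.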

To find a square root of a matrix, we start by computing a Schur decomposition, A = QTQ^H. The square root U of the resulting upper triangular matrix T can then be found via a simple recurrence over the elements U_ij and T_ij:

U_ii = sqrt(T_ii),    U_ij = (T_ij − Σ_{k=i+1}^{j−1} U_ik U_kj) / (U_ii + U_jj)   for i < j.
We call this the ‘point’ method.
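A minimal sketch of the point method in Python follows. This is illustrative code, not the NAG implementation; it uses SciPy’s complex Schur decomposition and then runs the element-wise recurrence a superdiagonal at a time:

```python
import numpy as np
from scipy.linalg import schur

def sqrtm_point(A):
    """Point method: Schur decomposition + element-wise recurrence."""
    # Complex Schur form: A = Q T Q^H with T upper triangular.
    T, Q = schur(A, output='complex')
    n = T.shape[0]
    U = np.zeros_like(T)
    # Diagonal: U_ii = sqrt(T_ii).
    for i in range(n):
        U[i, i] = np.sqrt(T[i, i])
    # Off-diagonal: U_ij = (T_ij - sum_{k=i+1}^{j-1} U_ik U_kj) / (U_ii + U_jj),
    # computed column by column so every term is available when needed.
    for j in range(1, n):
        for i in range(j - 1, -1, -1):
            s = U[i, i+1:j] @ U[i+1:j, j]
            U[i, j] = (T[i, j] - s) / (U[i, i] + U[j, j])
    # Undo the Schur transformation: X = Q U Q^H.
    return Q @ U @ Q.conj().T

A = np.array([[4.0, 1.0],
              [0.0, 9.0]])
X = sqrtm_point(A)
print(np.allclose(X @ X, A))  # True
```

Each element of U depends only on elements above it in its column and to its left in its row, which is what makes the blocked and recursive variants discussed below possible.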

In many numerical linear algebra libraries (such as LAPACK and the BLAS), run times can be drastically reduced by grouping operations into blocks to make more efficient use of a computer’s cache memory. We found that run times for the point method could be similarly reduced by solving the recurrence in blocks. We found that even greater speed-ups could be obtained by using an alternative blocking scheme in which the matrix is repeatedly split into four blocks and the algorithm calls itself recursively.
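A hedged sketch of the recursive splitting idea is below. The triangular factor T is split into four blocks; the two diagonal blocks are handled by recursive calls (falling back to the point recurrence on small blocks), and the off-diagonal block is recovered from a Sylvester equation, here solved with SciPy. The details of the scheme studied in the collaboration may differ:

```python
import numpy as np
from scipy.linalg import solve_sylvester

def sqrtm_tri_recursive(T, base=4):
    """Square root of upper triangular T by recursive 2x2 block splitting."""
    n = T.shape[0]
    if n <= base:
        # Base case: the point recurrence on a small block.
        U = np.zeros_like(T)
        for i in range(n):
            U[i, i] = np.sqrt(T[i, i])
        for j in range(1, n):
            for i in range(j - 1, -1, -1):
                s = U[i, i+1:j] @ U[i+1:j, j]
                U[i, j] = (T[i, j] - s) / (U[i, i] + U[j, j])
        return U
    m = n // 2
    # The two diagonal blocks are independent, so these recursive calls
    # could run in parallel.
    U11 = sqrtm_tri_recursive(T[:m, :m], base)
    U22 = sqrtm_tri_recursive(T[m:, m:], base)
    # The off-diagonal block satisfies U11 U12 + U12 U22 = T12,
    # a Sylvester equation.
    U12 = solve_sylvester(U11, U22, T[:m, m:])
    U = np.zeros_like(T)
    U[:m, :m] = U11
    U[m:, m:] = U22
    U[:m, m:] = U12
    return U

rng = np.random.default_rng(0)
T = np.triu(rng.random((8, 8))) + 5.0 * np.eye(8)
U = sqrtm_tri_recursive(T)
print(np.allclose(U @ U, T))  # True
```

The recursion replaces many small scalar divisions with a few larger, cache-friendly matrix operations, and the independence of the two diagonal-block subproblems is what opens the door to the parallelism mentioned in the title.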