Showing posts from 2014

Problem needed for research on Bermudan Option Pricing Algorithms

Introduction

NAG, together with Prof. Oosterlee and an MSc student from TU Delft, is investigating the recent Stochastic Grid Bundling Method (SGBM) [1,2]. The objective is to compare the performance of SGBM with that of the well-known Longstaff-Schwartz least squares method (LSM) in a non-academic setting, i.e. on the pricing of a Bermudan option whose underlying asset(s) are driven by a realistic process such as Heston or LMM. We are looking for an interesting case on which to test these two methods, including the type of option, the underlying processes and any other important features or details.

Outline

The well-known LSM of Longstaff and Schwartz [3] is the industry standard for pricing multi-dimensional Bermudan options by simulation and regression. LSM is based on the regress-now principle, whereas the Stochastic Grid Bundling Method (SGBM) of Jain and Oosterlee applies regression later in order to obtain more accurate approximations. However, this limits us to applying SGBM to processes where an an…

Gaussian Mixture Model

With the release of Mark 24 of the NAG C Library comes a plethora of new functionality, including matrix functions, pricing of Heston options with term structure, best subset selection, and element-wise weightings for the nearest correlation matrix.

Among the new routines, one I was excited to test was the Gaussian mixture model routine (g03ga). This routine takes a set of data points and fits a mixture of Gaussians for a given (co)variance structure by maximizing the log-likelihood function. The user inputs the (co)variance structure, the number of groups, and (optionally) the initial membership probabilities.

I decided to test out this new functionality, which is also in Mark 24 of the NAG Toolbox for MATLAB. Often I will use MATLAB with the NAG Toolbox before switching to C++ and the NAG C Library for my production code. So I generated some data and tried the routine to see if it could find the covariance structure. You can download the script and try it out for yourself here. The example will gene…

Secrets of HPC Procurement

Did you like my article in HPC Wire today, "Secrets of the Supercomputers"? I poke fun at various elements of an imaginary supercomputer procurement process, but I'm sure many readers will also see familiar and painfully serious aspects in the fictional story.

As I mention at the bottom of that article, NAG can help make the process of buying HPC systems much better than the worrying case in the article.

For example, the tutorial I ran at SC13 with Terry Hewitt, titled "Effective Procurement of HPC Systems", was very popular (~100 attendees). We have provided similar training as part of consulting engagements, and we are now looking at running the tutorial again as an open short course.

Testing Matrix Function Algorithms Using Identities

Edvin Deadman and Nick Higham (University of Manchester) write:


In a previous blog post we explained how testing new algorithms is difficult. We discussed the forward error (how far from the actual solution are we?) and the backward error (what problem have we actually solved?) and how we'd like the backward error to be close to the unit roundoff, u.
For matrix functions, we also mentioned the idea of using identities such as sin²A + cos²A = I to test algorithms. In practice, rather than I, we might find that we obtain a matrix R close to I, perhaps with ‖R − I‖ ≈ 10⁻¹³. What does this tell us about how the algorithms for sin A and cos A are performing? In particular, does it tell us anything about the backward errors? We've just written a paper which aims to answer these questions. This work is an output of NAG's Knowledge Transfer Partnership with the University of Manchester, so we thought we'd blog about it here.
Let's consider the identity exp(log A) - A = 0. …

The Wilkinson Prize for Numerical Software

In honour of the outstanding contributions of James Hardy Wilkinson to the field of numerical software, Argonne National Laboratory, the National Physical Laboratory, and the Numerical Algorithms Group award the Wilkinson Prize for Numerical Software (US $3000).

The 2015 prize will be awarded at the International Congress on Industrial and Applied Mathematics (ICIAM) in Beijing, China, in August 2015. Entries must be received by July 1, 2014. Additional details on the Wilkinson Prize for Numerical Software and the official rules can be found at http://www.nag.co.uk/other/WilkinsonPrize.html

Submissions can be sent by email to wilkinson-prize@nag.co.uk; contact this address for further information.

Previous prizes have also been awarded at ICIAM:

1991 - Linda Petzold for DASSL
1995 - Chris Bischof and Alan Carle for ADIFOR 2.0
1999 - Matteo Frigo and Steven Johnson for FFTW
2003 - Jonathan Shewchuk for Triangle
2007 - Wolfgang Bangerth, Ralf Hartmann and Guido Kanschat for deal.II
2…

C++ wrappers for the NAG C Library

Motivation

Occasionally, we receive requests to make the NAG C Library easier to call from C++. In the past we found it difficult to build something that would work across all of the code our C++ users write. With the advent of the C++11 standard, many of the key features of the widely used Boost library have been incorporated into the STL, finally providing a standardized way to address many of the difficulties we've encountered. (The code we describe here works with Visual Studio 2010 and later, as well as several versions of the Intel compiler and gcc.)

We have created example wrappers that can serve as templates for creating C++ wrappers around NAG functions. Specifically, the examples now allow the user to:

1) Pass function pointers, functors, class member functions and lambda functions as callbacks to the NAG library.
2) Use raw pointers, smart pointers, STL containers or boost containers to store data and pass these to the NAG library.

 A note: these are NOT…

Out and about this week – The London Thalesians Seminar

NAG’s Brian Spector gave a great talk to a packed audience of finance professionals in London this week. The Thalesians describe themselves as a “think tank of dedicated professionals with an interest in quantitative finance, economics, mathematics, physics and computer science”. Brian was delighted to present "Implied Volatility using Python's Pandas Library" at their recent London Seminar on Wednesday 15 January 2014. 

You can learn more about the subject of Brian's talk in one of his NAG and Python blog posts:

Implied Volatility using Python's Pandas Library
A nag4py Update
A quick but dirty way to use the NAG C Library Mark 23 from Python
Additional NAG and Python information is featured on our website, including how you can download the NAG Python bindings that enable use of the NAG C Library from Python. You can follow The Thalesians on Twitter @thalesians and NAG @NAGTalk.