Thursday, 2 October 2014

Problem needed for research on Bermudan Option Pricing Algorithms

Introduction

NAG, together with Prof. Oosterlee and an MSc student from TU Delft, is investigating the recent Stochastic Grid Bundling Method (SGBM) [1,2]. The objective is to compare the performance of SGBM with the well-known Longstaff-Schwartz method (least squares method, or LSM) in a non-academic setting, i.e. on the pricing of a Bermudan option with underlying asset(s) driven by a realistic process such as Heston or LMM. We are looking for an interesting case on which to test these two methods, including the type of option, the underlying processes and any other important features or details.

Outline

The well-known LSM of Longstaff and Schwartz [3] is the industry standard for pricing multi-dimensional Bermudan options by simulation and regression. LSM is based on the regress-now principle, whereas the Stochastic Grid Bundling Method (SGBM) by Jain and Oosterlee applies regression later in order to obtain more accurate approximations. However, this limits SGBM to processes for which an analytical or approximate expression for the discounted moments is available.
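To make the regress-now idea concrete, here is a minimal Longstaff-Schwartz sketch in Python for a Bermudan put under geometric Brownian motion. It is purely illustrative: the parameters, cubic polynomial basis and payoff are assumptions of ours, not the setup of the planned study.

# Minimal Longstaff-Schwartz (regress-now) sketch: Bermudan put under geometric
# Brownian motion with a cubic polynomial basis. All parameters are illustrative
# assumptions, not the setup of the study described in this post.
import numpy as np

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 10, 50_000                  # 10 equally spaced exercise dates
dt = T / n_steps
disc = np.exp(-r * dt)

# Simulate GBM paths; column t holds the asset price at exercise date t+1
Z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z, axis=1))

def payoff(s):
    return np.maximum(K - s, 0.0)

V = payoff(S[:, -1])                            # option value at the final exercise date

# Backward induction: regress discounted future cash flows on the current state
# ("regress now"), then compare continuation value with immediate exercise.
for t in range(n_steps - 2, -1, -1):
    V *= disc
    itm = payoff(S[:, t]) > 0                   # regress on in-the-money paths only
    coeffs = np.polyfit(S[itm, t], V[itm], deg=3)
    continuation = np.polyval(coeffs, S[itm, t])
    exercise = payoff(S[itm, t])
    V[itm] = np.where(exercise > continuation, exercise, V[itm])

price = disc * V.mean()                         # discount from the first exercise date to today
print(f"LSM Bermudan put estimate: {price:.4f}")

In SGBM, by contrast, the regression is applied later within bundles of paths, and the continuation value is recovered from the (analytically or approximately known) discounted moments of the basis functions, which is exactly the requirement mentioned above.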

Another advantage of SGBM is that it regresses on bundles of paths instead of the whole data set, further reducing the regression error, and it allows the computation of the Greeks at almost no extra cost.

Numerical results show that, compared to LSM, a higher accuracy can be obtained at comparable computational time. However, these tests were performed on academic problems, using geometric Brownian motion for the underlying assets.

Very recent research (Feng and Oosterlee, Sept 2014) extended SGBM to stochastic volatility and stochastic interest rate dynamics (Heston-Hull-White). This opened up the possibility of testing the performance of SGBM in a non-academic setting and is the motivation for our research. We therefore wish to compare LSM and SGBM on a problem which is interesting and relevant to industry. Realistically, handling all the complexities of traded products will probably require more time than we have (around two months), so ideally we seek a problem which captures the salient features and allows us to see whether SGBM outperforms LSM.

Industry Involvement

We would like some advice in defining the product to be valued, especially regarding dimensionality, process, correlation structure, payoff and exercise features. Any additional industry involvement will be light: perhaps a few emails to clarify details, a conference call or a face-to-face meeting.

For correspondence or further details please email support@nag.co.uk

References

[1] S. Jain and C. W. Oosterlee, "The Stochastic Grid Bundling Method: Efficient Pricing of Bermudan Options and their Greeks," SSRN working paper, Sept. 2013.
[2] S. Jain and C. W. Oosterlee, "Pricing high-dimensional Bermudan options using the stochastic grid method," International Journal of Computer Mathematics, 89(9):1186-1211, 2012.
[3] F. Longstaff and E. Schwartz, "Valuing American Options by Simulation: a Simple Least-squares Approach," Review of Financial Studies, 14(1):113-147, 2001.

Thursday, 21 August 2014

Gaussian Mixture Model

With the release of Mark 24 of the NAG C Library comes a plethora of new functionality, including matrix functions, pricing of Heston options with term structure, best subset selection, and element-wise weightings for the nearest correlation matrix.

Among the new routines I was excited to test out was the Gaussian mixture model (g03ga). This routine will take a set of data points and fit a mixture of Gaussians for a given (co)variance structure by maximizing the log-likelihood function. The user inputs the (co)variance structure, number of groups, and (optionally) the initial membership probabilities.

I decided to test out this new functionality, which is also in Mark 24 of the NAG Toolbox for MATLAB. Often I will use MATLAB with the NAG Toolbox before switching to C++ and the NAG C Library for my production code. So I generated some data and tried the routine to see if it could find the covariance structure. You can download the script and try it out for yourself here. The example will generate the test data, run the NAG Gaussian mixture routine and plot the results. An example of the output is given below:


The blue points are the generated data, while the red and yellow ovals show the covariance structure output from the NAG Gaussian mixture model (the ovals are contours of ~0.60 density for their respective groups).

While running the example a couple of times and re-sampling the starting values for the initial membership probabilities, I noticed what I thought was unusual behavior: occasionally the algorithm fails to identify the Gaussian mixtures and instead converges to the structure shown below (run the above script and click 'Resample' 3 times):


It appears the routine has converged to a local extremum of the likelihood function. This happens as a result of randomizing the initial membership probabilities.

Since we have the power of the NAG Library at our disposal, I've added a K-means clustering option in the above script to initialize the membership probabilities to a particular cluster before being input into the Gaussian mixture model.
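As a rough illustration of that initialization step (a hypothetical Python stand-in, not the MATLAB script linked above), the following sketch turns a k-means labelling into initial membership probabilities; the synthetic data, cluster count and softening constant are all assumptions.

# Sketch: build initial membership probabilities from a k-means labelling.
# Hypothetical stand-in data; not the MATLAB script linked above.
import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(1)
data = np.vstack([
    rng.multivariate_normal([0.0, 0.0], [[1.0, 0.6], [0.6, 1.0]], 500),
    rng.multivariate_normal([5.0, 5.0], [[1.0, -0.4], [-0.4, 1.0]], 500),
])

k = 2
_, labels = kmeans2(data, k, minit='points')   # k-means label for each data point

# Convert the labels to (slightly softened) one-hot membership probabilities,
# which can then be supplied as the optional initial probabilities.
prob = np.full((data.shape[0], k), 0.05 / (k - 1))
prob[np.arange(data.shape[0]), labels] = 0.95
print(prob[:5])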


My colleagues tell me that k-means can also get stuck in a local minimum and exhibit this 'wrong' behavior as well, so one should always be careful with initial allocations - luckily the NAG Library provides a generally acceptable default allocation as an option! Many thanks to Martyn Byng and Stephen Langdell for comments on this post.

Tuesday, 17 June 2014

Secrets of HPC Procurement

Liked my article today in HPC Wire, "Secrets of the Supercomputers"? In it I poke fun at various elements of an imaginary supercomputer procurement process. However, I'm sure many readers will also see familiar and painfully serious aspects in the fictional story.

As I mention at the bottom of that article, NAG can help make the process of buying HPC systems much better than the worrying case in the article.

For example, the tutorial I ran at SC13 with Terry Hewitt titled "Effective Procurement of HPC Systems" was very popular (~100 attendees). We have provided similar training as part of consulting engagements and we are now looking at running the tutorial again as an open short course.

Thursday, 10 April 2014

Testing Matrix Function Algorithms Using Identities

Edvin Deadman and Nick Higham (University of Manchester) write:


In a previous blog post we explained how testing new algorithms is difficult. We discussed the forward error (how far from the actual solution are we?) and the backward error (what problem have we actually solved?) and how we'd like the backward error to be close to the unit roundoff, u.

For matrix functions, we also mentioned the idea of using identities such as sin^2(A) + cos^2(A) = I to test algorithms. In practice, rather than I, we might find that we obtain a matrix R close to I, perhaps with ||R-I|| ≈ 10^-13. What does this tell us about how the algorithms for sin A and cos A are performing? In particular, does it tell us anything about the backward errors? We've just written a paper which aims to answer these questions. This work is an output of NAG's Knowledge Transfer Partnership with the University of Manchester, so we thought we'd blog about it here.

Let's consider the identity exp(log A) - A = 0. Suppose that when we evaluate the left-hand side in floating point arithmetic we get a nonzero residual R rather than 0. We'll assume that this residual is caused by some backward errors E1 and E2, so that exp(log(A + E1) + E2) - A = R. We'd like to investigate how big R can be when E1 and E2 are small, so we expand the left-hand side in a Taylor series to linear order. After a bit of algebra, the result is a linear operator relating R to E1 and E2: R = L(E1, E2). The operator is different for each identity considered, but it always involves the Fréchet derivatives of the matrix functions in the identity (the full gory details, including formulae for the linear operators associated with various identities, are in our paper).
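As a small illustration of the kind of residual being discussed (not one of the paper's test cases), the following Python snippet evaluates exp(log A) - A in floating point with SciPy and compares its norm with the unit roundoff u:

# Sketch: floating-point residual of the identity exp(log A) - A = 0.
# The test matrix here is an arbitrary, well-behaved assumption.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(0)
A = np.eye(8) + 0.1 * rng.standard_normal((8, 8))   # keeps eigenvalues away from the negative real axis

R = expm(logm(A)) - A
u = np.finfo(float).eps / 2                          # unit roundoff, 2^-53

print("||R|| =", np.linalg.norm(R))
print("||R|| / (u ||A||) =", np.linalg.norm(R) / (u * np.linalg.norm(A)))

Whether a given ||R|| is small enough to be consistent with backward stability is exactly the question the linear operator above lets us answer.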

We now have two options. The first option is to estimate the norm of the linear operator. This enables us to determine the maximum value of ||R|| consistent with backward errors of size u. The second option is to use the linear operator to explicitly estimate E1 and E2. This works just as well, but it is more expensive.

Our paper contains several examples demonstrating our approach in practice, so we'll just pick one here, involving the matrix exponential. The 10 x 10 Forsythe matrix is a Jordan block with zero eigenvalue, except that its (10,1) entry is set to the parameter u = 2^-53 ≈ 10^-16 (the unit roundoff). It's known that the Schur-Parlett general-purpose matrix function algorithm struggles with this matrix, whereas algorithms based on scaling and squaring have no such problems. Does our approach reflect this behaviour? Using the identity e^A e^-A = I, the maximum normwise residual consistent with backward stability is found to be 7.1 x 10^-15. The Schur-Parlett approach gives a normwise residual of 7.7 x 10^-11, but the scaling and squaring algorithm gives a residual of 2.2 x 10^-15. Consistent with this, the backward error estimates are 2.5 x 10^-11 and 4.5 x 10^-16 respectively. So our approach has correctly identified the preferred algorithm.

There are some subtleties which we haven't mentioned (for example, certain matrices and identities can produce misleading cancellation effects), but we now have a useful method for testing matrix function algorithms when high-precision arithmetic isn't available. The relevant routines for computing matrix functions and Fréchet derivatives are available in chapter F01 of the NAG Library.

Friday, 21 March 2014

The Wilkinson Prize for Numerical Software

In honour of the outstanding contributions of James Hardy Wilkinson to the field of numerical software, Argonne National Laboratory, the National Physical Laboratory, and the Numerical Algorithms Group award the Wilkinson Prize for Numerical Software (US $3000).

The 2015 prize will be awarded at the International Congress on Industrial and Applied Mathematics (ICIAM) in Beijing, China, in August 2015. Entries must be received by July 1, 2014. Additional details on the Wilkinson Prize for Numerical Software and the official rules can be found at: http://www.nag.co.uk/other/WilkinsonPrize.html

Submissions can be sent by email to wilkinson-prize@nag.co.uk; contact the same address for further information.

Previous prizes have also been awarded at ICIAM:

1991 - Linda Petzold for DASSL
1995 - Chris Bischof and Alan Carle for ADIFOR 2.0
1999 - Matteo Frigo and Steven Johnson for FFTW
2003 - Jonathan Shewchuk  for Triangle
2007 - Wolfgang Bangerth, Ralf Hartmann and Guido Kanschat for deal.II
2011 - Andreas Waechter and Carl D. Laird for Ipopt

Mike Dewar, Maurice Cox, Jorge Moré
Board of Trustees
NAG, NPL, ANL

Thursday, 6 February 2014

C++ wrappers for the NAG C Library

Motivation

Occasionally, we receive requests to make the NAG C Library easier to call from C++. In the past, we found it difficult to build something that would work across all of the code our C++ users write. With the advent of the C++11 standard, many of the key features of the widely used Boost library have been incorporated into the standard library, and these finally provide a standardized way to address many of the difficulties we've encountered (the code we describe here works with Visual Studio 2010 and later, as well as several different versions of the Intel compiler and gcc).

We have created example wrappers that can serve as templates for creating C++ wrappers around NAG functions. Specifically, the examples now allow the user to:

1) Pass function pointers, functors, class member functions and lambda functions as callbacks to the NAG library.
2) Use raw pointers, smart pointers, STL containers or boost containers to store data and pass these to the NAG library.

 A note: these are NOT a C++ interface, but merely wrappers around the C Library.

Inside the Wrappers

Let's take a look at the prototype for one function. Here is the new signature for a NAG minimization routine:

void e04abcpp(const std::function<void(double,double*,NagComm*)> &callback, double e1, double e2, double *a, double *b, Integer max_fun, double *x, double *f, NagComm *user, NagError *error);

There are two differences from the NAG C routine: the callback argument and the NagComm class (which we will discuss in a bit). Using the above signature, the NAG Library can be called via a function, a pointer to a function, a function object with a common interface, or a class member function. For example, the user could make the following NAG calls:

e04abcpp(MyFunctor(), e1, e2, &a, &b, max_fun, &x, &f, &comm, NULL);
e04abcpp([&](double x,double*fc,NagComm*comm) {
            return myclass.myFunc(x,fc,comm);
        }, e1, e2, &a, &b, max_fun, &x, &f, &comm, NULL);

Here, MyFunctor and myclass.myFunc are the callbacks that return values of our objective function. In order to pass these to the C Library, we need a structure to evaluate and store them. The natural way to do this is to extend the NAG communication class. Since we don't want the new NagComm class affecting the underlying Library, we make its first member the old Nag_Comm structure (a pointer to a structure can be treated as a pointer to its first member). We can then place our std::function pointer and delegate callback inside this class:

class NagComm  // New Comm Class
{
    Nag_Comm comm;   // Old Comm Class
    friend void e04abcpp(const std::function<void(double,double*,NagComm*)> &callback, double e1, double e2, double *a, double *b, Integer max_fun, double *x, double *f, NagComm *user, NagError *error);

public:
    void * puser;
private:
    // A pointer to the C++11 std::function object
    const std::function<void(double,double*,NagComm*)> * callbackCPP11;

    static void NAG_CALL e04abDelegateCPP11(double xc, double *fc, Nag_Comm *comm) {
        // Safe cast since access to this routine is controlled
        NagComm * user = (NagComm*)comm;
        assert(user->callbackCPP11);
        // Call the function polymorphically
        (* (user->callbackCPP11) )(xc, fc, user);
    }

public:
    NagComm() {
        puser = NULL;
        callbackCPP11 = NULL;
    }
};

Once we have the new NagComm class, we can write the C++ wrapper function:

void e04abcpp(const std::function<void(double,double*,NagComm*)> &callback, double e1, double e2, double *a, double *b, Integer max_fun, double *x, double *f, NagComm *user, NagError *error)
{
    // This is the only place where NagComm::callbackCPP11 is assigned
    user->callbackCPP11 = &callback;
    e04abc(NagComm::e04abDelegateCPP11, e1, e2, a, b, max_fun, x, f, (Nag_Comm*)user, error);
}

Smart Pointers/Containers

Want to pass different sets of pointers and containers into NAG routines? These are easily handled as well, for example in the following wrapper to g01amc:

template<typename RV, typename Q, typename QV>
void g01amcpp(Integer n, RV && rv, Integer nq, Q && q, QV && qv, NagError *error)

Users can call the function using STL containers, boost types or smart pointers:

Integer n = 22, nq = 5;
double rv[] = {.5, .729, .861, .44, .791, .001, .062, .912, .27, .141, .32, .133, .654,
     .285, .553, .438, .316, .696, .718, .293, .704, .029};
std::vector<double> q = {0.0, .25, .73, .9, 1.0};
std::unique_ptr<double[]> qv(new double[nq]);
g01amcpp(n, rv, nq, q, qv, NULL);

The g01amcpp wrapper is defined as follows:

template<typename RV, typename Q, typename QV>
void g01amcpp(Integer n, RV && rv, Integer nq, Q && q, QV && qv, NagError *error)
{
    // Get primitive pointers from input types.     
    double * p_rv = nagprivate::getPtr(rv);
    const double * p_q = nagprivate::getPtr(q);
    double * p_qv = nagprivate::getPtr(qv);

    g01amc(n, p_rv, nq, p_q, p_qv, error);
}


A different template parameter for each array argument means a different container or smart pointer class can be used for each argument. Inside the wrapper the function nagprivate::getPtr is called to return the address of the first element of each array. This function uses auto with a trailing decltype return type, a feature introduced in C++11:

namespace nagprivate {
template<typename T> auto getPtr(T & p) -> decltype( &p[0] )
      {
         return &(p[0]);
      }
}

Note: the wrapper above assumes that the pointer/container classes implement operator[] and use contiguous arrays for internal storage. The wrapper will most likely not work for things such as linked lists.

Code Availability

We have created some example templates and more will be available over time, so keep an eye on the NAG website and the bottom of this blog for updates. The examples are tested and work with VS10, g++ 4.7, and several versions of the Intel compilers. The wrappers are available in source form and can be compiled at the command line as follows:

Windows:
$ cl -EHsc /MT -I"C:\Program Files\NAG\CL23\clw6i23dal"\include e04ab.cpp /link /LIBPATH:"C:\Program Files\NAG\CL23\clw6i23dal"\lib "C:\Program Files\NAG\CL23\clw6i23dal"\lib\nagc_nag_MT.lib user32.lib

Linux:
 $ g++ -std=c++11 e04ab.cpp -I/opt/NAG/cll6a23dgl/include/ /opt/NAG/cll6a23dgl/lib/libnagc_nag.a -lpthread -lm -o e04ab.exe

Feedback

We would be interested in hearing feedback from users. You can leave a comment below or email support@nag.co.uk with the subject 'NAG and C++ wrappers'.

March 14 Update

We now have 14 examples including optimization, matrix functions, nearest correlation matrices, and quantile analysis that are available in the above links.

Thursday, 16 January 2014

Out and about this week – The London Thalesians Seminar


NAG’s Brian Spector gave a great talk to a packed audience of finance professionals in London this week. The Thalesians describe themselves as a “think tank of dedicated professionals with an interest in quantitative finance, economics, mathematics, physics and computer science”. Brian was delighted to present "Implied Volatility using Python's Pandas Library" at their recent London Seminar on Wednesday 15 January 2014.

Brian Spector presenting "Implied Volatility using Python's Pandas Library" at the recent London Thalesians Seminar

You can learn more about the subject of Brian’s talk in one of his NAG and Python blogs below:

Additional NAG and Python information is featured on our website, including how you can download the NAG Python Bindings that enable use of the NAG C Library from Python.
You can follow The Thalesians on Twitter @thalesians and NAG @NAGTalk.

 

Thursday, 28 November 2013

Using the NAG Compiler with the NAG Fortran Library (Mark 24) on Windows

Blog written by David Sayers, NAG Principal Technical Consultant 


Introduction


The NAG Fortran Compiler is an excellent compiler for checking and running your Fortran code. We use it extensively here at NAG to ensure that our code for the library complies with the current Fortran standards.

Personally, whenever I have a user problem report that I can't resolve by inspection, my first instinct is to run the user's code through the Compiler. Frequently this identifies the error immediately.

As I am a Windows user, I am able to make use of the Integrated Development Environment (IDE) for the compiler that is provided to our Windows users. We call this IDE ‘NAG Fortran Builder’. One of the nice features of this IDE is its ease of use with the prevailing NAG libraries. To take advantage of this, the user normally specifies a ‘NAG Library Project’ at creation time; thereafter the relevant compiler settings are made so that either the Windows 64-bit or Windows 32-bit DLLs are used.

At certain points in the NAG release cycle a new version of Fortran Builder is released, and it automatically picks up the then-current NAG libraries. Information about these libraries (documentation, interface blocks and example program information, all of which is mark-dependent) is embedded within Fortran Builder. When a new NAG Library is subsequently released, our Windows users will naturally want to use the latest library from Fortran Builder. Some guidance is given on this within the relevant Users’ Note; this post attempts to gather together and expand on that information.


New Fortran Builder Projects


Fortran Builder may be set to work in 64-bit or 32-bit mode and to link to the corresponding NAG Fortran libraries. Normally we expect these to be FLW6I24DCL or FLDLL244ML respectively, and the rest of this note is written with these specific implementations in mind.

To use a Mark 24 Library with Fortran Builder 5.3.1 using FLW6I24DCL :


a) Open a Console Application (not a NAG Library Application)
b) Go to Project Settings via the Project menu
c) Click the Directories tab, then click the Include tab
d) Add the include directory install dir\nag_interface_blocks_nagfor (where install dir should be replaced with the full path to the NAG Library installation directory on your machine. Note that you should not put any quotation marks around the directory name even though it may include spaces)
e) Exit from the Directories tab, then click the Link tab
f) Add a link library, for example install dir\bin\FLW6I24DC_nag.dll (note that you must link to the DLL itself, not the associated import library)


g) Build the project and run your program in the usual way 

(If you have a 64-bit integer version of the Library, FLW6I24DDL, then the -i8 option must also be set under the Fortran Compiler / Additional Options tab.)

To use a Mark 24 Library with Fortran Builder 5.3.1 using FLDLL244ML :


a) Open a Console Application (not a NAG Library Application)
b) Go to Project Settings via the Project menu
c) Click the Directories tab, then click the Include tab
d) Add the include directory install dir\nag_interface_blocks_nagfor (note that you should not put any quotation marks around the directory name even though it may include spaces)
e) Exit from the Directories tab, then click the Link tab
f) Add a link library, for example install dir\lib\FLDLL244M_mkl.lib
g) Click the Basic Settings tab, and tick the DLL Compatibility check box - this turns on the -compatible flag to ensure that the compiler uses the same stdcall calling convention used to build the library. (This is a step not required in 64-bit mode where DLLs are already compatible.)
h) Build the project and run your program in the usual way 

In both cases, if you build your project in Debug mode (the default), it is not possible to use the Undefined variables option which is accessible on the Fortran Compiler / Runtime Check tab of Project Settings. This is because the NAG Library was not compiled with this option.

Also in both cases, should you want to link to the NAG Library in all future Console Application projects, tick the Set as Default option.

Please note that if you have set these new library settings as defaults and you subsequently create a new ‘NAG Library Application’ project by mistake, then numerous warning messages will be generated on compilation. This is because of incompatibilities with the interface blocks. You may recover from this by unticking the ‘Use the NAG Fortran Library’ box under Project Settings/Basic Settings. If Fortran Builder is unable to find a supported NAG Library then it won’t let you create a NAG Library Application project.


Existing Fortran Builder Projects


If a Fortran Builder project already exists and has linked to the NAG Library then the chances are that it will be a ‘NAG Library Application’ project. If you still have the Mark 23 libraries on your system then these are the libraries that will be accessed and used.

You may however wish to use the latest, Mark 24, Library, either to get an updated version of the relevant routines or to exploit the new functionality on offer with Mark 24. Under these circumstances the project needs to be changed.

Unfortunately this entails a number of steps for each project that you need to convert:

  • Untick the ‘Use the NAG Fortran Library’ box under Project Settings/Basic Settings.
  • Repeat steps b) to g) or b) to h) for the relevant library, as described above.

An alternative, if you have already set up the default Console Application to incorporate NAG Mark 24, is simply to re-create the project as a Console Application.


Updating the NAG Help file available from Fortran Builder


You will require administrator privileges to do this, and you may well decide not to tamper in this manner, but it is possible to access the Mark 24 help file directly from Fortran Builder. The process involves copying the help file provided with your library, nagdoc_fl24.chm, into the bin directory of Fortran Builder. This location is typically C:\Program Files (x86)\NAG\EFBuilder 5.3.1\bin. We suggest you rename the existing help file nagdoc_fl23.chm to xnagdoc_fl23.chm. Now copy nagdoc_fl24.chm from the library into the .\bin directory and trick Fortran Builder by renaming nagdoc_fl24.chm to nagdoc_fl23.chm.

This works, but unless you feel strongly that this is what you need, it is not a process we recommend. The help file can always be accessed from the appropriate library directory rather than from Fortran Builder.


Updating the example program templates


Any tampering with the Fortran Builder files to access the Mark 24 example templates is even less recommended than the procedure for the documentation outlined above. The relevant directories are below C:\Program Files (x86)\NAG\EFBuilder 5.3.1\bin\resource\fldll234m.


Each of the libraries is provided with example programs, data and results, and we suggest that users simply copy the ones they want into their project source directory. (It may seem obvious, but copying the files means that the originals will still be available for use another time.)

Tuesday, 26 November 2013

NAG at SC13

A short summary of news from or about NAG at SC13.


Quick fact: NAG is one of very few organizations that have been at every SC since the series started.



Attacking the HPC skills need: NAG Supercomputing Support Service passes milestone of 2000 course attendees
NAG announces the latest milestone in addressing the scientific computing community's need for the skills required to effectively exploit HPC systems: over 2000 attendees have now benefitted from NAG's highly rated HPC training courses under the UK's national Computational Science and Engineering (CSE) Support Service. Read full press release ...


Tutorial on HPC procurement: Room was packed!


New Services to underpin and innovate your numerical computing revealed by NAG
NAG revealed new services to enable users of numerical computing to benefit from the four decades of experience and expertise behind its trusted Numerical Libraries. In response to customer interest, NAG is now launching its Numerical Services to deliver the expertise and experience behind these highly reputed Libraries directly to developers and users of numerical computing applications, whether they use NAG products or not. This follows the success of NAG's HPC Services business and enables customer access to the full range of NAG experts: numerical analysts, computer scientists, mathematicians, algorithm developers, software engineers, and more. Read full press release ...


NAG staff member on Scientific Computing Sound Byte: Nechelle Dismer on Seeing Two Sides of the Conference


More HPC Innovation Awards for NAG at SC13
NAG has been awarded another two HPC Innovation Excellence Awards at SC13, Denver. The awards recognised NAG's HPC software innovations on two projects, CABARET and INCOMPACT 3D, undertaken by NAG's Computational Science and Engineering (CSE) Service, part of the UK's HECToR national supercomputing service. These two projects recognised by the HPC Innovation Awards at the world's largest supercomputing conference are among over 50 similarly successful application performance improvement projects within NAG's HECToR CSE Service. NAG is delighted to have its world class HPC Software Services acknowledged in this way for the second time. Read full press release ...


HPC Quiz: How Do Your SC13 Credentials Stack Up? (by @hpcnotes for HPC Wire)



NAG to broaden 64-bit ARMv8-A ecosystem
NAG announces a new technical collaboration with ARM, the world's leading semiconductor IP supplier. NAG's highly skilled team of HPC experts, numerical analysts and computer scientists will ensure the algorithms in the NAG Numerical Library and the facilities of the NAG Fortran Compiler are available for use on ARM's 64-bit ARMv8-A architecture-based platforms. By working with NAG, ARM is greatly enhancing the strong HPC infrastructure for the ARMv8-A architecture through the enablement of numerical computation at its release. Read full press release ...


And looking to the future: SC14 will be in New Orleans, SC15 in Austin and SC16 in Salt Lake City. See you there!


Tuesday, 15 October 2013

Implied Volatility using Python's Pandas Library

Python has some nice packages such as numpy, scipy, and matplotlib for numerical computing and data visualization. Another package that deserves a mention, and that we have seen used increasingly, is Python's pandas library. Pandas has fast and efficient data analysis tools to store and process large amounts of data. Additionally, pandas is built on numpy, which together with ctypes allows easy integration with NAG's nag4py package.

Below is an example using nag4py and the pandas library to calculate the implied volatility of option prices. All the code below can be downloaded to calculate your own implied volatility surface from data on the Chicago Board Options Exchange website.


Background on Implied Volatility

The famous Black-Scholes formula for pricing a Call/Put option on a stock is a function of 6 variables: Underlying Price, Interest Rate, Dividends, Strike Price, Time-to-Expiration, and Volatility. Note that for a given option contract we can observe the Underlying Price, Interest Rate, and Dividend Rate. In addition, the options contract specifies the Strike Price and Time-to-Expiration.

Thus the one variable we have to tweak is the volatility. We can then ask the question: for what volatility* does the Black-Scholes price equal the market price?

F(volatility*)=Market Option Price

This volatility* is then the implied volatility observed in the market. Since the Black-Scholes price is a continuous function of volatility on (0, 1), we can use a NAG root finder to locate such a volatility*. In fact, we will use several NAG functions: nag_zero_cont_func_brent will find the root using Brent's algorithm, nag_bsm_price will calculate the theoretical option price, nag_2d_cheb_fit_lines will perform a least-squares Chebyshev fit to the volatility surface, and nag_2d_cheb_eval will evaluate the surface at intermediate points.
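For readers without the NAG Library to hand, here is a minimal sketch of the same root-finding idea in Python, using SciPy in place of nag_zero_cont_func_brent and nag_bsm_price; the Black-Scholes implementation and the sample numbers are illustrative assumptions, not taken from the script described below.

# Sketch: implied volatility by Brent's method, using SciPy in place of the NAG
# routines named above (an illustrative substitution, not the blog's nag4py code).
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(S, K, T, r, q, sigma):
    """Black-Scholes price of a European call with continuous dividend yield q."""
    d1 = (np.log(S / K) + (r - q + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S * np.exp(-q * T) * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def implied_vol(price, S, K, T, r, q, lo=1e-6, hi=5.0):
    """Root of F(sigma) = BS(sigma) - market price, bracketed on [lo, hi]."""
    return brentq(lambda sig: bs_call(S, K, T, r, q, sig) - price, lo, hi)

print(implied_vol(price=10.45, S=100.0, K=100.0, T=1.0, r=0.01, q=0.0))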

Running the Script

We have created an example script that will run on data from the Chicago Board Options Exchange (CBOE) website. The CBOE provides options data in downloadable format here.

The program uses the pandas package to easily store and manipulate the data via DataFrames. In the script, you'll notice pandas takes care of the many data processing functions and then calls the NAG Library for more complex analysis. The program will automatically read in the options data, calculate implied volatility for the call and put options, and plot the volatility curves and surface. The above code can be run as follows (given that you have pandas, matplotlib, nag4py, and ctypes):

python implied_volatility.py QuoteData.dat

Results

I ran the program on Apple Options data (SYM AAPL). Below is the output, generated in 10 seconds with roughly 500,000 calls to the NAG Library:

Implied Volatility for AAPL (APPLE INC),500.3691,+4.3291, Oct 15 2013 @ 15:25 ET,Bid,500.32,Ask,500.48,Size,3x1,Vol,10041069,
Calculating Implied Vol of Calls...
Calculated Implied Vol for 15800 Calls
Calculating Implied Vol of Puts...
Calculated Implied Vol for 15800 Puts
Plotting Volatility Curves/Surface

Figure 1: Volatility Curves
In the above picture we see some nice volatility skews/smiles that level out as the option expiration date increases. This occurs when investors demand higher premiums (and thus higher volatility) for deep in-the-money and out-of-the-money options. As a result, implied volatility curves are oftentimes upward sloping, downward sloping, or U-shaped.

Below is the volatility surface (plotting Strike, Expiration, and Implied Volatility from Figure 1 on the same graph).

Figure 2: Volatility Surface
When performing the Chebyshev surface fit, two required inputs are k and l, the degrees of the polynomials in the x and y directions. I chose k = 3 and l = 3 for the above graph, but these can vary among stocks. It is best to first examine the volatility curves for various expiration dates (Figure 1) and then choose k and l appropriately.
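As a rough stand-in for nag_2d_cheb_fit_lines and nag_2d_cheb_eval, here is a NumPy sketch of a degree-(3, 3) Chebyshev least-squares fit to scattered (strike, expiry, vol) points; the synthetic data and the coordinate scaling are assumptions for illustration only.

# Sketch: degree-(3, 3) Chebyshev least-squares surface fit with NumPy, used here
# as an illustrative substitute for the NAG Chebyshev fitting/evaluation routines.
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(0)
strike = rng.uniform(400.0, 600.0, 300)
expiry = rng.uniform(0.05, 2.0, 300)
# Synthetic smile-like implied vols (assumed data, not CBOE quotes)
vol = 0.2 + 0.0005 * (strike - 500.0)**2 / 500.0 + 0.05 * np.exp(-expiry) \
      + 0.01 * rng.standard_normal(300)

def to_unit(t, lo, hi):
    # Map a coordinate onto [-1, 1], the natural domain of the Chebyshev basis
    return 2.0 * (t - lo) / (hi - lo) - 1.0

x = to_unit(strike, strike.min(), strike.max())
y = to_unit(expiry, expiry.min(), expiry.max())

k = l = 3                                        # polynomial degrees, as in the post
V = C.chebvander2d(x, y, [k, l])                 # design matrix, one column per T_i(x)T_j(y)
coef, *_ = np.linalg.lstsq(V, vol, rcond=None)
coef2d = coef.reshape(k + 1, l + 1)

# Evaluate the fitted surface at an intermediate (strike, expiry) point
print(C.chebval2d(to_unit(500.0, strike.min(), strike.max()),
                  to_unit(1.0, expiry.min(), expiry.max()), coef2d))

Scaling both coordinates onto [-1, 1] before fitting is the usual prerequisite for a Chebyshev basis, whatever library performs the fit.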

So go ahead, download any options data the CBOE provides and plot your own volatility surface! Many thanks to John Morrissey and Chris Seymour for code suggestions.

Update: Download data during CBOE trading hours to ensure the graphs are not null.

Wednesday, 25 September 2013

NAG Toolbox for MATLAB® documentation for the Mark 24 release

A look at the documentation features in the new, Mark 24, release of the NAG Toolbox for MATLAB®.

MATLAB Documentation Help Center

Recent versions of MATLAB have a new documentation system known as the Help Center. This, however, is restricted to products from MathWorks; third-party extension toolboxes such as our NAG Toolbox for MATLAB® are now documented in a separate help application for Supplemental Software. This has necessitated changes in the way the NAG Toolbox documentation is packaged and installed, and we have taken the opportunity to update the system generally. This post explores some of the new features.

Installation Directory

Previous releases of the Toolbox have, by default, installed into the toolbox directory of the MATLAB installation on the user's machine. Due to changes in the MATLAB system this does not work for releases after R2012a, so by default the new Mark 24 release of the NAG Toolbox installs under a top-level NAG directory, in a similar location to our other Library products. The location may be changed as an option on installation; any location that is not in the MATLAB tree may be used. Note that the documentation takes up considerably more space than the previous version, which was distributed as a jar file (a zip-compressed archive); MATLAB no longer supports jar archive help files.

Rendering Technology

Earlier MATLAB releases used a Java-based HTML rendering application. This had the benefit that it was consistent across all MATLAB platforms, but it was a relatively old technology and its support for more modern CSS and JavaScript features was somewhat variable. This meant that the display of mathematics was particularly tricky, and the usual MathML display in our library products was downgraded in our Toolbox documentation to whatever combination of nested tables and font changes worked. The current MATLAB releases use a new rendering agent (based on JxBrowser) that is a Java wrapper (as the MATLAB GUI is all Java based) around a modern web-browsing shared library. On Windows it uses the system installation of mshtml (so is essentially Internet Explorer), on Macintosh it uses the system WebKit library (so is essentially Safari), and on Linux it ships with a Gecko library (so is essentially Firefox). This means that all modern “HTML5” techniques (HTML, JavaScript and CSS) may be used, but it does mean that the same browser differences that complicate website development now also apply to MATLAB help. In particular we have now chosen to use the MathJax JavaScript library and SVG to render the mathematics. The difference may be seen in the following example (from c06faf).

Old Rendering

ẑ_k = (1/sqrt(n)) Σ_{j=0}^{n-1} x_j × exp(i(2πjk)/n),   k = 0, 1, …, n-1.

New Rendering

While the old rendering was legible, hopefully most people will find the new rendering a great improvement, especially if there are nested superscripts and square roots etc. The new documents in fact contain both sets of markup, defaulting to the old one, and the new MathJax/SVG rendering is only enabled if a sufficiently new browser is detected. Mainly this will only affect Windows users where Internet Explorer 9 or higher is required for SVG support.

Finding the Documentation

Several people using the previous Toolbox have had difficulty finding the Supplemental Software help. Initially, on installing MATLAB, there is no access to it in the MATLAB GUI; however, after the NAG (or other third-party) Toolbox is installed, MATLAB detects this and adds a new link at the bottom of the home screen of the main Help Center (which can be reached from the ? button, by choosing Help from the menu, or by pressing the F1 key):

The Supplemental Software help looks familiar to users of previous MATLAB releases, with a folding table of contents and search box in the left pane and the documentation in the right. From R2013a (at least) the rendering uses the same JxBrowser-based rendering as the main Help Center. (The first version of the supplemental help browser in R2012a was essentially a copy of the old help system.) The search box here is particularly important as sadly there is no command-line search available: the standard docsearch command is now limited to MathWorks products.

Documentation for 32 and 64 bit Versions

Previous releases of the Toolbox have had completely separate documentation for the implementations using 32-bit or 64-bit integer types. Partly as a result of MathWorks discontinuing support for 32-bit Linux, and partly because the int64 type now admits all the same operations as the int32 type, we have now combined the documentation. There is a link at the top of each page to toggle descriptions between int32, int64 and the nag_int type that works on either platform. This is initially set to int32 or int64 depending on the implementation. Unfortunately, testing showed that in some browser configurations the CSS used to switch between the types interfered with cutting and pasting the example code, so in this release the link does not affect the examples, which are always int32 or int64 depending on the implementation. Note that the displayed results are always from the 64-bit Linux implementation.

Related to this, the PDF version of the documentation is now only produced for the 64-bit version, as the changes for int32 are entirely trivial (just replace int64 by int32 throughout).

Access to the Examples

In previous releases, the example programs for each routine have only been available as part of the documentation. For simple examples they could be cut and pasted directly into the MATLAB command window; however, syntactic restrictions on MATLAB function definitions meant that any examples defining functions had to be copied to a .m file first before being executed, as you cannot define functions at the top-level MATLAB command prompt.

In this release all example programs are provided as executable .m files as well as appearing in the HTML documentation. They are written to use nag_int so they work in both 32-bit and 64-bit implementations. The most convenient way to access these is the 'Open in the MATLAB editor' link just above each example's documentation. Clicking on this link (if the MATLAB help or web browser is being used) will open the MATLAB editor; clicking on the green arrow run button will then execute the example in the MATLAB Command Window.

The following screen-shot highlights the process of linking from the documentation in the Supplemental help browser, to the MATLAB editor, to running the program and producing results in the MATLAB Output window and additional MATLAB plots.

screenshot of example running

Additional Utility Functions

nag_doc

nag_doc name opens the documentation for the function name in the MATLAB web browser. name may be the long or short name of a function in the NAG Toolbox. If name is omitted, the browser is opened at the start page of the NAG Toolbox documentation.

Unfortunately it is not possible to open the help browser directly at a function document; however, the web browser offers the same facilities apart from search.

If HTML documentation for a command is not found, then the output from the help function is returned.

nag_demo
nag_demo uses the standard demo function to open the demo browser for the NAG Toolbox.

This also provides a convenient command-line way to open the Supplemental Software browser, since the Demo browser is the same application, and you may navigate from there to the documentation using the table of contents pane.

As previously noted, the MATLAB docsearch command is no longer available to search the NAG Toolbox documentation. The full-text index is still generated and is used by the search box in the Supplemental Software GUI, so this is the fastest way to search the documentation. Unfortunately the Java classes for this search are not exposed by MATLAB, so it is not currently possible to make a nag_docsearch command-line version of this.

The standard MATLAB lookfor command may be used. This only searches the first line of the ASCII comments in the help files rather than the full documentation, and it does not benefit from a pre-indexed search database; however, it can be useful. For example, if you are looking for a function for solving LP optimisation problems, the command

lookfor ' LP '

produces the output:

e04mf                          - : LP problem (dense)
e04nk                          - : LP or QP problem
                                  (sparse)
e04nq                          - : LP or QP problem
                                  (suitable for sparse problems)
h02bb                          - : Integer LP problem (dense)
h02bv                          - : Print IP or LP solutions with
                                  user-specified names for rows and columns
h02ce                          - : Integer LP or QP problem
                                  (sparse), using e04nk
nag_mip_ilp_dense              - : Integer LP problem (dense)
nag_mip_ilp_print              - : Print IP or LP solutions with
                                  user-specified names for rows and columns
nag_mip_iqp_sparse             - : Integer LP or QP problem
                                  (sparse), using e04nk
nag_opt_lp_solve               - : LP problem (dense)
nag_opt_qpconvex1_sparse_solve - : LP or QP problem 
                                  (sparse)
nag_opt_qpconvex2_sparse_solve - : LP or QP problem
                                  (suitable for sparse problems)

Each function name in the output is a link to its ASCII help text; for example, the first link produces:

e04mf: LP problem (dense)
  
Syntax: 
  
 [istate, x, iter, obj, ax, clamda, lwsav, iwsav, rwsav, ifail] =
                 e04mf(a, bl, bu, cvec, istate, x, lwsav,
                         iwsav, rwsav, 'n', n, 'nclin', nclin)
  
Further documentation is available in the NAG Toolbox help files.

The link at the end of the help text of each function uses the nag_doc command described above to show the help in the MATLAB web browser.

Documentation on the NAG Web Site

One advantage of the new documentation format is that it is designed to work in a standard web browser. The old documentation did not work well in standard browsers as it had so many features tuned to the MATLAB help system. So for this release we have made the HTML as well as the PDF version of the documentation available from our website. The JavaScript used on the pages hides the MATLAB-specific links, such as 'Open in the MATLAB editor', when the pages are read via the web.