## Tuesday, 17 June 2014

### Secrets of HPC Procurement

As I mention at the bottom of that article, NAG can help make the process of buying HPC systems much better than the worrying case it describes.

For example, the tutorial I ran at SC13 with Terry Hewitt titled "Effective Procurement of HPC Systems" was very popular (~100 attendees). We have provided similar training as part of consulting engagements and we are now looking at running the tutorial again as an open short course.

## Thursday, 10 April 2014

### Testing Matrix Function Algorithms Using Identities

*Edvin Deadman and Nick Higham (University of Manchester) write:*

In an earlier post on this blog we discussed the *forward error* (how far from the actual solution are we?) and the *backward error* (what problem have we actually solved?), and how we'd like the backward error to be close to the unit roundoff, *u*.

We also suggested using identities such as cos²*A* + sin²*A* = *I* to test algorithms. In practice, rather than *I*, we might find that we obtain a matrix *R* close to *I*, perhaps with ||*R* − *I*|| ≈ 10⁻¹³. What does this tell us about how the algorithms for sin *A* and cos *A* are performing? In particular, does it tell us anything about the backward errors? We've just written a paper which aims to answer these questions. This work is an output of NAG's Knowledge Transfer Partnership with the University of Manchester, so we thought we'd blog about it here.

As an example, consider the identity exp(log *A*) − *A* = 0. Suppose that when we evaluate the left-hand side in floating point arithmetic we get a nonzero residual *R* rather than 0. We'll assume that this residual is caused by some backward errors *E*₁ and *E*₂ so that exp(log(*A* + *E*₁) + *E*₂) − *A* = *R*. We'd like to investigate how big *R* can be when *E*₁ and *E*₂ are small, so we expand the left-hand side in a Taylor series to linear order. After a bit of algebra, the result is a *linear operator* relating *R* to *E*₁ and *E*₂: *R* = *L*(*E*₁, *E*₂). The operator is different for each identity considered, but it always involves the Fréchet derivatives of the matrix functions in the identity (the full gory details, including formulae for the linear operators associated with various identities, are in our paper).
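The first-order role of the Fréchet derivative can be sketched numerically for the exponential, using SciPy's built-in routine (illustrative only, not the paper's code):

```python
import numpy as np
from scipy.linalg import expm, expm_frechet

# Random matrix A and a small perturbation E (illustrative).
rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
E = 1e-8 * rng.standard_normal((4, 4))

# F = expm(A); L is the Frechet derivative L_exp(A, E), the first-order
# change in expm(A) under the perturbation E.
F, L = expm_frechet(A, E)

# To linear order, expm(A + E) = expm(A) + L_exp(A, E) + O(||E||^2).
err = np.linalg.norm(expm(A + E) - (F + L), 1)
```

The leftover `err` is second order in the perturbation, which is exactly the approximation made when the identity is expanded to linear order.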

We can use this linear operator in two ways. The first is to compute the maximum value of ||*R*|| consistent with backward errors of size *u*. The second option is to use the linear operator to explicitly estimate *E*₁ and *E*₂. This works just as well, but it is more expensive.

We applied this approach to a test matrix in IEEE double precision arithmetic, where *u* = 2⁻⁵³ ≈ 10⁻¹⁶ (the unit roundoff). It's known that the Schur-Parlett general-purpose matrix function algorithm struggles with this matrix, whereas algorithms based on scaling and squaring have no such problems. Does our approach reflect this behaviour? Using the identity e^*A* e^{−*A*} = *I*, the maximum normwise residual consistent with backward stability is found to be 7.1 × 10⁻¹⁵. The Schur-Parlett approach gives a normwise residual of 7.7 × 10⁻¹¹, but the scaling and squaring algorithm gives a residual of 2.2 × 10⁻¹⁵. Consistent with this, the backward error estimates are 2.5 × 10⁻¹¹ and 4.5 × 10⁻¹⁶ respectively. So our approach has correctly identified the preferred algorithm.
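The residual in the identity e^A e^{−A} = I can be sketched with SciPy's `expm` (a hypothetical random test matrix, not the one discussed above):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative test matrix (not the matrix used in the post).
rng = np.random.default_rng(42)
A = 0.5 * rng.standard_normal((4, 4))

# Residual in the identity e^A e^{-A} = I.
R = expm(A) @ expm(-A)
residual = np.linalg.norm(R - np.eye(4), 1)

u = 2.0 ** -53  # unit roundoff in IEEE double precision
```

Whether such a residual indicates backward stability depends on the maximum residual consistent with backward errors of size `u`, which is what the linear-operator analysis supplies.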

## Friday, 21 March 2014

### The Wilkinson Prize for Numerical Software

The 2015 prize will be awarded at the International Congress on Industrial and Applied Mathematics (ICIAM) in Beijing, China, in August 2015. Entries must be received by July 1, 2014. Additional details on the Wilkinson Prize for Numerical Software and the official rules can be found at http://www.nag.co.uk/other/WilkinsonPrize.html

Submissions can be sent by email to wilkinson-prize@nag.co.uk; contact this address for further information.

Previous prizes have also been awarded at ICIAM:

1991 - Linda Petzold for DASSL

1995 - Chris Bischof and Alan Carle for ADIFOR 2.0

1999 - Matteo Frigo and Steven Johnson for FFTW

2003 - Jonathan Shewchuk for Triangle

2007 - Wolfgang Bangerth, Ralf Hartmann and Guido Kanschat for deal.II

2011 - Andreas Waechter and Carl D. Laird for Ipopt

Mike Dewar, Maurice Cox, Jorge Moré

Board of Trustees

NAG, NPL, ANL

## Thursday, 6 February 2014

### C++ wrappers for the NAG C Library

**Motivation**

Occasionally, we receive requests to make the NAG C Library easier to call from C++. In the past, we found it difficult to build something that would work across all of the code our C++ users write. With the advent of the C++11 standard, many of the key features of the widely used Boost library have been incorporated into the STL, and these finally provide a standardized way to address many of the difficulties we've encountered (the code we describe here works with Visual Studio 2010 and later, as well as several different versions of the Intel compiler and gcc).

We have created example wrappers that can serve as templates for creating C++ wrappers around NAG functions. Specifically, the examples now allow the user to:

1) Pass function pointers, functors, class member functions and lambda functions as callbacks to the NAG library.

2) Use raw pointers, smart pointers, STL containers or boost containers to store data and pass these to the NAG library.

A note: these are NOT a C++ interface, but merely wrappers around the C Library.

**Inside the Wrappers**

Let's take a look at the prototype for one function. Here is the new signature for a NAG minimization routine:

void e04abcpp(const std::function<void(double,double*,NagComm*)> &callback, double e1, double e2, double *a, double *b, Integer max_fun, double *x, double *f, NagComm *user, NagError *error);

There are two differences from the NAG C routine; the callback function and the NagComm class (which we will discuss in a bit). Using the above signature, the NAG Library can be called via a function, a pointer to a function, a function object with a common interface, or a class member function. For example, the user could have the following NAG calls:

e04abcpp(MyFunctor(), e1, e2, &a, &b, max_fun, &x, &f, &comm, NULL);

e04abcpp([&](double x,double*fc,NagComm*comm) {

return myclass.myFunc(x,fc,comm);

}, e1, e2, &a, &b, max_fun, &x, &f, &comm, NULL);

Here, MyFunctor and myclass.myFunc are the callbacks that return values of our objective function. In order to pass these to the C Library, we need a structure to evaluate and store them. The natural way to do this is to extend the NAG communication class. Since we don't want a new NAG Comm class affecting the underlying Library, we make the first member the old comm structure (a pointer to a structure can be treated as a pointer to the first member of the structure). We can then place our std::function object and new callback inside this structure:

class NagComm // New Comm Class

{

Nag_Comm comm; // Old Comm Class

friend void e04abcpp(const std::function<void(double,double*,NagComm*)> &callback, double e1, double e2, double *a, double *b, Integer max_fun, double *x, double *f, NagComm *user, NagError *error);

public:

void * puser;

private:

// A pointer to the C++11 std::function object

const std::function<void(double,double*,NagComm*)> * callbackCPP11;

static void NAG_CALL e04abDelegateCPP11(double xc, double *fc, Nag_Comm *comm) {

// Safe cast since access to this routine is controlled

NagComm * user = (NagComm*)comm;

assert(user->callbackCPP11);

// Call the function polymorphically

(* (user->callbackCPP11) )(xc, fc, user);

}

public:

NagComm() {

puser = NULL;

callbackCPP11 = NULL;

}

};

Once we have the new Comm class, we can write the C++ wrapper function:

void e04abcpp(const std::function<void(double,double*,NagComm*)> &callback, double e1, double e2, double *a, double *b, Integer max_fun, double *x, double *f, NagComm *user, NagError *error)

{

// This is the only place where NagComm::callbackCPP11 is assigned

user->callbackCPP11 = &callback;

e04abc(NagComm::e04abDelegateCPP11, e1, e2, a, b, max_fun, x, f, (Nag_Comm*)user, error);

}

__Smart Pointers/Containers__

Want to pass different sets of pointers and containers into NAG routines? These are easily handled as well, for example in the following wrapper to g01amc:

template<typename RV, typename Q, typename QV>

void g01amcpp(Integer n, RV && rv, Integer nq, Q && q, QV && qv, NagError *error)

Users can call the function using STL containers, boost types or smart pointers:

double rv[] = {.5, .729, .861, .44, .791, .001, .062, .912, .27, .141, .32, .133, .654, .285, .553, .438, .316, .696, .718, .293, .704, .029};

std::vector<double> q={0.0, .25, .73, .9, 1.0};

std::unique_ptr<double[]> qv(new double[nq]);

g01amcpp(n, rv, nq, q, qv, NULL);

The g01amcpp wrapper is defined as follows:

template<typename RV, typename Q, typename QV>

void g01amcpp(Integer n, RV && rv, Integer nq, Q && q, QV && qv, NagError *error)

{

// Get primitive pointers from input types.

double * p_rv = nagprivate::getPtr(rv);

const double * p_q = nagprivate::getPtr(q);

double * p_qv = nagprivate::getPtr(qv);

g01amc(n, p_rv, nq, p_q, p_qv, error);

}

template<typename T> auto getPtr(T & p) -> decltype( &p[0] )

{

return &(p[0]);

}

Note: the wrapper above assumes that the pointer/container classes implement operator[] and use contiguous arrays for internal storage. The wrapper will most likely not work for things such as linked lists.

**Code Availability**

We have created some example templates, and more will be available over time, so keep an eye on the NAG website and the bottom of this blog for updates. The examples are tested and work with VS10, g++ 4.7, and several versions of the Intel compilers. The wrappers are available in source form and can be compiled at the command line as follows:

Windows:

$ cl -EHsc /MT -I"C:\Program Files\NAG\CL23\clw6i23dal"\include e04ab.cpp /link /LIBPATH:"C:\Program Files\NAG\CL23\clw6i23dal"\lib "C:\Program Files\NAG\CL23\clw6i23dal"\lib\nagc_nag_MT.lib user32.lib

Linux:

$ g++ -std=c++11 e04ab.cpp -I/opt/NAG/cll6a23dgl/include/ /opt/NAG/cll6a23dgl/lib/libnagc_nag.a -lpthread -lm -o e04ab.exe

**Feedback**We would be interested in hearing feedback from users. You can leave a comment below or email support@nag.co.uk with the subject 'NAG and C++ wrappers'.

**March 14 Update**

We now have 14 examples, including optimization, matrix functions, nearest correlation matrices, and quantile analysis, available via the links above.

## Thursday, 16 January 2014

### Out and about this week – The London Thalesians Seminar

The Thalesians are a **“think tank of dedicated professionals with an interest in quantitative finance, economics, mathematics, physics and computer science”**. Brian was delighted to present "Implied Volatility using Python's Pandas Library" at their recent London Seminar on Wednesday 15 January 2014.

Brian Spector presenting "Implied Volatility using Python's Pandas Library" at the recent London Thalesians Seminar

## Thursday, 28 November 2013

### Using the NAG Compiler with the NAG Fortran Library (Mark 24) on Windows

#### Blog written by David Sayers, NAG Principal Technical Consultant

### Introduction

As I am a Windows user, I am able to make use of the Integrated Development Environment (IDE) for the compiler that is provided to our Windows users. We call this IDE ‘NAG Fortran Builder’. One of the nice features of this IDE is its ease of use with the prevailing NAG libraries. To do this the user normally specifies a ‘NAG Library Project’ at creation time; thereafter the relevant settings are made to the compiler, so that either the Windows 64 bit or Windows 32-bit DLLs are used.

At points in the NAG release cycle, a new version of Fortran Builder is released, and it automatically picks up the then-current NAG libraries. Information about these libraries is embedded within Fortran Builder. This information includes documentation, interface blocks and example program information, all of which is mark-dependent. Subsequently a new NAG Library is released, and so our Windows users will want to use the latest library from Fortran Builder. Some guidance is given on this within the relevant Users' Note. This note attempts to gather together and expand on this information.

### New Fortran Builder Projects

To use the Windows 64-bit DLL (FLW6I24DC), set:

- Include directory: *install dir*\nag_interface_blocks_nagfor (where *install dir* should be replaced with the full path to the NAG Library installation directory on your machine. Note that you should not put any quotation marks around the directory name even though it may include spaces)
- Link library: *install dir*\bin\FLW6I24DC_nag.dll (note that you must link to the DLL itself, not the associated import library)

To use the Windows 32-bit DLL (FLDLL244M), set:

- Include directory: *install dir*\nag_interface_blocks_nagfor (again, no quotation marks around the directory name even though it may include spaces)
- Link library: *install dir*\lib\FLDLL244M_mkl.lib

In both cases, if you build your project in Debug mode (the default), it is not possible to use the

*Undefined variables*option which is accessible on the Fortran Compiler / Runtime Check tab of Project Settings. This is because the NAG Library was

*not*compiled with this option.

Also in both cases, should you want to link to the NAG Library in all future Console Application projects, tick the Set as Default option.

Please note that if you had decided to set these new library settings as defaults and you subsequently form a new ‘NAG_Library Application’ project by mistake then numerous warning messages will be generated on compilation. This is because of incompatibilities with the interface blocks. You may recover from this by unticking the ‘Use the NAG Fortran Library’ box under Project Settings/Basic Settings. If Fortran Builder is unable to find a supported NAG Library then it won’t let you create a NAG Library Application project.

### Existing Fortran Builder Projects

If a Fortran Builder project already exists and has linked to the NAG Library then the chances are that it will be a ‘NAG Library Application’ project. If you still have the Mark 23 libraries on your system then these are the libraries that will be accessed and used.

To link such a project to the Mark 24 libraries instead:

- Untick the ‘Use the NAG Fortran Library’ box under Project Settings/Basic Settings.
- Now repeat steps b) to g) or b) to h) for the relevant library as described above.

### Updating the NAG Help file available from Fortran Builder

You will require administrator privileges to do this, and you may well decide not to tamper in this manner, but it is possible to access the Mark 24 Help file directly from Fortran Builder. The process involves copying the help file provided with your library, nagdoc_fl24.chm into the bin directory of Fortran Builder. This location is typically C:\Program Files (x86)\NAG\EFBuilder 5.3.1\bin . We suggest you rename the existing help file nagdoc_fl23.chm to xnagdoc_fl23.chm. Now copy nagdoc_fl24.chm from the library into the .\bin directory and trick Fortran Builder by renaming nagdoc_fl24.chm to nagdoc_fl23.chm.

### Updating the example program templates

Any tampering with the Fortran Builder files to access the Mark 24 example templates is even less recommended than the procedure for the documentation outlined above. The relevant directories are below C:\Program Files (x86)\NAG\EFBuilder 5.3.1\bin\resource\fldll234m.

## Tuesday, 26 November 2013

### NAG at SC13

*A short summary of news from or about NAG at SC13.*

**Quick fact:** NAG is one of very few organizations that have been at every SC since the series started.

**Attacking the HPC skills need: NAG Supercomputing Support Service passes milestone of 2000 course attendees**

NAG announces the latest milestone in equipping the scientific computing community with the skills required to effectively exploit HPC systems - over 2000 attendees have now benefitted from NAG's highly rated HPC training courses under the UK's national Computational Science and Engineering (CSE) Support Service. Read full press release ...

**Tutorial on HPC procurement:** Room was packed!

**New Services to underpin and innovate your numerical computing revealed by NAG**

NAG revealed new services to enable users of numerical computing to benefit from NAG's 4 decades of experience and expertise in numerical solutions. NAG has delivered proven solutions in this area through its trusted Numerical Libraries for over 4 decades. In response to customer interest, NAG is now launching its Numerical Services to deliver the expertise and experience behind these highly reputed Libraries directly to developers and users of numerical computing applications - whether they use NAG products or not. This follows the success of NAG's HPC Services business and enables customer access to the full range of NAG experts - numerical analysts, computer scientists, mathematicians, algorithm developers, software engineers, and more. Read full press release ...

**NAG staff member on Scientific Computing Sound Byte:** Nechelle Dismer on Seeing Two Sides of the Conference

**More HPC Innovation Awards for NAG at SC13**

NAG has been awarded another two HPC Innovation Excellence Awards at SC13, Denver. The awards recognised NAG's HPC software innovations on two projects, CABARET and INCOMPACT 3D, undertaken by NAG's Computational Science and Engineering (CSE) Service, part of the UK's HECToR national supercomputing service. These two projects recognised by the HPC Innovation Awards at the world's largest supercomputing conference are among over 50 similarly successful application performance improvement projects within NAG's HECToR CSE Service. NAG is delighted to have its world class HPC Software Services acknowledged in this way for the second time. Read full press release ...

**HPC Quiz:** How Do Your SC13 Credentials Stack Up? (by @hpcnotes for HPC Wire)

**NAG to broaden 64-bit ARMv8-A ecosystem**

NAG announces a new technical collaboration with ARM, the world's leading semiconductor IP supplier. NAG's highly skilled team of HPC experts, numerical analysts and computer scientists will ensure the algorithms in the NAG Numerical Library and the facilities of the NAG FORTRAN Compiler are available for use on ARM's 64-bit ARMv8-A architecture-based platforms. By working with NAG, ARM is greatly enhancing the strong HPC infrastructure for ARMv8-A architecture through the enablement of numerical computation at its release. Read full press release ...

**And looking to the future:**

**SC14** will be in New Orleans, **SC15** in Austin and **SC16** in Salt Lake City.

*See you there!*

## Tuesday, 15 October 2013

### Implied Volatility using Python's Pandas Library

Below is an example using nag4py and the pandas library to calculate the implied volatility of options prices. All the code below can be downloaded to calculate your own implied volatility surface for data on the Chicago Board Options Exchange website.

**Background on Implied Volatility**

The famous Black Scholes formula for pricing a Call/Put option on a stock is a function of six variables: Underlying Price, Interest Rate, Dividends, Strike Price, Time-to-Expiration, and Volatility. Note that for a given option contract we can observe the Underlying Price, Interest Rate, and Dividend Rate. In addition, the options contract specifies the Strike Price and Time-to-Expiration.

Thus the one variable we have to tweak is the volatility. We can then ask: for what *volatility\** does the Black Scholes price equal the market price? That is,

F(*volatility\**) = Market Option Price

This *volatility\** is then denoted the implied volatility observed in the market. Since the Black Scholes equation is a continuous function of volatility on (0, 1), we can use a NAG root finder to locate such a *volatility\**. In fact, we will use a couple of NAG functions: nag_zero_cont_func_brent will find the root using Brent's algorithm, nag_bsm_price will calculate the theoretical option price, nag_2d_cheb_fit_lines will perform a least squares Chebyshev fit to the volatility surface, and nag_2d_cheb_eval will evaluate the surface at intermediate points.
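The root-finding step can be sketched without the NAG Library, using SciPy's Brent solver in place of nag_zero_cont_func_brent and a hand-written Black Scholes call price in place of nag_bsm_price (an illustrative sketch, not the downloadable script):

```python
import math
from scipy.optimize import brentq
from scipy.stats import norm

def bs_call(sigma, s, k, t, r):
    """Black Scholes price of a European call, no dividends (illustrative)."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    return s * norm.cdf(d1) - k * math.exp(-r * t) * norm.cdf(d2)

def implied_vol(market_price, s, k, t, r):
    """Solve F(volatility*) = market price with Brent's method."""
    return brentq(lambda sig: bs_call(sig, s, k, t, r) - market_price, 1e-6, 5.0)

# Round-trip check: recover the volatility used to generate a price.
price = bs_call(0.2, 100.0, 100.0, 1.0, 0.05)
vol = implied_vol(price, 100.0, 100.0, 1.0, 0.05)
```

Brent's method only needs the pricing function to be continuous and to bracket the root, which is why it is a natural fit here.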

**Running the Script**

We have created an example script that will run on data from the Chicago Board Options Exchange (CBOE) website. The CBOE provides options data in downloadable format here.

The program uses the pandas package to easily store and manipulate the data via DataFrames. In the script, you'll notice pandas takes care of the many data processing functions and then calls the NAG Library for more complex analysis. The program will automatically read in the options data, calculate implied volatility for the call and put options, and plot the volatility curves and surface, given that you have pandas, matplotlib, nag4py, and ctypes installed.

**Results**

I ran the program on Apple options data (SYM AAPL). Below is the output, generated in 10 seconds with roughly 500,000 calls to the NAG Library:

Implied Volatility for AAPL (APPLE INC),500.3691,+4.3291, Oct 15 2013 @ 15:25 ET,Bid,500.32,Ask,500.48,Size,3x1,Vol,10041069,

Figure 1: Volatility Curves

Below is the volatility surface (plotting Strike, Expiration, and Implied Volatility from Figure 1 on the same graph).

Figure 2: Volatility Surface

So go ahead, download any options data the CBOE provides and plot your own volatility surface! Many thanks to John Morrissey and Chris Seymour for code suggestions.

**Update**: Download data during CBOE trading hours to ensure the graphs are not null.

## Wednesday, 25 September 2013

### NAG Toolbox for MATLAB^{®} documentation for the Mark 24 release

*A look at the documentation features in the new, Mark 24, release of the NAG Toolbox for MATLAB ^{®}.*

### MATLAB Documentation Help Center

Recent versions of MATLAB have a new documentation system known as the *Help Center*. This, however, is restricted to products from MathWorks; third party extension toolboxes such as our NAG Toolbox *for MATLAB^{®}* are now documented in a separate help application for *Supplemental Software*. This has necessitated changes in the way the NAG Toolbox documentation is packaged and installed, and we have taken the opportunity to update the system generally. This post explores some of the new features.

### Installation Directory

Previous releases of the Toolbox have, by default, installed in the `toolbox` directory of the MATLAB installation on the user's machine. Due to changes in the MATLAB system this *does not work* for releases after R2012a. So by default the new Mark 24 release of the NAG Toolbox installs under a top level NAG directory in a similar location to our other Library products. The location may be changed as an option on installation; any location may be used that is *not* in the MATLAB tree. Note that this documentation takes up considerably more space than the previous version, as previously it was distributed as a `jar` file (which is a zip compressed archive) but MATLAB no longer supports jar archive help files.

### Rendering Technology

Earlier MATLAB releases used a Java-based HTML rendering application. This had the benefit that it was consistent across all MATLAB platforms, but it was a relatively old technology and its support for more modern CSS and JavaScript features was somewhat variable. This meant that the display of mathematics was particularly tricky, and the usual MathML display in our library products was downgraded in our Toolbox documentation to whatever combination of nested tables and font changes worked.

The current MATLAB releases use a new rendering agent (based on JxBrowser) that is a Java wrapper (as the MATLAB GUI is all Java based) around a modern Web browsing shared library. On Windows it uses the system installation of mshtml (so is essentially Internet Explorer), on Macintosh it uses the system WebKit library (so is essentially Safari), and on Linux it is shipped with a Gecko library (so is essentially Firefox). This means that all modern “HTML5” techniques (HTML, JavaScript and CSS) may be used, but it does mean that the same browser differences that complicate website development now apply also to MATLAB help. In particular we have now chosen to use the MathJax JavaScript library and SVG to render the mathematics. The difference may be seen in the following example (from c06faf).

**Old Rendering**

ẑ_{k} = (1/√n) ∑_{j = 0}^{n − 1} x_{j} × exp(−i(2πjk)/n),  k = 0, 1, …, n − 1.

While the old rendering was legible, hopefully most people will find the new rendering a great improvement, especially if there are nested superscripts and square roots etc. The new documents in fact contain both sets of markup, defaulting to the old one, and the new MathJax/SVG rendering is only enabled if a sufficiently new browser is detected. Mainly this will only affect Windows users, where Internet Explorer 9 or higher is required for SVG support.
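As a quick numerical check of the transform shown above (using NumPy rather than the NAG routine c06faf):

```python
import numpy as np

# The forward transform written directly from the formula:
# z_k = (1/sqrt(n)) * sum_j x_j * exp(-i*2*pi*j*k/n)
n = 8
x = np.arange(1.0, n + 1)
z = np.array([x @ np.exp(-2j * np.pi * np.arange(n) * k / n)
              for k in range(n)]) / np.sqrt(n)

# NumPy's FFT uses the same sign convention, up to the 1/sqrt(n) scaling.
ok = np.allclose(z, np.fft.fft(x) / np.sqrt(n))
```

The only difference from `np.fft.fft` is the 1/√n normalization used in the formula.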

### Finding the Documentation

Several people using the previous Toolbox have had difficulty finding the Supplemental Software help. Initially on installing MATLAB there is no access to it in the MATLAB GUI, however after the NAG (or other third party) Toolbox is installed, MATLAB detects this and adds a new link at the bottom of the home screen of the main Help Center (that can be reached from the **?** button or choosing `help` from the menu or pressing the F1 key):

The Supplemental Software help looks familiar to users of previous MATLAB releases, with a folding table of contents and search box in the left pane and the documentation in the right. From R2013a (at least) the rendering uses the same JxBrowser based rendering as the main Help Center. (The first version of the supplemental help browser in R2012a was essentially a copy of the old help system.) The search box here is particularly important as sadly there is no command-line search available: the standard `docsearch` command is now limited to MathWorks products.

### Documentation for 32 and 64 bit Versions

Previous releases of the Toolbox have had completely separate documentation for the implementations using 32 or 64 bit integer types. Partly as a result of MathWorks discontinuing support for 32 bit Linux, and partly as the `int64` type now admits all the same operations as the `int32` type, we have now combined the documentation. There is a link at the top of each page to toggle descriptions between `int32`, `int64` and the `nag_int` type that works on either platform. This is initially set to `int32` or `int64` depending on the implementation. Unfortunately, testing showed that in some browser configurations the CSS to switch between the types interfered with cutting and pasting the example code, so in this release the link does not affect the examples, which are always `int32` or `int64` depending on the implementation. Note the displayed results are always from the 64 bit Linux implementation.

Related to this, the PDF version of the documentation is now only produced for the 64 bit version, as the changes for `int32` are entirely trivial. (Just replace `int64` by `int32` throughout.)

### Access to the Examples

In previous releases, the example programs for each routine have only been available as part of the documentation. For simple examples they could be cut and pasted directly into the MATLAB command window; however, syntactic restrictions on MATLAB function definitions meant that any examples defining functions had to be first copied to a `.m` file before being executed, as you cannot define functions at the top-level MATLAB command prompt.

In this release all example programs are provided as executable `.m` files as well as appearing in the HTML documentation. They are written to use `nag_int` so they work in 32 and 64 bit implementations. The most convenient way to access these is the link **Open in the MATLAB editor** that is just above each example's documentation. Clicking on this link (if the MATLAB help or web browser is being used) will open the MATLAB editor; clicking on the green arrow *run* button will then execute the example in the MATLAB Command Window.

The following screen-shot highlights the process of linking from the documentation in the Supplemental help browser, to the MATLAB editor, to running the program and producing results in the MATLAB Output window and additional MATLAB plots.

### Additional Utility Functions

##### nag_doc

`nag_doc` *name* opens the documentation for the function *name* in the MATLAB web browser. *name* may be the long or short name of a function in the NAG Toolbox. If *name* is omitted, the browser is opened at the start page of the NAG Toolbox documentation.

Unfortunately it is not possible to directly open the help browser at a function document; however, the web browser offers the same facilities apart from the search facility.

If HTML documentation for a command is not found, then the output from the `help` function is returned.

##### nag_demo

`nag_demo` uses the standard `demo` function to open the demo browser for the NAG Toolbox. This also forms a convenient command-line way to open the Supplemental Software browser, as the Demo Browser is the same application and you may navigate from there to the documentation just using the table of contents pane.

### Searching the Documentation

As previously noted, the MATLAB `docsearch` command is no longer available to search the NAG Toolbox documentation. The full text index is still generated and is used by the search box in the Supplemental Software GUI, so this is the fastest way to search the documentation. Unfortunately the Java classes for this search are not exposed by MATLAB, so it is not currently possible to make a `nag_docsearch` command-line version of this.

The standard MATLAB `lookfor` command may be used. This only searches the first line of the ASCII comments in the help files rather than the full documentation, and it does not benefit from a pre-indexed search database; however, it can be useful. For example, if you are looking for a function for solving LP optimisation problems,

lookfor ' LP '

produces the output:

e04mf - : LP problem (dense)

e04nk - : LP or QP problem (sparse)

e04nq - : LP or QP problem (suitable for sparse problems)

h02bb - : Integer LP problem (dense)

h02bv - : Print IP or LP solutions with user-specified names for rows and columns

h02ce - : Integer LP or QP problem (sparse), using e04nk

nag_mip_ilp_dense - : Integer LP problem (dense)

nag_mip_ilp_print - : Print IP or LP solutions with user-specified names for rows and columns

nag_mip_iqp_sparse - : Integer LP or QP problem (sparse), using e04nk

nag_opt_lp_solve - : LP problem (dense)

nag_opt_qpconvex1_sparse_solve - : LP or QP problem (sparse)

nag_opt_qpconvex2_sparse_solve - : LP or QP problem (suitable for sparse problems)

Where each function name is a link to its ascii help text, for example the first link produces:

e04mf: LP problem (dense)

Syntax: [istate, x, iter, obj, ax, clamda, lwsav, iwsav, rwsav, ifail] = e04mf(a, bl, bu, cvec, istate, x, lwsav, iwsav, rwsav, 'n', n, 'nclin', nclin)

Further documentation is available in the NAG Toolbox help files.

The link at the end of the help text of each function uses the `nag_doc` command described above to show the help in the MATLAB web browser.

### Documentation on the NAG Web Site

One advantage of the new documentation format is that it is designed to work in a standard web browser. The old documentation did not work well in standard browsers as it had so many features that were tuned to the MATLAB help system. So for this release we have the HTML as well as the PDF version of the documentation available from our website. The JavaScript used on the pages hides the MATLAB specific links such as *Open in the MATLAB Editor* when the pages are read via the Web.

## Wednesday, 18 September 2013

### How do I know I'm getting the right answer?

Given a matrix *A* stored in floating point, there is no guarantee that, for example, its square root *A*^{1/2} can be represented exactly in floating point arithmetic. In addition, rounding errors introduced early on can propagate through the computation so that their effects are magnified. We'd like to be able to take this sort of behaviour into account when we test our software.

When we compute a matrix function *f*(*A*), the computed solution *F* can be written as *F* = *f*(*A*) + *∆f*, where *∆f* is known as the *forward error*. In some cases it may be possible to compute the forward error, by obtaining *f*(*A*) via some other means (for example, analytically, or by using extended-precision arithmetic). Usually we are interested in the norm of *∆f*, which we scale by the norm of *f*(*A*) to form a relative forward error. As an example, consider the following matrix, whose exponential is known explicitly:

This matrix *A* was used as a test matrix in the 2009 version of the scaling and squaring algorithm for the matrix exponential [2]. With this algorithm, we obtain a relative forward error of 2.5 × 10⁻¹⁶. If we compute the exponential of *A* using the 2005 version of the scaling and squaring algorithm [1], then the relative forward error is about 3.0 × 10⁻¹². Clearly the newer algorithm is giving a better result, but how do we decide whether these forward errors are small enough?
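The relative forward error computation can be sketched for a matrix whose exponential is known by other means (a hypothetical diagonal example using SciPy, not the test matrix from this post):

```python
import numpy as np
from scipy.linalg import expm

# A matrix whose exponential is known analytically (diagonal; illustrative).
A = np.diag([1.0, 2.0, 3.0])
exact = np.diag(np.exp([1.0, 2.0, 3.0]))

# Relative forward error of the computed exponential.
F = expm(A)
rel_forward_error = np.linalg.norm(F - exact, 1) / np.linalg.norm(exact, 1)
```

On its own this number is hard to interpret; as discussed below, it needs to be compared with the condition number of the problem.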

One way to answer this question is to consider the *backward error*. We assume that the computed solution can also be written as *F* = *f*(*A* + *∆A*). Whereas the forward error tells me how far from the actual solution I am, the backward error *∆A* tells me what problem I have actually solved.

The forward and backward errors are linked by the *condition number* of the problem. Uncertainties in the input data, together with the limitations of floating point arithmetic, mean that a relative backward error in the region of the unit roundoff is the best that we should expect. Thus if we can estimate the condition number, then we can get an idea of what size of forward error is acceptable. If a problem is poorly conditioned, then the forward error could be very large, even though the backward error is small and the algorithm is performing stably. At NAG we've been developing routines for estimating condition numbers of matrix functions, based on algorithms developed at the University of Manchester.
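NAG's condition-number routines aren't shown here, but the idea can be sketched with SciPy's condition estimator for the matrix exponential (a hypothetical nonnormal matrix, not the one from the post):

```python
import numpy as np
from scipy.linalg import expm_cond

# Hypothetical nonnormal test matrix (illustrative only).
A = np.array([[-1.0, 1.0e4],
              [0.0, -2.0]])

# Relative condition number estimate for expm at A.
kappa = expm_cond(A)

u = 2.0 ** -53                # unit roundoff in IEEE double precision
acceptable_error = kappa * u  # forward errors up to roughly this size are expected
```

A forward error much larger than `kappa * u` suggests an unstable algorithm, while anything at or below it is consistent with a backward stable computation.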

Returning to our matrix *A*, we find that the condition number of the exponential is very large: 1.6 × 10¹⁵. The product of the condition number and the unit roundoff is about 1.7 × 10⁻¹, so it looks like both algorithms were actually performing quite well.

Another way of testing algorithms is to use identities such as exp(log(*A*)) = *A* or cos²*A* + sin²*A* = *I*. If we evaluate the left-hand side of such an identity and obtain a residual of, say, 10⁻¹⁴, is that acceptable or is it too large? We're currently working on some error analysis for such identities to answer this question, and hope to publish a paper on this soon.

[1] N.J. Higham, *The scaling and squaring method for the matrix exponential revisited*, SIAM J. Matrix Anal. Appl., 26 (2005), pp. 1179-1193.

[2] A.H. Al-Mohy and N.J. Higham, *A new scaling and squaring algorithm for the matrix exponential*, SIAM J. Matrix Anal. Appl., 31 (2009), pp. 970-989.

## Friday, 30 August 2013

### All the performance comes from parallel processing

In short, if your software does not exploit parallel processing techniques, then your code is limited to less than 2% of the potential performance of the processor. And this is just for a single processor - it is even more critical if the code has to run on a cluster or a supercomputer.

*This is why NAG provides help to users and developers: training in parallel programming techniques; software development services to parallelize and scale your applications; advice and consulting; and our libraries - parallelized and tuned for a range of processors and co-processors.*