Thursday, 28 November 2013

Using the NAG Compiler with the NAG Fortran Library (Mark 24) on Windows

Blog written by David Sayers, NAG Principal Technical Consultant 


The NAG Fortran Compiler is an excellent compiler for checking and running your Fortran code. We use it extensively here at NAG to ensure that our code for the library complies with the current Fortran standards.

Personally, whenever I have a user problem report that I can't resolve by inspection, my first instinct is to run the user's code through the Compiler. Frequently this identifies the error immediately.

As I am a Windows user, I am able to make use of the Integrated Development Environment (IDE) for the compiler that is provided to our Windows users. We call this IDE 'NAG Fortran Builder'. One of the nice features of this IDE is its ease of use with the current NAG libraries. The user normally specifies a 'NAG Library Project' at creation time; thereafter the relevant compiler settings are made so that either the Windows 64-bit or Windows 32-bit DLLs are used.

At points in the NAG release cycle a new version of Fortran Builder is released, and it automatically picks up the then-current NAG libraries. Information about these libraries (documentation, interface blocks and example program information, all of which is mark-dependent) is embedded within Fortran Builder. When a new NAG Library is subsequently released, our Windows users will naturally want to use it from Fortran Builder. Some guidance is given on this in the relevant Users' Note; this post gathers together and expands on that information.

New Fortran Builder Projects

Fortran Builder may be set to work in 64-bit or 32-bit mode and to link to the corresponding NAG Fortran libraries. Normally we expect these to be FLW6I24DCL or FLDLL244ML respectively and the rest of this note is written with these specific implementations in mind.

To use a Mark 24 Library with Fortran Builder 5.3.1 using FLW6I24DCL:

a) Open a Console Application (not a NAG Library Application)
b) Go to Project Settings via the Project menu
c) Click the Directories tab, then click the Include tab
d) Add the include directory install dir\nag_interface_blocks_nagfor (where install dir should be replaced with the full path to the NAG Library installation directory on your machine. Note that you should not put any quotation marks around the directory name even though it may include spaces)
e) Exit from the Directories tab, then click the Link tab
f) Add a link library, for example install dir\bin\FLW6I24DC_nag.dll (note that you must link to the DLL itself, not the associated import library)

g) Build the project and run your program in the usual way 

(If you have the 64-bit integer version of the Library, FLW6I24DDL, then the -i8 option must also be set under the Fortran Compiler / Additional Options tab.)

To use a Mark 24 Library with Fortran Builder 5.3.1 using FLDLL244ML:

a) Open a Console Application (not a NAG Library Application)
b) Go to Project Settings via the Project menu
c) Click the Directories tab, then click the Include tab
d) Add the include directory install dir\nag_interface_blocks_nagfor (note that you should not put any quotation marks around the directory name even though it may include spaces)
e) Exit from the Directories tab, then click the Link tab
f) Add a link library, for example install dir\lib\FLDLL244M_mkl.lib
g) Click the Basic Settings tab, and tick the DLL Compatibility check box - this turns on the -compatible flag to ensure that the compiler uses the same stdcall calling convention used to build the library. (This is a step not required in 64-bit mode where DLLs are already compatible.)
h) Build the project and run your program in the usual way 

In both cases, if you build your project in Debug mode (the default), it is not possible to use the Undefined variables option, which is accessible on the Fortran Compiler / Runtime Check tab of Project Settings. This is because the NAG Library was not compiled with this option.

Also in both cases, should you want to link to the NAG Library in all future Console Application projects, tick the Set as Default option.

Please note that if you have set these new library settings as defaults and you subsequently create a new 'NAG Library Application' project by mistake, numerous warning messages will be generated on compilation because of incompatibilities with the interface blocks. You may recover by unticking the 'Use the NAG Fortran Library' box under Project Settings/Basic Settings. If Fortran Builder is unable to find a supported NAG Library then it won't let you create a NAG Library Application project.

Existing Fortran Builder Projects

If a Fortran Builder project already exists and has linked to the NAG Library then the chances are that it will be a ‘NAG Library Application’ project. If you still have the Mark 23 libraries on your system then these are the libraries that will be accessed and used.

You may however wish to use the latest, Mark 24, Library, either to get an updated version of the relevant routines or to exploit the new functionality on offer with Mark 24. Under these circumstances the project needs to be changed.

Unfortunately this entails a number of steps for each project that you need to convert:

  • Untick the 'Use the NAG Fortran Library' box under Project Settings/Basic Settings.
  • Repeat steps b) to g) or b) to h) for the relevant library, as described above.
An alternative, if you have already set up the default console application to incorporate NAG Mark 24, is to simply re-create the project as a Console Application.

Updating the NAG Help file available from Fortran Builder

You will require administrator privileges to do this, and you may well decide not to tamper in this manner, but it is possible to access the Mark 24 Help file directly from Fortran Builder. The process involves copying the help file provided with your library, nagdoc_fl24.chm into the bin directory of Fortran Builder. This location is typically C:\Program Files (x86)\NAG\EFBuilder 5.3.1\bin . We suggest you rename the existing help file nagdoc_fl23.chm to xnagdoc_fl23.chm. Now copy nagdoc_fl24.chm from the library into the .\bin directory and trick Fortran Builder by renaming nagdoc_fl24.chm to nagdoc_fl23.chm.

This works, but unless you feel strongly that this is what you need, it is not a process we recommend. The help file can always be accessed from the appropriate library directory rather than from Fortran Builder.

Updating the example program templates

Any tampering with the Fortran Builder files to access the Mark 24 example templates is even less recommended than the procedure for the documentation outlined above. The relevant directories are below C:\Program Files (x86)\NAG\EFBuilder 5.3.1\bin\resource\fldll234m.

Each of the libraries is provided with example programs, data and results, and we suggest that users simply copy the ones they want into their project source directory. (It may seem obvious, but copying the files means the originals remain available for use another time.)

Tuesday, 26 November 2013

NAG at SC13

A short summary of news from or about NAG at SC13.

Quick fact: NAG is one of very few organizations that have been at every SC since the series started.

Attacking the HPC skills need: NAG Supercomputing Support Service passes milestone of 2000 course attendees
NAG announces the latest milestone in equipping the scientific computing community with the skills required to effectively exploit HPC systems - over 2000 attendees have now benefitted from NAG's highly rated HPC training courses under the UK's national Computational Science and Engineering (CSE) Support Service. Read full press release ...

Tutorial on HPC procurement: Room was packed!

New Services to underpin and innovate your numerical computing revealed by NAG
NAG revealed new services to enable users of numerical computing to benefit from the four decades of experience and expertise behind its trusted Numerical Libraries. In response to customer interest, NAG is now launching its Numerical Services to deliver that expertise directly to developers and users of numerical computing applications - whether they use NAG products or not. This follows the success of NAG's HPC Services business and enables customer access to the full range of NAG experts - numerical analysts, computer scientists, mathematicians, algorithm developers, software engineers, and more. Read full press release ...

NAG staff member on Scientific Computing Sound Byte: Nechelle Dismer on Seeing Two Sides of the Conference

More HPC Innovation Awards for NAG at SC13
NAG has been awarded another two HPC Innovation Excellence Awards at SC13, Denver. The awards recognised NAG's HPC software innovations on two projects, CABARET and INCOMPACT 3D, undertaken by NAG's Computational Science and Engineering (CSE) Service, part of the UK's HECToR national supercomputing service. These two projects recognised by the HPC Innovation Awards at the world's largest supercomputing conference are among over 50 similarly successful application performance improvement projects within NAG's HECToR CSE Service. NAG is delighted to have its world class HPC Software Services acknowledged in this way for the second time. Read full press release ...

HPC Quiz: How Do Your SC13 Credentials Stack Up? (by @hpcnotes for HPC Wire)

NAG to broaden 64-bit ARMv8-A ecosystem
NAG announces a new technical collaboration with ARM, the world's leading semiconductor IP supplier. NAG's highly skilled team of HPC experts, numerical analysts and computer scientists will ensure the algorithms in the NAG Numerical Library and the facilities of the NAG Fortran Compiler are available for use on ARM's 64-bit ARMv8-A architecture-based platforms. By working with NAG, ARM is greatly enhancing the strong HPC infrastructure for ARMv8-A architecture through the enablement of numerical computation at its release. Read full press release ...

And looking to the future: SC14 will be in New Orleans, SC15 in Austin and SC16 in Salt Lake City. See you there!

Tuesday, 15 October 2013

Implied Volatility using Python's Pandas Library

Python has some nice packages such as numpy, scipy, and matplotlib for numerical computing and data visualization. Another package that deserves a mention, and that we have seen increasingly, is Python's pandas library. Pandas has fast and efficient data analysis tools to store and process large amounts of data. Additionally, pandas is built on numpy, which, together with Python's ctypes module, allows easy integration with NAG's nag4py package.

Below is an example using nag4py and the pandas library to calculate the implied volatility of options prices. All the code below can be downloaded to calculate your own implied volatility surface for data on the Chicago Board of Options Exchange website.

Background on Implied Volatility

The famous Black Scholes formula for pricing a Call/Put option on a stock is a function of six variables: Underlying Price, Interest Rate, Dividends, Strike Price, Time-to-Expiration, and Volatility. For a given option contract we can observe the Underlying Price, Interest Rate, and Dividend Rate; in addition, the options contract specifies the Strike Price and Time-to-Expiration.

Thus the one variable we have to tweak is the volatility. We can then ask the question: for what volatility* does the Black Scholes price equal the market price?

F(volatility*)=Market Option Price

This volatility* is then the implied volatility observed in the market. Since the Black Scholes price is a continuous function of volatility, we can use a NAG root finder to locate such a volatility*. In fact, we will use a couple of NAG functions: nag_zero_cont_func_brent will find the root using Brent's algorithm, nag_bsm_price will calculate the theoretical option price, nag_2d_cheb_fit_lines will perform a least squares Chebyshev fit to the volatility surface, and nag_2d_cheb_eval will evaluate the surface at intermediate points.
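The same root-finding idea can be sketched without the NAG Library, using only the standard library; here bisection stands in for nag_zero_cont_func_brent and a hand-rolled Black Scholes formula stands in for nag_bsm_price (all names below are illustrative):

```python
import math

def bs_call_price(s, k, r, t, sigma):
    """Black Scholes price of a European call (no dividends)."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    d2 = d1 - sigma * math.sqrt(t)
    cdf = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))
    return s * cdf(d1) - k * math.exp(-r * t) * cdf(d2)

def implied_vol(market_price, s, k, r, t, lo=1e-6, hi=5.0, tol=1e-10):
    """Bisect on sigma until the model price matches the market price."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call_price(s, k, r, t, mid) < market_price:
            lo = mid  # model price too low: volatility must be higher
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Pricing a one-year at-the-money call with sigma = 0.2 and then inverting the price recovers that volatility to high accuracy, which is exactly the calculation the script performs with the NAG routines.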

Running the Script

We have created an example script that will run on data from the Chicago Board of Options Exchange (CBOE) website. The CBOE provides options data in downloadable format here.

The program uses the pandas package to easily store and manipulate the data via DataFrames. In the script, you'll notice pandas takes care of the many data processing functions and then calls the NAG Library for more complex analysis. The program will automatically read in the options data, calculate implied volatility for the call and put options, and plot the volatility curves and surface. The above code can be run as follows (given that you have pandas, matplotlib, nag4py, and ctypes installed):

python QuoteData.dat
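The pandas side of the script boils down to loading the quotes into a DataFrame and grouping by expiration before handing each chain to the root finder. A minimal sketch of that flavour follows; the column names and numbers are invented for illustration and do not match the real CBOE file layout:

```python
import pandas as pd

# Invented option quotes standing in for the parsed QuoteData.dat contents.
quotes = pd.DataFrame({
    "Expiration": ["Nov 13", "Nov 13", "Dec 13", "Dec 13"],
    "Strike":     [490.0, 500.0, 490.0, 500.0],
    "Bid":        [12.1, 6.4, 18.0, 12.9],
    "Ask":        [12.4, 6.6, 18.4, 13.2],
})

# Use the bid/ask midpoint as the market price fed to the root finder.
quotes["Mid"] = 0.5 * (quotes["Bid"] + quotes["Ask"])

# Each expiration gives one volatility curve, so fit them separately.
for expiry, chain in quotes.groupby("Expiration"):
    strikes = chain["Strike"].values  # x-values of the curve
    prices = chain["Mid"].values      # prices to invert for implied vol
```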


I ran the program on Apple Options data (SYM AAPL). Below is the output, generated in 10 seconds with roughly 500,000 calls to the NAG Library:

Implied Volatility for AAPL (APPLE INC),500.3691,+4.3291, Oct 15 2013 @ 15:25 ET,Bid,500.32,Ask,500.48,Size,3x1,Vol,10041069,
Calculating Implied Vol of Calls...
Calculated Implied Vol for 15800 Calls
Calculating Implied Vol of Puts...
Calculated Implied Vol for 15800 Puts
Plotting Volatility Curves/Surface

Figure 1: Volatility Curves (Click to Enlarge)
In the above picture we see some nice volatility skews/smiles that level out as the option expiration date increases. This occurs because investors demand higher premiums (and thus volatility) for options that are deep in or out of the money. As a result, implied volatility curves are oftentimes upward sloping, downward sloping, or U-shaped.

Below is the volatility surface (plotting Strike, Expiration, and Implied Volatility from Figure 1 on the same graph).

Figure 2: Volatility Surface (Click to Enlarge)
When performing the Chebyshev surface fit, two required inputs are k and l, the degrees of the polynomials in the x and y directions. I chose k = 3 and l = 3 for the above graph, but these can vary among stocks. It is best to first examine the volatility curves for various expiration dates (Figure 1) and then choose k and l appropriately.
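A rough one-dimensional illustration of why the degree choice matters, using numpy's Chebyshev fitting in place of the NAG 2D routines (the smile data below are synthetic):

```python
import numpy as np

# Synthetic U-shaped smile: quadratic in the strike.
strikes = np.linspace(450.0, 550.0, 21)
vols = 0.20 + 2e-5 * (strikes - 500.0) ** 2

# A degree-1 fit cannot capture the curvature; degree 3 can.
fit1 = np.polynomial.chebyshev.Chebyshev.fit(strikes, vols, deg=1)
fit3 = np.polynomial.chebyshev.Chebyshev.fit(strikes, vols, deg=3)

err1 = float(np.max(np.abs(fit1(strikes) - vols)))  # large residual
err3 = float(np.max(np.abs(fit3(strikes) - vols)))  # near machine precision
```

Examining the residuals this way for each expiration is a quick sanity check before settling on k and l.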

So go ahead, download any options data the CBOE provides and plot your own volatility surface! Many thanks to John Morrissey and Chris Seymour for code suggestions.

Update: Download data during CBOE trading hours to ensure the graphs are not null.

Wednesday, 25 September 2013

NAG Toolbox for MATLAB® documentation for the Mark 24 release

A look at the documentation features in the new, Mark 24, release of the NAG Toolbox for MATLAB®.

MATLAB Documentation Help Center

Recent versions of MATLAB have a new documentation system known as the Help Center. This, however, is restricted to products from MathWorks; third-party extension toolboxes such as our NAG Toolbox for MATLAB® are now documented in a separate help application for Supplemental Software. This has necessitated changes in the way the NAG Toolbox documentation is packaged and installed, and we have taken the opportunity to update the system generally. This post explores some of the new features.

Installation Directory

Previous releases of the Toolbox have, by default, installed in the toolbox directory of the MATLAB installation on the user's machine. Due to changes in the MATLAB system this does not work for releases after R2012a, so by default the new Mark 24 release of the NAG Toolbox installs under a top-level NAG directory in a similar location to our other Library products. The location may be changed as an option on installation; any location may be used that is not in the MATLAB tree. Note that this documentation takes up considerably more space than the previous version: previously it was distributed as a jar file (a zip-compressed archive), but MATLAB no longer supports jar archive help files.

Rendering Technology

Earlier MATLAB releases used a Java-based HTML rendering application. This had the benefit that it was consistent across all MATLAB platforms, but it was a relatively old technology and its support for more modern CSS and JavaScript features was somewhat variable. This meant that the display of mathematics was particularly tricky, and the usual MathML display in our library products was downgraded in our Toolbox documentation to whatever combination of nested tables and font changes worked. The current MATLAB releases use a new rendering agent (based on JxBrowser) that is a Java wrapper (as the MATLAB GUI is all Java based) around a modern web-browsing shared library. On Windows it uses the system installation of mshtml (so is essentially Internet Explorer); on Macintosh it uses the system webkit library (so is essentially Safari); and on Linux it is shipped with a gecko library (so is essentially Firefox). This means that all modern “HTML5” techniques (HTML, JavaScript and CSS) may be used, but it does mean that the same browser differences that complicate website development now apply also to MATLAB help. In particular we have now chosen to use the MathJax JavaScript library and SVG to render the mathematics. The difference may be seen in the following example (from c06faf).

Old Rendering

x̂_k = (1/√n) Σ_{j=0}^{n−1} x_j exp(i 2πjk/n),   k = 0, 1, …, n−1.

New Rendering

While the old rendering was legible, hopefully most people will find the new rendering a great improvement, especially if there are nested superscripts and square roots etc. The new documents in fact contain both sets of markup, defaulting to the old one, and the new MathJax/SVG rendering is only enabled if a sufficiently new browser is detected. Mainly this will only affect Windows users where Internet Explorer 9 or higher is required for SVG support.

Finding the Documentation

Several people using the previous Toolbox have had difficulty finding the Supplemental Software help. Initially on installing MATLAB there is no access to it in the MATLAB GUI; however, after the NAG (or other third party) Toolbox is installed, MATLAB detects this and adds a new link at the bottom of the home screen of the main Help Center (which can be reached from the ? button, by choosing help from the menu, or by pressing the F1 key):

The Supplemental Software help looks familiar to users of previous MATLAB releases, with a folding table of contents and search box in the left pane and the documentation in the right. From R2013a (at least) the rendering uses the same JxBrowser-based rendering as the main Help Center. (The first version of the supplemental help browser in 2012a was essentially a copy of the old help system.) The search box here is particularly important as, sadly, there is no command-line search available: the standard docsearch command is now limited to MathWorks products.

Documentation for 32 and 64 bit Versions

Previous releases of the Toolbox have had completely separate documentation for the implementations using 32-bit or 64-bit integer types. Partly as a result of MathWorks discontinuing support for 32-bit Linux, and partly because the int64 type now admits all the same operations as the int32 type, we have now combined the documentation. There is a link at the top of each page to toggle descriptions between int32, int64 and the nag_int type that works on either platform; this is initially set to int32 or int64 depending on the implementation. Unfortunately, testing showed that in some browser configurations the CSS to switch between the types interfered with cutting and pasting the example code, so in this release the link does not affect the examples, which are always int32 or int64 depending on the implementation. Note that the displayed results are always from the 64-bit Linux implementation.

Related to this, the PDF version of the documentation is now produced only for the 64-bit version, as the changes for int32 are entirely trivial. (Just replace int64 by int32 throughout.)

Access to the Examples

In previous releases, the example programs for each routine have only been available as part of the documentation. Simple examples could be cut and pasted directly into the MATLAB command window, but syntactic restrictions on MATLAB function definitions meant that any examples defining functions had to be copied to a .m file first, as you cannot define functions at the top-level MATLAB command prompt.

In this release all example programs are provided as executable .m files as well as appearing in the HTML documentation. They are written to use nag_int so that they work in both 32-bit and 64-bit implementations. The most convenient way to access them is the Open in the MATLAB editor link just above each example. Clicking this link (if the MATLAB help or web browser is being used) will open the MATLAB editor; clicking the green arrow run button will then execute the example in the MATLAB Command Window.

The following screen-shot highlights the process of linking from the documentation in the Supplemental help browser, to the MATLAB editor, to running the program and producing results in the MATLAB Output window and additional MATLAB plots.

screenshot of example running

Additional Utility Functions


nag_doc name opens the documentation for the function name in the MATLAB web browser. name may be the long or short name of a function in the NAG Toolbox. If name is omitted, the browser is opened at the start page of the NAG Toolbox documentation.

Unfortunately it is not possible to open the help browser directly at a function document; however, the web browser offers the same facilities apart from search.

If HTML documentation for a command is not found, then the output from the help function is returned.

nag_demo uses the standard demo function to open the demo browser for the NAG Toolbox.

This also forms a convenient command-line way to open the Supplemental Software Browser, as the Demo Browser is the same application, and you may navigate from there to the documentation using the table of contents pane.

As previously noted, the MATLAB docsearch command is no longer available to search the NAG Toolbox documentation. The full-text index is still generated and is used by the search box in the Supplemental Software GUI, so this is the fastest way to search the documentation. Unfortunately the Java classes for this search are not exposed by MATLAB, so it is not currently possible to make a nag_docsearch command-line version.

The standard MATLAB lookfor command may also be used. It only searches the first line of the ASCII comments in the help files rather than the full documentation, and it does not benefit from a pre-indexed search database; however, it can be useful. For example, if you are looking for a function for solving LP optimisation problems,

lookfor ' LP '

produces the output:

e04mf                          - : LP problem (dense)
e04nk                          - : LP or QP problem
e04nq                          - : LP or QP problem
                                  (suitable for sparse problems)
h02bb                          - : Integer LP problem (dense)
h02bv                          - : Print IP or LP solutions with
                                  user-specified names for rows and columns
h02ce                          - : Integer LP or QP problem
                                  (sparse), using e04nk
nag_mip_ilp_dense              - : Integer LP problem (dense)
nag_mip_ilp_print              - : Print IP or LP solutions with
                                  user-specified names for rows and columns
nag_mip_iqp_sparse             - : Integer LP or QP problem
                                  (sparse), using e04nk
nag_opt_lp_solve               - : LP problem (dense)
nag_opt_qpconvex1_sparse_solve - : LP or QP problem 
nag_opt_qpconvex2_sparse_solve - : LP or QP problem
                                  (suitable for sparse problems)

Each function name is a link to its ASCII help text; for example, the first link produces:

e04mf: LP problem (dense)
 [istate, x, iter, obj, ax, clamda, lwsav, iwsav, rwsav, ifail] =
                 e04mf(a, bl, bu, cvec, istate, x, lwsav,
                         iwsav, rwsav, 'n', n, 'nclin', nclin)
Further documentation is available in the NAG Toolbox help files.

The link at the end of the help text of each function uses the nag_doc command described above to show the help in the MATLAB web browser.

Documentation on the NAG Web Site

One advantage of the new documentation format is that it is designed to work in a standard web browser. The old documentation did not work well in standard browsers as it had so many features that were tuned to the MATLAB help system. So for this release we have the HTML as well as the PDF version of the documentation available from our website. The JavaScript used on the pages hides the MATLAB specific links such as Open in the MATLAB Editor when the pages are read via the Web.

Wednesday, 18 September 2013

How do I know I'm getting the right answer?

Many recent developments in numerical algorithms concern improvements in performance and the exploitation of parallel architectures. However it's important not to lose sight of one crucial point: first and foremost, algorithms must be accurate. This raises the question: how do we know whether a routine is giving us the right answer? I'm asking this in the context of the matrix function routines I've been writing (these are found in chapter F01 of the NAG Library), but I'm sure you'll agree that it's an important question for all numerical software developers to consider.
First, it's important to note that the term "the right answer" is a little unfair. Given a matrix A stored in floating point, there is no guarantee that, for example, its square root A^(1/2) can be represented exactly in floating point arithmetic. In addition, rounding errors introduced early on can propagate through the computation so that their effects are magnified. We'd like to be able to take into account this sort of behaviour when we test our software.
For a matrix function f(A), the computed solution F can be written as F=f(A)+∆f, where ∆f is known as the forward error. In some cases it may be possible to compute the forward error, by obtaining f(A) via some other means (for example, analytically, or by using extended-precision arithmetic). Usually we are interested in the norm of ∆f, which we scale by the norm of f(A) to form a relative forward error. As an example, consider the following matrix, whose exponential is known explicitly:
The matrix A was used as a test matrix in the 2009 version of the scaling and squaring algorithm for the matrix exponential [2]. With this algorithm, we obtain a relative forward error of 2.5×10⁻¹⁶. If we compute the exponential of A using the 2005 version of the scaling and squaring algorithm [1], then the relative forward error is about 3.0×10⁻¹². Clearly the newer algorithm is giving a better result, but how do we decide whether these forward errors are small enough?
To try to answer this, it's useful to consider the backward error. We assume that the computed solution can also be written as F=f(A+∆A). Whereas the forward error tells me how far from the actual solution I am, the backward error ∆A tells me what problem I have actually solved.
A useful rule of thumb states that the forward error is approximately bounded by the product of the backward error and the condition number of the problem. Uncertainties in the input data, together with the limitations of floating point arithmetic, mean that a relative backward error in the region of unit roundoff is the best that we should expect. Thus if we can estimate the condition number, then we can get an idea of what size of forward error is acceptable. If a problem is poorly conditioned, then the forward error could be very large, even though the backward error is small and the algorithm is performing stably. At NAG we've been developing routines for estimating condition numbers of matrix functions, based on algorithms developed at the University of Manchester.

Returning to our matrix A, we find that the condition number of the exponential is very large: 1.6×10¹⁵. The product of the condition number and unit roundoff is about 1.7×10⁻¹, so it looks like both algorithms were actually performing quite well.
The discussion above assumes that we are somehow able to compute forward errors, but this isn't always possible. One approach we have been using at NAG is to test whether certain matrix function identities, which are generalizations of well-known scalar identities, hold. Of course, we are now back to our original problem: if I find that the computed residual in my identity is of the order of 10⁻¹⁴, is that acceptable or is it too large? We're currently working on some error analysis for such identities to answer this question, and hope to publish a paper on this soon.
So, can I answer my original question: how do I know I'm getting the right answer? Well, I hope I've persuaded you that in numerical analysis this is a thornier issue than you might have thought. One thing is clear though: testing a new library routine takes far longer than writing it in the first place!

[1] N.J. Higham, The scaling and squaring method for the matrix exponential revisited, SIAM J. Matrix Anal. Appl., 26 (2005), pp. 1179-1193.
[2] A.H. Al-Mohy and N.J. Higham, A new scaling and squaring algorithm for the matrix exponential, SIAM J. Matrix Anal. Appl., 31 (2009), pp. 970-989.

Friday, 30 August 2013

All the performance comes from parallel processing

I often use this slide to show why all software has to be aware of parallel processing now.

In short, if your software does not exploit parallel processing techniques, then your code is limited to less than 2% of the potential performance of the processor. And this is just for a single processor - it is even more critical if the code has to run on a cluster or a supercomputer.

This is why NAG provides help to users and developers: training in parallel programming techniques; software development services to parallelize and scale your applications; advice and consulting; and our libraries - parallelized and tuned for a range of processors and co-processors.

Thursday, 29 August 2013

A nag4py update

In a previous post “A quick and dirty way to use the NAG C Library Mark 23 from Python” my colleagues described a method for parsing the NAG C library headers to create Python interfaces using the ctypes modules. Since then we have written a parser in Python to do this, modularised the package into chapters and polished the examples a little. We now have packages for Windows, Mac and Linux available to download.

Python is a great language for getting results quickly. It's often referred to as a "batteries included" language due to its large standard library that includes modules for almost anything you may need. Add that to its super clean syntax and the Python Package Index and you can see why it's so popular. Thanks to packages like Numpy, Scipy and Matplotlib, Python has grown significantly in the area of scientific computing and is gaining ground as the FOSS alternative to MATLAB®.

Using higher-level languages such as Python, MATLAB or VBA for prototyping is an excellent way to initially model a problem. Our customers find it extremely beneficial to be able to call the same NAG Library when moving into the production phase and still get the same results, as the underlying code hasn't changed.

Python e04ccc example

The benefit of parsing the C header files and using gccxml to generate the Python interfaces is that all 1700+ NAG Library routines can be called from Python automatically. The drawback is that you don't get the most pythonic interfaces. One solution is to wrap the ctypes interface with a pure Python interface for the functions you use frequently. Take the simple function g01eac, which calculates the one- or two-tail probability for the standard normal distribution; its ctypes specification is as follows:

g01eac.restype = c_double
g01eac.argtypes = [Nag_TailProbability, c_double, POINTER(NagError)]
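A minimal sketch of such a pure Python wrapper is shown below. So that it runs without the NAG DLL installed, it substitutes libm's erfc for g01eac; the function name upper_tail_prob and the use of erfc are illustrative stand-ins (and assume a POSIX system), not part of nag4py — with nag4py you would call g01eac itself through the declaration above.

```python
import ctypes
import ctypes.util
import math
from ctypes import c_double

# Stand-in for the NAG DLL: load the C math library (POSIX systems).
libm = ctypes.CDLL(ctypes.util.find_library("m") or None)

# Declare the C signature, exactly as done for g01eac above.
libm.erfc.restype = c_double
libm.erfc.argtypes = [c_double]

def upper_tail_prob(x):
    """P(Z > x) for standard normal Z, hiding the ctypes plumbing in plain Python."""
    return 0.5 * libm.erfc(x / math.sqrt(2.0))

print(upper_tail_prob(0.0))   # 0.5
```

The wrapper gives callers a normal Python function and keeps all restype/argtypes bookkeeping in one place.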

Thursday, 25 July 2013

Jenkins comes to NAG

We build a lot of Fortran and C code as part of our Libraries and Compiler, and we have over 40 years' experience in software engineering and in numerical analysis and algorithms. However, not being a large organization means that it can be hard to keep up with the vanguard of new SE methodologies and frameworks. There are some approaches, though, that seem like no-brainers to have as part of the workflow in a modern software company. Convergent evolution can lead to their development and adoption without any prompting from the greater community.

For decades we've followed and benefited from many of the principles of continuous integration (for example, version control and build automation---but no longer with SCCS or shell scripts!) and we've been running a self-hosted and self-maintained manager for automatic builds since the early 2000s. But we began to find the system showing its age and straining at the seams. The build manager uses dumb ssh-ing to launch each build on a slave, so there is no communication channel between a build and the driver. The build indexer has become quite unmaintainable and is, shall we say, rustic: it resembles a web page from 1995.

NAG Library automatic build implementation reports

We don't have the resources to pump any significant improvements into these systems, so a couple of years ago we had a look around for replacements.

Buildbot is nice. We liked the visualizations it comes with out of the box, and some of us know Python, so we felt confident we'd be able to set up the software. There's no way of configuring it through a GUI though, which for us restricts the audience of potential maintainers.

Hudson seemed too Java focused to fit easily into our model.

Friday, 28 June 2013

A quick but dirty way to use the NAG C Library Mark 23 from Python

I get requests about using NAG from Python from time to time, and NAG strives to make its algorithms as accessible as possible from a wide variety of languages and platforms. Python is no exception.

While we don’t yet provide a fully documented Python API replete with examples and so on, there is a quick but dirty way to get NAG C Library Python bindings covering every library algorithm, struct, and enum.

My starting point was the old PyNAG, which is excellent as far as it goes. It is also for the older Mark 8 library -- which none of you out there in userland should be using.

PyNAG also looks a bit labor-intensive to produce for the whole library, which would entirely disrupt my Maynard G. Krebs Cycle. There has to be a better way, I thought.

And there is! At least on Linux. For this exercise I used our NAG C Library Mark 23 implementation for Linux and gcc, LP64 model, hereafter abbreviated as cll6a23dgl.

Here is the method that I used: 

Step 1 -- Get the Tools

For this exercise we'll need two tools. The first is gccxml, the extension to gcc that produces an XML description of the gcc compiler's internal representation of compiled code. The second tool is the Python extension ctypeslib. Both are available from the Ubuntu repositories (for example).


Step 2 -- Collect the C Headers

Make a file "nag4py.h" that includes all of the relevant headers:

echo "#include <nag.h>" > nag4py.h

find /opt/NAG/cll6a23dgl/include -name "nag[a-z][0-9][0-9].h" | sort | sed -e "s/^.*\//#include </" | sed -e "s/$/>/" >> nag4py.h

echo "#include <nags.h>" >> nag4py.h


Step 3 -- Produce the XML

h2xml -I. -I/opt/NAG/cll6a23dgl/include nag4py.h -o nag4py.h.xml


Step 4 -- Produce the Python

xml2py nag4py.h.xml -o -l/opt/NAG/cll6a23dgl/lib/


Step 5 (Optional) -- edits for clear C library concordance / compatibility with PyNAG

Edit the top of the generated module to read:

from ctypes import *


#human edit

E04_DEFAULT = None




#end human edit


and edit the bottom of the generated module to read:

           'Nag_HashTable', 'Nag_SparseNsym_CheckData',

#human edit


#end human edit


Then, with very minor modifications, the PyNAG examples will run. For many, the only change needed is to the import: “PyNAG” becomes “nag4py”.

Another necessary change is that NumPy arrays must be passed slightly differently than with PyNAG, e.g.

g01alc(n, x.ctypes.data_as(POINTER(c_double)), …


instead of

g01alc(n, x.ctypes.data_as(c_void_p), …
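The typed-pointer idea can be tried without NumPy installed: the standard library's array module exposes the same underlying buffer, and ctypes produces the POINTER(c_double) that the typed interfaces expect. This is an illustrative sketch of the mechanism, not nag4py code; with NumPy you would use x.ctypes.data_as(POINTER(c_double)) directly, as above.

```python
import array
from ctypes import POINTER, c_double, cast

x = array.array("d", [1.0, 2.5, 4.0])

# Borrow x's buffer as a ctypes double array (no copy), then take a typed
# pointer to it — analogous to x.ctypes.data_as(POINTER(c_double)).
c_view = (c_double * len(x)).from_buffer(x)
ptr = cast(c_view, POINTER(c_double))

print(ptr[0], ptr[2])   # 1.0 4.0
```

Because the buffer is borrowed rather than copied, writes through ptr are visible in the original array, just as a C routine's in-place updates would be.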

Nag C Library enum types are translated nicely and may be used as one normally would,


and callback function typing is taken care of for us, e.g.

def py_obj_func(n,xc,objf,comm):

     objf[0] = exp(xc[0]) * (xc[0] * 4.0 * (xc[0]+xc[1]) + …


c_obj_func = NAG_E04CCC_FUN(py_obj_func)
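The same CFUNCTYPE machinery behind NAG_E04CCC_FUN can be demonstrated with nothing but the C runtime: below, a Python comparison function is wrapped into a C callback and handed to libc's qsort. This is a sketch of the wrapping pattern only; loading libc via CDLL(None) assumes a POSIX system, and py_cmp stands in for an objective function like py_obj_func.

```python
import ctypes
from ctypes import CFUNCTYPE, POINTER, c_int, c_size_t, c_void_p

# Load the C runtime of the current process (POSIX systems).
libc = ctypes.CDLL(None)

# Callback type matching qsort's comparator: int (*)(const void *, const void *)
CMPFUNC = CFUNCTYPE(c_int, c_void_p, c_void_p)

def py_cmp(a, b):
    # Dereference the void pointers as ints and return -1/0/1.
    x = ctypes.cast(a, POINTER(c_int)).contents.value
    y = ctypes.cast(b, POINTER(c_int)).contents.value
    return (x > y) - (x < y)

c_cmp = CMPFUNC(py_cmp)   # wrap the Python function, like NAG_E04CCC_FUN(py_obj_func)

libc.qsort.restype = None
libc.qsort.argtypes = [c_void_p, c_size_t, c_size_t, CMPFUNC]

arr = (c_int * 5)(5, 1, 4, 2, 3)
libc.qsort(arr, len(arr), ctypes.sizeof(c_int), c_cmp)
print(list(arr))   # [1, 2, 3, 4, 5]
```

Keep a reference to the wrapped callback (c_cmp here) for as long as the C code may call it; if it is garbage-collected mid-call the program will crash.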

While the module generated in this way is immediately usable on Linux, it is specific to the local installation. To get it working on anything else, a small amount of work remains. First, we have to remove the extraneous debris from gcc that finds its way into the generated module. Second, we must omit some symbols exposed by the NAG C Library on Linux that are not exposed by all NAG C Library Mark 23 implementations.

Once that’s done, we get a module suitable for Linux, Windows, and OS X, available here. Updated PyNAG examples are here.

End Notes:

On Linux, if you’re instead using the NAG C Library ILP64 implementation CLL6A23DH, use this alternative module.

On Windows, I tested with the 64-bit Anaconda Python distribution, which comes with all of the Python bits needed to run the examples, including the graphical Rosenbrock example. An applicable NAG C Library implementation is CLW6I23DA. I found I had to add the folder containing the NAG DLLs to the system PATH environment variable.

On OS X, I tested with Python 2.7.5. A suitable NAG C Library implementation is CLMI623DG. In order to run the Rosenbrock example, I also needed to install numpy-1.7.1, matplotlib-1.2.1, and XQuartz, since I’m running Snow Leopard.

In all cases I’ve assumed the NAG C Library has been installed in the default location. If on your system it’s installed elsewhere, you’ll need to edit the paths accordingly.

Thanks to my colleague Brian Spector for testing on OS X Lion and for some suggested example changes. Thanks also to my colleague Mike Hooper, whose gentle questions suborned my inner Krebs.

Jul 17 Update: some example cleanup, three new graphical examples included, and a changed module name.

Thursday, 30 May 2013

ISC'13 Preview - Supercomputing in Leipzig

ISC'13 - the summer's big international conference for the world of supercomputing - is only a few weeks away. As usual, I will be attending, along with several of my NAG colleagues.

This year, ISC will be in Leipzig - a new location. That means as well as the #HPC [do I need hashtags in a blog post???] fun of the conference itself, we will all have to learn the foibles of a new host city. Which bar will replace the cornerstone that was the Radisson Blu in Hamburg? How quickly can we learn the layout of the conference center itself?

More to the point, what will be the big #HPC [there I go again] news stories and themes to watch out for in Leipzig? I have already been interviewed by InsideHPC as an early ISC'13 preview, and have written a blog for the official ISC blog (comparing #HPC and #F1) - but here are some further thoughts.

Take the MIC

Well, as far as big #HPC [getting silly now] news goes, I'm glad I didn't try to write this blog last week! I think the stand-out news story of ISC'13 is pretty obvious (barring another dramatic reveal) - the excellent exposé by HPC Wire of China's MIC (sorry, Xeon Phi) based 50+ petaflops supercomputer. Yes, in the ISC'13 edition of the Top500 list, China's Tianhe-2 supercomputer will smash aside the USA's Titan supercomputer at ORNL to take the crown as the world's fastest supercomputer - by a huge margin.

Indeed, in a painful turn of events, Titan may be dumped out of the Top500 leadership position before it has even passed its acceptance tests! I'll reserve further comment on the Tianhe-2 story for an upcoming post on my own HPC blog.

I believe Tianhe-2 will not be the only Xeon Phi news story at ISC'13 - there will probably be a few other Xeon Phi based supercomputers to be announced or updated, along with several product announcements etc (including some from NAG).

Wednesday, 15 May 2013

Calling the NAG C Library from kdb+

Kx Systems was founded in 1993 to address a simple problem: the inability of traditional relational database technology to keep up with escalating volumes of data. This problem is even more acute today, where data records, especially in the financial industry, number in the trillions. To provide users with fast access to data, the founders of Kx developed kdb+ and the vector-processing programming language 'Q'.

Kdb+ has the ability to process data quickly, and for more complex analysis it has access to other programming languages. To call the NAG C Library from kdb+, a wrapper must be placed around the NAG routine, taking care when passing data types. The wrapped functions must then be compiled into an external library. Below is an example calling a NAG regression routine:

#include <nag.h>
#include <nagg02.h>
#include <string.h>
#include <nag_stdlib.h>
#define KXVER 3            // Using version 3; this must come before #include "k.h"
#include "k.h"             // kx header file containing K structures

K regression(K x, K y)     // The two arrays we will input from kdb+
{
       double *wt = 0;
       NagError fail;
       INIT_FAIL(fail);           // Initialize the NAG error structure
       K kout;                    // Array that we will return to kdb+
       kout = ktn(KF, 7);         // Allocate kout - it will have 7 elements. KF indicates
                                  // it is an array of floating-point numbers
       F *out = kF(kout);         // Set pointer to kout - comes after kout is allocated

       F *x_input = kF(x);        // Need to set pointers to the input arrays for NAG
       F *y_input = kF(y);
       nag_simple_linear_regression(Nag_AboutMean, (Integer) x->n, x_input, y_input, wt,
              &out[0], &out[1], &out[2], &out[3], &out[4], &out[5], &out[6], &fail);

       if (fail.code != NE_NOERROR)
              return r0(kout), krr(ss(fail.message));   // Free memory and return any
                                                        // NAG failure to Q
       return kout;
}

In production level code, we should really check array bounds and data types. If you would like, you can add the following checks:

     if(x->n != y->n) return krr("data not same length");
     if(x->t != KF || y->t != KF) return krr("input data wrong type");

We must then compile the above code into a shared library. On Linux the command is:
gcc -shared -fPIC nag_regression.c -I/[NAG_Directory]/include  [NAG_Directory]/lib/ -o

Before we start the Q language, make sure the library can be located, e.g. on Linux:

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:[NAG_Directory]/lib:.
Finally, we load the library in Q and test the NAG example program:
q)nag_reg:`nag_regression 2:(`regression;2)
q)nag_reg[1.0 0.0 4.0 7.5 2.5 0.0 10.0 5.0;20.0 15.5 28.3 45.0 24.5 10.0 99.0 31.2]
7.598166   7.090489   6.685763   1.322359   .8273433   965.2454   6

Outputs from the regression are the regression coefficients, the standard errors, the R-squared, sum of squares, and degrees of freedom (which all agree with the NAG example).

Interested in calling other NAG routines from kdb+? Email us and let us know! Many thanks to the support team at Kx Systems.

Thursday, 28 March 2013

Evaluation of non-linear expressions using Excel

NAG provides VB headers for all the routines in the library allowing them to be called easily from Microsoft Excel given a little knowledge of Visual Basic. Along with the headers, some Excel examples are distributed with the product. You may also find demo worksheets for a variety of routines on the NAG web site.

Combining the ease of use of Excel and the power of NAG routines is a great way of creating simple, flexible interfaces for solving difficult problems. Typically you write VBA code to handle the reading and writing of data to the Excel worksheet and the passing of parameters to the NAG routines. Once this VBA code has been written the problem parameters can be easily changed and tweaked from the worksheet without having to alter any code again.

Routines which take only simple parameters, such as scalar values, vectors and matrices, can have all their arguments input via the Excel worksheet. However, for routines that require non-linear expressions, such as those with callback functions, it isn’t so straightforward. One method is simply to write the callback in VBA, but then you lose the interactivity gained from using Excel: every time you want to change the problem you would have to edit the VBA code. This isn’t a big deal, but there is another way!

The VBA function Evaluate() can be used for just this purpose and, along with a few tricks, allows natural input of non-linear expressions via the worksheet. The demo for the NAG optimization routine e04jcf takes advantage of this method. As you can see, the objective function is expressed naturally on the sheet along with the simple input parameters required for the routine e04jcf:

Those with a keen eye will have picked up on the fact that the objective function is described in terms of X1…X4 but the initial estimate is given in cells B11…B14. The trick is that once the initial estimate has been read from the worksheet and stored in a VBA array, it gets copied back to the sheet in cells X1…X4 before the NAG routine e04jcf is called. Of course, you could describe the objective function in terms of whichever cells you like. It should be noted that this is not the most efficient method, due to the extra copying of data required, so it may not be suitable if your problem is large.

This technique can be exploited with many other routines in the NAG Library such as root finding, quadrature, systems of non-linear equations and many more. Once a worksheet has been created, it can be used by anyone who is familiar with Excel even if they do not know VBA.

Tip: You can hide column X by right-clicking the column header and selecting hide or set the font colour to white if you don’t want the intermediate values displayed.

The demo for e04jcf can be downloaded from our Excel page.

Friday, 15 March 2013

3 Years Later

Do you remember those episodes of Friends that consisted entirely of flashbacks - presumably the cast were too busy with other duties that week so couldn't film, or the writers too busy to think up a real story ...

The NAG Blog has been running for just over 3 years now. In that time we have had 118 posts on a wide variety of numerical computing topics including HPC, .NET, customer engagement, GPUs, optimization, and much more.

The most popular posts (pageviews) have often been the most technical ones - including:

On the HPC theme, there have been posts discussing factors of 1000x in performance, describing supercomputing to friends, family and politicians, and the relationship between supercomputers and ice cubes.

I hope you have found our blog to be a nice mix of useful content, occasional fun, and topical thoughts. Do you have a favourite NAG Blog post - or more importantly - is there a topic you would like one of our numerical computing experts to write about? From talking to customers we know that you’d like to see more technical articles which we are always working on, but we also would like to see more contributors to the blog. So if you have something to say about numerical algorithms or HPC that relates to NAG and the work that we do, get in touch and maybe we can collaborate.

Our other social networks – the NAG LinkedIn Group and @NAGTalk on Twitter – have grown to be fairly engaged spaces. Our aim with all our groups is to create a friendly space where users and staff can readily exchange ideas, innovate and share relevant stories. The LinkedIn Group is a vetted group, which means that we keep it free from spam and recruiters. Do join if you want to connect with other NAG users, collaborators and developers.

Wednesday, 27 February 2013

NAG Routines in Different Precisions

Written by David Sayers, NAG Principal Technical Consultant

How would you like to call a NAG routine and have the precision it computes in determined by the precision of the arguments? So, for example, your program might contain:
r = s15adf(x,ifail)
If x were single precision then a single precision result would be returned in r. If x were double or quadruple precision then the corresponding double or quadruple computations would be performed.
I can imagine a number of scenarios where this facility might be useful, but I invite your comments on how common these might be:
  1. Single precision might be useful now that single precision calculation is once again becoming faster than double precision. So the user might want to trade accuracy for speed.
  2. Secondly, single precision used to be the natural interface to many graphics packages. I am not sure whether this is still the case. Would it be useful to have the capability for this reason?
  3. Double precision, the precision now offered, is generally ‘about right’ for most computations, but sometimes extra precision is needed for the trickiest calculation. Under these circumstances quadruple precision might be necessary. How common is this requirement amongst our users?
  4. If a computation is made with two different precisions and the resulting answers are broadly similar then a certain level of confidence might be gained from these computations. If on the other hand the results are very different this may indicate that the original problem was ill-conditioned. The ability to do this test easily might be something our users want. Do you think this is so?
From the developer's point of view, modern Fortran makes it easy to provide these facilities:

Module dks1

      Interface dks
        Module Procedure dks_1, dks_2, dks_3
      End Interface dks

    Contains

      Subroutine dks_1(x)
        Real (Kind=1) :: x

        Print *, 'single precision'
      End Subroutine dks_1

      Subroutine dks_2(x)
        Real (Kind=2) :: x

        Print *, 'double precision'
      End Subroutine dks_2

      Subroutine dks_3(x)
        Real (Kind=3) :: x

        Print *, 'quadruple precision'
      End Subroutine dks_3

    End Module dks1

    Program main
      Use dks1
      Real (Kind=1) :: x1
      Real (Kind=2) :: x2
      Real (Kind=3) :: x3

      Call dks(x1)
      Call dks(x2)
      Call dks(x3)
      Stop 'ok'
    End Program main

(Here the Kind convention is the natural one for the NAG Compiler; others might use a ‘byte convention’, i.e. the kind number matches the number of bytes of storage. Thus some compilers use ‘Real (Kind=4)’ to denote single precision and ‘Real (Kind=8)’ to denote double precision.)
My implementation colleagues will point out that instead of checking over 3000 results per implementation they would have over 9000 if we were to provide this capability. How appreciative would our users be if we were to do this? I am sure that I wouldn't manage to convince my colleagues to take on this extra work without evidence that our customers would like it. So if you think this is worthwhile then please let us know.