Wednesday, 25 September 2013

NAG Toolbox for MATLAB® documentation for the Mark 24 release

A look at the documentation features in the new, Mark 24, release of the NAG Toolbox for MATLAB®.

MATLAB Documentation Help Center

Recent versions of MATLAB have a new documentation system known as the Help Center. This, however, is restricted to products from MathWorks; third-party extension toolboxes such as our NAG Toolbox for MATLAB® are now documented in a separate help application for Supplemental Software. This has necessitated changes in the way the NAG Toolbox documentation is packaged and installed, and we have taken the opportunity to update the system generally. This post explores some of the new features.

Installation Directory

Previous releases of the Toolbox have, by default, installed in the toolbox directory of the MATLAB installation on the user's machine. Due to changes in the MATLAB system this does not work for releases after R2012a, so by default the new Mark 24 release of the NAG Toolbox installs under a top-level NAG directory, in a similar location to our other Library products. The location may be changed as an option on installation; any location that is not in the MATLAB tree may be used. Note that the documentation now takes up considerably more space than the previous version: it was previously distributed as a jar file (a zip-compressed archive), but MATLAB no longer supports jar archive help files.

Rendering Technology

Earlier MATLAB releases used a Java-based HTML rendering application. This had the benefit of being consistent across all MATLAB platforms, but it was a relatively old technology and its support for more modern CSS and JavaScript features was somewhat variable. This made the display of mathematics particularly tricky, and the usual MathML display in our Library products was downgraded in our Toolbox documentation to whatever combination of nested tables and font changes worked. The current MATLAB releases use a new rendering agent (based on JxBrowser) that is a Java wrapper (the MATLAB GUI is all Java based) around a modern web-browsing shared library. On Windows it uses the system installation of mshtml (so is essentially Internet Explorer); on Macintosh it uses the system WebKit library (so is essentially Safari); and on Linux it ships with a Gecko library (so is essentially Firefox). This means that all modern “HTML5” techniques (HTML, JavaScript and CSS) may be used, but it also means that the same browser differences that complicate website development now apply to MATLAB help. In particular, we have chosen to use the MathJax JavaScript library and SVG to render the mathematics. The difference may be seen in the following example (from c06faf).

Old Rendering

ẑ_k = (1/√n) ∑_{j=0}^{n−1} x_j × exp(−i 2πjk/n),   k = 0, 1, …, n−1.

New Rendering

While the old rendering was legible, hopefully most people will find the new rendering a great improvement, especially where there are nested superscripts, square roots and the like. The new documents in fact contain both sets of markup, defaulting to the old one; the new MathJax/SVG rendering is only enabled if a sufficiently new browser is detected. In practice this mainly affects Windows users, where Internet Explorer 9 or higher is required for SVG support.

Finding the Documentation

Several people using the previous Toolbox have had difficulty finding the Supplemental Software help. Initially, on installing MATLAB, there is no access to it in the MATLAB GUI; however, after the NAG (or other third-party) Toolbox is installed, MATLAB detects this and adds a new link at the bottom of the home screen of the main Help Center (which can be reached from the ? button, by choosing help from the menu, or by pressing the F1 key):

The Supplemental Software help looks familiar to users of previous MATLAB releases, with a folding table of contents and search box in the left pane and the documentation in the right. From R2013a (at least) the rendering uses the same JxBrowser-based engine as the main Help Center. (The first version of the supplemental help browser, in R2012a, was essentially a copy of the old help system.) The search box here is particularly important, as sadly there is no command-line search available: the standard docsearch command is now limited to MathWorks products.

Documentation for 32 and 64 bit Versions

Previous releases of the Toolbox have had completely separate documentation for the implementations using 32 and 64 bit integer types. Partly because MathWorks has discontinued support for 32 bit Linux, and partly because the int64 type now admits all the same operations as the int32 type, we have combined the documentation. There is a link at the top of each page to toggle descriptions between int32, int64 and the nag_int type that works on either platform; this is initially set to int32 or int64 depending on the implementation. Unfortunately, testing showed that in some browser configurations the CSS used to switch between the types interfered with cutting and pasting the example code, so in this release the link does not affect the examples, which are always int32 or int64 depending on the implementation. Note that the displayed results are always from the 64 bit Linux implementation.

Related to this, the PDF version of the documentation is now only produced for the 64 bit version, as the changes for int32 are entirely trivial (just replace int64 by int32 throughout).

In previous releases, the example programs for each routine have only been available as part of the documentation. Simple examples could be cut and pasted directly into the MATLAB command window; however, syntactic restrictions on MATLAB function definitions meant that any example defining functions had to be copied to a .m file before being executed, as you cannot define functions at the top-level MATLAB command prompt.

In this release all example programs are provided as executable .m files as well as appearing in the HTML documentation. They are written to use nag_int so that they work in both 32 and 64 bit implementations. The most convenient way to access these is the Open in the MATLAB editor link just above each example in the documentation. Clicking on this link (if the MATLAB help or web browser is being used) opens the MATLAB editor; clicking on the green arrow run button then executes the example in the MATLAB Command Window.
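As a small sketch of why nag_int makes the examples portable (the class check here is purely illustrative):

```matlab
% nag_int converts a value to the integer type used by the installed
% implementation of the Toolbox (int32 on 32 bit, int64 on 64 bit),
% so example code written with it runs unchanged on either platform.
n = nag_int(10);
class(n)   % reports int32 or int64, depending on the implementation
```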

The following screenshot highlights the process of linking from the documentation in the Supplemental help browser to the MATLAB editor, then running the program and producing results in the MATLAB Output window and additional MATLAB plots.

nag_doc

nag_doc name opens the documentation for the function name in the MATLAB web browser. name may be the long or short name of a function in the NAG Toolbox. If name is omitted, the browser is opened at the start page of the NAG Toolbox documentation.

Unfortunately it is not possible to open the help browser directly at a function document; however, the web browser offers the same facilities apart from search.

If HTML documentation for a command is not found, then the output from the help function is returned.
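For example (using routine names that appear elsewhere in this post):

```matlab
% Open the documentation for a routine, by short or long name:
nag_doc e04mf
nag_doc nag_opt_lp_solve
% With no argument, open the start page of the Toolbox documentation:
nag_doc
```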

nag_demo
nag_demo uses the standard demo function to open the demo browser for the NAG Toolbox.

This also forms a convenient command-line way to open the Supplemental Software Browser, as the Demo Browser is the same application and you may navigate from there to the documentation using the table of contents pane.

As previously noted, the MATLAB docsearch command is no longer available to search the NAG Toolbox documentation. The full-text index is still generated and is used by the search box in the Supplemental Software GUI, so that is the fastest way to search the documentation. Unfortunately, the Java classes for this search are not exposed by MATLAB, so it is not currently possible to make a nag_docsearch command-line version.

The standard MATLAB lookfor command may be used. It only searches the first line of the ASCII comments in the help files rather than the full documentation, and it does not benefit from a pre-indexed search database; however, it can still be useful. For example, if you are looking for a function for solving LP optimisation problems,

lookfor ' LP '


produces the output:

e04mf                          - : LP problem (dense)
e04nk                          - : LP or QP problem (sparse)
e04nq                          - : LP or QP problem (suitable for sparse problems)
h02bb                          - : Integer LP problem (dense)
h02bv                          - : Print IP or LP solutions with user-specified names for rows and columns
h02ce                          - : Integer LP or QP problem (sparse), using e04nk
nag_mip_ilp_dense              - : Integer LP problem (dense)
nag_mip_ilp_print              - : Print IP or LP solutions with user-specified names for rows and columns
nag_mip_iqp_sparse             - : Integer LP or QP problem (sparse), using e04nk
nag_opt_lp_solve               - : LP problem (dense)
nag_opt_qpconvex1_sparse_solve - : LP or QP problem (sparse)
nag_opt_qpconvex2_sparse_solve - : LP or QP problem (suitable for sparse problems)


Each function name is a link to its ASCII help text; for example, the first link produces:

e04mf: LP problem (dense)

Syntax:

[istate, x, iter, obj, ax, clamda, lwsav, iwsav, rwsav, ifail] =
e04mf(a, bl, bu, cvec, istate, x, lwsav,
iwsav, rwsav, 'n', n, 'nclin', nclin)

Further documentation is available in the NAG Toolbox help files.


The link at the end of the help text of each function uses the nag_doc command described above to show the help in the MATLAB web browser.

Documentation on the NAG Web Site

One advantage of the new documentation format is that it is designed to work in a standard web browser. The old documentation did not work well in standard browsers, as it had so many features tuned to the MATLAB help system. So for this release we have made the HTML as well as the PDF version of the documentation available from our website. The JavaScript used on the pages hides the MATLAB-specific links, such as Open in the MATLAB Editor, when the pages are read via the web.

Wednesday, 18 September 2013

How do I know I'm getting the right answer?

Many recent developments in numerical algorithms concern improvements in performance and the exploitation of parallel architectures. However, it's important not to lose sight of one crucial point: first and foremost, algorithms must be accurate. This raises the question: how do we know whether a routine is giving us the right answer? I'm asking this in the context of the matrix function routines I've been writing (found in Chapter F01 of the NAG Library), but I'm sure you'll agree that it's an important question for all numerical software developers to consider.

First, it's important to note that the term "the right answer" is a little unfair. Given a matrix A stored in floating point, there is no guarantee that, for example, its square root A^(1/2) can be represented exactly in floating-point arithmetic. In addition, rounding errors introduced early on can propagate through the computation so that their effects are magnified. We'd like to be able to take this sort of behaviour into account when we test our software.

For a matrix function f(A), the computed solution F can be written as F = f(A) + ∆f, where ∆f is known as the forward error. In some cases it may be possible to compute the forward error by obtaining f(A) via some other means (for example, analytically, or by using extended-precision arithmetic). Usually we are interested in the norm of ∆f, which we scale by the norm of f(A) to form a relative forward error. As an example, consider the following matrix, whose exponential is known explicitly:

The matrix A was used as a test matrix in the 2009 version of the scaling and squaring algorithm for the matrix exponential [2]. With this algorithm, we obtain a relative forward error of 2.5×10⁻¹⁶. If we compute the exponential of A using the 2005 version of the scaling and squaring algorithm [1], then the relative forward error is about 3.0×10⁻¹². Clearly the newer algorithm is giving a better result, but how do we decide whether these forward errors are small enough?

To try to answer this, it's useful to consider the backward error. We assume that the computed solution can also be written as F = f(A + ∆A). Whereas the forward error tells me how far I am from the actual solution, the backward error ∆A tells me what problem I have actually solved.
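In the same spirit as the relative forward error, the natural scaled quantity is the relative backward error:

```latex
\[
  \frac{\lVert \Delta A \rVert}{\lVert A \rVert},
  \qquad \text{where } F = f(A + \Delta A).
\]
```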

A useful rule of thumb states that the forward error is approximately bounded by the product of the backward error and the condition number of the problem. Uncertainties in the input data, together with the limitations of floating point arithmetic, mean that a relative backward error in the region of unit roundoff is the best that we should expect. Thus if we can estimate the condition number, then we can get an idea of what size of forward error is acceptable. If a problem is poorly conditioned, then the forward error could be very large, even though the backward error is small and the algorithm is performing stably. At NAG we've been developing routines for estimating condition numbers of matrix functions, based on algorithms developed at the University of Manchester.
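In symbols, with κ_f(A) denoting the condition number of f at A, the rule of thumb reads:

```latex
\[
  \frac{\lVert F - f(A) \rVert}{\lVert f(A) \rVert}
  \;\lesssim\;
  \kappa_f(A)\,\frac{\lVert \Delta A \rVert}{\lVert A \rVert}.
\]
```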

Returning to our matrix A, we find that the condition number of the exponential is very large: 1.6×10¹⁵. The product of the condition number and unit roundoff is about 1.7×10⁻¹, so it looks like both algorithms were actually performing quite well.

The discussion above assumes that we are somehow able to compute forward errors, but this isn't always possible. One approach we have been using at NAG is to test whether certain matrix function identities hold. For example,
exp(log(A)) = A,
sin²A + cos²A = I,

are generalizations of well-known scalar identities. Of course, we are now back to our original problem: if I find that the computed residual in my identity is of the order of 10⁻¹⁴, is that acceptable or is it too large? We're currently working on some error analysis for such identities to answer this question, and hope to publish a paper on this soon.
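As a rough illustration of this kind of identity test, here is a minimal sketch using MATLAB's built-in expm and logm (not the NAG routines themselves), with an arbitrary well-conditioned test matrix:

```matlab
% Check the identity exp(log(A)) = A and look at the relative residual.
A = gallery('lehmer', 4);            % a symmetric positive definite test matrix
R = expm(logm(A));                   % apply log, then exp
relres = norm(R - A, 1) / norm(A, 1) % residual, relative to norm(A)
% A residual of the order of unit roundoff (about 1e-16) would be
% consistent with both routines behaving stably on this matrix.
```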

So, can I answer my original question: how do I know I'm getting the right answer? Well, I hope I've persuaded you that in numerical analysis this is a thornier issue than you might have thought. One thing is clear, though: testing a new library routine takes far longer than writing it in the first place!

[1] N. J. Higham, The scaling and squaring method for the matrix exponential revisited, SIAM J. Matrix Anal. Appl., 26 (2005), pp. 1179–1193.
[2] A. H. Al-Mohy and N. J. Higham, A new scaling and squaring algorithm for the matrix exponential, SIAM J. Matrix Anal. Appl., 31 (2009), pp. 970–989.