Thursday, 1 December 2011

Coffee and Filters

Given the origins of NAG and our mission, it’s natural for us to take the “long view” in giving back to the communities in which we operate. In the US, we’ve just finished our second year as sponsor of the DemandTec Retail Challenge (DTRC) scholarship competition for high school students in the Chicago area. DemandTec, one of our earliest and strongest software partners, incorporates NAG components into its products, providing demand management software for major retailers around the world.

The DTRC puts 2-person teams of high school seniors in the role of category managers for a retail store in a 2-week computer simulation where each day represents a week in the “real world”. The students are given many weeks of data showing the price, inventory, unit sales, promotions used and the profit earned on each product they are managing. In the contest, they are responsible for two brands of coffee, one brand of tea and coffee filters. During the competition, they must analyze prior results to set the new price of each item, decide whether to run promotions and decide how much inventory to purchase. The simulation creates an interaction both with consumers and with the other teams in the competition. Ultimately, the three teams with the highest profit at the end of two weeks advance to the regional finals here in Chicago.

At the Chicago Regional finals held November 10th, teams from Glenbard West High School and Wheaton Warrenville South High School gave presentations of their data analysis, strategy formulation and how they worked as a team as they adapted their strategy in the course of the contest. The judges for the contest were experienced professionals in retail analytics from DemandTec and other local companies. Our winners were team “Price Lords” consisting of Peter Ericksen and Greg Grabarek. Each of them received $2500 toward their college expenses next year from NAG and our co-sponsor software company Informatica. Our third sponsor was the Network of Executive Women (NEW), which supports the education and advancement of women in leadership roles.

This contest is a labor of love for those of us involved. We have cultivated relationships with teachers of mathematics and statistics at local schools and have made presentations on the contest to a number of high school classes. It not only engages us with the students and teachers but it also helps us educate the next generation of applied scientists who might one day be NAG users. We wish Peter and Greg good luck as they compete in the national semi-finals in early December and are already making plans for next year’s contest.

Wednesday, 23 November 2011

Calling NAG routines from R

R is a widely-used environment for statistical computing and data analysis. It is one of the modern implementations of the S programming language (i.e. much code written in S will run unaltered in R) although its underlying semantics are derived from Scheme. R is Free Software.

The capabilities of R can be extended through the use of add-on packages and a large number of these are available from the Comprehensive R Archive Network (CRAN). Some users have expressed an interest in calling NAG Library routines from within R; accordingly, we have recently created a package which provides access to some of NAG's numerical functionality. More specifically, the NAGFWrappers R package contains the local and global optimization chapters of the NAG Fortran Library, together with a few nearest correlation matrix solvers and some other simpler routines. The package incorporates documentation in the so-called Rdoc style that will automatically produce help pages within the R system (see figure below), and also in HTML and PDF - for example, here is the full list of NAG routines that are contained in the package.


For completeness, and to help R users further, we have also published more general instructions about how to use the R extension mechanisms to access any NAG routine from within R.
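To give a flavour of what those extension mechanisms involve, here is a minimal sketch (illustrative only - nag_dot is a made-up name, and the instructions mentioned above cover the NAG-specific details) of the kind of C glue that R's .C() interface expects:

/* With R's .C() interface, every argument arrives as a pointer, and
   results are returned by overwriting output arguments.  A real
   wrapper would forward these arguments to a NAG Library routine. */
void nag_dot(const double *x, const double *y,
             const int *n, double *result)
{
   double s = 0.0;
   for (int i = 0; i < *n; i++)
      s += x[i] * y[i];
   *result = s;
}

/* From R, after building with "R CMD SHLIB nag_dot.c":

      dyn.load("nag_dot.so")
      .C("nag_dot", as.double(x), as.double(y),
         as.integer(length(x)), result = double(1))$result        */

A wrapper built along these lines can pass its arguments straight through to a NAG Library routine, with any error flags mapped back to R on return.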

The original version of NAGFWrappers has been available since mid-2011; we have just updated it to use Mark 23 of the Fortran Library, and are releasing R binary packages for Windows 32 bit and Windows 64 bit, along with the R source package which can be used on other platforms (for example, we have built and run it on 64 bit Linux).

It should perhaps be noted that this is a preview release of the package, which is aimed at obtaining user feedback. Although it has been built and run on the platforms mentioned above, it is not a NAG product. We are keen to receive this feedback, and will respond to technical queries and problem reports via support@nag.co.uk so that we can further refine the package and make it still more useful to the R community.

Tuesday, 1 November 2011

SC11 diary catch up

I posted here a week or two ago about my diary leading up to the year's biggest supercomputing event - SC11 in Seattle. I thought it would be handy to give a quick summary of the diary entries so far for those who haven't been reading along.

If you recall, I said: "On my twitter stream (@hpcnotes), I described it as: 'the world of #HPC in one week & one place but hard to find time for all of technical program + exhibition + customer/partner/supplier meetings + social meetings + sleep!' To follow the news about SC11 on twitter, follow @supercomputing and/or the #SC11 hashtag."

"Any hope of "live" blogging or actively tweeting during the week of SC11 itself is almost nil - the week is just too busy with the day job. Even simply consuming the relevant news, comment and gossip is a challenge."

"So instead I am going to try to write a diary of the lead up to SC11."

If you've missed them, here are the 8 SC11 blogs so far:

  1. "The big HPC event of the year - lots of news, people & meetings. Busy week."
  2. "schedule certainty, locations, spare time & hard work"
  3. "SC11 news deluge, the missing HPC world"
  4. "how do you do it?"
  5. "navigation, rope and choosing wisely"
  6. "fog and sports events"
  7. "not everyone will be in Seattle"
  8. "ppt, professionals, preparation & precipitation"
Along the way, I have briefly alluded to a few things NAG will be doing at SC11. One of my colleagues will be along shortly to post here about our activities at SC11, but in the meantime, plan to visit us on booth 2622, or get in touch to arrange a conversation.


Thursday, 20 October 2011

SC11 will be here soon

The SC11 conference, or just "supercomputing", will be held in Seattle this November. For many in the high performance computing community, including NAG, SC is the big event of the year. Certainly it is the one that attracts the most press (and press releases), the most attendees, the biggest exhibition, and absorbs the most time in planning before we even get there. It is the event where we get to meet with many of our customers, most of our potential suppliers, and many friends and collaborators.

SC11 will generate a lot of news, articles and other media in the coming weeks. You can keep up with SC11 on twitter by following @supercomputing and/or the #SC11 hashtag. NAG's twitter team (@NAGtalk) will also keep you up to date with NAG's activities and stories at SC11 and beyond.

The NAG team will also post here on The NAG Blog over the coming days and weeks with news about what NAG will be doing at SC11.


I will be adding to all this with a series of posts on my personal blog - a diary of the lead up and planning for SC11 - the first entry was posted today.

See you in Seattle.

Thursday, 6 October 2011

Calling routines from the NAG Fortran and C libraries from within LabVIEW

This is a follow-on from my colleague Sorin Serban's recent post, in which he looked at invoking methods from the NAG Library for .NET from within the LabVIEW programming environment. The motivation for this work was the realization that many LabVIEW users want to enhance their applications by making use of some of NAG's numerical routines. Recently, I've been using our Fortran and C libraries to do the same thing, and some of this work is discussed here.

The mechanism used within LabVIEW to call a routine from either of these libraries is different from that used in the case of the .NET Library. This is because that library is a so-called .NET assembly, whilst the C and Fortran libraries are (on the Windows platform) dynamic link libraries (DLLs). The .NET assembly uses the common language runtime (CLR) and the .NET framework to manage assembly functions, and to export information about classes, methods, properties and events. The practical implication of this is that when a method from the NAG Library for .NET is loaded into LabVIEW, information about its function arguments is automatically loaded as well. This is not the case with a DLL, as we shall see.
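To illustrate the difference (with a made-up function, not a NAG routine): a DLL's export table records only named entry points, so none of the type information in the C source below travels with the compiled library, and LabVIEW's Call Library Function Node has to be configured with the prototype by hand.

/* The DLL's export table records only the name "dot_product"; the
   argument and return types below are known to the C compiler but
   are not stored in the DLL itself.  Whoever calls this function -
   from LabVIEW or anywhere else - must supply the prototype again. */
__declspec(dllexport) double __stdcall
dot_product(const double x[], const double y[], int n)
{
   double s = 0.0;
   for (int i = 0; i < n; i++)
      s += x[i] * y[i];
   return s;
}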

Thursday, 8 September 2011

NAG communications

We know that it can be hard for you to keep track of all the new stuff that is happening at NAG, such as the regular Library updates, new ways to use routines from various computing environments, events and conferences, training days, talks at customer sites, collaborations with numerical specialists, work on new technology platforms, and so on. The list of new activities and developments can seem endless, but we try to directly communicate the information that seems likely to be of most interest to you.

For example:

  • If you are a NAG user, we let you know when a new release of the Library is available for your system, so that you have the quickest access to new routines.
  • When we know your field of interest, we send you information about events and highlight relevant examples in order to help you get the most from NAG.
  • We periodically release brief case studies that contain novel ways of using NAG routines; some of these might help trigger new ideas for your problem solving.

However, one of the best ways to keep up with all the information is via NAGnews. This e-newsletter contains articles on new product development and technical advancements along with developer tips and hints and event information. More than 7,000 subscribers already receive NAGnews, which is distributed approximately every 6 weeks. Registration is free of charge, and NAG will not share your details with any third party.

NAGnews has been going for some time – in fact, the next issue will be the hundredth. Why not take a look here if you’d like to receive a copy?

Monday, 22 August 2011

How do you use NAG in Excel?

For years NAG has recognised Excel users as a group with a clear need for good numerical algorithms. Nowadays Excel is the industry standard for spreadsheets, but it is not a statistical package, nor is it a numerical analysis tool. That’s why Excel users come to NAG and ask how our accurate, robust and efficient NAG Library can be utilised in Excel.

There are a few ways in which the Library can be called in Excel:

  • xls/xlsm spreadsheet and VBA

The most popular, widely used way: our NAG and Excel page describes how to do it with the help of technical papers and downloadable examples. Basically you create a new spreadsheet and add VBA code to it. NAG has created VBA Declaration Statements which provide the bridge between NAG and Excel - the user just has to add the declaration file to a VBA module and the NAG DLL becomes visible to Excel & VBA (a minimal sketch of this DLL/Declare mechanism appears after this list). From this moment on you can create macros that will, for example, read data from an external data file, process it in VBA and then display the results on the spreadsheet. Or you can create a UDF (User Defined Function) that is called straight from the spreadsheet, but inside uses NAG to compute some statistics, find a zero of a function, or find the value of a European call option using the Black-Scholes model or Heston’s stochastic volatility model. This is the great thing about using NAG in conjunction with Excel - you can do it in many ways!

  • xla/xlam add-in and VBA

Very similar to the previous point. The difference is that the VBA code for the macros and UDFs is in an Excel add-in. As a result the user’s data and the user’s code may be in two separate files, so the user needs to load the xla add-in alongside the xls file to have the functions available. http://www.cpearson.com/excel/createaddin.aspx nicely describes how an xla add-in works and how to create one.

  • xll add-in

An xll is a different type of add-in: it allows you to use your C/C++, Fortran, or VB code from Excel. It is actually a Dynamic Link Library (DLL) that is compiled and accessed via the Excel API. The benefit of an xll is that, being compiled, it will be faster than VBA. However, it requires very careful programming due to memory-management issues and is generally harder to write.

  • VSTO

Visual Studio Tools for Office is a set of development tools that lets you create Visual Studio projects in C# or VB.NET in the form of Excel add-ins that extend Excel’s features or customise the user interface. Since this is a .NET environment, it’s advisable to use the NAG Library for .NET.
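As promised above, here is a minimal sketch of the DLL/Declare mechanism that the VBA routes rely on (illustrative only - this is not NAG's actual declaration file, and black_scholes_call and myfuncs.dll are made-up names): a C function exported from a DLL using the stdcall calling convention, with the matching VBA Declare statement shown in a trailing comment.

/* A made-up exported function: a Black-Scholes call price.
   __stdcall is the calling convention VBA expects; erf() is C99. */
#include <math.h>

__declspec(dllexport) double __stdcall
black_scholes_call(double s, double k, double r,
                   double sigma, double t)
{
   double d1 = (log(s / k) + (r + 0.5 * sigma * sigma) * t)
               / (sigma * sqrt(t));
   double d2 = d1 - sigma * sqrt(t);
   double nd1 = 0.5 * (1.0 + erf(d1 / sqrt(2.0)));   /* N(d1) */
   double nd2 = 0.5 * (1.0 + erf(d2 / sqrt(2.0)));   /* N(d2) */
   return s * nd1 - k * exp(-r * t) * nd2;
}

/* The matching declaration, placed in a VBA module:

   Declare Function black_scholes_call Lib "myfuncs.dll" _
       (ByVal s As Double, ByVal k As Double, ByVal r As Double, _
        ByVal sigma As Double, ByVal t As Double) As Double

   After this, black_scholes_call can be called from a macro or
   used as a UDF like any other VBA function.                      */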

We look forward to hearing from our users and anyone who is interested in using Excel in conjunction with NAG. How do you use Excel? What example programs do you think we should have on our website? Currently we have many examples of using NAG via VBA, but not many on the other methods.

Tell us what you would find useful!

Friday, 19 August 2011

What is this HPC thing?

I’m sure something like this is familiar to many readers of this blog. The focus here is HPC, but there is a similar story for mathematicians, numerical software engineers, etc.

You've just met an old acquaintance. Or a family member is curious. Or at social events (when social means talking to real people not twitter/facebook). We see that question coming. We panic. Then the family/friend/stranger asks it. We freeze. How to reply? Can I get a meaningful, ideally interesting, answer out before they get bored? What if I fail to get the message across correctly? Oops, this pause before answering has gone on too long. Now they are looking at me strangely. They are thinking the answer is embarrassing or weird. This is not a good start.

The question? “What do you do then?” Followed by: “Oh! So what exactly is supercomputing then?”

The problem is that it usually takes several slides or a few minutes of explanation to give a decent overview to a scientist who is new to HPC. Your questioner is almost certainly not a scientist, and maybe even thinks using email, facebook or twitter is a big computing achievement.

So, where do you start? The big computer? The science it enables? The money involved? You want to quickly convey the scale, the enormous range of user disciplines, the specialist nature, the benefits to everyday life. You want to distinguish it from laptops, tablet computers, and corporate servers. You want to avoid mentioning nuclear weapons labs. You’d even like to come across as cool, but it’s way too late for that.

Friday, 5 August 2011

Seeing the good for the trees

We put a lot of effort into trying to ensure that our numerical libraries are available on the platforms that are most popular with our users. For each library, this leads to a proliferation of implementations, each of which is targeted at a specific combination of compute processor, operating system and compiler. The details of current implementations are on our website - for example, here is the list for the NAG Fortran Library, which also includes download links for each implementation. Although this list - which currently features forty-nine implementations - is an impressive array (and the fruits of a great deal of careful work on the part of our implementation team), its presentation can be tricky to navigate for those searching for the appropriate implementation for their particular system.


Thursday, 14 July 2011

Types in The NAG Toolbox for Matlab®

I was recently asked why the NAG Toolbox for Matlab requires users to be explicit about the types of parameters passed to NAG routines, for example:

[vr, info] = f08nf(int64(1), int64(4), vr, tau);

and similarly why output parameters like info in this example have class int64 as opposed to double.

The original reason for this was efficiency. The NAG Toolbox consists of wrappers to NAG Library routines and there are always issues in transforming data into the correct format when calling between different languages. As well as the top-level call (like the one shown here), we also have to transform data when calling Matlab functions in "call backs", for example when evaluating the objective function during the solution of an optimisation problem. In such cases there may be many thousands of calls and so we want to make the data transformation as efficient as possible. It was for this reason that we decided to put the onus on the user to do any coercions and checks on the validity of those coercions. When we beta tested our original prototype the feedback we got was that this wasn't a big deal.

Tuesday, 12 July 2011

Why migrate from legacy systems?

I was tempted by a colleague into posting an answer to a question on LinkedIn. I thought it might be useful to reproduce that here for a potentially different audience ...

Q: (from LinkedIn)
"Why is it important to migrate from legacy systems? Is there anyone who can simplify the migration process?"

A: (my answer, there are others too)

Why migrate from legacy systems? There are two aspects to consider - protection and opportunity.

Friday, 8 July 2011

Using NAG .NET methods in LabVIEW

Recently we were asked by users at a few European universities about how to call the NAG Library from within the popular LabVIEW programming environment. They were interested in doing this because they wanted to supplement the functionality of their LabVIEW applications using some of the numerical routines offered by NAG. We already have some preliminary material on our website which describes a few aspects of using the NAG Fortran Library in LabVIEW, but alternative approaches (which could be viewed by some as reflecting more modern programming paradigms such as object orientation) are also possible. Specifically, we’ve been taking a look at calling routines from the NAG Library for .NET from LabVIEW, and we describe some of this work here.

Monday, 27 June 2011

ISC11 Review

Several of NAG's staff attended the International Supercomputing Conference (ISC) in Hamburg, Germany last week. We took part in several aspects of the show ("Talk to us in Hamburg"). I also posted an ISC11 preview with comments on what I would be looking out for. So, what were my impressions after the show?

First, as ever, ISC is a (the?) great mid-season HPC event - breathing renewed life into the HPC community half-way around the world and half-way around the year from the biggest show of the HPC calendar ("SC" in the USA each November).

In my preview, I said I was watching out for three battles:

Thursday, 23 June 2011

Storing Banded Matrices for Speed

Introduction

A real n-by-n matrix A will, in general, be defined by its n² elements that would need to be stored for computations to be performed using A. If n is very large, this can amount to a lot of storage.
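When A is banded - that is, when its nonzeros are confined to a few diagonals either side of the main diagonal - far less storage is needed. Here is a minimal sketch (illustrative only, not NAG code) of the LAPACK-style band storage scheme, in which a matrix with kl subdiagonals and ku superdiagonals is packed into a (kl+ku+1)-by-n array, and a matrix-vector product then touches only the stored entries:

#include <stdio.h>

#define N  6
#define KL 1                     /* subdiagonals   */
#define KU 2                     /* superdiagonals */

int main(void)
{
   double a[N][N];               /* full matrix, mostly zero */
   double ab[KL + KU + 1][N];    /* packed band storage      */
   double x[N], y[N];
   int i, j, ilo, ihi;

   /* Build a test matrix that is zero outside the band. */
   for (i = 0; i < N; i++)
      for (j = 0; j < N; j++)
         a[i][j] = (j - i > KU || i - j > KL)
                   ? 0.0 : 10.0 * (i + 1) + (j + 1);

   /* Pack: A(i,j) is stored at ab[KU+i-j][j] (0-based). */
   for (j = 0; j < N; j++) {
      ilo = (j > KU) ? j - KU : 0;
      ihi = (j + KL < N - 1) ? j + KL : N - 1;
      for (i = ilo; i <= ihi; i++)
         ab[KU + i - j][j] = a[i][j];
   }

   /* Band matrix-vector product y = A*x, using only the
      (KL+KU+1)*N stored entries rather than all N*N.     */
   for (j = 0; j < N; j++) x[j] = 1.0;
   for (i = 0; i < N; i++) y[i] = 0.0;
   for (j = 0; j < N; j++) {
      ilo = (j > KU) ? j - KU : 0;
      ihi = (j + KL < N - 1) ? j + KL : N - 1;
      for (i = ilo; i <= ihi; i++)
         y[i] += ab[KU + i - j][j] * x[j];
   }

   for (i = 0; i < N; i++)
      printf("y[%d] = %g\n", i, y[i]);
   return 0;
}

For large n and a small bandwidth, this cuts both the storage and the work of a matrix-vector product from order n² down to order n.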


Wednesday, 15 June 2011

Talk to us in Hamburg - ISC11

NAG will be at the International Supercomputing Conference (ISC) in Hamburg, Germany next week (June 19-23). Come and see us at booth number 830 in the main exhibition hall. We will be demonstrating how our products, services and training can help your HPC applications.

Products on show will include the NAG Library for SMP & Multicore, the NAG Parallel Library, numerical routines for GPUs and the NAG Toolbox for MATLAB®.

We will also be able to tell you more about how NAG’s HPC training courses, impartial technology advice and HPC software engineering services are helping HPC users around the world.

We will have the latest news and code performance success stories from the Computational Science and Engineering Support Service provided by NAG as part of the UK’s HECToR National Supercomputing Service.

NAG is even contributing to the technical program. We are joint hosts (with Intel and RRZE Erlangen) of BoF session 7: “Data‐Parallelism/SIMD for HPC in the Future” on Wednesday. And NAG’s Andrew Jones is one of the four panel members on the conference closing “Analyst Crossfire”, hosted by Addison Snell of InterSect360 on Thursday.

Finally, NAG is expanding our HPC team in the USA and is looking for HPC programmers or CSE experts to be based at a customer site in Houston – so please come and talk to us if you are interested or can recommend anyone.

Tuesday, 14 June 2011

Now hear this

Recently I’ve been doing some work producing a web-based seminar (or webinar, to use the fashionable neologism) which demonstrates some aspects of NAG’s optimization routines and their use in the finance industry (specifically for the optimization of financial portfolios). We originally scheduled a live broadcast of the seminar using the GoToWebinar package, which works very well; we’ve also used Webex’s offering, with similar results. Unfortunately, in spite of a lot of preparation and rehearsal, some technical problems (unconnected with the package) arose during the broadcast which rendered the recording of the session useless. Reasoning that better results might be obtained through simplification (our broadcast was a joint effort between two sites, which contributed to the problems), we decided to try just recording the seminar and putting it up as a video.

And so it was last week that I found myself talking to my computer in an isolated room whilst deftly manipulating slides and demos. It was just as if I was giving a real live seminar - albeit one that nobody had turned up to (come to think of it, that wasn’t the first time such a thing had happened). I used CamStudio 2.0 to record the session (an earlier attempt using the more up-to-date CamStudio 2.6b failed on this Windows 7 machine for some unknown reason) and Windows Live Movie Maker to do some very minimal editing of the resultant video file. The results can be seen and heard here and here (there's also some explanatory text about the seminar on this page for the curious). I was quite pleased with the way they turned out, although I’d be interested to hear from anyone who knows of a tool that can automatically remove the large number of um’s and err’s from the soundtrack.

Thursday, 2 June 2011

.NET examples: C++ and using Object fields

Recently we had a question from a user of the NAG Library for .NET. He wondered whether it was possible to use methods from objects, rather than static methods, as callback parameters to the NAG methods. The library was designed such that this should be possible, but we were made aware that there were not many examples of this usage in the documentation. Also, the user was coding in C++/CLR, for which we currently have no examples; all the existing examples are in C#, F# or VB.NET.

We took the existing C# example for the optimisation routine e04uc and modified it in two ways: first converting it to C++/CLR, and then modifying the example so that the objective function being minimised is a method of an object (of a locally declared class, NagDelegate), demonstrating two ways in which data may be passed between successive calls to this callback parameter.

Monday, 23 May 2011

Programming gnuplot

One of the side effects of having the word 'visualisation' in your job title is that people expect you to know something about the subject. Accordingly, every now and again I get contacted by one of my colleagues who needs to draw a graph, or reformat an image, or incorporate a visualisation into a document, and is - usually - too busy with other (probably more important) activities to spend a lot of time figuring out how to make a graphics package behave itself and do the right thing.

Wednesday, 4 May 2011

Good Business Analytics Conference

I have not yet written a note about the good experience I had at the 2011 INFORMS Conference on Business Analytics and Operations Research in Chicago, where NAG had a booth with demonstrations of our library routines... I guess 'now' is as good a time as any to do so. The conference was very well attended - up around 30% over last year by many accounts - and included a wide variety of analytical types: academics and practitioners with many different specialities. It was a good opportunity to understand how the best analytical techniques are applied in different environments. During the three days my colleagues and I managed to talk with lots of people - some were users of NAG routines, and some did not know us at all. It was a pleasure to meet all of you. My note is just to say thanks to fellow attendees, as well as the organizers, for a stimulating 3 days – I hope to speak with many of you again soon.

Monday, 11 April 2011

Aging Fortran Codes - A Crisis in Slow Motion

Fortran isn't dead, but some of the critical scientific codes written in it will be on life-support soon if research sponsoring organizations don't react. It was just a little over four years ago that we read about the death of John Backus, the leader of the IBM team that invented the Fortran language in 1954. It was 54 years ago that the first Fortran compiler was released commercially.

In that span of time millions of lines of Fortran code have been written in Fortran IV, Fortran 66 and 77, as well as variants such as WATFOR. In listening to managers of codes at DOE, NASA, universities and private companies (including independent software vendors) over the past several years, I am constantly amazed at how much code is in the early versions of Fortran.

People readily acknowledge the amount of code and the concern about the aging "expertise base" (i.e., people) who are charged with maintaining and improving the code. They recognize that much of this code is "mission-critical" to research in physics, chemistry, aerospace and alternative energy, and worry about the long certification process for modernized versions of the code. And there the "disconnect" starts.

When I ask how much resource they are putting into modernizing the codes to preserve the code and help it cope in a multi/many-core world the answer I often get is, effectively, "that's not one of our budget priorities right now".

This quite unsatisfying situation suggests two responses. For those of us at NAG who get called upon to port, modernize and scale codes as part of our work with HECToR users and other customers, it suggests that we need to continue to find more cost-effective tools and processes to convert, modernize and scale codes. The bigger challenge is for research sponsoring organizations - those above, the NSF in the US, the various research councils in the UK - not only to recognize the issue but to find the resources to support the effort before we lose the base of talent and institutional "memory" needed to do the job.

Monday, 4 April 2011

Turning off logic

Everybody knows about the two halves of our brain. One half deals with logical thought, facts etc., while the other deals with feelings, imagination and other things. They are linked at the base of the brain by the corpus callosum, the main channel through which they communicate with each other. The two halves work together as they take in information from the world through all of your senses, painting a "complete" picture of what is going on around you. For example the left side of your brain deals with the names of objects, whereas the right side deals with an object's function, so that when presented with a computer you are able to name it and describe what it does.

Artists and programmers are probably at two extremes of the spectrum when it comes down to which parts of our brain we use for our work. Artists almost certainly use much of their right brain, whereas I imagine that programmers use a lot of their left brains.

Whatever parts of our brain we use get exercised and will become more adept at what they do. So for programmers their logic should improve; for artists their artistic skills should improve also. But what happens to those parts of the brain that we don't use? Are they getting worse? Better? Staying the same? What would life be like if we made more of an effort to spread the workload over our whole brain?

Monday, 21 March 2011

Sticking to coding guidelines

Here at NAG we have a lot of rules governing how we write our code. These include things like which constructs we're allowed to use (not all compilers support complete specifications of languages such as Fortran 2003), how we allocate and deallocate memory (we insist on explicit deallocation in the same program scope as the original allocation to avoid memory leaks), how we name and declare variables etc. For many of these we have tools which can verify that the rules are being adhered to, some of which are based on our own compiler. For the rest, however, we rely on an internal peer review process where a colleague checks our work independently to ensure that it conforms to our standards.

Recently a case arose where a relatively new colleague was writing a rather complicated piece of code and he came up against a problem. Being a clever chap he came up with an elegant solution which worked perfectly in the context that he was working in and required only a minimal change to the user interface of his routine. Strictly speaking that change was a subtle deviation from our standards since it involved some arrays being declared with explicit lengths rather than being assumed size. What he didn't realise was that, when this code came to be incorporated into some of our other products, his solution would cause problems with the way the parameters were being used there, requiring a lot of effort to work around.

Luckily he was paired with one of our most eagle-eyed peer reviewers, who queried the fact that these arrays did not conform to our normal practice. This in turn led to the original problem, and the issues with applying his solution in our other products, being aired. In the short term a fix conforming with our standards is being implemented which will work correctly in all our products, and in the longer term we will adopt an improved mechanism that better addresses his original problem.

One lesson from this is that, particularly in a multi-developer environment like ours, it's important to stick to agreed standards since it's not always clear what the consequences of breaking them will be. But we all make mistakes, and it would be nice if we had tools which could validate stylistic programming rules alongside the more mechanistic ones. It might be possible to do this with formal specification languages, but that seems like using a sledgehammer to crack a nut. We already have tools which check the consistency of our documentation and our software, e.g. to ensure that the types and dimensions of parameters match. Maybe we should develop this idea further to include more complicated checks.

Friday, 18 March 2011

Performance and Results

What's in a catch phrase?

As you will hopefully know, NAG's strapline is "Results Matter. Trust NAG".

What matters to you, our customers, is results. Correct results that you can rely on. Our strapline invites you to trust NAG - our people and our software products - to deliver that for you.

When I joined NAG to help develop the High Performance Computing (HPC) services and consulting business, one of the early discussions raised the possibility of using a new version of this strapline for our HPC business, reflecting the performance emphasis of the increased HPC activity. Probably the best suggestion was "Performance Matters. Trust NAG." Close second was "Productivity Matters. Trust NAG."

Thursday, 17 March 2011

The Addictive Allure of Supercomputing

The European Medical Device Technology (EMDT) magazine interviewed me recently. InsideHPC has also pointed to the interview here.

The interview discusses false hopes of users: "Computers will always get faster – I just have to wait for the next processor and my application will run faster."

We still see this so often - managers, researchers, programmers even - all waiting for the silver bullet that will make multicore processors run their application faster with no extra effort from them. There is nothing now or coming soon that will do that except for a few special cases. Getting performance from multicore processors means evolving your code for parallel processing. Tools and parallelized library plugins can help - but in many cases they won't be a substitute for re-writing key parts of the code using multithreading or similar techniques.

Of course, NAG can help ...

Thursday, 10 March 2011

NAG out and about

The NAG website has a section called "Meet our experts - NAG out and about", which gives a list of upcoming events worldwide that NAG experts will be attending or presenting at.

The page also notes: "We regularly organise and participate in conferences, seminars and training days with our customers and partners. If you would like to talk to us about hosting a NAG seminar at your organisation or any training requirements you might have email us at sales@nag.co.uk".

In my own focus of high performance computing (HPC), I have previously written (for ZDNet UK) about some key supercomputing events. For those of you interested in meeting up with HPC experts (especially from NAG!), I have set up a survey of HPC events - please let us know which events you plan to attend in 2011 - and see which events other readers of The NAG Blog are attending.

Tuesday, 15 February 2011

Calling NAG routines from IDL

The collection of numerical algorithms contained in the NAG Library has been a valuable and widely-used resource for many years, and it's something that we're continually updating and maintaining lest the expression "has been" become an uncomfortably apt description. To this end, we're - for example - developing and testing new routines for the next Mark of the Library, have just released a new version of the Library for the .NET environment, and are continuing to try and make our algorithms accessible from as many languages and packages as possible. Besides .NET, these include MATLAB®, Excel, Maple, Java, Python and many others (some of which are listed on this page). In spite of our best efforts in this area, however, there will always be environments that we've yet to cover fully.

Fortunately, our users - besides possessing a commendably high level of discernment in their selection of numerical libraries - are talented and resourceful individuals. For example, we've recently been in contact with a group at Culham Centre for Fusion Energy, which is the UK's national fusion research laboratory. They're users of the popular IDL programming language and visualization system, and were keen to enhance IDL's functionality by calling NAG Library routines from within that environment. They found that they were able to do this by writing a C wrapper around each Fortran routine call, and then passing the definition of the wrapper routine to IDL via its Dynamically Loadable Module (DLM) packaging method. We highlight their work - which they've been kind enough to share with us - using the E02BEF routine as an example; this calculates the cubic spline approximation to a set of data points (and has previously formed the basis of one of the demos of the NAG Toolbox for MATLAB®).

We first give the wrapper function, which checks its input parameters and then calls the NAG routine. The function prototypes for all the routines in the NAG Fortran Library are contained in the header file nagmk22.h; this is part of the Fortran Library distribution, and makes it easier to call any of the routines from within C or C++. We note in passing that the wrapper is only required because we're using the NAG Fortran Library. IDL is written in C, so DLMs must also be written in C; if the NAG C Library was being used instead, this code would be simpler.

//---------------------------------------------------------------
void IDL_CDECL idl_e02bef(int argc, IDL_VPTR argv[], char *argk){
// IDL wrapper function for NAG routine E02BEF.
//
// Arguments: 
//
//  0 start  char*   input  "C" or "W": cold or warm start
//  1 NP     int     input  number of data points
//  2 x      double* input  independent (abscissa) values      
//  3 y      double* input  dependent (ordinate) values
//  4 w      double* input  weights
//  5 s      double  input  smoothing factor 
//  6 nest   int     input  over-estimate of the no. of knots
//  7 n      int     input  prior no. of knots for a warm start
//                   output total number of spline knots      
//  8 lambda double* input  prior knot positions
//                   output knot positions
//  9 c      double* output spline coefficients
// 10 fp     double  output sum of square weighted residuals 
// 11 wrk    double* input  prior work array for a warm start
//                   output work array
// 12 lwrk   int     input  size of work array: 4*NP+16*nest+41
// 13 iwrk   int*    input  prior integer work array (warm start)
//                   output integer work array
// 14 ifail  int     input  must be set to 0, -1 or 1
//                   output 0 unless an error occurred
//
//      call E02BEF(start, NP, x, y, w, s, nest, n, 
//                  lamda, c, fp, wrk, lwrk, iwrk, ifail) 
//
//---------------------------------------------------------------
   char start[2];
   int i, NP, nest, lwrk, n, *iwrk, ifail, length_start;
   double s, fp, *x, *y, *w, *lambda, *c, *wrk;
   
// Import all arguments, and check their values.
   
// First, the input parameters only.      
   IDL_ENSURE_STRING(argv[0]);      
   IDL_ENSURE_SCALAR(argv[0]);
   strncpy(start, IDL_STRING_STR(&argv[0]->value.str), 1);
   start[1] = '\0';
   if(strlen(IDL_STRING_STR(&argv[0]->value.str)) != 1) 
      fprintf(stderr,"Start String Length Incorrect\n");

   IDL_ENSURE_SCALAR(argv[1]);
   NP = (int)IDL_LongScalar(argv[1]); 
   
   IDL_ENSURE_ARRAY(argv[2]);
   x = (double *)argv[2]->value.arr->data;   
   if(argv[2]->value.arr->n_elts < NP) 
      fprintf(stderr,"X Array Too Small\n");
      
   IDL_ENSURE_ARRAY(argv[3]);
   y = (double *)argv[3]->value.arr->data;   
   if(argv[3]->value.arr->n_elts < NP) 
      fprintf(stderr,"Y Array Too Small\n");  
    
   IDL_ENSURE_ARRAY(argv[4]);
   w = (double *)argv[4]->value.arr->data;   
   if(argv[4]->value.arr->n_elts < NP) 
      fprintf(stderr,"W Array Too Small\n");  
   
   IDL_ENSURE_SCALAR(argv[5]);
   s = (double)IDL_DoubleScalar(argv[5]); 
   
   IDL_ENSURE_SCALAR(argv[6]); 
   nest = (int)IDL_LongScalar(argv[6]); 
   
   IDL_ENSURE_SCALAR(argv[12]); 
   lwrk = (int)IDL_LongScalar(argv[12]); 
   
// Second, the input & output parameters.  
   IDL_ENSURE_SCALAR(argv[7]);
   IDL_EXCLUDE_EXPR(argv[7]);      
   n = (int)IDL_LongScalar(argv[7]);
        
   IDL_ENSURE_ARRAY(argv[8]);
   IDL_EXCLUDE_EXPR(argv[8]);
   lambda = (double *)argv[8]->value.arr->data;   
   if(argv[8]->value.arr->n_elts < nest) 
      fprintf(stderr,"Lambda Array Too Small\n");
           
   IDL_ENSURE_ARRAY(argv[9]);
   IDL_EXCLUDE_EXPR(argv[9]);
   c = (double *)argv[9]->value.arr->data;
   if(argv[9]->value.arr->n_elts < nest) 
      fprintf(stderr,"C Array Too Small\n");
   
   IDL_ENSURE_SCALAR(argv[10]);
   IDL_EXCLUDE_EXPR(argv[10]); 
   fp = (double)IDL_DoubleScalar(argv[10]);
   
   IDL_ENSURE_ARRAY(argv[11]);
   IDL_EXCLUDE_EXPR(argv[11]);
   wrk = (double *)argv[11]->value.arr->data; 
   if(argv[11]->value.arr->n_elts < lwrk) 
      fprintf(stderr,"Work Array Too Small\n");
   
   IDL_ENSURE_ARRAY(argv[13]);
   IDL_EXCLUDE_EXPR(argv[13]);
   iwrk = (int *)argv[13]->value.arr->data;
   if(argv[13]->value.arr->n_elts < nest) 
      fprintf(stderr,"IWork Array Too Small\n");
   
   IDL_ENSURE_SCALAR(argv[14]);
   IDL_EXCLUDE_EXPR(argv[14]);      
   ifail = (int)IDL_LongScalar(argv[14]);        

//---------------------------------------------------------------
// Call the NAG routine.  It was written in Fortran; here, we 
// assume we're running under Linux or Unix, where most 
// compilers convert the name to lower case and prepend an 
// underscore.  The function prototype is defined in nagmk22.h 
// (above).
//
// Because one of the parameters is a character string (and 
// we're calling Fortran from C), we also have to pass its 
// length.  Under this calling convention, this goes at the end 
// of the argument list.
   length_start = strlen(start); 
   e02bef_(start, &NP, x, y, w, &s, &nest, &n, lambda, c, &fp, 
      wrk, &lwrk, iwrk, &ifail, length_start); 

// Check for unsuccessful call of NAG routine (see documentation)
// and output warning if necessary.
   if(ifail != 0) 
      fprintf(stderr,
      "Warning: e02bef returned with ifail = %d\n", ifail);
                            
//---------------------------------------------------------------
// Return the scalar data to IDL (we don't have to explicitly 
// return the arrays, as they're automatically passed by 
// pointer/reference).
   IDL_StoreScalar(argv[7],  IDL_TYP_LONG,   (IDL_ALLTYPES*)&n);
   IDL_StoreScalar(argv[10], IDL_TYP_DOUBLE, (IDL_ALLTYPES*)&fp);
   IDL_StoreScalar(argv[14], IDL_TYP_LONG, 
      (IDL_ALLTYPES*)&ifail);
}

Each NAG routine that we want to use would require a separate wrapper, constructed along similar lines to those illustrated in this example.

Here's the rest of the code (which should precede the specification of the wrapper function in our file - which could be called, say, idlnag.c). Along with the NAG function prototypes, we also include idl_export.h, which defines the IDL API. Next, after defining a useful macro and the prototype for the wrapper function, we set up the DLM by first defining a structure idlnag_procedures that will be used to pass information about our routine to IDL, and then using that in the specification of a function called IDL_Load. Every DLM must have this function, which is called when the DLM is first loaded; it calls IDL_SysRtnAdd to load the routine definition, and then defines the exit handler (which does nothing in our case).

#include <stdio.h>
#include <string.h>
#include "idl_export.h" // IDL API Header
#include "nagmk22.h"    // NAG C/Fortran Prototypes

// Useful macro.  
#define ARRLEN(arr) (sizeof(arr)/sizeof(arr[0]))

//---------------------------------------------------------------
// Procedure prototypes - one for each routine in this module.
extern void IDL_CDECL idl_e02bef(int argc, 
   IDL_VPTR argv[], char *argk);

//---------------------------------------------------------------
// Structure used to pass information about our routine to IDL.  
// Other routines would require similar entries in this table.
static IDL_SYSFUN_DEF2 idlnag_procedures[] = {
   {(IDL_FUN_RET) idl_e02bef, // name of routine in here.
   "IDL_E02BEF",              // name of the IDL procedure.
   15,            // Min no. of arguments for routine.
   15,            // Max "   "      "      "     " 
   0,             // Further options which we don't use.
   0              // Reserved for IDL use - always set to 0.
   },
};

//---------------------------------------------------------------
// This function is called when IDL is shut down.
void idlnag_exit_handler(void) {
   // Nothing to do.
}

//---------------------------------------------------------------
int IDL_Load(void) {
// This function is called when IDL loads this module.  

// Call IDL_SysRtnAdd to register the routine contained in this 
// module.  Note that this routine is what IDL calls a 
// procedure - i.e. it doesn't return a value (cf. what IDL 
// calls a function, which does).  This is specified by setting 
// the second parameter in the call to FALSE (it's TRUE for 
// functions). 
   if (!IDL_SysRtnAdd(idlnag_procedures, FALSE, 
      ARRLEN(idlnag_procedures))) return IDL_FALSE;

// Register the exit handler.
   IDL_ExitRegister(idlnag_exit_handler);

   return(IDL_TRUE);
}

idlnag.c must then be compiled and linked into a shared library (called idlnag.so on Unix) which should be placed in a location on the directory path specified by the IDL_DLM_PATH environment variable. The other file that must be in that location is the so-called DLM module description file. In our example, its contents would be:

MODULE idlnag                    # Name of the DLM
DESCRIPTION IDL Interface to NAG # Optional one-line description
VERSION 1.0                      # Optional version number
PROCEDURE IDL_E02BEF 15 15 # Routine name, min, max no. of args

Here, the name of the DLM must be the same as the name of the shared library we've created, whilst the routine name and the minimum and maximum number of arguments must be the same as that specified in the IDL_SYSFUN_DEF2 structure - i.e. idlnag_procedures in our case.

Once the new function has been defined in this fashion, it will be loaded the next time that IDL is started, and it can be invoked within IDL as

IDL_E02BEF, start, NP, x, y, w, s, nest, n, lambda, c, fp, $
            wrk, lwrk, iwrk, ifail

where start is an IDL variable of type string, NP is a long, x, y and w are double arrays of size NP, s is a double, nest and n are longs, lambda and c are double arrays of size nest, fp is a double, lwrk is a long, wrk is a double array of size lwrk, iwrk is a long array of size nest and ifail is a long. The next step would be to call E02BBF, which evaluates the spline curve using the parameters calculated by E02BEF - specifically, the knot positions lambda(nest), and the spline coefficients c(nest) - see this demo for an illustration (albeit using a different package).

A more complete description of how to call external routines from within IDL is available here. That account also illustrates the use of IDL keywords to modify the behaviour of the function; their use has been eschewed in this example in order to retain the exact correspondence between the argument list for the NAG Fortran Library routine and the way in which it's called in IDL. We've also opted for a simple error-handling mechanism in the wrapper; alternatives which are better-integrated with IDL are possible. Finally, we note that more information on IDL DLMs is available from the IDL documentation, which can be found in various locations (such as here, for example).

Wednesday, 9 February 2011

Wandering Precision

If you call the same numerical library routine twice with exactly the same input data, you expect to get identical results - don't you?

Well, it all depends. If you're calling a routine that somehow uses random numbers, for example a solver based on a Monte Carlo method, then the results might be different - but then you'd probably accept that (assuming that you'd been warned in the documentation).

But what if you're calling a numerical routine that in theory contains no randomness, but you still get slightly different results? And what if those slightly different results cause you problems - for example if you depend on repeatability for testing purposes? Then you might waste a lot of time hunting for the cause.

Well, this problem has turned up more than once recently for us at NAG, and in the cases we've looked at it has turned out to be a problem with the way the machine does its arithmetic - or rather, in the way that the compiler tells the machine to do its arithmetic.

Let me give you an example. Suppose you want to compute the inner product of two vectors x and y, each of length n. Simple - you just multiply each element of x by the corresponding element of y, then add all the products together to get the answer. No apparent randomness there. What could go wrong?

Well, if the hardware you are running on is equipped with Streaming SIMD Extension (SSE) instructions - and almost all modern hardware is - here's what.

SSE instructions enable low-level parallelism of floating-point arithmetic operations. For example, you can hold four single precision numbers at the same time in a 128-bit register, and operate on them all at the same time. This leads to massive time savings when working on large amounts of data.

But this may come at a price. Efficient use of SSE instructions can sometimes depend on exactly how the memory used to store vectors x and y is aligned. If it's aligned nicely - by which I mean, in the inner product example, that the addresses of the first elements of the arrays x and y are multiples of 16 bytes - then that's good. The hardware can efficiently move numbers from memory into registers to work on them, using instructions that depend on that alignment. So for our inner product, with a good optimizing compiler, we'd load numbers four at a time, multiply them together four at a time, and accumulate the results as we go along into our final result.

But if the memory is not nicely aligned - and there's a good chance it may not be - the compiler needs to generate a different code path to deal with the situation. Here the result will take longer to get because the numbers have to be accumulated one at a time. At run time, the code checks whether it can take the fast path or not, and works appropriately.

The problem is that by messing with the order of the accumulations, you are quite possibly changing the final result, simply due to rounding differences when working with finite precision computer arithmetic.

Instead of getting

  s = x1*y1 + x2*y2 + x3*y3 + ...

you get

  s = (x1*y1 + x5*y5 + x9*y9  + x13*y13) +
      (x2*y2 + x6*y6 + x10*y10 + x14*y14) + ...

Chances are that the result will be just as accurate either way - but it's different by a tiny amount. And if that tiny difference leads to a different decision made by the code that called the inner product routine, the difference can be magnified.
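Here is a minimal sketch in C (illustrative only, not NAG code) that mimics the two accumulation orders; run it and the two printed sums will typically agree to nearly full single precision but differ in the last bits.

/* A minimal sketch of how accumulation order changes a
   finite-precision dot product.  The second loop mimics the four
   interleaved partial sums of an SSE code path.                 */
#include <stdio.h>

#define N 1000000               /* divisible by 4 */

static float x[N], y[N];

int main(void)
{
   int i, j;
   float s1, s2, p[4] = {0.0f, 0.0f, 0.0f, 0.0f};

   for (i = 0; i < N; i++) {
      x[i] = 1.0f / (float)(i + 1);
      y[i] = (float)(i % 7) - 3.0f;
   }

   /* Sequential: s = x1*y1 + x2*y2 + x3*y3 + ... */
   s1 = 0.0f;
   for (i = 0; i < N; i++)
      s1 += x[i] * y[i];

   /* Four interleaved partial sums, combined at the end. */
   for (i = 0; i < N; i += 4)
      for (j = 0; j < 4; j++)
         p[j] += x[i + j] * y[i + j];
   s2 = (p[0] + p[1]) + (p[2] + p[3]);

   printf("sequential: %.9g\n4-way:      %.9g\n", s1, s2);
   return 0;
}

Both answers are equally good approximations to the exact dot product; the point is simply that they need not be bit-identical.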

In some cases, for example when solving a mathematical optimization problem using a local minimizer, the final result can be completely different - and yet still valid - because a local minimizer might converge to any one of several different minimum points, if the function does not have a unique minimum.

And all because of how a piece of memory happens to be aligned. If you allocated the memory yourself you might be able to do something about it, but on the other hand the alignment might be completely outside your control. NAG users have reported cases where running a program from the command line would give consistently different results from running the same program by clicking on its name in Windows Explorer.

Is there anything we can do about problems like this? The fact is that NAG users want their code to run fast, so using SSE instructions makes sense. We can lobby the compiler writers to ask them to be careful how they optimize the code they produce, but they won't necessarily take notice. Compiler writers could also help by making their memory allocation routines only return 16-byte aligned memory, specifically to help with the use of SSE instructions. In the past, though, I had no success trying to convince the gfortran compiler folks to do that. In any case, even if they did, it wouldn't always help - if the first element of a single or double precision array is aligned on a 16 byte boundary, the second element will definitely not be, so if you want to operate on a vector starting at that location you've no chance.
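Where you do control the allocation, requesting aligned memory explicitly is straightforward. A minimal sketch (POSIX-specific; on Windows, _aligned_malloc and _aligned_free play the same role):

/* Request memory whose address is a multiple of 16 bytes -
   one of the few alignment levers an application does control. */
#define _POSIX_C_SOURCE 200112L   /* for posix_memalign */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
   double *x;
   if (posix_memalign((void **)&x, 16, 1000 * sizeof *x) != 0) {
      fprintf(stderr, "allocation failed\n");
      return 1;
   }
   printf("x = %p, 16-byte aligned: %s\n",
          (void *)x, ((size_t)x % 16 == 0) ? "yes" : "no");
   free(x);
   return 0;
}

As noted above, though, this only fixes the alignment of the first element; an operation that starts mid-array can still land on the unaligned code path.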

We could choose not to use SSE instructions at all. But, apart from efficiency reasons, the "legacy" extended precision 80-bit register arithmetic which started in the 1980s with the 8087 numeric co-processor had its own "wobbling precision" problems for numerical code.

As new hardware with AVX instructions and 256 bit registers comes along, even more numerical work can be done in parallel. So - it seems that for the foreseeable future we're just going to have to live with this, and try to minimize the effects on NAG users by means of documentation.

Thursday, 27 January 2011

The joy of creating new solvers...

It’s an exciting time at the moment with a whole range of new routines being completed ready for the next Library release. The development group here are a highly motivated set of hard-working folks, many with post-doc experience in a plethora of different areas of mathematics and computer science. While bringing their own individual knowledge to the code, they also get together as a well orchestrated team as all the methods are cross-checked, fine-tuned and documented for inclusion as new additions to the NAG Library.

The next Library release will include more optimization routines, additional regression methods, extended wavelet functions, new surface fitting routines, further ODE solvers, enhanced random number generators... to mention only some of the content. So, particularly at the moment, it’s a real pleasure to have conversations with the individual developers and hear how some of the subtle design decisions that are made during the development process provide such large benefits, in usability, accuracy and performance, to users of our routines. For example one of the new optimization routines provides a new approach to solving bound-constrained problems without requiring derivatives – making this solver very efficient for a range of large-dimensional problems.

All these recent conversations have reminded me that we certainly have the expertise and the right attitude to help you. So do get in touch, at any time, if you want to discuss which of the hundreds of NAG routines you can use to solve your specific problem.

Friday, 21 January 2011

True Customer Engagement (when social is not so social)

About a year ago NAG embarked on a new journey into the realms of social media as a new way of engaging with our customers and potential customers. For many years we’d relied on communicating via the website, email, telephone and letter (all of which we still do of course), but we felt social media outlets could provide a great new way of improving communication.

First we created the NAG Blog. The vision for the blog is to provide an outlet for NAG staff to share their thoughts with the outside world in an informal way. We also created a space for NAG on Twitter and a group on LinkedIn. So far, we’ve been pleased with how all three have grown in followers and readers, but feel that what might be lacking is true interaction with our audience.

The statistics show that people are reading and following, yet there's little two-way dialogue. I’ve been trying to understand why this is, and I've concluded that it's probably down to a host of reasons. Here are my thoughts:

1. Maybe our audience don’t have time or inclination to communicate with us through social media?

2. Or, maybe we’re not asking the right questions of our audience, or in fact any questions?

3. Maybe our audience still prefer more traditional methods of communication?

During 2011 we’ll be thinking of ways to further enhance dialogue with our customers. We really do want to hear how you’re using our software and learn about your successes and failures. This helps us grow and improve as an organisation, not only by making product offerings more customer centric, but improving our customer service.

Please do get in touch if you have ideas or thoughts of your own on this. I’d love to hear from you.

Friday, 14 January 2011

NAG Life Service Recognition Award

You don’t often feel like singing about being 40 years old, except when you are a software company with such a rich history and so many good friends, colleagues and contributors to remember. So it would seem very fitting in NAG’s 40th anniversary year to create the NAG Life Service Recognition Award. The award aims to recognise valuable contribution to the company over an extended period of time. An invitation went out last week to all members of NAG to submit nominations for this award. The award will be made at AGM35 this September. It’s a shame that we have lost touch with quite a few good friends, colleagues and contributors; I would love to hear from anyone who is interested in what we are up to now.