Thursday, 24 May 2012
I will be attending, along with several of my NAG colleagues. Will you be attending? What will you be looking to learn? I will be listening out for these five key topics.
GPU vs MIC vs Other
As at ISC'11 last year (and SC11), I think there will be a strong fight for attention in the key area of manycore/GPU devices - and a matching search for evidence of real progress. So far the loudest voice has been NVIDIA's, with CUDA, especially following NVIDIA's successful GTC event recently. However, interest in Intel's MIC (Knights Corner) is strong and growing - MIC has often been a big discussion topic in workshops, conferences and meetings over the last year. As the MIC product launch gets closer, people will be making the obvious comparisons with the Kepler architecture NVIDIA announced at GTC.
What about others - will anyone else develop a strong voice in this manycore world? AMD Fusion? ARM? DSP-based products? Will we talk with the same energy about the software issues of manycore, or just the hardware choices?
What is happening with Exascale?
The quest to attain exascale computing rumbles on. I'd expect exascale to take a big share of the debate and agenda at ISC'12. What is happening with the exascale programs around the world? How are budget pressures in the weak global economy affecting the exascale ambition? How are the various national pride (I mean national competitiveness) efforts towards exascale progressing? Will the software challenges get the level of investment they need? What new technologies will emerge as candidates to solve parts of the problem of getting to exascale?
Is exascale already old hat - do we need to move on to discussing zettaFLOPS to be trendy now? Will zettaFLOPS be impossible (Sterling) or inevitable (Barr)? Maybe we should spare some discussion for making multi-petaFLOPS work properly first?
Monday, 21 May 2012
I was recently speaking to a colleague about my first couple of projects here at NAG. The first project was learning to call the Library from Python using ctypes (thanks to Mike Croucher’s blog, which helped immensely). Next was a project using the Java Native Interface (JNI), which I had difficulty using. After hearing these two pieces of information, my colleague recommended I look into Java Native Access (JNA), as it is very similar to ctypes in Python. Thus began a brief love affair! I say ‘love affair’ because my experience with the JNA was a bit of a roller coaster of highs and lows. In the beginning, the JNA and I got along great. As time went on, I was left sitting at the computer screen wondering what to do next, hoping for the JNA to fix things.
Background of JNI
NAG already has a thorough technical report on our website for calling the NAG Library using the Java Native Interface. This includes creating header files, compiling Java files, compiling the interface library, and running the program. That seems like a lot of work, even for simple functions. I was hoping the JNA would be easier.
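To give a feel for why the JNI route involves so many steps, here is a rough sketch of the Java side alone (this is not the report's actual code - the class name and the wrapper library name "besseljni" are placeholders). Even with this in hand, you still have to generate a C header from the class, write a C wrapper that calls the NAG routine, and compile that wrapper into a shared library before anything will run:

```java
// Bessel.java -- hypothetical Java side of a JNI binding to a NAG routine.
// This declaration alone is not enough: you must also generate Bessel.h
// (with javah), write a C wrapper implementing the native method, compile
// it into a shared library, and put it on java.library.path.
public class Bessel {
    static {
        // "besseljni" is a placeholder name for the compiled C wrapper library.
        System.loadLibrary("besseljni");
    }

    // Native declaration; the body lives in the separately compiled C wrapper.
    public static native double s17acc(double x);

    public static void main(String[] args) {
        System.out.println("Y0(2.5) = " + s17acc(2.5));
    }
}
```

Every native routine you want to call needs its own declaration, header entry, and C wrapper function, which is where the work piles up.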
First date with the JNA
To start using the JNA you just need to download the .jar file from https://github.com/twall/jna. Move it to the directory you will be working in and unzip it to create a com folder - and you’re done! You can now start using it. Whenever you need a package from the JNA, just import it at the top of your Java file.
My first impression of the JNA started off on a high note. I found it extremely easy to start using the interface. Looking at the code for calling the Bessel function (routine s17acc):
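The original listing is not preserved here, but a minimal sketch of a JNA call to s17acc might look like the following. The shared library name ("nagc_nag") and the treatment of the NagError argument (passing a null pointer, as the C macro NAGERR_DEFAULT does) are assumptions - check your NAG installation's documentation for the exact names on your platform. The example requires jna.jar on the classpath and the NAG C Library on the system's library search path:

```java
import com.sun.jna.Library;
import com.sun.jna.Native;
import com.sun.jna.Pointer;

// Sketch of a JNA binding for the NAG C Library Bessel function Y0 (s17acc).
// "nagc_nag" is an assumed library name for your NAG C Library installation.
public class BesselJNA {
    public interface NagLibrary extends Library {
        NagLibrary INSTANCE =
            (NagLibrary) Native.loadLibrary("nagc_nag", NagLibrary.class);

        // C prototype: double s17acc(double x, NagError *fail)
        double s17acc(double x, Pointer fail);
    }

    public static void main(String[] args) {
        // Passing a null pointer for fail mimics NAGERR_DEFAULT in C:
        // on error the library prints a message and terminates.
        double y = NagLibrary.INSTANCE.s17acc(2.5, Pointer.NULL);
        System.out.println("Y0(2.5) = " + y);
    }
}
```

Compared with the JNI route, there is no generated header and no C wrapper to compile: the interface declaration is the whole binding.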
Friday, 11 May 2012
The NAG Library contains a variety of optimization routines (for both local and global minimization) - along with, of course, a wide range of solvers for other types of problems in analytics (such as statistical analysis, correlation and regression modelling and time series analysis) and in a variety of other numerical areas. At the conference, we presented the results of some consultancy work performed for a client who was using NAG routines to solve a large-scale constrained optimization problem arising from activities such as price promotion (see abstract 4 on this list for more details).
The remainder of the conference consisted of a couple of plenary talks (from Google and eBay), an entertaining panel discussion on the perennial topic of Big Data, and a collection of contributed talks which were arranged in fifteen parallel sessions on topics like The Analytics Process, Decision Analytics, Analytics Around Us, etc. I found the standard of the presentations to be very high; a few personal highlights were: