Enable innovation and efficiency in product design and manufacture by using more powerful simulations. Apply more complex models to better understand and predict the behaviour of the world around us. Process datasets faster and with more advanced analyses to extract more reliable and previously hidden insights and opportunities.
These are all ambitions that will resonate with those seeking scientific advances, commercial innovation, industrial growth and more cost-effective research. Underpinning all of the above is the use of more powerful computing methods and technologies: faster and more capable computers but, equally important, more advanced and better-performing algorithms and software implementations.
It's a pretty convincing story for those who take the time to listen - whether business leaders, governments, or research funders. Even in these challenging economic times, it has attracted investment from industry and governments, because the potential return is well documented and significant. It is even enticing enough to interest the media and the public - especially when we use emotive descriptions like "world's fastest supercomputer", "international competitiveness in the digital economy", "personal supercomputing", and so on.
And it is this last thought that causes me to diverge from the grand theme to explore names and attention. I will come back to the main theme later (in a future blog), as it is both important and timely. But on to my side topic.
Anybody using, selling or funding the technologies and methods described above will know what I am talking about. But the names and labels applied to them vary significantly across this diverse audience: high performance computing (HPC), supercomputing, computational science and engineering, technical computing, advanced computer modelling, advanced research computing, and so on. The range of names and the diversity of the audience mean that what is a common everyday term for many (e.g. HPC) is an unrecognised, meaningless acronym to others - even though they are doing "HPC".
This can create a barrier to engaging politicians, companies that could benefit, the media, and people in search of solutions for their day-to-day modelling/simulation/data processing challenges.
Most of us who see this as part of our daily life use the terms HPC or supercomputing. How do these stack up with the wider world? Let's turn to Google Trends as an admittedly arbitrary source of statistics.
The following graph shows the search popularity of the terms "supercomputer" and "HPC" over the last few years. Clearly "HPC" is the more common keyword.
[Plot 1: blue = supercomputer, red = HPC]
But what if we add in that term that is so often considered just a buzzword by seasoned HPC professionals - "cloud computing"?
[Plot 2: blue = supercomputer, red = HPC, orange = cloud computing]
We see that in the last few years "cloud computing" has soared above the traditional names in usage. This means a wider audience - and thus more possibilities for that ambitious opening paragraph of mine.
Adding some more technical terms ("parallel computing", "multicore") shows that they hardly register in comparative popularity.
[Plot 3: blue = supercomputer, red = HPC, orange = cloud computing, green = parallel computing, purple = multicore]
Interestingly, a domain-specific term ("CFD") tracks the popularity of HPC rather than that of cloud computing.
[Plot 4: blue = CFD, red = HPC, orange = cloud computing]
You can play the same game with key technologies of the supercomputing world - e.g. [MPI, OpenMP, CUDA, OpenCL, Fortran] - and discover more interesting trends, but as this blog is already getting long, that is for another day.
I'll just leave you with this one, which might be read as speaking volumes about the challenges of delivering the promise of my opening paragraph - [computer, software, programmer, algorithm].
[Plot 5: red = computer, blue = software, orange = programmer, green = algorithm]
What interesting related trends can you find and analyse?