Comments on The NAG Blog: NAG Routines in Different Precisions

David Sayers, 18 April 2013, 17:40
Hi Simon,

I remember your need for quadruple precision, and it partly motivated this blog: I was attempting to gauge the interest in 'alternative precision' versions of the library.

You are of course correct to say that the pendulum has once again swung in favour of computing with 'mixed precisions'. At the moment only a few NAG routines that reflect LAPACK software take advantage of this. I am sure that NAG would be interested in a little more detail about the specific routine. In particular, it would be interesting to see whether a double precision implementation could be sped up by having an auxiliary use REAL rather than DOUBLE PRECISION arithmetic, and whether there are any major drawbacks to doing so.

We are emerging from a period when DOUBLE PRECISION and REAL arithmetic had similar costs, so most NAG Library code is DOUBLE PRECISION. Now, however, I agree with you that we should consider very carefully whether mixing the precisions can give worthwhile improvements. Our code base is so large that I can't imagine this happening wholesale for old code unless we get specific pointers from users such as yourself, so I appreciate your comments. I am sure that new material will use mixed precisions wherever it is appropriate to do so.

We would be grateful for any information you can send us.

Many thanks,
David Sayers (http://www.nag.co.uk)

Simon Clegg, 18 April 2013, 16:28

I can comment on item 3, and indirectly on 4:
3. I have been using routine E04UCF in quadruple precision for about 15 years with great success. The routine finds the minimum of a function of many variables. The reason for needing REAL*16 is that the values of the variables (and their contributions to the objective function) can differ by many, many orders of magnitude.

I've never had the feeling that I am one among many users with similar requirements, although I do see occasional queries about the use of extended precision on the forums of the compiler manufacturers (specifically Intel).

4. I recently had a need for a 'mixed precision' calculation for a reason that might also apply to other users and problems: when using E04UCF in quadruple precision to solve a problem with many variables (>3000), the program slowed to a crawl in a service routine that appeared only to be manipulating elements of large arrays.

The routine could not be parallelised, but reverting to double precision in this service routine appeared to give good (and much quicker) results. Fortunately this problem needed only a few extra digits of precision beyond the 16 or so attained in double precision, not the full 32 of quadruple precision; otherwise the approach might not have worked.

I did not investigate the exact reason for the slowdown (much greater than the usual factor of 5 to 10 from running in quadruple precision).
However, the fact that I encountered the problem does suggest that there may be more to creating a library of routines that can run in either double or quadruple precision, at the choice of the user, than just making use of the interface example in the blog entry.

Simon Clegg

David Sayers, 7 March 2013, 11:31

When I first started at NAG, many years ago now, single precision was appreciably quicker than double precision computation, and we produced two implementations for the VAX VMS system. The single precision version had names ending in 'E'; the 'standard precision' was double, so those routines ended in 'F'. This was OK because the VAX arithmetic was pretty good. At the behest of two companies, one working in aircraft design, the other in the nuclear industry, we produced implementations for the IBM with its arithmetic. I never felt very confident about those single precision routines…

As chips evolved, the performance advantage of single precision, bought at the cost of numerical precision, diminished, and there was little demand for it. Chips and architectures have now changed again. As you point out, GPU computing is an example where single precision has again become popular. The pendulum is beginning to swing back, perhaps? My blog post was trying to ascertain what interest there might be in these alternative precisions.

As you may be aware, my colleagues who work on the NAG Library and LAPACK have already acknowledged this change, and Mark 23 of our library already contains mixed-precision Cholesky solvers, where single precision calculation is performed for its speed and combined with double precision to recover the greater accuracy. We may well see this trend continue.

As you remark, users might feel that single precision is 'good enough' when it isn't.
If both precisions were available, they could test this belief and, incidentally, get useful information on the conditioning of their problem. I would still recommend double precision as the 'standard precision', though, and I would expect our example programs to remain unaltered, shepherding users down this route.

I get the occasional request for quadruple precision. My tendency is to be slightly sceptical, but there are occasions when it would be useful.

The point of my blog post was to assess the likely demand for either or both of an extra single or quadruple precision facility within the library. If demand remains limited then I can't see NAG undertaking the extra work, but for anyone who has a genuine need we could produce a bespoke version of a routine on a consultancy basis at a very reasonable cost. Very little of our source code would need to change; only the verification of the results would be an issue.

Genuine multiple precision is much more of a challenge, and I have experienced little demand for it. We might extend our A02 chapter on arithmetic to provide multiple precision arithmetic, though. Would this be of interest?

David Sayers (http://www.nag.co.uk/about/dsayers.asp)

Michael Croucher, 4 March 2013, 21:31

Thanks to the huge difference in speed between single and double precision calculations in accelerators such as GPUs, the use of single precision has become more prevalent recently.

What worries me, however, is that an awful lot of people seem to say 'Oh, I'm sure single precision will be fine' without spending the time to check properly. This leads to the very real possibility of them getting the wrong answer very, VERY quickly.

A good middle ground is mixed precision methods (e.g. 
http://www.mpi-inf.mpg.de/~strzodka/projects/double/), where one uses single precision while it is 'good enough' before switching to double precision to finish the calculation off. If done correctly, the result will be accurate to double precision but with a nice speed boost.

Quadruple precision is also increasingly popular, and I get a few requests a year asking about it. The best solution in MATLAB is the Multiprecision Computing Toolbox, which has very fast quad precision routines: http://www.advanpix.com/2013/01/20/fast-quadruple-precision-computations/

In short, I think it's worth including both single and quadruple precision in the library. Of course, if you could throw in arbitrary precision we'd be in heaven, but you'd have even more test cases to worry about!

Michael Croucher (http://www.walkingrandomly.com)