Beta testing and feedback

It is easy to see the value of software Beta releases: they give potential users early access to new functions, and they give the development team excellent feedback about what needs to change to turn a potentially good product into an excellent one that meets users’ needs precisely. However, as with many aspects of life, the way to get the most benefit from a Beta program is to listen really hard to what Beta users are saying. That can be time consuming and quite difficult.

The Beta program we are currently running is for the NAG Library for .NET. We have received excellent, detailed feedback covering issues from logging exceptions from Excel to further tuning some routines for particular environments (all things that the NAG developers can address). While this is great, and let me take a moment to thank those in the Beta program for their feedback so far, the question remains: are we listening hard enough?

So, for the first time, we are planning to use an online survey tool (Survey Monkey) to try to draw out more comments. It will be interesting to see how it is received, because what any Beta program really wants is for the ‘testers’ to think of one more thing they noticed about the release but have not yet mentioned. The ‘any other comments?’ question may therefore be the most important one. I’ll be sending the questionnaire to the test sites in a few weeks and I’m looking forward to the feedback. Even so, it’s certain that I’ll want to have a few relaxed chats by phone to really uncover the most valuable details. After all, it’s the unsolicited comments from a Beta test that tend to help create the very best products, and perhaps that feedback can only emerge if enough time is given to the process. Thanks again to all test sites for their help.

