Recently a relatively new colleague, writing a rather complicated piece of code, ran into a problem. Being a clever chap he devised an elegant solution which worked perfectly in the context he was working in and required only a minimal change to the user interface of his routine. Strictly speaking that change was a subtle deviation from our standards, since it involved some arrays being declared with explicit lengths rather than being assumed size. What he didn't realise was that, when this code came to be incorporated into some of our other products, his solution would conflict with the way the parameters were used there, requiring a lot of effort to work around.
Luckily he was paired with one of our most eagle-eyed peer reviewers, who queried the fact that these arrays did not conform to our normal practice. This in turn brought to light both the original problem and the issues with applying his solution in our other products. In the short term a fix conforming with our standards is being implemented, which will work correctly in all our products; in the longer term we will adopt an improved mechanism which better addresses his original problem.
One lesson from this is that, particularly in a multi-developer environment like ours, it's important to stick to agreed standards, since it's not always clear what the consequences of breaking them will be. But we all make mistakes, and it would be nice if we had tools which could validate stylistic programming rules alongside the more mechanistic ones. It might be possible to do this with formal specification languages, but that seems like using a sledgehammer to crack a nut. We already have tools which check the consistency of our documentation and our software, e.g. to ensure that the types and dimensions of parameters match. Maybe we should develop this idea further to include more complicated checks.
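To give a flavour of what such a stylistic check might look like, here is a minimal sketch in Python of a toy rule-checker that scans Fortran source for dummy arrays declared with an explicit length rather than as assumed size (`dimension x(*)`). The rule, the regular expression, and the example routine are all illustrative assumptions for this sketch; real Fortran declaration syntax is far richer than this single pattern, and a production tool would need a proper parser.

```python
import re

# Illustrative rule: a "dimension name(extent)" statement should use an
# assumed-size extent "*", not an explicit length such as "100".
# This simple pattern covers only the separate DIMENSION statement form.
DIM_STMT = re.compile(r"^\s*dimension\s+(\w+)\(([^)]*)\)", re.IGNORECASE)

def check_line(line: str) -> bool:
    """Return True if the line declares an array with an explicit length."""
    m = DIM_STMT.match(line)
    # An extent of "*" is assumed size and therefore conforms to the rule.
    return bool(m) and m.group(2).strip() != "*"

def check_source(source: str) -> list:
    """Return the 1-based line numbers that break the rule."""
    return [i for i, line in enumerate(source.splitlines(), start=1)
            if check_line(line)]

# A small made-up routine: x conforms (assumed size), y does not.
example = """\
      subroutine f(n, x, y)
      integer n
      double precision x
      dimension x(*)
      double precision y
      dimension y(100)
      end
"""
print(check_source(example))
```

Running the sketch reports line 6, where `y` is declared with an explicit length of 100. A check like this could sit alongside the existing documentation-consistency tools and run as part of peer review, flagging deviations before they reach another product.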