September 29, 2015 by
Filed under: Governance, Risk and Compliance
I was trained as a mechanical engineer. I clearly remember my first day at university, a mere 17 years old, with an impressive professor in front of us: beard, three-piece suit, pipe, the required deep voice, the works. I have two distinct memories from that session. First, he told us to look to the left and look to the right, and to remember that just one of us was going to make it to the end. How's that for a welcome? The second thing he told us is that he was going to turn us into quantitative people. With statements like "if you cannot measure it, don't do it" and by making fun of all the vague sciences (his words), he clearly made an impression. He couldn't stand words like 'roughly', 'approximately', and other equivalents ("well, do you think the bridge will stand or fall down, approximately?"). And measure and calculate we did. He would throw somebody out if they calculated that the train would be running at 3,760 km/h. You needed to UNDERSTAND that trains don't run at 3,760 km/h.
And here is where lightning struck (confession: it took a while!). It is not about the calculation; it is about the understanding. The calculation forces you to understand the problem, and the more complex the problem, the more relevant the understanding, and the less important the actual calculation.
And that's where most go wrong. It's the statistical and mathematical experts creating the models and doing the calculations, leaving all the business people gasping in admiration. Statistics, they then say, was always for the really clever people. The problem is: numbers are created by experts and used by non-experts, and that's extremely scary. I did many process simulations during the first years of BWise, and believe me: it's not about the numbers, it's about understanding. Anything can result from the calculation. Any process simulation of normal complexity has at the very least hundreds, if not thousands, of parameters, most with stochastic behavior.
The same goes for risk models. And thousands of parameters simply means this: the outcome is whatever you want to come out. Ask any simulation expert, and he will concur. Of course, scientific nuance demands that he add that this holds within certain brackets, and he will add, for instance, that validation for extrapolating models is always more difficult than for interpolating models, et cetera (ignore all that), but the essence is that there is plenty of room in the outcome, and this is not about some details. The other day I saw Dutch politicians debating the effects of their plans on job creation; there was a difference of 50,000 jobs in 2040 between two plans. Repeat: in 2040! A fantastic example of experts creating numbers and non-experts running with them, leaving the poor expert behind mumbling "Yes, but . . ." (nuance doesn't do well on TV). If only one of the journalists had had the same professor as we did ;-).
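To see how little it takes to manufacture a headline number, here is a minimal sketch. This is a hypothetical toy model (not the politicians' actual models, which I have not seen): cumulative job creation over 25 years under stochastic annual growth. Nudging a single assumption, the mean growth rate, from 1.0 to 1.2 in a way nobody could call indefensible produces exactly that kind of 50,000-job gap, while the year-to-year noise makes both projections look equally "scientific".

```python
import random

def simulate_jobs(growth_mean, growth_sd, n_years=25, n_runs=10_000, seed=42):
    """Toy Monte Carlo: average cumulative jobs created over n_years,
    where each year adds gauss(growth_mean, growth_sd) * 10,000 jobs."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_runs):
        jobs = 0.0
        for _ in range(n_years):
            jobs += rng.gauss(growth_mean, growth_sd) * 10_000
        totals.append(jobs)
    return sum(totals) / n_runs

# Two "plans" that differ only in one defensible-looking tweak of one assumption:
plan_a = simulate_jobs(growth_mean=1.0, growth_sd=0.5)
plan_b = simulate_jobs(growth_mean=1.2, growth_sd=0.5)
print(f"Plan A: {plan_a:,.0f} jobs, Plan B: {plan_b:,.0f} jobs")
print(f"Difference: {plan_b - plan_a:,.0f} jobs")  # roughly 50,000
```

One hidden parameter out of thousands, and the debate-winning difference appears. That is what "the outcome is whatever you want to come out" means in practice.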
I urge users of risk models to understand the models; if you don't understand them, don't use them. Again, anything can come out, so if you don't understand a model, its outcome is irrelevant. Validating a risk model is extremely difficult and requires tons of data. That data is available in some industries, like insurance, but even then extreme care is required.
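Why "tons of data"? A quick illustrative sketch (the probabilities and sample sizes here are made up for illustration): if the tail event your model cares about has a true frequency of 1 in 1,000, a few hundred observations will usually contain zero such events, so they cannot distinguish your model's tail from "no risk at all". Only at very large sample sizes does the observed frequency start to pin the tail down.

```python
import random

def observed_tail_events(true_prob, n_obs, seed=1):
    """Count how many rare events actually appear in a sample of n_obs
    independent observations, each occurring with probability true_prob."""
    rng = random.Random(seed)
    return sum(1 for _ in range(n_obs) if rng.random() < true_prob)

# A 1-in-1,000 event: 500 observations vs. 1,000,000 observations.
small_sample = observed_tail_events(0.001, 500)        # usually 0 or 1 events
large_sample = observed_tail_events(0.001, 1_000_000)  # on the order of 1,000 events
print(small_sample, large_sample)
```

With the small sample you simply have nothing to validate against; that is why industries with long, rich loss histories, like insurance, can validate at all, and most others cannot.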
The other day a journalist asked me whether I would expect banks to include reputation risk in their stress testing. My response was that any stress test involving reputation risk that does not bring the organization to complete bankruptcy is wrong. So a simulation model is a waste of time; maybe it would make sense if you need some dramatic graphs. Reputation risk can bring you down: anyone, anywhere, anytime. So it is all about being prepared, and that is something you need to organize with People, Processes and Systems. A calculation model is just one of those systems, and probably not the most important one.