Something is rotten in the state of financial risk management, and it should come as no surprise that banks' siloed view of risk, based on asset class or geography, has played a significant part in the dire predicament they now find themselves in. Not only do banks lack an enterprise-wide view of risk across asset classes and geographies, they apparently also find it difficult to stop or prioritise payment flows. Few banks, it seems, had the ability to stop payments going to ailing investment bank Lehman Brothers as it collapsed.
"How many banks have the ability to say I don't have enough cash in nostro A , but there is plenty in nostro B, so I can re-route payments?", asked an executive from complex event processing vendor, Aleri, during a recent webinar it hosted on liquidity risk management.
Not only are banks' risk management systems siloed, the experts say, they also do not speak to liquidity management and collateral management systems. Liquidity management was traditionally seen as the preserve of a bank's treasury department, but Bob McDowall, a research director with TowerGroup in Europe, says that has to change.
McDowall said forthcoming regulation in the wake of the current crisis meant that banks would need to develop the capability to measure and manage liquidity risk on an enterprise-wide basis. Aleri says complex event processing is one technology that can help pull together disparate sources of information without having to connect to the different silos within banks.
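To make the Aleri executive's point concrete, the kind of re-routing decision described above can be sketched as a small event-processing loop. This is a minimal illustration, not any vendor's actual product: the account names, balances, and payment amounts are invented, and a real CEP engine would consume live SWIFT and ledger feeds rather than an in-memory queue.

```python
from collections import deque

# Hypothetical nostro balances, as if pulled from separate silos
# (illustrative figures only).
nostros = {"nostro_A": 2_000_000, "nostro_B": 9_000_000}

def route_payment(amount, preferred="nostro_A"):
    """Route a payment to the preferred nostro, falling back to any
    other account with sufficient balance -- the re-routing decision
    the Aleri executive describes."""
    candidates = [preferred] + [n for n in nostros if n != preferred]
    for name in candidates:
        if nostros[name] >= amount:
            nostros[name] -= amount
            return name
    return None  # no account can fund the payment: raise a liquidity alert

# Process a stream of payment events as they arrive, up to the minute,
# rather than reconciling balances at the end of the day.
events = deque([1_500_000, 1_000_000, 4_000_000])
while events:
    amount = events.popleft()
    routed = route_payment(amount)
    if routed is None:
        print(f"ALERT: cannot fund payment of {amount}")
    else:
        print(f"payment of {amount} routed via {routed}")
```

The point of the sketch is the up-to-the-minute decision: each payment event is evaluated against current balances across all accounts at once, which is exactly what a siloed, end-of-day view cannot do.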
"At any time, banks need to be able to take a view as to what their risks and liabilities are up-to-the minute, not at the end of the day or periodically throughout the day," said McDowall.
He anticipates that banks will need to move from real-time to "predictive" risk management based on analysis of prices and behavioural patterns.
The national financial regulators are also going to have to pull their socks up, it seems: McDowall says that in order to monitor how well banks are managing liquidity risk, they will need to take a more "forensic" approach to risk management and build systems that enable them to share information with one another.
According to Tony White, managing director, product and R&D, at Wall Street Systems, "next generation" liquidity management systems will need to provide a quick overview of everything and be tied to front-office systems. They cannot be product-agnostic, as they will need to understand the product if banks want to combine collateral and cash. Liquidity management policies will also need to be reflected in these systems, and stress testing of different scenarios will need to be done in minutes, not months.
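The stress-testing requirement White describes can be illustrated with a toy scenario runner. This is a sketch under stated assumptions, not Wall Street Systems' actual model: the positions, haircuts, and outflow figures are invented, and a real system would draw both cash and collateral positions live from front-office systems, as White notes.

```python
# Illustrative positions in £m; asset names and haircuts are assumptions.
positions = {"cash": 50.0, "gov_bonds": 30.0, "corp_bonds": 20.0}

# Each scenario stresses collateral haircuts and the expected outflow.
scenarios = {
    "baseline":     {"haircuts": {"gov_bonds": 0.02, "corp_bonds": 0.10}, "outflow": 40.0},
    "market_shock": {"haircuts": {"gov_bonds": 0.05, "corp_bonds": 0.30}, "outflow": 60.0},
    "name_crisis":  {"haircuts": {"gov_bonds": 0.10, "corp_bonds": 0.50}, "outflow": 80.0},
}

def liquidity_surplus(positions, scenario):
    """Usable liquidity (cash plus haircut-adjusted collateral) minus
    the stressed outflow. A negative result means a funding shortfall."""
    usable = positions["cash"]
    for asset, haircut in scenario["haircuts"].items():
        usable += positions[asset] * (1 - haircut)
    return usable - scenario["outflow"]

for name, scenario in scenarios.items():
    print(f"{name}: surplus = {liquidity_surplus(scenario=scenario, positions=positions):+.1f}m")
```

Because the calculation understands the products well enough to apply per-asset haircuts, combining cash and collateral is a single pass over the book; running many such scenarios in minutes rather than months is then just a matter of iterating.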
Sounds like banks are going to have their hands full over the next few months, but one wonders how many banks will actually achieve a truly enterprise-wide view of their risk, given that risk management projects have tended not to receive that much support from senior banking executives.
2 comments:
One of the interesting paradoxes of the current focus on failed risk management is the increasing realisation that more information has led to less clarity, more reports and dashboards have led to less transparency, and more business intelligence and analytics solutions have led to poorer decision support. But why?
Perhaps the answer lies in the misconception that if something is complex and expensive then it must inherently be clever. If £10 million worth of software and hardware were required to produce the numbers and reports, then they must be worth £10 million of well-defined decision support. The truth is that there is still a vast chasm between the information the business needs to manage itself effectively and the information supplied by the underlying systems.
This is, in part, due to the fact that there are no clearly defined standards that banks are working towards in terms of risk reporting. Each bank has its own view championed by key individuals tasked to drive specific initiatives forward within their institution. The resulting analysis methodology is, therefore, not created in a classic ‘double blind’ test scenario. For example, the test should be designed without any notion of the expected results so as not to bias the test methodology. In banking, quite the opposite appears to be true. Risk analytics and reports seem to be designed to produce results that are within pre-defined, expected ranges.
So how can these issues be overcome? Part of the solution might be in learning the fundamental lesson that SOX taught us: you can't provide consistent transparency and accountability to the regulator until you've created the supporting control culture that provides the same level of transparency and accountability internally. While most still shudder at the way in which SOX was implemented, the majority (according to the BCS Financial Control Survey of April 2008) felt that the discipline that SOX instilled and the cultural changes it brought about were all in line with best business practice. The difference between SOX and the Enterprise Wide Risk Management view that people are chasing now, is that SOX was ‘inch wide, mile deep’ but this new global integrated view of the world is "mile wide, and possibly mile deep". So before institutions embark on major data aggregation and warehousing projects, they really need to ask whether the business framework they have in place instils a control culture that actively promotes transparency and accountability across the enterprise.
The second part of the solution is in how to bridge the gap between where the critical control information resides and the decision makers who rely on it. Perhaps there should be less emphasis on multi-million pound projects to integrate disparate systems in a carpet-bombing attempt to provide ALL information in a central place. As one Global Head of Risk Management once said to me:
"In risk management, information is like a fast-flowing river. Only the river is polluted. I already know it’s polluted so showing me pictures of plastic bags and oil slicks doesn't help me. If you can show me where they are being dumped in the river then I can do something about it."
There should be more focus on leveraging technology to empower employees to raise issues and communicate that they have spotted the risk-equivalent of fly-tipping. There should be more emphasis on formalising the justification for a manager to ignore that issue. There should be more emphasis on ensuring that numbers are never presented without the colour and context that the line staff add with their subjective opinions. This people-based approach to operational risk and control management is quite possibly the missing link in the Enterprise Wide Risk Management puzzle. Additionally, the cost and complexity of implementing this type of framework is orders of magnitude lower than the complex system integration and data warehousing projects currently attempting to deliver all the pollution in a single place. By federating responsibility and accountability across the organisation, the ‘mile wide, mile deep’ coverage becomes much more attainable with each person contributing numbers and opinion from their function. The benefit of co-located or off-shored teams is less shrouded by the fear of increased risk.
So in summary, maybe the answer lies in going back to basics – in not trying to devise more fancy algorithms to reduce the world to a single number, in embracing good basic business practices, and using this to harness the information in the hearts and minds of your workforce. Perhaps it’s time to keep it simple.
Wow, your article about risk is interesting.