Thursday, June 12, 2008

Key Themes at SIFMA 2008

One could be forgiven for thinking that guest blogger Richard Muirhead of Tideway Systems is in Las Vegas sampling the technological delights and the latest must-have gadgets that are going to wow future generations of teenagers. Instead, he is at the SIFMA Technology Management Conference in New York (the final day no less), where it appears the future of trading resembles a PlayStation video game, and the gamers, Facebookers and tech-savvy teenagers of today are likely to be the traders of tomorrow. Should we be worried?

Day three at the conference, and several key themes have emerged. This year's most urgent issues for financial services IT can be grouped under the following umbrellas:

Extreme Agility
If we now look at the rising expectations of the Facebook and Grand Theft Auto generation - and the fact that in a few short years they will be running the derivatives desk, and from there the bank - then the current pace of progress in financial services IT simply will not do!

They are used to the rate at which their favourite websites deliver new features, and they have seen the demonstrations of the new handbag rental websites launched on the Amagoogle compute cloud with one dainty tap on the haptic keyboard of their iCommunicator.


Not So Much Efficiency as Survival!

Another thing we heard: there is a more black-and-white issue here that demands attention and a new approach.

First data centres ran out of space, then they ran out of cooling, and now they are running out of power. For each $1 of spend on hardware and software, a further 50 cents is spent to power and cool them. For the 16 million servers across 7,000 data centres in the US, that amounts to 350 billion kWh - or around 2% of all the electricity in the US.
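
For the back-of-the-envelope inclined, here is a quick sketch (in Python, purely illustrative) that takes the figures above at face value and works out what they imply per server:

```python
# Back-of-the-envelope check using the figures quoted above.
# The inputs are the conference numbers, taken at face value rather
# than independently verified.

servers = 16_000_000          # servers across US data centres
annual_energy_kwh = 350e9     # quoted total, power plus cooling

# Implied average continuous draw per server, including its share of cooling
hours_per_year = 24 * 365
avg_draw_kw = annual_energy_kwh / servers / hours_per_year
print(f"Implied average draw per server: {avg_draw_kw:.2f} kW")   # ~2.5 kW

# The 50-cents-on-the-dollar rule of thumb for power and cooling
hw_sw_spend = 1_000_000
power_cool_spend = hw_sw_spend * 0.50
print(f"Power/cooling spend on ${hw_sw_spend:,} of kit: ${power_cool_spend:,.0f}")
```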


In California, the sixth-largest economy on the planet were it a standalone country, they were recently 345 MW short of a rolling blackout. The average power consumption of a new-build data centre is around 100 MW, so they were only four data centres away from lights out.

Virtualisation allows a shift away from coping with client-estimated demands and the documented but inaccurate or irrelevant power-consumption and thermal-output figures for the many infrastructure components required for their operation, towards an intelligent forecast of the non-functional requirements that a given application will place on a virtualised slice of the environment.

The difference could be the gap between the 6,000 W maximum power draw documented for safety reasons and an actual draw of around 2,500 W - and it is the latter that cooling should be engineered for. All of this allows for dramatic increases in energy, space and hardware efficiency, and virtualisation also means a lower certification overhead for different hardware types to boot.
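
To make that concrete, here is a small illustrative sketch; the rack power budget and the 20% headroom are assumed numbers of my own, not figures from the conference:

```python
# Sketch: how many servers fit in a fixed rack power/cooling budget if you
# provision to measured draw rather than the documented (nameplate) maximum.
# The per-server wattages are the figures quoted above; the rack budget and
# headroom are illustrative assumptions.

rack_budget_w = 30_000        # hypothetical rack power/cooling budget
nameplate_w = 6_000           # documented maximum draw per server
measured_w = 2_500            # observed actual draw per server
headroom = 1.20               # keep 20% margin above the measured figure

per_nameplate = rack_budget_w // nameplate_w
per_measured = rack_budget_w // int(measured_w * headroom)

print(f"Servers per rack, provisioned to nameplate: {per_nameplate}")   # 5
print(f"Servers per rack, provisioned to measured:  {per_measured}")    # 10
```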

Data Centres Are For Life, Not Just For Christmas

These creatures stick around, sometimes grow into monsters and take a lot of care. Many large organisations have tens or hundreds of data centres... and many would like to consolidate them down to single digits. It's just not that easy. You can't just fire all the teams, and you need some carefully engineered data centre redundancy for availability and indeed compliance. But you also need low latency for trading apps and SaaS apps, or proximate data centres to support large file transfers around development environments, since the world is not yet fully wired with OC-192s.


The imperative for tomorrow's data centre is to waste no software licenses; drive utilisation of the server estate from 10% up to 60%; keep within space and power constraints; all while ensuring you can get applications into production quickly, with a given workforce, through automation.

Whatever people are saying about the new-build data centres, within a decade their contents will be obsolete. But once we know which data centres need to be kept and where, the economics of a typical data centre build can be improved by around $150m on a $350m build by making that shift from 10% to 60% utilisation.
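
The consolidation arithmetic behind that shift is simple enough. The sketch below uses the utilisation and cost figures quoted above; the starting server count is an illustrative assumption:

```python
# Sketch of the consolidation arithmetic: lifting average utilisation from
# 10% to 60% means the same workload fits on roughly one sixth of the servers.
# Utilisation and cost figures are the ones quoted in this post; the starting
# server count is illustrative.

current_servers = 6_000        # assumed estate for one data centre
current_util = 0.10
target_util = 0.60

needed_servers = round(current_servers * current_util / target_util)
print(f"Servers needed after consolidation: {needed_servers}")   # 1000

build_cost = 350e6             # quoted cost of a typical new build
claimed_saving = 150e6         # quoted improvement from the 10% -> 60% shift
print(f"Claimed improvement: ${claimed_saving/1e6:.0f}m on a ${build_cost/1e6:.0f}m build")
```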

Complexity Beyond A Single Man’s Ken
The concatenation of behaviours that distributed applications, and now virtualisation, depend upon can lead to enormous systemic unpredictability. Soon we will be going from seven physical networks per server to one network with virtualised network I/O, at which point these networks become software configurable.

Everything will be virtualised: NAS, load balancer, LAN, SAN. So then everything can be software provisioned. Ports and servers are dead. As VMs allow application workloads to migrate freely around the estate and the configuration of the application infrastructure shifts into software at all layers, the policies for network and storage configuration, QoS and encryption need to match the application and also move with the application.
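
Conceptually, the policy becomes a property of the application rather than of a physical port. A purely illustrative sketch (the names, fields and functions are invented, not any vendor's actual API) might look like this:

```python
# Conceptual sketch only: a per-application policy that travels with the
# workload as its VM migrates, instead of being pinned to physical ports.
# All identifiers here are illustrative.

from dataclasses import dataclass

@dataclass
class AppPolicy:
    app_name: str
    vlan_id: int              # virtual LAN the application expects
    storage_tier: str         # e.g. "SAN-gold", "NAS-archive"
    qos_class: str            # e.g. "low-latency-trading"
    encrypt_in_flight: bool

def migrate(policy: AppPolicy, source_host: str, target_host: str) -> None:
    """Re-apply the application's policy on the target host before cutover."""
    print(f"Detaching {policy.app_name} policy from {source_host}")
    print(f"Configuring VLAN {policy.vlan_id}, QoS '{policy.qos_class}', "
          f"storage '{policy.storage_tier}', encryption={policy.encrypt_in_flight} "
          f"on {target_host}")

migrate(AppPolicy("derivatives-pricing", 210, "SAN-gold", "low-latency-trading", True),
        source_host="nyc-dc1-host42", target_host="nj-dc2-host07")
```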

So the initiatives break down into Consolidate, Virtualise, Automate. But the biggest problem in all of this will be the silos that people currently work in: shifting people from bragging about their deep abilities with a particular technology, product or vendor, or the vast number of ports and servers under their management, to bragging about high levels of data centre utilisation, extreme application availability and a high velocity of application improvement - all so that we can beat our highest score on Grand Theft Auto or make a (bigger) bonus this year.
