In the run-up to this year’s itSMF UK conference, ITSM14, I chatted with Simon Durbin about his upcoming session entitled “Don’t Let SIAM Cloud Your Judgement”.
Q. Hi Simon, can you give a quick intro to your session at ITSM14?
I am going to be demystifying some of the hype that surrounds SIAM. As with any new management or technology ‘trend’ there is always a lot of fear, uncertainty and doubt as people grapple to understand what is really new and unique and what is simply the re-badging of familiar, tried and tested concepts.
If you peel away the layers, SIAM is actually rooted in some very well established management disciplines, but with the continued evolution of sourcing and service delivery models (such as Cloud) we need to re-frame and adapt these techniques to the realities of our modern, complex, multi-sourced world.
Q. What impact does SIAM have on an organisation?
One of the greatest impacts that SIAM can bring is control. This is achieved by focusing on robust processes; clearly delineated roles and responsibilities between internal customers, internal functions and service providers; and strong governance, all underpinned by quality data and information flows. All too often service providers give clients the ‘run around’ because they know more about the client’s business than the client does. SIAM establishes the mechanism to manage the complex interactions between supply and demand for IT services.
Q. What are likely to be the potential pitfalls and/or benefits an organisation may experience with implementing SIAM as a framework?
One of the big pitfalls with SIAM is to try and bite off more than you can chew. As with any process or service improvement initiative, focus and prioritisation are essential. Identify where the biggest pain points are, along with the critical business drivers and objectives. Align your SIAM efforts to business goals and to addressing the pain. Pick your battles and don’t try to boil the ocean (apologies for the overused clichés!)
Simon Durbin is a Director with Information Services Group (ISG) and leads the SIAM practice in the UK, working as a key member of the global ISG SIAM team. He has more than 25 years’ experience in IT service and supplier management, working as both a practitioner and consultant. Simon advises both public and private sector clients, across many industry sectors, on Service Integration strategy, operating model design, sourcing strategies and transformational change management.
Simon’s session is on day two of ITSM14 and is featured in the Managing Complexities track. To find out more or to book your conference place, please visit itSMF UK.
We don’t have enough resource to execute all desired improvements.
We choose the wrong unit of work for improvements.
What are the desired business outcomes?
We must focus on what is needed. To understand the word ‘needed’ we go back to the desired business outcomes. Then we can make a list of the improvement outputs that will deliver those outcomes, and hence the pieces of work we need to do.
Even then we will find that the list can be daunting, and some sort of ruthless expediency will have to be applied to choose what does and doesn’t get done.
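As a minimal sketch of that “ruthless expediency” step (the candidate improvements, scores and capacity figure below are invented for illustration, not part of Tipu), one can score each improvement output by how strongly it serves a desired business outcome, then take only what the available capacity allows:

```python
# Illustrative only: a toy model of ruthless improvement selection.
# Names, alignment scores and the capacity figure are invented examples.
candidates = [
    # (improvement output, alignment to business outcomes 1-10, effort in person-days)
    ("Cut change lead time", 9, 20),
    ("Automate server builds", 7, 30),
    ("Redesign the service catalogue", 3, 40),
    ("Tidy the CMDB", 2, 25),
]

capacity = 60  # person-days we can realistically spare alongside BAU

# Rank by alignment per unit of effort, then greedily take what fits.
chosen, used = [], 0
for name, alignment, effort in sorted(candidates, key=lambda c: c[1] / c[2], reverse=True):
    if used + effort <= capacity:
        chosen.append(name)
        used += effort

print(chosen)  # the short list; everything else ruthlessly does not get done
```

The point of the sketch is not the arithmetic but the discipline: everything that does not fit the capacity simply does not get done.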
How will you resource the improvements?
The other challenge will be resourcing the improvements, no matter how ruthlessly we cut down the list. Almost all of us work in an environment of shrinking budgets and desperate shortages of every resource: time, people and money. One way to address this is to do some of the work as part of BAU.
These are all aspects of my public-domain improvement planning method, Tipu:
Alignment to business outcomes
Ruthless decision making
Doing much of the work as part of our day jobs
Let me give you two more premises that build on the first four and take us to the heart of how I approached service improvement with Tipu.
Fifth premise: Improvement is part of a professional’s day job
Railroads work this way. Process improvements evolve over time on the job. The only time they have a formal process improvement project is for a major review: e.g. a safety campaign with experts checking the practices for safety risks; or a cost-cutting drive with time-and-motion analysts squeezing out efficiencies (we call it Lean these days). Most of the time, middle managers and line workers talk and decide a better way as part of their day jobs, often locally and often passed on as unwritten lore. Nobody in head office knows how each industrial track is switched (the wagons shuffled around: loads in, empties out). The old hands teach it to the newcomers.
Most improvement is not a project. Improvement is normal behaviour for professionals: to devote a certain percentage of our time to improving the systems we work with. We should all expect that things will be better next year. We should all expect that we will make a difference and leave systems better than we found them. Improvement is part of business as usual.
As a culture, IT doesn’t take kindly to ad-hoc, local grass-roots, unmanaged improvements. We need to get over that – we don’t have good alternatives if we are going to make progress.
Sixth premise: Software and hardware have to be near-perfect. Practices and processes don’t.
The tolerances for the gap between wheels or rails are specified in fractions of a millimetre on high-speed track. Even slow freight lines must be correct to a few millimetres, over the thousands of kilometres of a line. And no, the standard 4’8.5” gauge has nothing to do with Roman chariots. It was one of many gauges in use for mine carts when George Stephenson started building railways, but his first employer happened to use 4’8”. Sorry to spoil a good story about horses’ butts and space shuttles.
Contrast the accuracy of the technology with the practices used to operate a railroad. In the USA, freight train arrival times cannot be predicted to the nearest half-day. (Let’s not get into a cultural debate by contrasting this with say Japanese railroads. To some, the USA looks sloppy. They say it is flexible.) Often US railroads need to drive out a new crew to take over a train because the current crew have done their legally-limited 12 hours. Train watchers will tell you that two different crews may well switch a location (shuffle the wagons about) differently. Compared to their technology, railroads’ practices are loose. Just like us.
In recent years railroad practices have been tightened for greater efficiency (the New Zealand Railways carry more freight now with about 11,000 staff than they once did with 55,000) and especially for greater human safety. But practices are still not “to the nearest millimetre” by any means.
Perfection is impossible
We operate with limited resources and information in an imperfect world. It is impossible for an organisation to improve all practices to an excellent level in a useful time. Therefore it is essential to make the hard decisions about which ones we address. Equally it is impossible – or at least not practical – to produce the perfect solution for each one. In the real world we do what we can and move on. Good enough is near enough, except in clearly identified situations where Best is essential for business reasons. Best Practice frameworks are not a blueprint: they are a comparison reference or benchmark to show what would be achieved with unlimited resources in unlimited time – they are aspirational.
Some progress is better than nothing. If we try to take a formalised project-managed approach to service improvement, the outcome for the few aspects addressed by the projects will be a good complete solution… eventually, when the projects end, if the money holds. Unfortunately, the outcome for the many aspects of service delivery not included in the projects’ scope is likely to be nothing. Most organisations don’t have enough funds, people or time to do a formal project-based improvement of every aspect of service management. Aim to address a wider scope than projects can – done less formally, less completely, and less perfectly than a project would.
We can do this by making improvements as we go, at our day jobs in BAU. We will discuss this ‘relaxed’ approach more fully in future.
We need an improvement programme to manage the improvements we choose to make. That programme should encompass both projects and BAU improvements.
Project management is a mature discipline
The management of projects is a mature discipline: see PRINCE2, Managing Successful Programmes, Management of Portfolios, and Portfolio, Programme and Project Offices, to name just the four bodies of knowledge from the UK Cabinet Office.
What we are not so mature about is managing improvements as part of BAU.
The public-domain Tipu method focuses on improving the creation and operation of services, not the actual service systems themselves; the former is what BAU improvements should focus on. That is, Tipu improves the way services are delivered, not the functionality of the service (although it could conceivably be used for that too).
Service owners need to take responsibility for improvements
The improvement of the actual services themselves – their quality and functionality – is the domain of the owners of the services: our IT customers. They make those decisions to improve and they should fund them, generally as projects.
On the other hand, decisions about improving the practices we use to acquire/build and operate the IT machinery of services can be taken within IT: they are practices under our control, our authority, our accountability. They are areas that we are expected to improve as part of our day jobs, as part of business as usual.
We’ll get into the nitty-gritty of how to do that next time.
Problem, risk, change, CSI, service portfolio, projects: they all make changes to services. How they inter-relate is not well defined or understood. We will try to make the model clearer and simpler.
Problem and Risk and Improvement
In this series of articles, we have been talking about an ethanol train derailment in the USA as a case study for our discussions of service management. The US National Transport Safety Board wrote a huge report about the disaster, trying to identify every single factor that contributed and to recommend improvements. The NTSB was not doing Problem Management at Cherry Valley. The crews cleaning up the mess and rebuilding the track were doing problem management. The local authorities repairing the water reservoir that burst were doing problem management. The NTSB was doing risk management and driving service improvement.
Arguably, fixing procedures which were broken was also problem management. The local dispatcher failed to tell the train crew of a severe weather warning as he was supposed to do, which would have required the crew to slow down and watch out. So training and prompts could be considered problem management.
But somewhere there is a line where problem management ends and improvement begins, in particular what ITIL calls continual service improvement or CSI.
In the Cherry Valley incident, the police and railroad could have communicated better with each other. Was the procedure broken? No, it was just not as effective as it could be. The tank cars approved for ethanol transportation were not required to have double bulkheads on the ends to reduce the chance of their getting punctured. Fixing that is not problem management; it is improving the safety of the tank cars. I don’t think improving that communications procedure or the tank car design is problem management, because if you follow that thinking to its logical conclusion, then every improvement is problem management.
A distinction between risks and problems
But wait: the unreliable communications procedure and the single-skinned tank cars are also risks. A number of thinkers, including Jan van Bon, argue that risk and problem management are the same thing. I think there is a useful distinction: a problem is something that is known to be broken, that will definitely cause service interruptions if not fixed; a “clear and present danger”. Risk management is something much broader, of which problems are a subset. The existence of a distinct problem management practice gives that practice the focus it needs to address the immediate and certain risks.
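One way to picture that distinction (a sketch of this argument only, not any formal ITIL or risk-standard model; the register entries are invented) is to treat problems as the subset of risks whose likelihood is effectively certain, so they get their own focused queue:

```python
# Illustrative sketch: problems modelled as the "certain" subset of risks.
# The risk register entries below are invented for the example.
from dataclasses import dataclass

@dataclass
class Risk:
    description: str
    likelihood: float  # 0.0 - 1.0

register = [
    Risk("Cracked rail joint on the main line", 1.0),    # known broken: a problem
    Risk("Single-skinned tank cars may puncture", 0.3),  # a risk, not a problem
    Risk("Police/railroad comms may fail again", 0.4),   # a risk, not a problem
]

# Problem management gets the "clear and present danger" subset;
# everything else stays with broader risk management and improvement.
problems = [r for r in register if r.likelihood >= 1.0]
broader_risks = [r for r in register if r.likelihood < 1.0]

print([p.description for p in problems])
```

The design point is simply that one register can feed both practices: filtering by certainty is what gives problem management its focus.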
(Risk management is an essential practice that ITIL – strangely – does not even recognise as a distinct practice; the 2011 edition of ITIL’s Continual Service Improvement book attempts to plug this hole. COBIT does include risk management, big time. USMBOK does too, though in its own distinctive way it lumps risk management under Customer services; I disagree: there are risks to our business too that don’t affect the customer.)
So risk management and problem management aren’t the same thing. Risk management and improvement aren’t the same thing either. CSI is about improving the value (quality) as well as reducing the risks.
To summarise all that: problem management is part of risk management which is part of service improvement.
Service Portfolio and Change
Now for another piece of the puzzle. Service Portfolio practice is about deciding on new services, improvements to services, and retirement of services. Portfolio decisions are – or should be – driven by business strategy: where we want to get to, how we want to approach getting there, what bounds we put on doing that.
Portfolio decisions should be made by balancing value and risk. Value is benefits minus costs. There is a negative benefit and a set of risks associated with the impact on existing services of building a new service: there is the impact of the project dragging people and resources away from production, and the ongoing impact of increased complexity, the draining of shared resources etc…. So portfolio decisions need to be made holistically, in the context of both the planned and live services. And in the context of retired services too: “tell me again why we are planning to build a new service that looks remarkably like the one we killed off last year?”. A lot of improvement is about capturing the learnings of the past.
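A toy illustration of that balancing act (all names and figures below are invented, and the scoring rule is deliberately crude): score each portfolio item as value = benefits minus costs, then weigh that value against the risk it carries, including the drag a new build places on live services:

```python
# Illustrative only: portfolio scoring with value = benefits - costs,
# penalised by risk. Items and numbers are invented for the example.
items = [
    # (name, benefits, costs, risk penalty incl. impact on live services)
    ("New container terminal", 100, 60, 15),
    ("Cherry Valley track upgrade", 40, 25, 5),
    ("Service that looks like last year's dead one", 30, 35, 10),
]

def score(benefits, costs, risk):
    value = benefits - costs
    return value - risk  # crude: a real model would weight risk properly

# Rank the whole portfolio holistically, best score first.
ranked = sorted(items, key=lambda i: score(*i[1:]), reverse=True)
for name, b, c, r in ranked:
    print(f"{name}: value={b - c}, score={score(b, c, r)}")
```

Note that the look-alike of last year’s retired service scores negative even before its risk penalty, which is exactly the kind of learning-from-the-past a portfolio view should surface.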
Portfolio management is a powerful technique that is applied at multiple levels. Project and Programme Portfolio Management is all the rage right now, but it only tells part of the story. Managing projects in programmes and programmes in portfolios only manages the changes that we have committed to make; it doesn’t look at those changes in the context of existing live services as well. When we allocate resources across projects in PPM we are not looking at the impact on business-as-usual (BAU); we are not doling out resources across projects and BAU from a single pool. That is what a service portfolio gives us: the truly holistic picture of all the effort in our organisation across change and BAU.
A balancing act
Service portfolio management is a superset of organisational change management. Portfolio decisions are – or should be – decisions about what changes go ahead for new services and what changes are allowed to update existing services, often balancing them off against each other and against the demands of keeping the production services running. “Sure the new service is strategic, but the risk of not patching this production server is more urgent and we can’t do both at once because they conflict, so this new service must wait until the next change window”. “Yes, the upgrade to Windows 13 is overdue, but we don’t have enough people or money to do it right now because the new payments system must go live”. “No, we simply cannot take on another programme of work right now: BAU will crumble if we try to build this new service before we finish some of these other major works”.
Or in railroad terms: “The upgrade to the aging track through Cherry Valley must wait another year because all available funds are ear-marked for a new container terminal on the West Coast to increase the China trade”. “The NTSB will lynch us if we don’t do something about Cherry Valley quickly. Halve the order for the new double-stack container cars”.
Change is service improvement
Everything we change is service improvement. Why else would we do it? If we define improvement as increasing value or reducing risk, then everything we change should be to improve the services to our customers, either directly or indirectly.
Therefore our improvement programme should manage and prioritise all change. Change management and service improvement planning are one and the same.
So organisational change management is CSI. They are looking at the beast from different angles, but it is the same animal. In generally accepted thinking, organisational change practice tends to be concerned with the big chunky changes and CSI tends to be focused more on the incremental changes. But try to find the demarcation between the two. You can’t decide on major change without understanding the total workload of changes large and small. You can’t plan a programme of improvement work for only minor improvements without considering what major projects are planned or happening.
In summary, change/CSI is one part of service portfolio management which also considers delivery of BAU live services. A railroad will stop doing minor sleeper (tie) replacements and other track maintenance when they know they are going to completely re-lay or re-locate the track in the near future. After decades of retreat, railroads in the USA are investing in infrastructure to meet a coming boom (China trade, ethanol madness, looming shortage of truckers); but they better beware not to draw too much money away from delivering on existing commitments, and not to disrupt traffic too much with major works.
Simplifying service change
ITIL as it is today seems to have a messy complicated story about change. We have a whole bunch of different practices all changing our services, from Service Portfolio to Change Management to Problem Management to CSI. How they relate to each other is not entirely clear, and how they interact with risk management or project management is undefined.
There are common misconceptions about these practices. CSI is often thought of as “twiddling the knobs”, fine-tuning services after they go live. Portfolio management is often thought of as being limited to deciding what new services we need. Risk management is seen as just auditing and keeping a list. Change Management can mean anything from production change control to organisational transformation depending on who you talk to.
It is confusing to many. If you agree with the arguments in this article then we can start to simplify and clarify the model:
I have added in the Availability, Capacity, Continuity, Incident and Service Level Management practices as sources of requirements for improvement. These are the feedback mechanisms from operations. In addition, the strategy, portfolio and request practices are sources of new improvements. I’ve also placed the operational change and release practices in context.
These are merely the thoughts of this author. I can’t map them directly to any model I recall, but I am old and forgetful. If readers can make the connection, please comment below.
Next time we will look at the author’s approach to CSI, known as Tipu.