How to conduct an ITSM assessment that actually means something

ITIL (Information Technology Infrastructure Library), a standard framework for managing the lifecycle of IT services, is sweeping the U.S.  Based on a 2011 analysis of 23 ITIL studies, Rob England concluded that ITIL adoption had grown at a compound annual rate of roughly 20% and that ITIL training attendance had grown at a compound annual rate of 30% over the past ten years.  Despite this apparent surge of adoption, enterprises continue to struggle with ITIL’s daunting framework.

Recognizing the confusion inherent in ITIL alignment, numerous vendors have created “ITSM assessments” with varying degrees of complexity and debatable value.  These assessments draw upon frameworks such as ITIL, CMMI-SVC, COBIT and, occasionally, BiSL or more specific constructs such as KCS and IAITAM.  Where does one begin?  What is most important?  Where will improvement deliver the best payback?  How can one ensure that all phases of implementation share a common and scalable foundation?

Figure 1: Fundamental Assessment Approach

All assessments follow a pretty basic formula:

  1. Determine and document the current state of ITSM in the organization.
  2. Determine and document the desired state of ITSM in the organization.
  3. Establish a practical path from current to desired state (roadmap).

Simply stated, the objective is to successfully execute the ITSM roadmap, thereby achieving a heightened level of service that meets the needs of the business.  But don’t let those vendors through the door just yet because this is where ITSM initiatives go sideways.

Current state, desired state and roadmap mean nothing without first establishing scope and methodology.  How comprehensive should the assessment be?  Does it need to be repeatable?  Which processes and functions should be targeted?  Should it be survey-based?  Who should participate?

Rather than seeking input from the ever so eager and friendly salespeople, one can follow a simple three-step exercise to determine scope and methodology.  These steps, described in the following sections, may save you millions of dollars.  I have seen dozens of large enterprises fail to take these steps with an estimated average loss of $1.25M.  For smaller enterprises ($500M – $1B in revenue), the waste is closer to about $450,000.  The bulk of this amount is the cost of failed projects.  In some instances those losses exceeded $10M (usually involving CMDB implementations).

Three Steps to a Meaningful ITSM Assessment

Though these steps are simple, they are by no means easy.  For best results, one should solicit the participation of both IT and business stakeholders.  If the answer comes easily, keep asking the question because easy answers are almost always wrong.  Consider using a professional facilitator, preferably someone with deep, practical knowledge of ITIL and a solid foundation in COBIT and CMMI-SVC.

So, the three steps are really three questions:

  1. Why do you need an ITSM Assessment?
  2. What do you need to know?
  3. How do you gain that knowledge?

Step 1:  WHY Do You Need an ITSM Assessment?

IT Service Management aligns the delivery of IT services with the needs of the enterprise.  Thus, any examination of ITSM is in the context of the business.  If one needs an ITSM assessment, the business must be experiencing pain related to service delivery.

  1. Identify service delivery pain points.
  2. Map each pain point to one or more business services.
  3. Assign a broad business value to the resolution of each pain point (e.g. High, Medium, Low).  Divide these values into hard savings (dollars, staff optimization), soft savings (efficiency, effectiveness), and compliance (regulatory, audit, etc.).
  4. Map each pain point to a process or process area.

There should now be a list of processes with associated pain points.  How well can the business bear the pain over the next few years?  With this preliminary analysis, one should be able to create a prioritized list of processes that require attention.
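As a sketch, the four mapping steps above can be expressed as a small data exercise. Everything here – the pain points, the value weights, the process names – is hypothetical; a real assessment would use the organization’s own inventory.

```python
from collections import defaultdict

# Hypothetical pain points (steps 1-4): each is mapped to a business
# service, given a broad value rating, classified as hard/soft/compliance,
# and tied to the process area it implicates.
pain_points = [
    {"pain": "Slow incident resolution", "service": "Online ordering",
     "value": "High", "kind": "hard", "process": "Incident Management"},
    {"pain": "Outages after releases", "service": "Billing",
     "value": "High", "kind": "soft", "process": "Change Management"},
    {"pain": "Audit findings on asset records", "service": "All",
     "value": "Medium", "kind": "compliance", "process": "Asset Management"},
]

# Roll the pain up by process to get a first-cut prioritized list.
weights = {"High": 3, "Medium": 2, "Low": 1}
scores = defaultdict(int)
for p in pain_points:
    scores[p["process"]] += weights[p["value"]]

prioritized = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
for process, score in prioritized:
    print(process, score)
```

The point is not the arithmetic but the discipline: every process on the priority list traces back to a named pain point with a stated business value.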

For now, there is no need to worry about process dependencies.  For instance, someone may suggest that a CMDB is required for further improvements to Event Management.  Leave those types of issues for the assessment itself.

Step 2: WHAT Do You Need to Know?

 

Figure 2: Four Assessment Needs

Now that the organization understands why an assessment is required (or whether one is required at all), it can identify, at least in broad terms, the information required for such an assessment.

Referring to the chart in Figure 2, IT management need only ask four questions to determine the needs of an assessment.

Is ISO/IEC 20000 Certification Required?

If the organization requires ISO/IEC 20000 certification, a Registered Certification Body (four listed in the U.S.) must provide a standardized audit, process improvement recommendations, and certification.  For most enterprises, this is a major investment spanning considerable time.

Does Repeated Benchmarking Provide Value?

Does the organization really need a score for each ITIL process?  Will the assessment be repeated on a frequent and regular basis?  Will these scores affect performance awards?  Will the results be prescriptive or actionable and will those prescribed actions significantly benefit the business?

The sales pitch for an ITSM assessment usually includes an ITIL axiom like, “You can’t manage what you don’t measure” (a meme often incorrectly attributed to Deming or Drucker).  One must ask whether scores are the best measure of a process.  To what extent do process maturity scores drive improvements?  Not much.  Each process has its own set of Critical Success Factors, Key Performance Indicators and metrics.  These are far more detailed and effective data points than an assessment score.  Ah, but what about the big picture?  Again, ITIL and COBIT provide far more effective metrics for governance and improvement on a macro level.

That said, there are some pretty impressive assessments available, some with administrative functions and audience differentiation baked into the interface.  However, one should build a business case and measure, through CSFs and KPIs, the value of such assessments to the business.

Do you need an ITSM Strategy and Framework?

Does the organization already have an intelligent strategy for its ITSM framework?  Is there a frequently refreshed roadmap for ITSM improvement?  For most enterprises, the honest answer to this is no.  Numerous Fortune 500 enterprises have implemented and “optimized” processes without strategy, roadmap, or framework.  The good news is that they keep consultants like me busy.

To build an ITSM strategy, an organization needs enough information on each process to prioritize those processes as pieces of the overall service workflow.

To gauge the priority of each process, we focus on three factors:

  • Business value of the process – the extent to which the process enables the business to generate revenue.
  • Maturity gap between current and desired state – small, medium or large gap (scores not really required).
  • Order of precedence – is the process a prerequisite for improvement of another process?

To complete the strategic roadmap, one will also need high-level information on ITSM-related tools, integration architecture, service catalog, project schedule, service desk, asset management, discovery, organizational model, business objectives, and perceived pain points.
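One way to combine the three factors is a coarse ranking rather than a precise score – consistent with the point that scores are not really required. The processes and ratings below are purely illustrative.

```python
# Illustrative ratings only: value and gap on a 1-3 scale,
# plus the processes each one is a prerequisite for.
processes = {
    "Service Catalog":   {"value": 3, "gap": 2, "prereq_for": ["Request Fulfilment"]},
    "Change Management": {"value": 3, "gap": 3, "prereq_for": ["Release Management"]},
    "Event Management":  {"value": 2, "gap": 1, "prereq_for": []},
}

def priority(attrs):
    # Precedence bumps a process up: prerequisites come before dependants.
    return attrs["value"] + attrs["gap"] + len(attrs["prereq_for"])

ranked = sorted(processes, key=lambda name: priority(processes[name]),
                reverse=True)
print(ranked)
```

Even a crude sum like this forces the conversation that matters: which processes enable revenue, which gaps are largest, and which improvements unblock others.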

Are You Targeting Specific Processes?

To some extent, everything up to this point is preparation and planning.  When we improve a process, we do that in the context of the lifecycle.  This task requires deep and detailed data on process flows, forms, stakeholders, taxonomy, inputs, outputs, KPIs, governance, tools, and pain points.

As this assessment will be the most prescriptive, it will require the most input from stakeholders.

Step 3:  HOW Do You Gain that Knowledge?

Finally, the organization identifies the assessment parameters based on the data required.  Similar to the previous step, we divide assessments into four types.

ISO/IEC 20000 Certification

The only standardized ITSM assessment is the audit associated with the ISO/IEC 20000 certification (created by itSMF and currently owned and operated by APM Group Ltd.).  The journey to ISO 20k is non-trivial.  As of this writing, 586 organizations have acquired this certification.  The process is basically measure, improve, measure, improve, …, measure, certify.  Because the purpose of improvement is certification, this is not the best approach to prescriptive process optimization.

Vendor-Supplied ITSM Assessment

The administration, content, and output of ITSM assessments vary wildly between vendors.  In most cases, the ITSM assessment generates revenue not from the cost of the assessment but from the services required to deliver the recommended improvements.

Rule #1:  “If you don’t know where you’re going, you’ll probably end up somewhere else” (Laurence J. Peter).  Without a strategy and roadmap, assessments will lead you to a place you would rather not be.

Rule #2:  The assessment matters far less than the assessor.  When seeking guidance on ITSM optimization, one needs wisdom more than data.  A skilled assessor understands this workflow in the context of a broader lifecycle and can expand the analysis to identify bottlenecks that are not obvious from an assessment score.  An example is Release Management.  The Service Desk may complain that release packages are poorly documented and buggy.  Is that the fault of the Release Manager or is it a flaw with the upstream processes that generate the Service Design Package?

Rule #3:  Scores are only useful as benchmarks and benchmarks are only useful when contextually accurate (e.g. relative performance within a market segment).  Despite the appeal of a spider diagram, avoid scored assessments unless compelled for business reasons.  Resources are better spent analyzing and implementing.

Rule #4:  An assessment without implementation is a knick-knack.  Validate the partner’s implementation experience and capability before signing up for any assessments and be prepared to act.

Rule #5:  A free assessment is a sales pitch.

Rule #6:  A survey-based assessment using a continuous sliding scale of respondent perception is a measure of process, attitude, and mood.  So is a two-year-old child.

Rule #7:  In ITSM assessments, simpler is better.  Once a vendor decides that the assessment needs to produce a repeatable score, the usefulness of that tool will decline rapidly.  If you doubt this, just look under the covers of any assessment tool for the scoring methodology or examine the questions and response choices for adherence to survey best practices.

Strategy and Roadmap Workshops

Enterprise Service Management strategies save money because not having them wastes money.  Without guiding principles, clear ownership, executive sponsorship, and a modular, prioritized roadmap, the ITSM journey falters almost immediately. Service Catalogs and CMDBs make a strategy mandatory.  For those who lack an actionable Service Strategy and Roadmap, this is the first assessment to consider.

An enterprise needs an experienced ITSM facilitator for strategy workshops.  Typically, the assessment team will perform a high-level process assessment, relevant tool analysis, framework architecture integration study, and a handful of half-day workshops where the gathered information is molded into a plan for staged implementation.

Targeted Process Assessments

Organizations know where the pain points are and have a pretty good sense of the underlying factors.  The assessor finds this knowledge scattered across SMEs, Service Desk personnel, business line managers, development teams, project office, and many other areas.  The assessor’s value is in putting these puzzle pieces together to form a picture of the broader flows and critical bottlenecks.  Through the inherited authority of the project sponsor, the assessor dissolves the organizational boundaries that stymie process optimization and, with an understanding of the broader flow, assists in correctly identifying areas where investment would yield the highest return.

For these assessments, look for a consultant who has insightful experience with the targeted process.  An assessment of IT Asset Management, a process poorly covered in ITIL (a footnote in the SACM process), requires a different skill set than an assessment of Release and Deployment Management or Event Management.

The output from a Targeted Process Assessment should be specific, actionable, and detailed.  Expect more than a list of recommendations.  Each recommendation should tie to a gap and have an associated value to the business.  Essentially, IT management should be able to construct an initial business case for each recommended improvement without a lot of extra effort.
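As a sketch of what “specific, actionable, and detailed” might look like in structured form – the field names and figures here are mine, not any standard – each recommendation carries its gap and enough value data to seed an initial business case:

```python
from dataclasses import dataclass, asdict

@dataclass
class Recommendation:
    gap: str               # the observed shortfall
    action: str            # the specific improvement recommended
    annual_value_usd: int  # estimated annual benefit to the business
    cost_usd: int          # estimated cost to implement

    def business_case(self) -> dict:
        # An initial business case falls straight out of the record.
        return {**asdict(self), "net_usd": self.annual_value_usd - self.cost_usd}

rec = Recommendation(
    gap="Release packages reach the Service Desk undocumented",
    action="Add a documentation gate to the Service Design Package review",
    annual_value_usd=120_000,
    cost_usd=35_000,
)
print(rec.business_case()["net_usd"])  # 85000
```

If a recommendation cannot be expressed this concretely – a gap, an action, a value, a cost – it probably belongs on the shelf with the dust collectors.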

Summary


Organizations are investing tens of millions in ITSM assessments.  I have seen stacks of them sitting on the shelves of executives or tucked away in some dark and dusty corner of a cubicle.  Whether these assessments were incompetent or comprehensive, as dust collectors, they have zero value.

How prevalent is the lunacy of useless ITSM assessments?  From my own experience and from conversations with others in the field, vendors are selling a lot of dust collectors.  Nobody wants to be the person who sponsored or managed a high-profile boondoggle.

So the advice is this:

  • Don’t waste time on scores because there are better ways to sell ITSM to the board than a spider diagram.
  • Develop and maintain an ITSM Strategy and Roadmap.  As Yogi Berra once said, “If you don’t know where you’re going, you’ll wind up somewhere else”.
  • Assessing and implementing need to be in close proximity to each other.
  • Get an assessor with wisdom who can facilitate a room full of people.
  • Finally, follow the three steps before you let the vendors into your office.

The journey may have many waypoints but let’s just make it once.

Liam McGlynn is a Managing Consultant at Fruition Partners, a leading cloud systems integrator for service management and a Preferred Partner of ServiceNow.  

How good Change Management can still sink ships

RMS Titanic departing Southampton on 10 April 1912

The sinking of the Titanic has become synonymous with epic failure, brought on by ego and arrogance.

But if you look at the immediate actions of the crew, you’ll find a fairly rapid and well-orchestrated response to an (Emergency) Request for Change.

The Titanic Story (in short)

The lookouts were perched high in the crow’s nest scanning for danger. History has it they were without binoculars, doing their best to fulfil their duty as early warning. Captain Edward Smith was a well-seasoned and decorated captain with the right experience and background to captain such a mighty ship. Though other ships had reported icebergs in the area, and it’s irrefutable that he was aware of the dangers, his orders were full steam ahead.

When the lookouts first spotted the infamous iceberg, they immediately sounded the bell and notified the bridge. First Officer Murdoch ordered “hard-a-starboard!”, signaled full stop, and then full reverse. All executed with speed and practiced precision.

And then they waited anxiously to see if the helm would respond in time – if the change would turn this mighty ship in time to avert disaster. Less than a minute later: impact, and the rest, of course, is history.

The parallels to IT Service Management are helpful in understanding the difference between Change Management and Change Enablement.

Change Management

Traditionally, Change Management focused on quickly and effectively implementing business-required changes without producing negative business impact. (“screw it in, not up”)

Much of the focus of Change Management is on risk analysis and management to avoid adverse impact – “protecting the business”. Change Management typically views success as implementing the requested technical feature or service (application updates, new IT services) without problems.

ITIL defines Change Management as follows:

The goal of the change management process is to ensure that standardized methods and procedures are used for efficient and prompt handling of all changes, in order to minimize the impact of change-related incidents upon service quality, and consequently improve the day-to-day operations of the organization.

Let’s take an example we’re all familiar with. I’d hazard a guess that most IT organizations have upgraded their mail servers recently. In the process, most of us defined the desired result of the change to be successfully upgrading to Exchange xx with minimal user impact. It was most likely justified by increased security, supportability, and new features.

How many upgrade efforts were driven and measured by the enhanced capability and improved productivity of business users? Would we even know how to measure that, and if we did, would we see that as our responsibility – as a Critical Success Factor for Change Management? Or are we more likely to view that as “their concern”, ours being the successful implementation of the technical requirements, leaving the business with some-assembly-required to produce value?

In the case of the Titanic, there was an immediate need to change course. Using established systems and processes, they quickly implemented the needed change. It was implemented with precision, and the change was, by traditional measure, successful.

But the outcome was far from successful, by any reasonable measure. And yet, IT organizations the world over defend their contribution by declaring they have successfully implemented the technical part of the change, as requested, with no negative impact. Success.

Change Enablement

But we all know the end of the Titanic story. Disaster. Failure. Even though the ship’s Change Management processes quickly and effectively implemented the desired change, the result was catastrophic. It didn’t achieve the desired outcome.

Change Enablement, by contrast, seeks to ensure the business actually realizes the desired outcomes – the results the business envisioned when changes are requested of IT. It evaluates and identifies additional success parameters, and establishes transition plans to ensure the desired outcomes are achieved.

Change Enablement includes organizational Management of Change required to achieve the desired business result.

Senior leadership (including IT) is charged with ensuring the resources under their care are aligned with the business objectives of the organization. If they are not, leadership is not fulfilling its obligation to the stakeholders, for which it will be held accountable. (Governance)

Implementation of needed changes is but a minor component. Change Enablement focuses on the entire organization’s capability to achieve outcomes. It includes people (the right skills, knowledge and experience), processes (working together as a system to maximize effectiveness, and directly aligned with the business), technology, relationships and organizational structures required for success – everything from the viability of the business’s long-range planning strategy to the formation of effective tactical plans and the organizational capability to deliver them.

Change Enablement needs traditional Change Management, but is laser-focused on the larger whole. And in the end, it’s the business results that count. Like the Titanic, the IT crew is on the same ship, and if the ship sinks, it’s bad for us all. It’s not like we’re safely on another (IT) ship. We are, quite literally, all in the same boat together.

For the Titanic, Change Enablement would include investments in better early-warning systems – night vision, radar, GPS, etc. – improvements in real-time analysis and controls for determining the appropriate speed for given conditions, and analysis of the ship’s design to engineer improvements to its ability to change course more quickly.

The road ahead for IT organizations is an even greater role in enabling the business to meet the ever-increasing demands on the organization. ‘Change Enablement’ is no longer the high-minded doublespeak of elite management consultants. IT can no longer faithfully implement changes in isolation and declare success.

If you think about it, Change Management, as described, is essentially playing not to fail, whereas Change Enablement is playing to win. Business requires an IT organization that can help it win the larger battle – Change Enablers who help deliver meaningful business results.

What a great time for IT Service Management!


Everything is improvement

Traditionally Continual Service Improvement (CSI) is too often thought of as the last bit we put in place when formalising ITSM.  In fact, we need to start with CSI, and we need to plan a whole portfolio of improvements encompassing formal projects, planned changes, and improvements done as part of business-as-usual (BAU) operations.  And the ITIL ‘process’ is the wrong unit of work for those improvements, despite what The Books tell you. Work with me here as I take you through a series of premises to reach these conclusions and see where it takes us.

In my last article, I said service portfolio management is a superset of organisational change management.  Service portfolio decisions are decisions about what new services go ahead and what changes are allowed to update existing services, often balancing them off against each other and against the demands of keeping the production services running.  Everything we change is service improvement. Why else would we do it?  If we define improvement as increasing value or reducing risk, then everything we change should be to improve the services to our customers, either directly or indirectly.
Therefore our improvement programme should manage and prioritise all change.  Change management and service improvement planning are one and the same.

Everything is improvement

First premise: Everything we change is service improvement

Look at a recent Union Pacific Railroad quarterly earnings report.  (The other US mega-railroad, BNSF, is now the personal train-set of Warren Buffett – that’s a real man’s toy – but luckily UP is still publicly listed and tell us what they are up to).

I don’t think UP management let one group decide to get into the fracking materials business and allowed another to decide to double track the Sunset Route.  Governors and executive management have an overall figure in mind for capital spend.   They allocate that money across both new services and infrastructure upgrades.

They manage the new and existing services as a portfolio.  If the new fracking sand traffic requires purchase of a thousand new covered hoppers then the El Paso Intermodal Yard expansion may have to wait.  Or maybe they borrow the money for the hoppers against the expected revenues because the rail-yard expansion can’t wait.  Or they squeeze operational budgets.  Either way the decisions are taken holistically: offsetting new services against BAU and balancing each change against the others.

Our improvement programme should manage and prioritise all change, including changes to introduce or upgrade (or retire) services, and changes to improve BAU operations.  Change management and service portfolio management are both aspects of the same improvement planning activity.  Service portfolio management makes the decisions; change management works out the details and puts them into effect.

It is all one portfolio

Second premise: Improvement planning comes first

Our CSI plan is the FIRST thing we put together, not some afterthought we put in place after an ‘improvement’ project or – shudder – ‘ITIL Implementation’ project.
UP don’t rush off and do $3.6 billion in capital improvements then start planning the minor improvements later.  Nor do they allow their regular track maintenance teams to spend any more than essential on the parts of the Sunset Route that are going to be torn up and double tracked in the next few years.  They run down infrastructure that they know is going to be replaced.  So the BAU improvements have to be planned in conjunction with major improvement projects.  It is all one portfolio, even if separate teams manage the sub-portfolios.  Sure miscommunications happen in the real world, but the intent is to prevent waste, duplication, shortages and conflicts.

Welcome to the real world

Third premise: we don’t have enough resource to execute all desired improvements

In the perfect world all trains would be controlled by automated systems that eliminated human error, ran trains within sight of each other for maximum track utilisation, and never, ever crashed or derailed.  Every few years governments legislate towards this, because political correctness says it is not enough to be one of the safest modes of transport around: not even one person may be allowed to die, ever.  The airlines can tell a similar story.  This irrational decision-making forces railroads to spend billions that otherwise would be allocated to better trackwork, new lines, or upgraded rolling stock and locos.  The analogy with – say – CMDB is a strong one: never mind all the other clearly more important projects, IT people can’t bear the idea of imperfect data or uncertain answers.
Even if our portfolio decision-making were rational, we can’t do everything we’d like to, in any organisation.  Look at a picture of all the practices involved in running IT:

You can’t do everything

The meaning of most of these labels should be self-evident.  Ask yourself which of those activities (practices, functions, processes… whatever you want to call them) could use some improvement in your organisation.  I’m betting most of them.
So even without available funds being gobbled up by projects inspired by political correctness, a barmy new boss, or a genuine need in the business, what would be the probability of you getting approval and money for projects to improve all of them?  Even if you work at Google and money is no problem, assuming a mad boss signed off on all of them what chance would you have of actually getting them all done?  Hellooooo!!!

What are we doing wrong?

Fourth premise: there is something very wrong with the way we approach ITSM improvement projects, which causes them to become overly big and complex and disruptive.  This is because we choose the wrong unit of work for improvements.

How do we cover everything that needs to be looked at?  The key word there is ‘needs’.  We should understand our business goals for service, derive from those goals the required outcomes of service delivery, then focus on improvements that deliver those required outcomes … and nothing else.

One way to improve focus is to work on smaller units than a whole practice.  A major shortcoming of many IT service management projects is that they take the ITIL ‘processes’ as the building blocks of the programme.  ‘We will do Incident first’.  ‘We can’t do Change until we have done Configuration’.  Even some of the official ITIL books promote this thinking.

Put another way, you don’t eat an elephant one leg at a time: you eat it one steak at a time… and one mouthful at a time within the meal.  Especially when the elephant has about 80 legs.

Don’t eat the whole elephant

We must decompose the service management practices into smaller, more achievable units of work, which we assemble Lego-style into a solution to the current need.  The objective is not to eat the elephant, it is to get some good meals out of it.
Or to get back to railroads: the Sunset Route is identified as a critical bottleneck that needs to be improved, so they look at trackwork, yards, dispatching practices, traffic flows, alternate routes, partner and customer agreements…. Every practice of that one part of the business is considered.  Then a programme of improvements is put in place that includes a big capital project like double-tracking as much of it as is essential; but also includes lots of local minor improvements across all practices – not improvements for their own sake, not improvements to every aspect of every practice, just a collection of improvements assembled to relieve the congestion on Sunset.

Make improvement real

So take these four premises and consider the conclusions we can draw from them:

  1. Everything we change is service improvement.
  2. Improvement planning comes first.
  3. We don’t have enough resource to execute all desired improvements.
  4. We choose the wrong unit of work for improvements.

We should begin our strategic planning of operations by putting in place a service improvement programme.  That programme should encompass all change and BAU: i.e. it manages the service portfolio.

The task of “eating 80-plus elephant’s legs” is overwhelming. We can’t improve everything about every aspect of doing IT.   Some sort of expediency and pragmatism is required to make it manageable.  A first step down that road is to stop trying to fix things practice-by-practice, one ITIL “process” at a time.

Focus on needs

We must focus on what is needed.  To understand the word ‘needed’ we go back to the desired business outcomes.  Then we can make a list of the improvement outputs that will deliver those outcomes, and hence the pieces of work we need to do.

Even then we will find that the list can be daunting, and some sort of ruthless expediency will have to be applied to choose what does and doesn’t get done.

The other challenge will be resourcing the improvements, no matter how ruthlessly we cut down the list.  Almost all of us work in an environment of shrinking budgets and desperate shortages of every resource: time, people and money.  One way to address this – as I’ve already hinted – is to do some of the work as part of BAU.

These are all aspects of my public-domain improvement planning method, Tipu:

  • Alignment to business outcomes
  • Ruthless decision making
  • Doing much of the work as part of our day jobs

More of this in my next article when we look closer at the Tipu approach.

Service Improvement at Cherry Valley

Problem, risk, change, CSI, service portfolio, projects: they all make changes to services.  How they inter-relate is not well defined or understood.  We will try to make the model clearer and simpler.

Problem and Risk and Improvement

The crew was not warned of the severe weather ahead

In this series of articles, we have been talking about an ethanol train derailment in the USA as a case study for our discussions of service management.  The US National Transport Safety Board wrote a huge report about the disaster, trying to identify every single factor that contributed and to recommend improvements.  The NTSB were not doing Problem Management at Cherry Valley.  The crews cleaning up the mess and rebuilding the track were doing problem management.  The local authorities repairing the water reservoir that burst were doing problem management.  The NTSB was doing risk management and driving service improvement.

Arguably, fixing procedures which were broken was also problem management.  The local dispatcher failed to tell the train crew of a severe weather warning, as he was supposed to do – a warning that would have required the crew to slow down and watch out.  So training and prompts could be considered problem management.

But somewhere there is a line where problem management ends and improvement begins, in particular what ITIL calls continual service improvement or CSI.

In the Cherry Valley incident, the police and railroad could have communicated better with each other.  Was the procedure broken?  No, it was just not as effective as it could be.  The type of tank cars approved for ethanol transportation were not required to have double bulkheads on the ends to reduce the chance of them getting punctured.  Fixing that is not problem management; it is improving the safety of the tank cars.  I don’t think improving that communications procedure or the tank car design is problem management – otherwise, if you follow that thinking to its logical conclusion, every improvement is problem management.

A distinction between risks and problems

But wait: unreliable communications procedure and the single-skinned tank cars are also risks.  A number of thinkers, including Jan van Bon, argue that risk and problem management are the same thing.  I think there is a useful distinction: a problem is something that is known to be broken, that will definitely cause service interruptions if not fixed; a “clear and present danger”.  Risk management is something much broader, of which problems are a subset.  The existence of a distinct problem management practice gives that practice the focus it needs to address the immediate and certain risks.

(Risk is an essential practice that ITIL – strangely – does not even recognise as a distinct practice; the 2011 edition of ITIL’s Continual Service Improvement book attempts to plug this hole.  COBIT does include risk management, big time.  USMBOK does too, though in its own distinctive way it lumps risk management under Customer services; I disagree: there are risks to our business too that don’t affect the customer.)

So risk management and problem management aren’t the same thing.  Risk management and improvement aren’t the same thing either.  CSI is about improving the value (quality) as well as reducing the risks.

To summarise all that: problem management is part of risk management which is part of service improvement.

Service Portfolio and Change

Now for another piece of the puzzle.  Service Portfolio practice is about deciding on new services, improvements to services, and retirement of services.  Portfolio decisions are – or should be – driven by business strategy: where we want to get to, how we want to approach getting there, what bounds we put on doing that.

Portfolio decisions should be made by balancing value and risk.  Value is benefits minus costs.  There is a negative benefit and a set of risks associated with the impact on existing services of building a new service: there is the impact of the project dragging people and resources away from production, and the ongoing impact of increased complexity, the draining of shared resources and so on.  So portfolio decisions need to be made holistically, in the context of both the planned and live services.  And in the context of retired services too: “tell me again why we are planning to build a new service that looks remarkably like the one we killed off last year?”  A lot of improvement is about capturing the lessons of the past.
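The balancing act described above can be sketched as a toy calculation.  This is purely illustrative and not a model from the article: the `PortfolioItem` class, its fields and the risk-weighted score are all my own assumptions.

```python
from dataclasses import dataclass

@dataclass
class PortfolioItem:
    name: str
    benefits: float   # expected benefit over some agreed period
    costs: float      # build + run costs over the same period
    risk: float       # 0.0 (negligible) .. 1.0 (severe), including impact on live services

    @property
    def value(self) -> float:
        # "Value is benefits minus costs", as stated above
        return self.benefits - self.costs

def rank_portfolio(items):
    # Weight value down by risk, so a risky high-value item can rank
    # below a safer, slightly less valuable one.
    return sorted(items, key=lambda i: i.value * (1.0 - i.risk), reverse=True)

new_service = PortfolioItem("new-crm", benefits=300.0, costs=200.0, risk=0.5)
patching    = PortfolioItem("server-patching", benefits=150.0, costs=70.0, risk=0.1)
ranked = rank_portfolio([new_service, patching])
```

With these invented numbers the “strategic” new service (value 100, heavily risk-discounted) ranks below the mundane patching work (value 80, low risk) — the kind of trade-off the article describes.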

Portfolio management is a powerful technique that is applied at multiple levels.  Project and Programme Portfolio Management is all the rage right now, but it only tells part of the story.  Managing projects in programmes and programmes in portfolios only manages the changes that we have committed to make; it doesn’t look at those changes in the context of existing live services as well.  When we allocate resources across projects in PPM we are not looking at the impact on business-as-usual (BAU); we are not doling out resources across projects and BAU from a single pool.  That is what a service portfolio gives us: the truly holistic picture of all the effort in our organisation across change and BAU.

A balancing act

Service portfolio management is a superset of organisational change management.  Portfolio decisions are – or should be – decisions about what changes go ahead for new services and what changes are allowed to update existing services, often balancing them off against each other and against the demands of keeping the production services running.  “Sure the new service is strategic, but the risk of not patching this production server is more urgent and we can’t do both at once because they conflict, so this new service must wait until the next change window”.  “Yes, the upgrade to Windows 13 is overdue, but we don’t have enough people or money to do it right now because the new payments system must go live”.  “No, we simply cannot take on another programme of work right now: BAU will crumble if we try to build this new service before we finish some of these other major works”.

Or in railroad terms: “The upgrade to the aging track through Cherry Valley must wait another year because all available funds are ear-marked for a new container terminal on the West Coast to increase the China trade”.  “The NTSB will lynch us if we don’t do something about Cherry Valley quickly.  Halve the order for the new double-stack container cars”.

Change is service improvement

Everything we change is service improvement. Why else would we do it?  If we define improvement as increasing value or reducing risk, then everything we change should be to improve the services to our customers, either directly or indirectly.

Therefore our improvement programme should manage and prioritise all change.  Change management and service improvement planning are one and the same.

So organisational change management is CSI. They are looking at the beast from different angles, but it is the same animal.  In generally accepted thinking, organisational change practice tends to be concerned with the big chunky changes and CSI tends to be focused more on the incremental changes.  But try to find the demarcation between the two.   You can’t decide on major change without understanding the total workload of changes large and small.  You can’t plan a programme of improvement work for only minor improvements without considering what major projects are planned or happening.

In summary, change/CSI is one part of service portfolio management, which also considers delivery of BAU live services.  A railroad will stop doing minor sleeper (tie) replacements and other track maintenance when they know they are going to completely re-lay or re-locate the track in the near future.  After decades of retreat, railroads in the USA are investing in infrastructure to meet a coming boom (China trade, ethanol madness, looming shortage of truckers); but they had better beware not to draw too much money away from delivering on existing commitments, and not to disrupt traffic too much with major works.

Simplifying service change

ITIL as it is today seems to have a messy, complicated story about change.  We have a whole bunch of different practices all changing our services, from Service Portfolio to Change Management to Problem Management to CSI.  How they relate to each other is not entirely clear, and how they interact with risk management or project management is undefined.

There are common misconceptions about these practices.  CSI is often thought of as “twiddling the knobs”, fine-tuning services after they go live.  Portfolio management is often thought of as being limited to deciding what new services we need.  Risk management is seen as just auditing and keeping a list.  Change Management can mean anything from production change control to organisational transformation depending on who you talk to.

It is confusing to many.  If you agree with the arguments in this article then we can start to simplify and clarify the model:

Rob England: ITSM Model
I have added in the Availability, Capacity, Continuity, Incident and Service Level Management practices as sources of requirements for improvement.  These are the feedback mechanisms from operations.  In addition, the strategy, portfolio and request practices are sources of new improvements.  I’ve also placed the operational change and release practices in context.

These are merely the thoughts of this author.  I can’t map them directly to any model I recall, but I am old and forgetful.  If readers can make the connection, please comment below.

Next time we will look at the author’s approach to CSI, known as Tipu.

Image credit: © tycoon101 – Fotolia.com

The RBS Glitch – A Wake Up Call?

More than a fortnight after a “glitch” in late June affected Royal Bank of Scotland (RBS), NatWest and Ulster Bank accounts, the fall-out continues, with the manual processing backlog still affecting Ulster Bank customers.

Now, the Online Oxford Dictionary defines a glitch as:
a sudden, usually temporary malfunction or fault of equipment

I don’t think anyone affected would see it in quite the same way.

So when did this all happen?

The first I knew about it was a plaintive text from a friend who wanted to check her balance, and could not because:
“My bank’s computers are down”
By the time the evening rolled around, the issue was national news, and it was very clear that this was more than just a simple outage.

On the night of Tuesday 19th June, batch processes to update accounts failed to run, and branches were fielding customer complaints about balances.

As the week progressed, it became clear that this was no simple ‘glitch’, but the result of some failure somewhere, affecting 17 million customers.

What actually happened?

As most people can appreciate, transactions to and from people’s accounts are typically handled and updated using batch processing technology.

However, that software requires maintenance, and an upgrade to it had to be backed out; as part of the back-out, it appears that the scheduling queue was deleted.

As a result, inbound payments were not being registered and balances were not being updated correctly, with the obvious knock-on effect of funds showing as unavailable, bills going unpaid, and so on.

The work to fix the issues meant that all the information that had been wiped had to be re-entered.

Apparently the order of re-establishing accounts was RBS first, then NatWest; customers at Ulster Bank were still suffering the effects as we moved into early July.

All the while news stories were coming in thick and fast.

The BBC reported that someone had to remain an extra night in jail because his parole bond could not be verified.

House sales were left in jeopardy as money was not showing as being transferred.

Even if you did not have your main banking with any of the three banks in the RBS group, you were likely to be affected.

If anyone in your payment chain banked with any of those banks, transactions were likely to be affected.

Interestingly enough, I called in to a local branch of one of the affected banks during the week of the crisis, as I had money to pay in, and it was utter chaos.

And I called in again this week and casually asked my business account manager how things had been.

The branches had very little information coming to them at the height of the situation.

When your own business account manager had their card declined while buying groceries that week, you have to wonder about the credibility of the bank’s processes.

Breaking this down, based on what we know

Understandably, RBS has been reticent to provide full details, and there has been plenty of discussion as to the reasons, which we will get to, but let’s start by breaking down the events based on what we know.

  • Batch Processing Software

What we are told is that RBS uses CA Technologies’ CA-7 batch processing software.

A back-out error was made after a failed update to the software, and the batch schedule was completely deleted.

  •  Incidents Reported

Customers were reporting issues with balance updates to accounts early on in the week commencing 20th June, and soon it became clear that thousands of accounts were affected across the three banks.

Frustratingly, some but not all services were affected – ATMs were still working for small withdrawals, but some online functions were unavailable.

  •  Major Incident

As the days dragged on, and the backlog of transactions grew, the reputation of RBS and NatWest in particular came under fire.

By the 21st June, there was still no official fix date, and branches of NatWest were being kept open for customers to be able to get cash.

  •  Change Management

Now we get to the rub.

Initial media leaks pointed to a junior administrator making an error in backing out the software update and wiping the entire schedule, causing the automated batch process to fail.

But what raised eyebrows in the IT industry initially, was the thorny subject of outsourcing.

RBS, (let me stress like MANY companies), has outsourced elements of IT support off-shore.

Some of that has included administration support for their batch processing, but with a group also still in the UK.

Many of these complex systems have unique little quirks.  Teams develop “in-house” knowledge, and knowledge is power.

Initial reports seemed to indicate that the fault lay with the support and administration for the batch processing software, some of which was off-shore.

Lack of familiarity with the system also pointed to perhaps issues in the off-shoring process.

However, in a letter to the Treasury Select Committee, RBS CEO Stephen Hester informed the committee that the maintenance error had occurred within the UK based team.

  •  Documentation

The other factor is the human need to have an edge on the competition – after all, knowledge is power.

Where functions are outsourced, there are two vital elements that must be focussed on (and all too often are either marginalised or ignored due to costs):

1)      Knowledge Transfer

I have worked with many clients where the staff who will be supporting the outsourced services are brought over to learn (often from the people whose jobs they will be replacing).

Do not underestimate what a very daunting and disturbing experience this will be, for both parties concerned.

2)      Documentation

Even if jobs are not being outsourced, documentation is often the scourge of the technical support team.  It is almost a rite of passage to learn the nuances of complex systems.

Could better processes help?

It is such a negative situation that I think it is worth looking at the effort that went into resolving it.

The issues were made worse by the fact that the team working to resolve the problem could not access the record of transactions that were processed before the batch process failed.

But the processes do exist for them to intervene and recreate the transactions, albeit via lengthy manual effort.

Teams worked round the clock to clear the backlog, as batches would need to be reconstructed once they worked out where they failed.

In Ulster Bank’s case, they were dependent on some NatWest systems, so again something somewhere must dictate the order in which to recover, else people would be trying to update accounts all over the place.

Could adherence to processes have prevented the issue in the first place?

Well, undoubtedly.  After all, this is not the first time the support teams will have updated their batch software, nor will it have been the first time they have backed out a change.

Will they be reviewing their procedures?

I would like to hope that the support teams on and off shore are collaborating to make sure that processes are understood and that documentation is bang up-to-date.

What can we learn from this?

Apart from maybe putting our money under the mattress, I think this has been a wake-up call for many people who, over the years, have put all their faith in the systems that allow us to live our lives.

Not only that, but in an environment where quite possibly people have been the target of outsourcing in their own jobs, it was a rude awakening to some of the risks of shifting support for complex integrated systems without effective training, documentation and, more importantly, back-up support.

Prior to Mr Hester’s written response to the Treasury Select Committee, I had no problem believing that elements such as poor documentation and handover, and a remote team’s unfamiliarity with a system, could have resulted in the mistaken wipe of a schedule.

What this proves is that anyone, in ANY part of the world can make a mistake.

The Curious Technologist & The Case of the Analogies

Sometimes technicians, to paraphrase the character of Ian Malcolm, are: “… so preoccupied with whether or not they could, they didn’t stop to think if they should.”

As the new analyst for The ITSM Review, I was presented with the objectives and characteristics of the role – namely that of The Curious Technologist.

As I embark on this odyssey, I want these articles in particular to be a little more anecdotal in nature, as this subject can be as dry as toast (see what I did there?)

Incoming…

I landed in the world of ITIL back in 2005, when bids required my organisation to demonstrate ITIL alignment and revolved around the seemingly holy grail of Configuration Management.

A simple gallop around potential contacts in the geographic regions, and within the various departments, showed that everyone had their own idea of what Configuration Management was.

Ideas ranged from the actual configuration setup of machines to rigid adherence to the ITIL descriptions in the book.

Welcome… to Jurassic Park!

Perhaps my favourite, certainly for Configuration Management, was the ‘Jurassic Park’ principle.

Ask any technical group what their discovery tool does, and you will receive the most complex, macro-ridden spreadsheets with all manner of data widgets that can be scanned.

Trying to change the mind-set of technical folk to focus on configuration item data that is relevant is a challenge.

In the film, as the park’s creator, John Hammond, smugly announces his plans to literally unleash recreated dinosaurs on the unsuspecting tourist public, a mathematician specialising in chaos theory sets him straight.

Sparring from the start, the character of Ian Malcolm chides him for taking work that others have done, and just taking that extra (terrifying) step.

Sometimes technicians, to paraphrase the character of Ian Malcolm, are: “… so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Whilst maybe not as (fictionally) fatalistic, this was true when we looked at the depth of scannable data versus what is actually required to make Configuration Management achievable.

The next logical step was to analyse the list of discovered widgets and to ask two key questions:

  1. How frequently is the data element scanned?
  2. How current is it kept, and is it used as part of another process?

Not surprisingly, a lot of things are scanned once and never referred to or updated again.
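The two questions above can be sketched as a simple filter over discovered attribute records.  This is illustrative only: the record fields (`last_scanned`, `referenced_by`) and the 90-day freshness threshold are my assumptions, not anything from a real discovery tool.

```python
from datetime import date, timedelta

def relevant_attributes(attributes, today, max_age_days=90):
    """Keep only discovered attributes that are scanned regularly
    AND actually consumed by another process (e.g. Change Management)."""
    keep = []
    for attr in attributes:
        # Question 1: how frequently is the data element scanned?
        fresh = (today - attr["last_scanned"]) <= timedelta(days=max_age_days)
        # Question 2: is it kept current and used by another process?
        used = bool(attr["referenced_by"])
        if fresh and used:
            keep.append(attr["name"])
    return keep
```

Anything failing both tests – scanned once, referenced by nothing – is a candidate for dropping from the CI definition rather than dragging into the CMDB.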

The linkage with Change Management in particular proved to give us the grounds to define the “highest” common denominator, which is the most typical configuration item to be affected in a change.

And therein lay the basis for our definitions (in this case) on standards.

 “Here in my car, I feel safest of all…”

Perhaps my most constant analogy of all was one that was taught to me as I was preparing for my first billable project.

In moving to a new role recently I was fortunate enough to be working on a different service desk tool, and indeed my later career was often spent moving clients from one tool to another.

There is no real difference in the raison d’être of a tool – it exists to take a ticket from the start of its life-cycle journey to the end.

Processes are the fuel that will drive that engine – but essentially a ticket is opened, it is assigned, it is resolved or closed.

Not unlike a car.

I could give any one of you the keys to my car and, with a few moments of familiarisation, you could drive it away.

Simplistic analogy?  Yes.

But it is often a necessary first step in detaching recipients from their emotional attachment to whatever tool is being replaced.

Welcome to… The Curious Technologist…

A lot of these articles may well be anecdotal, but in my years of watching some of the best consultants at practice, I have learned that boiling down a complex requirement or approach sometimes requires a simpler touch.

After all, if the prospect of moving to a new set of tooling meets with barriers straight away, then how will the deployment ever move forward?

Sure, the use of film lines or pop culture may cause me more amusement than my audience, but it does provide a mechanism to channel people’s thoughts along a different line, which is vital in the complex environments we often work in.


A Great Free ITSM & ITAM Process Tool (via #Back2ITSM)

Cognizant Process Model

This is a very cool online tool for anyone in ITAM or ITSM.

COGNIZANT PROCESS MODEL

This great resource was kindly shared by Shane Carlson of Cognizant.

Shane is a founding member of the #Back2ITSM community, whereby ITSM professionals are encouraged to share their expertise for the benefit of others (and therefore develop the industry).

The process model includes the following modules:

  • Request Management
  • Incident Management
  • Event Management
  • Problem Management
  • Change Management
  • Configuration Management
  • Release Management
  • Service Level Management
  • Availability Management
  • Capacity Management
  • IT Service Continuity Management
  • Continuity Operations
  • Financial Management for IT Services
  • Asset Management
  • Service Catalog
  • Knowledge Management
  • Information Security Management
  • Security Operations
  • Access Management
  • Portfolio Management
  • Program and Project Management

Each module includes guidance on the following areas:

  • Process Diagram
  • Benefits
  • Controls
  • Goal
  • Metrics
  • Policies
  • Process Team
  • Resources
  • Roles
  • Scope
  • Specification

According to the blurb….

“PathFinder is specifically designed to those:

  • Tasked with designing an IT Process.
  • Seeking validation that a process has been validated in the industry.
  • Looking to increase effectiveness of their current process design.
  • Seeking assistance with the cultural adoption of their IT process.
  • Faced with meeting compliance regulations.”

VIEW THE COGNIZANT PROCESS MODEL

Thanks to Shane for sharing this great free resource.

Yes, free. No registration, no 30 day trial, no salesman will call. Enjoy! If you find it useful please share the link and don’t forget to mention #Back2ITSM.

Interview: Simon Morris, 'Sneaking ITIL into the Business'

Ignoring the obvious may lead to a nasty mess

I found Simon Morris via his remarkably useful ITIL in 140 app. Simon recently joined ServiceNow from a FTSE100 Advertising, Marketing and Communications group. He was Head of Operations and Engineering and part of a team that led the Shared Services IT organisation through its transition to IT Service Management process implementation. Here, Simon kindly shares his experiences of ITSM at the rock face.

ITSM Review: You state that prior to your ITSM transformation project you were ‘spending the entire time doing break-fix work and working yourselves into the ground with an ever-increasing cycle of work’. Looking back, can you remember any specific examples of what you were doing, that ITSM resolved?

Simon Morris:

Thinking back I can now see that implementing ITSM gave us the outcomes that we expected from the investment we made in time and money, as well as outcomes that we had no idea would be achieved. Because ITIL is such a wide-ranging framework I think it’s very difficult for organisations to truly appreciate how much is involved at the outset of the project.

We certainly had no idea how much effort would be spent overall on IT Service Management, but we were able to identify results early on, which encouraged us to keep going. By the time I left the organisation we had multiple people dedicated to the practice, and of course ITSM processes affect all engineering staff on a day-to-day basis.

As soon as we finished our ITILv3 training we took the approach of selecting processes that we were already following, and adding layers of maturity to bring them into line with best practice.

I guess at the time we didn’t know it, but we started with Continual Service Improvement – looking at existing processes and identifying improvements. One example that I can recall is Configuration Management – with a very complex Infrastructure we previously had issues in identifying the impact of maintenance work or unplanned outages. The Infrastructure had a high rate of change and it felt impossible to keep a grip on how systems interacted, and depended on each other.

Using Change Management we were able to regulate the rate of change, and keep on top of our Configuration data. Identifying the potential impact of an outage on a system was a process that went from hours down to minutes.
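Cutting impact identification from hours to minutes essentially means traversing recorded dependencies instead of asking around.  Here is a minimal sketch of that idea, assuming (my assumption, not the organisation’s actual CMDB) that the configuration data is held as a simple mapping from each CI to the CIs that depend on it:

```python
from collections import deque

def impacted_services(dependencies, failed_ci):
    """Breadth-first walk of the dependency graph.

    dependencies maps a CI to the list of CIs that depend on it;
    returns everything downstream of the failed CI."""
    impacted, queue = set(), deque([failed_ci])
    while queue:
        ci = queue.popleft()
        for dependent in dependencies.get(ci, []):
            if dependent not in impacted:
                impacted.add(dependent)
                queue.append(dependent)
    return impacted
```

With dependencies recorded under Change Management, answering “what breaks if this SAN goes down?” becomes a single traversal rather than an afternoon of phone calls.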

Q. What was the tipping point? How did the ITSM movement gather momentum from something far down the to do list to a strategic initiative? 

If I’m completely honest we had to “sneak it in”! We were under huge pressure to improve the level of professionalism, and to increase the credibility of IT, but constructing the business case for a full ITSM transition was very hard. Especially when you factor in the cost of training, certification, toolsets and the amount of time spent on process improvement. As I said, at the point I left the company we had full time headcount dedicated to ITSM, and getting approval for those additional people at the outset would have been impossible.

We were lucky to have some autonomy over the training budget and found a good partner to get a dozen or so engineers qualified to ITILv3 Foundation level. At that point we had enough momentum, and our influence at departmental head level to make the changes we needed to.

One of the outcomes of our “skunkworks” ITIL transition that we didn’t anticipate at the time was a much better financial appreciation of our IT Services. Before the project we were charging our internal business units on a bespoke rate card that didn’t accurately reflect the costs involved in providing the service. Within a year of the training we had built rate cards that both reflected the true cost of the IT Service, but also included long term planning for capacity.

This really commoditised IT Services such as Storage and Backup and we were able to apportion costs accurately to the business units that consumed the services.

Measuring the cost benefit of ITSM is something that I think the industry needs to do better in order to convince leaders that it’s a sensible business decision – I’m absolutely convinced that the improvements we made to our IT recharge model offset a sizeable portion of our initial costs. Plus we introduced benefits that were much harder to measure in a financial sense such as service uptime, reduced incident resolution times and increased credibility.

Q. How did you measure you were on the right track? What specifically were you measuring? How did you quantify success to the boss? 

To refer back to my earlier point: we started by reviewing existing processes that were immature and then adding layers to them. We didn’t start out with process metrics, but we added them quite early on.

If I had the opportunity to start this process again I’d definitely start with the question of measurements and metrics. Before we introduced ITSM I don’t think we definitively knew where our problems were, although of course we had a good idea about Incident resolution times and customer satisfaction.

Although it’s tempting to jump straight into process improvement I’d encourage organisations at the start of their ITSM journey to spend time building a baseline of where they are today.

Surveys from your customers and users help to gauge the level of satisfaction before you start to make improvements (of course, this is a hard measurement to take, especially if you’ve never asked your users for honest feedback before – I’ve seen some pretty brutal survey responses in my time!)

Some processes are easier to monitor than others – Incident Management comes to mind as one that is fairly easy to gather metrics on; Event Management is another.

I would also say that, having survived the ITIL Foundation course, it’s important to go back into the ITIL literature to research how to measure your processes – it’s a subject on which ITIL has some good guidance, with Critical Success Factors (CSFs) and Key Performance Indicators (KPIs).

Q. What would you advise to other companies that are currently stuck in the wrong place, ignoring the dog? (See Simon’s analogy here). Is there anything that you learnt on your journey that you would do differently next time? 

Wow, this is a big question.

Business outcomes

My first thought is that IT organisations should remember that our purpose is to deliver an outcome to the business, and your ITSM deployment should reflect this. In the same way that running IT projects with no clear business benefit, or alignment to an overall strategy is a bad idea – we shouldn’t be implementing ITIL just for the sake of doing it.

For every process that you design or improve, the first question should be “What is the business outcome?”, closely followed by “How am I going to prove that I delivered this outcome?”  An example for Incident Management would be an outcome of “restoring access to IT services within an agreed timeframe”, so the obvious answer to the second question is “measure the time to resolution”.

By analysing each process in this way you can get a clearer idea of what types of measurement you should take to:

  • Ensure that the process delivers value and
  • Demonstrate that value.

I actually think that you should start designing the process back-to-front. Identify the outcome, then the method of measurement and then work out what the process should be.
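For the Incident Management example above, the outcome-first approach might reduce to a couple of numbers: mean time to resolution, and the fraction of incidents resolved within the agreed timeframe.  A minimal sketch, assuming (my assumptions, not Simon’s) that incidents are simple (opened, resolved) timestamp pairs and the target is four hours:

```python
from datetime import datetime, timedelta

def resolution_metrics(incidents, target=timedelta(hours=4)):
    """incidents: list of (opened, resolved) datetime pairs.

    Returns (mean time to resolve, fraction resolved within target) --
    the measurements that prove the 'restore within an agreed
    timeframe' outcome was delivered."""
    durations = [resolved - opened for opened, resolved in incidents]
    mean = sum(durations, timedelta()) / len(durations)
    within = sum(d <= target for d in durations) / len(durations)
    return mean, within
```

Deciding on these two outputs first then dictates what the process must capture (accurate open and resolve timestamps) before anyone argues about form fields.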

Every time I see an Incident Management form with hundreds of different choices for the category (Hardware, Software, Keyboard, Server etc.) I always wonder whether the reporting requirements were ever considered, or whether we just added fields for the sake of it.

Tool maturity

Next I would encourage organisations to consider their process maturity and ITSM toolset maturity as two different factors. There is a huge amount of choice in the ITSM suite market at the moment (of course I work for a vendor now, so I’m entitled to have a bias!), but organisations should remember that all vendors offer a toolset and nothing more.

The tool has to support the process that you design, and it’s far too easy to take a great toolset and implement a lousy process. A year into your transition to ITSM you won’t be able to prove the worth of the time and money spent, and you run the risk of the process being devalued or abandoned.

Having a good process will drive the choice of tool, and the design decisions on how that tool is configured. Having the right toolset is a huge factor in the chances of a successful transition to ITSM. I’ve lived through experiences with legacy, unwieldy ITSM vendors and it makes the task so much harder.

Participation at every level

One of the best choices we made when we transitioned to ITSM was to train a cross-section of engineers across the company. Of the initial group of people to go through ITILv3 Foundation training, we had engineers from the Service Desk, PC and Mac support, Infrastructure, Service Delivery Managers, Asset Management staff and departmental heads.

The result was that we had a core of people who were motivated enough to promote the changes we were making all across the IT department at different levels of seniority. Introducing change, and especially changes that measure the performance of teams and individuals will always induce fear and doubt in some people.

Had we limited the ITIL training to just the management team I don’t think we would have had the same successes. My only regret is that our highest level of IT management managed to swerve the training – I’ll send my old boss the link to this interview to remind him of this!

Find the right pace

A transition to ITSM processes is a marathon, not a sprint so it’s important to find the right tempo for your organisation. Rather than throwing an unsustainable amount of resource at process improvement for a short amount of time I’d advise organisations to recognise that they’ll need to reserve effort on a permanent basis to monitor, measure and improve their services.

ITIL burnout is a very real risk.

 


My last piece of advice is not to feel that you have to implement every process on day one. I can't think of an approach more prone to failure. I've read criticism from ITSM pundits that it's very rare to find a full ITILv3 implementation in the field; I think that says more about the breadth and depth of the ITIL framework than about the failings of the companies that implement it.

There's an adage from the Free Software community, "release early, release often", that applies just as well to ITSM process improvements.

By the time I left my previous organisation we had iterated through three versions of Change Management, each time adding more maturity to the process and making incremental improvements.

I'd recommend reading "ITIL Lite: A Road Map to Full or Partial ITIL Implementation" by Malcolm Fry. He outlines why ITILv3 might not be fully implemented, and the reasons make absolute sense:

  • Cost
  • No customer support
  • Time constraints
  • Ownership
  • Running out of steam

IT Service Management is a cultural change, and it's worth taking the time to alter people's working habits gradually, rather than exposing them to a huge amount of process change all at once.

Q. Lastly, what do you do at ServiceNow?

I work as a developer in the Application Development Team in Richmond, London. We’re responsible for the ITSM and Business process applications that run on our Cloud platform. On a day-to-day basis this means reviewing our core applications (Incident, Problem, Change, CMDB) and looking for improvements based on customer requirements and best practice.

Obviously the recent ITIL 2011 release is interesting as we work our way through the literature and compare it against our toolset. Recently I've also been involved in researching how best to integrate defect management into our Scrum product.

The fact that ServiceNow is growing at an amazing rate (we're currently the second-fastest-growing tech company in the US) shows that ITSM is being taken seriously by organisations, and that they are investing money to get the returns a successful transition can offer. These should be encouraging signs for organisations starting their journey with ITIL.

@simo_morris
Beer and Speech