One of the questions I was asked most often as a consultant was about the difference between Change and Release Management. It should be simple, right? We’ve had 3.5 versions of ITIL, the itSMF, even special interest groups dedicated to Service Transition, yet there is near-universal confusion at the sharp end about WHAT the difference is between Change and Release Management. So let’s sort this out once and for all!
For my money, Change Management is the guardian – it protects the live environment.
The primary objective of Change Management is to enable beneficial Changes to be made, with minimum disruption to IT Services.
Release Management is like air traffic control; it packages bundles of Change together into a single Release to reduce periods of downtime and inconvenience to the rest of the business.
The primary objective of Release Management is to ensure that the integrity of the live environment is protected and that the correct components are released.
Bringing out the big guns
Let’s get back to basics and talk ITIL for a second. Both Change and Release Management sit in the Service Transition stage of the ITIL lifecycle, so both are part of the value stream that delivers effective business change:
Change: the addition, modification or removal of anything that could have an impact on IT services.
Release: a collection of hardware, software, documentation, processes or other components required to implement one or more approved changes to IT Services. The contents of each release are managed, tested and deployed as a single entity.
In other words, Change is about installing, modifying or retiring things safely without setting anything on fire. Release Management is a holistic process that bundles together multiple Changes into a single deployment. So now that we’ve got that sorted; let’s talk about how to make sure Change and Release Management play nicely together.
Have a way of highlighting Releases in your Change Management tool
This ensures that Releases show up in the Change Schedule (CS) and that everyone is aware of any major deployments, so there are no conflicts or scheduling clashes. Take it from someone who knows: there is nothing more uncomfortable than being on site in central London explaining to several different technical teams why a site power-down and a major code deployment at the same site can’t go ahead at the same time. #awkward
Separate the roles of Change Manager and Release Manager
Change Management is a governance process; the role of the Change Manager is to review, authorise and schedule the Change. Release Management is an installation process; with the support of Change Management, it builds, tests and deploys new or updated services into the live environment. Both are equally important, so you need subject matter experts for both.
Agree the level of documentation required
A Change is a single record containing:
Approval details & audit trail
Release documentation is much more involved and as a starter for ten will contain:
Back out plan
Ensure the Release Manager is present at CAB
If your Release Manager isn’t attending CAB, invite them immediately! It’s really important that the Release Manager is there to explain the Release content and any dependencies, communicate business approval, and advise the Service Desk and Problem Management of any defects, working with them to ensure any known errors and workaround details are raised where appropriate.
By having Change and Release Management working closely together, your effectiveness rates should improve and unforeseen incidents, problems and defects should be reduced. How do you manage Change and Release Management? Let us know in the comments!
One of the things that I wish was covered in more detail during ITIL intermediate training is how to properly impact assess Changes. Change Managers are the guardians of the production environment, so making sure that all changes are properly assessed and sanity checked is a key part of service delivery. Assess too low and high-risk changes go through unchallenged; assess too high and you clog up the process by examining every change, no matter how small, as if it could kill your organisation.
Here are the things that I look for when assessing a Change:
Does it highlight the affected services so that it’s easy to identify in any reports?
Is it clear and does it make sense? Sounds basic I know but let’s make it easier for the other people assessing and authorising the Change.
Why are we doing the Change? Remember, this isn’t just about technology, what about business and financial benefits?
What are the risks in carrying out this Change? Has a risk matrix been used to give it a tangible risk score, or is it a case of “reboot that critical server in the middle of the day? Be grand”? Imagine explaining to senior management what went wrong if the Change implodes – have you looked at risk mitigation? Using a formal risk categorisation matrix is key here. Don’t just assume technicians know what makes a change low risk; one of the key complaints from the business is that IT does not understand their pain. Creating a change risk assessment matrix IN A REPEATABLE FORMAT should be your first priority as a Change Manager. If you can’t assess the risk of a change in the same way each time, learning from any mistakes, then you’re not doing Change Management. Period.
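As a minimal sketch of what a repeatable risk matrix can look like, here is an illustrative impact-times-likelihood scoring in Python. The scales, thresholds and suggested actions are all assumptions you would calibrate with your own business, not a standard:

```python
# A minimal, repeatable change-risk matrix: impact x likelihood.
# Scale boundaries and suggested actions are illustrative assumptions.

IMPACT = {"low": 1, "medium": 2, "high": 3}        # breadth of affected services
LIKELIHOOD = {"low": 1, "medium": 2, "high": 3}    # chance of the change failing

def risk_category(impact: str, likelihood: str) -> str:
    """Return a risk band from an impact/likelihood pair."""
    score = IMPACT[impact] * LIKELIHOOD[likelihood]
    if score >= 6:
        return "high"      # e.g. full CAB review mandatory
    if score >= 3:
        return "medium"    # e.g. peer review plus Change Manager sign-off
    return "low"           # e.g. candidate for a standard, pre-approved change

print(risk_category("high", "medium"))  # -> high
print(risk_category("low", "low"))      # -> low
```

The point isn’t the specific numbers; it’s that the same inputs always produce the same answer, so the assessment is repeatable and can be tuned when a “low risk” change turns out not to be.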
Does the proposed timing work with the approved Changes already on the Change Schedule (CS)? Has the Change been clash checked so there are no potential conflicts over services or resources?
Look at the proposed start and end times. Are they sensible (i.e. not rebooting a business critical server at 9 o’clock on Monday morning)? Does the implementation window leave time for anything going wrong or needing to roll back the Change?
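Clash checking is easy to automate. Here is a hedged sketch in Python of flagging proposed changes whose implementation window overlaps an approved change touching the same service; the record fields are illustrative assumptions, not any real tool’s schema:

```python
from datetime import datetime

# Illustrative clash check against the Change Schedule.
# Field names ("services", "start", "end") are assumptions for this sketch.

def overlaps(start_a, end_a, start_b, end_b):
    """Two time windows clash if each starts before the other ends."""
    return start_a < end_b and start_b < end_a

def clash_check(proposed, schedule):
    """Return approved changes that conflict with the proposed change."""
    return [
        c for c in schedule
        if set(c["services"]) & set(proposed["services"])
        and overlaps(proposed["start"], proposed["end"], c["start"], c["end"])
    ]

schedule = [{
    "id": "CHG001", "services": ["market-data"],
    "start": datetime(2024, 6, 1, 22), "end": datetime(2024, 6, 2, 2),
}]
proposed = {
    "id": "CHG002", "services": ["market-data", "email"],
    "start": datetime(2024, 6, 2, 1), "end": datetime(2024, 6, 2, 3),
}
print([c["id"] for c in clash_check(proposed, schedule)])  # -> ['CHG001']
```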
Are there any special circumstances that need to be considered? I used to work for Virgin Media; we had Change restrictions and freezes on our TV platforms during key times like the Olympics or World Cup to protect our customer’s experience. If you don’t know when your business critical times are then ask! The business will thank you for it.
The Technical Details:
Have all affected services been identified? What about supporting services? Has someone checked the CMS to ensure all dependencies have been accounted for? Have we referenced the Service Catalogue so that business approvers know what they’re authorising?
Technical Teams Affected
Who will support the Change throughout testing and implementation? Will additional support be needed? What about outside support from external suppliers? Has someone checked the contract to ensure any additional costs have been approved?
User Base Affected
Check and check again. The last thing you want to do is deploy a Change to the wrong area of the business.
What do you mean, what environments are we covering? Surely the only environment we need to worry about is our production environment, right? Let me share the story of my worst day at work, ever. A long time ago and pre-kids, I worked for a large investment bank in London. A so-called routine code change to one of the most business critical systems (the market data feed to our trading floors) took longer than expected, so instead of updating both the production and DR environments, only the production environment was updated. The implementation team planned on updating the DR environment but got distracted with other operational priorities (i.e. doing the bidding of whichever senior manager shouted the loudest). Fast forward six weeks: a crisis hits the trading floor, the call is made to invoke DR, but we couldn’t because our market data services were out of sync. Cue a hugely stressful two hours where the whole IT organisation and its mum desperately scrambled to find a fix, and an estimated cost to the business of over $8 million. Moral of the story? If you have a DR environment, keep it in sync with production.
Are there any licensing implications? Don’t forget, changes in the number of people accessing a system, number of CPUs, or (especially) the way in which people work (moving from dev to prod) have huge impacts on licences.
Pre Implementation Testing
How do we make sure the Change will go as planned? Has the Change been properly tested in an appropriate environment? Has the testing been signed off and have all quality requirements been met?
Post Implementation Verification
OK; the Change has gone in, how do we make sure everything is as it should be? Is there any smoke testing we can carry out? This is particularly important in transactional services; I once saw a Change that went in, everything looked grand but when customers tried to log in the next day, they couldn’t make any changes in their online banking session. I’ll spare you the details of the very shouty senior management feedback; let’s just say fun was most definitely not had that day. If at all possible; test that everything is working; the last thing you need is a total inability to support usual processes following a Change.
Implementation Plan
Does it make sense, and does everyone involved know what they are meant to be doing and when? If other teams are involved, are they aware and do we have contact details for them? Are there any dodgy areas where we might need checkpoint calls? Do we need additional support in place, such as extra on call / shift resource or a duty senior manager, to mitigate risk? The plan doesn’t have to be fancy; if you need some inspiration I can share some template implementation plans in our members / subscribers area.
Back Out Plan
What happens if something goes wrong during the Change? Do we fix on fail or roll back? Are the Change implementers empowered to make a decision or is escalation needed? In that case; are senior management aware of the Change and will a designated manager / decision maker be available? Can the Change be backed out in the agreed implementation window or do we need more time? If it looks like restoration work will cause the Change to overrun; warn the business sooner rather than later so that they can put any mitigation plans / workarounds in place.
What Early Life Support Is Planned?
Are floorwalkers needed? Are extra team members needed that day to cope with any questions? Have we got defined exit criteria in place?
Is The Service Desk Aware?
Has someone made the Service Desk aware? Have they been given any training if needed? I know it sounds basic but only a couple of months ago; I had to sit down and explain to an engineer why it was a good idea to let the Service Desk know before any Changes went live. Let’s face it; if something goes wrong the Service Desk are going to be at the sharp end of things. And speaking as an ex Service Desk manager (a very long time ago when they were still called Help Desks) there is nothing worse than having to deal with customers suffering from the fallout of a Change that you know nothing about.
Has the Change been comm’ed out properly? Do we have nice templates so Change notifications have a consistent look and feel?
If the business is pushing for a Change to be fast-tracked with minimum testing, can you ask them to formally acknowledge the risk by relaxing any SLAs?
The above list isn’t exhaustive but it’s a sensible starting point. There’s lots of guidance out there; ITIL has the 7 R’s of Change Management and COBIT has advice on governance. What do you look for when assessing Changes? Let me know in the comments!
As a former Change Manager, I can honestly say that the Change Advisory Board (or CAB) is one of the most important and useful meetings a service-orientated organisation can have. It sets out a view of what’s happening to key services over the next week, reviews previous Change activity and looks at CSI, so what’s not to like? CAB meetings are all about the people attending them, and handled badly your CAB meeting will have all the power of a chocolate teapot, so here are our top tips for running them effectively.
Step 1: TCB Power!
A colleague of mine once told me that TCB or tea, coffee and biscuits was one of the most important acronyms in IT. When I worked for a large investment bank in London, one of my first tasks was to roll out a sensible Change Management process across one of our service families. Trying to persuade grumpy techies who saw Change Management as red tape rather than an important part of service delivery was not going well until I brought out the big guns; Krispy Kreme doughnuts and chocolate biscuits. In all seriousness, a CAB meeting is where you want people to feel comfortable representing Changes or asking questions so anything that makes your meeting easier, nicer or makes people feel more relaxed can only be a good thing.
Step 2: Get organised
Make sure that your CAB has a terms of reference document so that everyone knows what they’re doing and why. The Change Manager should send out the CAB agenda, including the Changes to be discussed, the Change Schedule (CS) and any Changes that caused Incidents, well in advance of the CAB meeting. Service Delivery teams and Project Managers need time to read and consider the Changes as well as identify any potential issues or questions.
Step 3: Look for the big hitters
One of the biggest mistakes people make is insisting all Changes should go to CAB. Not a good idea unless you want your CAB meeting to be overrun with server reboots or patching requests. Use automation where possible so that the CAB meeting can focus on the major, high category Changes that need to be sanity checked and talked through.
Step 4: Play nicely with your attendees
Some members of the CAB will be needed for their opinion on every Change – for example the Service Desk, Network Services and Server Support – and will make up the core CAB attendee list. Other attendees, such as Project Managers, Service Delivery Managers and external suppliers, might only be needed to discuss a couple of Changes on the list. If this is the case then be kind: move those Changes to the beginning of the CAB so that these temporary or “flex” CAB attendees can discuss the relevant Changes and then leave.
Step 5: Ask the horrible questions
You know the ones, what everyone in the room is thinking but no one wants to actually ask. Some examples could include:
“What’s the remediation plan? Do we fix on fail or roll back?”
“What happens if rolling back doesn’t fix the issue?”
“Is the person doing the Change empowered to make that decision or do we need to arrange for extra support to be on call?”
Or even; “is this really a good idea?”
It’s better coming from the Change Manager than from an angry customer or senior manager following a failed Change right? Make sure the Service Desk feel comfortable asking questions as well; they’ll be the ones at the sharp end of customer complaints if anything goes wrong so make sure they’re happy with the Change content and plan.
Step 6: Keep it pacey
There is nothing worse than a two hour CAB meeting. I guarantee you; if you are regularly putting your CAB attendees through marathon meetings then people will run short of both patience and good will. There’s also a very real chance that someone may fall asleep. Keep things moving. If someone has launched into a long winded, uber waffley technical explanation and you get the sense that it’s adding no value as well as making everyone in the room lose the will to live then break in with a question so that you can get things back on track. Do it nicely though obvs.
Step 7: CSI
Don’t forget to review your list of previously implemented Changes. If something’s gone well then brilliant! Let’s template it or add it to any Change models to share the love. If something hasn’t been successful or, worse, has broken something and generated a load of Incidents, then look at what happened, figure out the root cause and look at ways of preventing recurrence. If your Problem Manager isn’t attending CAB then invite them – they are the subject matter experts in this area.
What do you think? What are your top tips for effective CAB meetings? Tell me in the comments!
So here it is. I think we can safely say that it hasn’t been a great few weeks for security or for protecting people’s personal information. At the time of writing, both Vodafone and TalkTalk had been hit by security breaches, and there are lots of anxious customers worried that their personal data has been compromised.
In the case of Vodafone, the breach itself was external to Vodafone, i.e. the credentials had been found elsewhere, and the hackers were trying their luck on the Vodafone corporate site to see how many customers had reused their passwords.
Password Management Best Practice
In a digital age, how do we keep our data safe? Here are our top tips for password management best practice (and no, we don’t recommend you try squirrel noises!).
– Do NOT use the same password for everything. I know, I know, it’s a pain in the hoop having to remember multiple passwords, but research shows that if your credentials are compromised, hackers will often try the same login details on Amazon, eBay, PayPal etc. Nothing is bulletproof 100% of the time, so let’s at least apply some damage limitation to the situation.
I had a real “ah here” moment a few months ago. I was given access to a corporate system for an organisation that will remain nameless. The system in question gave me access to the corporate e-mail and SharePoint systems as well as some key competitor and market trend analysis. What was the password? Welcome1. Come on people, we can do better than that!
A few simple hints and tips are:
Use long, complex passwords. Use multiple cases (i.e. capital & small letters), numbers & symbols / special characters.
Don’t use words that can be found in a dictionary. There are password cracking tools freely available on the internet which can crack passwords using what’s known as a “brute force” attack.
Don’t use your e-mail address, network id or personal information such as your National Insurance number or date of birth.
Don’t use common passwords such as “password” (and yes, people still do this) or “welcome”.
Don’t use sequential passwords such as 1 2 3 4 or QWERTY. No, just no!
Try using part of a saying to make a complex password easy to remember. One example we all know is “Money Makes The World Go Round” – so how do we make a secure password from it? Abbreviate, mix the cases up, substitute letters with characters and add in some numbers, and suddenly you have a password that’s much harder to guess, for example 20mMtw9R*15.
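For illustration, the don’ts above can be turned into a simple automated check. This Python sketch uses an assumed minimum length of 10 and a tiny banned-word list; both are placeholders you would tune to your own policy:

```python
import re

# Illustrative password sanity check implementing the rules above.
# The length threshold and banned-word list are assumptions, not a standard.

COMMON = {"password", "welcome", "qwerty", "123456"}

def password_problems(pw: str) -> list[str]:
    """Return a list of reasons a candidate password is weak."""
    problems = []
    if len(pw) < 10:
        problems.append("too short")
    if not (re.search(r"[a-z]", pw) and re.search(r"[A-Z]", pw)):
        problems.append("needs mixed case")
    if not re.search(r"\d", pw):
        problems.append("needs a number")
    if not re.search(r"[^A-Za-z0-9]", pw):
        problems.append("needs a symbol")
    # Strip common digit/symbol padding, then compare to banned words.
    if pw.lower().strip("0123456789!@#$%*") in COMMON:
        problems.append("based on a common word")
    return problems

print(password_problems("Welcome1"))      # weak on several counts
print(password_problems("20mMtw9R*15"))   # -> []
```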
You could also consider using a password manager. Password managers are software applications that securely store all your passwords so you only have to remember one. The stored passwords are encrypted, so you create one strong master password that gives you access to the rest of your saved passwords. There are lots of password managers available online; RoboForm, Dashlane and PasswordBox are some examples that have been recommended by CNET, InfoWorld and PC Mag.
So there you have it. It’s a jungle out there, so stay safe people! One last thought though: it’s not all doom and gloom. If you’re an anxious Vodafone UK or TalkTalk customer and need cheering up, check out Vodafone Ireland’s latest TV ad. Guaranteed to make you smile, promise.
Many IT leaders are already familiar with the kinds of surveys the common support tools send out on ticket closure. But, it turns out, we may not be going about it the best way. This year’s winner of itSMF Australia’s Innovation of the Year was Dave O’Reardon. Dave has had 25 years’ experience working in IT and his award-winning transactional Net Promoter service, CIO Pulse, provides a whole new way of looking at how IT leaders can improve their services and start creating value for the businesses and customers they support.
After I photo-bombed Dave’s official awards photos, he graciously agreed to an interview.
Can you explain the fundamentals of Net Promoter?
Sure! Net Promoter is a proven way of improving customer loyalty, or satisfaction, with a product, company or service. And it’s a metric – a Net Promoter Score – for understanding your progress toward that goal and for benchmarking your performance. It is not a piece of software and it is not intellectual property – it’s free for anyone to use.
If you’ve ever been asked a question along the lines of “On a scale of 0 to 10, how likely are you to recommend us to a friend or colleague?”, then you’ve come across a company that’s using Net Promoter. This question is usually followed by one or two open-ended questions. These follow-up questions ask the reason for the score and what could be done to improve. Based on a customer’s score (in response to the first question), they are categorised as either a Promoter (they scored 9 or 10), a Passive (they scored 7 or 8), or a Detractor (they scored 6 or below). Net Promoter then recommends a number of practices that can be used to convert Detractors and Passives into Promoters.
A Net Promoter Score is simply calculated by subtracting the percentage of Detractors from the percentage of Promoters. This calculation results in a score of between -100 (all your customers are Detractors) and +100 (all your customers are Promoters).
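The arithmetic above can be sketched in a few lines; Python here purely for illustration:

```python
# Net Promoter Score from a list of 0-10 survey responses,
# using the Promoter (9-10) / Passive (7-8) / Detractor (0-6) bands.

def nps(scores):
    """Return the Net Promoter Score (-100..+100) for 0-10 responses."""
    if not scores:
        raise ValueError("no responses")
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# 4 Promoters, 3 Passives and 3 Detractors out of 10 responses:
print(nps([10, 9, 9, 10, 8, 7, 7, 6, 4, 2]))  # -> 10
```

Note that the Passives drop out of the numerator but still count in the denominator, which is why converting Passives to Promoters moves the score.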
Net Promoter is commonly used in two different ways – transactional (also called operational or bottom-up) and relationship (also called brand or top-down). Transactional NPS is used to measure and improve the customer experience following a specific interaction (e.g. after an IT support ticket has been closed). Relationship NPS is used to measure and improve overall loyalty or satisfaction with a product, brand or service, e.g. via an annual survey.
Why is it important for IT teams to use a customer service improvement approach like Net Promoter?
There are a few reasons. First of all, IT teams often rely too much on service level agreements, such as incident response and resolution targets. These targets are great for helping support staff determine what to work on and when, but tell you nothing about the customers’ perceptions. If you’ve ever had a wall of green traffic lights for your SLAs and yet the customer still isn’t happy, then you know what I mean. I like to call this the Watermelon Effect – SLA performance indicators are all green, but on the inside customers are red and angry. Traditional SLAs don’t measure the customer experience and customer perceptions; Net Promoter does.
The second reason is that process maturity assessments – formal and informal – don’t help IT teams prioritise in any way that is meaningful. We’re at maturity level 2 for Configuration Management, so what?! And on the flipside, even mature processes can be crap and fail to meet customers’ needs. Your Request Fulfillment process might be very mature – documented, automated, measured etc – and yet customers are still frustrated that hardware provision takes so long and that Jim is always gruff when asked for an update. A mature process doesn’t necessarily meet customer needs.
Bodies of knowledge like ITIL and COBIT are stuffed full of solutions. They are great to turn to when you’ve got a service issue and you want some ideas on how to solve it. But how do you know you’ve got a problem, and how do you know which problem is the most urgent? If you want to improve service (and if you’re in the field of Service Management and you don’t, then you might be in the wrong field) you absolutely have to understand customer perceptions. Things such as service quality and value stem from customers’ perceptions.
Net Promoter is very widely used by consumer-facing organisations. How do you modify the typical Net Promoter format to suit internal teams like IT, HR and so on?
That’s a great question. Net Promoter is often overlooked as an improvement methodology by internal service providers because of the first question – “How likely are you to recommend us to a friend or colleague?”. It just doesn’t make sense to an internal customer. Who’s going to tell one of their mates at the pub that their IT Service Desk is fantastic and that they should give them a call the next time they have a problem with their iPad! The trick is just to reword the question so that it makes sense to the customer, e.g. “On a scale of 0 to 10, overall how satisfied are you with your recent support experience?”.
What’s wrong with the traditional transactional survey that we’re more familiar with?
Firstly, because internal service providers all use different surveys and different scales, they can’t benchmark their performance against each other. Their scores are calculated in different ways, so one organisation can’t tell whether another organisation is doing better or worse. Who should get improvement ideas from whom?
The second thing is a bigger issue. Most organisations just don’t know what to do with the data they’re collecting. They survey, they calculate some sort of satisfaction score, and then they report on that score in a management report of some sort. But that’s all. And that’s a terrible shame, because there’s a bunch of behaviours that the transactional survey should be driving that can result in a significant improvement in customer satisfaction. But if all you do is survey and calculate a score, don’t expect anything to improve. I call this the ‘Chasm of Lost Opportunity’ – the powerful things that are not done between a survey being completed and a score being reported. By adopting the behaviours and activities recommended by Net Promoter – bridging the chasm – I’ve seen internal service providers make significant improvements to internal customer satisfaction in just months.
What sort of problems and improvement opportunities have you seen coming out of IT teams that start paying attention to customer feedback? Any particular areas that commonly stand out?
The most common feedback theme we see with transactional surveys comes down to poor communication – support calls that seem to disappear into black holes, customers not having their expectations managed re fulfillment/resolution timeframes, and tickets being closed without the customer first verifying that they’re happy that the solution has worked.
When it comes to the relationship surveys, every client is unique. We see everything from issues with network speed, being forced to use old PCs, poor system availability, inadequate engagement of the business in IT projects, releases introducing too many defects, service desk hours that don’t work for the business. Pretty much everything. And that’s why the top-down relationship survey is so important. When Net Promoter is used for periodically surveying internal customers, it provides really rich information on what the customer sees as IT’s strengths and weaknesses. The results often come as a surprise to IT management, which is a good thing, because, without that information they were in danger of investing limited improvement resources in areas that just aren’t important to the customer.
If you could distill all the experience you’ve had with transforming IT teams, is there one high-impact tip you could suggest?
Yes, but it’s more of a way of thinking than a tip per se. And that is – don’t dismiss customer feedback as something fluffy and unimportant. If you’re in the business of delivering service to a customer, then understanding customer perceptions is very very important. Dismiss customer feedback as fluffy and unimportant at your peril! Quality and value are both the result of perceptions, not objective measures like availability percentages and average response times.
Net Promoter-based transactional surveys are a great way to drive continual improvement in the Service Desk and IT support functions – improving the way IT is perceived by the large majority of its customers. And Net Promoter-based relationship surveys provide a valuable source of input to IT strategy, ensuring that IT is investing in the areas that are truly important to the business, not just because Gartner says so.
When IT teams don’t understand, and actively seek to improve, customer perceptions of IT, the end result is sad and predictable – IT is managed like a cost-centre, budgets are cut, functions are outsourced, and IT leaders are replaced. And at pubs and dinner parties, no matter what job we do in IT, our friends grumble at us because where they work, their IT department is crap.
Dave helps IT teams, and other internal service providers, adopt Net Promoter and provide better customer service, improve their reputation and increase internal customer satisfaction. He’s worked in IT for 25 years and is the CEO and founder of:
Silversix.com.au (a management consultancy that helps IT teams measure and improve internal customer satisfaction)
and cio-pulse.com (a transactional Net Promoter service that kicks the ass of the survey modules of ITSM tools).
This quick guide has been contributed by Mike Simpson of CIH Solutions.
The guide discusses how Knowledge Management (KM) can be used to manage risk and control costs in an IT Service Management environment. It identifies four ‘hot spots’ based on the author’s experience, outlines common problems and suggests solutions using KM.
As with most terms in IT, Knowledge Management means different things to different people. There is much written on the subject of KM, and the term is often interchangeable with others such as intellectual capital, information management, data management and document management. In reality, KM embraces all of these.
So, what is my definition of KM in relation to an ITSM organisation?
First, this is not about scale. A KM system can operate just as effectively in a small organisation as a large enterprise. The principles remain the same – identifying, collating, storing and retrieving knowledge for use by all personnel in their day-to-day tasks. Also, this is not just about documents and data. When the experience of personnel is added into the mix we get Knowledge and this needs to be captured and stored for future use.
Second, from my experience the key feature of a KM system within an ITSM organisation is the understanding that different information has different values depending on circumstances. For me assigning value to information is vital and has priority over the capture of all available material.
At this point I should add that I do not differentiate between an MSP serving external clients and an internal IT service provider. The same KM principles apply. Also, the KM system described in this guide should be considered a ‘practical solution’ that can be implemented with limited resources and budget and extended over time.
I want to begin by briefly describing two KM systems that I have encountered in the course of my consultancy work.
I’ve seen only one truly outstanding example of an enterprise-wide KM system, and that was at a European pharmaceutical company. What struck me about this KM system was the sheer scale of the repository, containing research papers, trial results and project documents covering decades of research and amounting to many millions of pages and database entries. The success of this KM system was of course down to the strength of the underlying thesaurus, which enabled scientists to discover (or perhaps rediscover) knowledge to support the design of a new R&D programme.
My second example is at the other end of the scale. This is a local KM system that supports an IT organisation providing hosting support for external SAP clients. This KM system also impressed me, but for a different reason. Without any real top-down sponsorship or funding, the technical teams had created their own KM system based on a single central repository, where all the content was created, published and maintained under very strict guidelines by a few key members of staff but accessed by many. The rationale for this approach was to bring discipline to the management of documents and data that were considered vital to the successful running of their IT organisation.
KM Model for ITSM
The rationale for the second example above sounds somewhat obvious, but the background problem as explained to me was one of long term ill-discipline in the day-to-day management of key information. Individuals, both staff and sub-contractors, would create multiple documents, held in isolated repositories or held on local drives, resulting in poor retrieval and inaccurate information.
The problem is a familiar one. Admittedly, this KM system is basically document management, plus some other information formats and a simple data classification system, but in my view this doesn’t matter as the problem of badly managed information was controlled by introducing a strong KM framework with a central repository to address a specific local need.
It is this model of KM that I want to discuss as the starting point for KM for ITSM, but first I need to say something about the concept of assigning value to information.
Defining Business Value
I mentioned above that assigning value to information is vital.
I call this category High Business Value information. So, what does it mean exactly? Essentially, this is a category of business information that covers all the vital and irreplaceable business records, documents, information and data that are associated with sensitive areas like customer data, compliance, security, personnel, finance and legal and commercial activities.
It is this category that has the potential to damage an ITSM organisation should this material be compromised by loss, breach of security, inaccuracy or the inability to locate and retrieve quickly when needed. It is the failure to identify, capture, publish and retrieve this category of knowledge that can have a significant impact on the management of risk and cost control.
Whilst all information is valuable, depending on circumstances, some information suddenly becomes more valuable.
Our first step is to build a KM Framework. This framework must define the KM life cycle to create, capture, review, release, amend, publish and retire content. In addition, the KM Framework must define a system of classification for the ITSM information. We have already identified a need to segregate high value information – I’m calling this Layer 1 information. All the remainder of the ITSM information and data is collected into Layer 2.
Basically, for Layer 1 we know what we want and where it is – hence we can find it quickly using a hierarchy with a controlled vocabulary where everything is tagged.
However, for Layer 2 the structure is more linear using a Thesaurus and non-controlled vocabulary. This allows for a more ‘search and discover’ approach.
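As a rough illustration of the two retrieval styles, here is a minimal Python sketch. The vocabulary terms, document titles and function names are all invented for the example, not part of any prescribed design.

```python
# Layer 1: controlled vocabulary -- every item is tagged, lookups are exact.
CONTROLLED_VOCABULARY = {"customer-data", "compliance", "legal", "finance"}

layer1_index = {}  # tag -> list of document titles

def tag_layer1(title, tags):
    """Add a Layer 1 document, rejecting any tag outside the controlled vocabulary."""
    for tag in tags:
        if tag not in CONTROLLED_VOCABULARY:
            raise ValueError(f"'{tag}' is not in the controlled vocabulary")
        layer1_index.setdefault(tag, []).append(title)

def find_layer1(tag):
    """Exact, fast retrieval by tag."""
    return layer1_index.get(tag, [])

# Layer 2: free-text 'search and discover' -- no mandatory tagging.
layer2_docs = []

def search_layer2(term):
    """Broad substring search across untagged content."""
    return [doc for doc in layer2_docs if term.lower() in doc.lower()]

tag_layer1("Acme Ltd master contract", ["legal", "customer-data"])
layer2_docs.append("2014 data centre migration project presentation")

print(find_layer1("legal"))
print(search_layer2("migration"))
```

The point of the split is visible even at this scale: Layer 1 refuses untagged or mis-tagged content, while Layer 2 accepts anything and relies on search.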
Finally, the framework will identify the ITSM knowledge managers who will be responsible for implementing the framework, plus a KM Steering Committee.
Five Stages of the KM Framework
There are five stages within the KM Framework and these are shown in Figure 1 below. By following this five stage sequence all the information considered as High Business Value can be identified and either uploaded into the KM Database or retained in local repositories (known as source databases). This is the Integrate stage that is covered in detail later on under the Hot Spot scenarios.
Each stage should be followed for Layer 1 and then repeated for Layer 2.
Figure 1 – Five Stages of KM Framework
Audit – once the categories within Layer 1 have been agreed, all the material to be included in each category needs to be identified. The audit will do this and will cover different media formats such as PDF, database tables, e-mails, webinars and HTML.
Map – during the audit the location of the material is identified. This is mapping and will be needed when the KM database is designed and built to identify what material should be transferred to the KMDB and what material should remain in local repositories.
Classify – once all the information has been identified for the categories of Layer 1, the documents and data can be classified according to the controlled vocabulary system and the hierarchy structure.
Assemble – once classified and physically located, the content for each category should be assembled as a schedule of descriptive metadata tables complete with information titles, document numbers, versions, data sets and physical location.
Integrate – once all the information has been assembled the metadata tables can be used to manage the population of the KMDB – either directly with content or connected to other repositories to extract the content. These are known as source databases.
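The metadata tables produced in the Assemble stage could be modelled as simple records. The sketch below is illustrative only; the field names and example values are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, asdict

@dataclass
class MetadataRecord:
    """One row of an Assemble-stage metadata table (illustrative fields)."""
    title: str
    document_number: str
    version: str
    category: str        # Layer 1 category, e.g. "Compliance"
    sub_category: str
    location: str        # physical location: the KMDB itself, or a source database
    in_kmdb: bool        # True = uploaded to the KMDB, False = retained in a source DB

record = MetadataRecord(
    title="Information Security Policy",
    document_number="SEC-001",
    version="3.2",
    category="Compliance",
    sub_category="ISO 27001",
    location="sharepoint://policies/sec-001",
    in_kmdb=False,   # stays in its local repository; the KMDB links to it
)

print(asdict(record)["location"])
```

A table of such records is enough to drive the Integrate stage: rows with `in_kmdb=False` tell the KMDB which source databases to query rather than copy.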
As mentioned above it is important to classify by value as well as classify by subject. For example, all customer data should always be considered high value, but the exact list will depend on the types of client and services that are supported by the ITSM organisation.
When it comes to the subject of classification there are many standards on taxonomy and debates about linear versus hierarchical structures. I therefore suggest that it makes sense to divide our total ITSM information into two distinct groups – the High Business Value information already discussed and a second group which is essentially everything else. I’m calling the first grouping Layer 1 and the second grouping Layer 2.
Once all the information has been divided into these two layers we must structure the information in two different ways. Figure 2 below shows this division.
Layer 1 should be structured using a taxonomy with a hierarchy and controlled vocabulary. This scheme will identify the information according to importance, sensitivity and security level, and will be used to control access to the information in Layer 1. The search tools that underpin our KM system will then be able to locate and retrieve any of the information in Layer 1 very quickly. Layer 1 will typically have the lowest volume.
Figure 2 – Grouping Information by Layers
For our second layer – Layer 2 – I suggest a thesaurus with a more linear structure that will allow more of a free form of search and retrieval based on a smaller number of the terms.
Not everything needs to be tagged in Layer 2, instead broader searches and cross searches can be adopted to allow a more ‘search and discovery’ approach even ‘looking inside’ some documents and files to locate content of interest.
This makes sense as the population of Layer 2 will cover all manner of archived project material, design documentation, presentations, non-critical business records and so on. Layer 2 will typically have the highest volume.
Hierarchy of Layer 1
Given the relatively simple structure of our KM system I suggest a top down approach for Layer 1, based on a hierarchy of Categories and Sub-categories using a controlled vocabulary to tag documents and data sets. An example is shown in Figure 3 below. As Layer 1 is the primary focus of our initial KM design and build it’s not my intention to outline the structure of Layer 2.
Figure 3 – Classification Hierarchy
Once all the constituents of Layer 1 have been identified during our Audit stage all the information and data can be divided into Categories. These categories will be assembled under various functional headings, for example:
Category 1 – Customer Data
Category 2 – Compliance
Category 3 – Legal
Category 4 – Service Continuity
Category 5 – Finance
Category n – Security
Once all the Categories have been identified then the material should be further sub-divided into Sub-categories. I would suggest that these three drill-downs are sufficient to hold all the information in Layer 1. The Sub-categories will contain all the specific document and data sets that relate to a particular Category and this can be assigned by client or customer type or by any other specific grouping.
This hierarchy is not meant to be in any way prescriptive, just examples on the concept of Categories and Sub-categories.
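The Category / Sub-category drill-down might be represented as a nested structure like the hypothetical sketch below; the categories simply echo the examples above, and the sub-categories and document names are invented.

```python
# Hypothetical Layer 1 hierarchy: Category -> Sub-category -> documents/data sets.
layer1_hierarchy = {
    "Customer Data": {
        "Client A": ["customer master records", "consent forms"],
        "Client B": ["customer master records"],
    },
    "Compliance": {
        "ISO 27001": ["security policies", "audit evidence"],
    },
    "Service Continuity": {
        "Cyber threat": ["recovery procedures", "escalation contacts"],
    },
}

def documents_in(category, sub_category):
    """Drill down to the document set for one Sub-category."""
    return layer1_hierarchy.get(category, {}).get(sub_category, [])

print(documents_in("Compliance", "ISO 27001"))
```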
Example ‘Hot Spots’
I’ve identified four possible ‘hot spots’ based on personal observations of real life events and these are shown in Figure 4. Clearly, there will be others depending on the set-up of a particular ITSM organisation and the types of client it supports.
The figure is based on a simplified ITSM organisation that could be either a MSP dedicated to external clients, or an ITSM organisation providing IT services to an internal client. The IT Operations can be either internal or external hosting with or without applications support. For the purpose of this guide it is assumed that the IT Operations is in-house and provides hosting, communications and applications support – within an overall governance framework.
There are four example ‘hot spots’ shown in Figure 4.
Client Portal – Risk to reputation due to poor quality of customer information
Legal and Commercial – Cost of litigation due to incomplete contract audit trail
Compliance – Cost of compliance due to audit failure and forced re-work
Service Continuity – Risk to IT service continuity due to inadequate preparation
All of the above examples relate to the absence, inaccuracy or timely retrieval of information.
Figure 4 – Example Hot Spots
Risk to Reputation (Hot Spot 1)
In this scenario I’ve created a simple Service Operation (SO) organisation that has responsibility for managing the information available to customers via a Client Portal. I should state at this point that not all of the information available through the portal is the responsibility of the SO team. Some material will be supplied directly by the Client for uploading onto the portal – material from the Marketing Department such as prospectuses and application forms.
The remainder of material will be service and technical support information produced within SO and cover such topics as service availability status, technical self-help and how-to-do-it video clips. The client portal also has a secure area for the client customer groups to access data on performance against SLAs.
The ‘Risk’ we are trying to mitigate here is out-of-date, missing and inaccurate information being posted to the client portal. The current arrangement within our SO is that information is held in separate repositories. Information is identified and collected and then manually or semi-automatically uploaded onto the Client Portal database using scripts. The risks here are that:
not all information is collected at the right time (like monthly SLA data updates)
incorrect information is selected for the right location
correct information is uploaded to the wrong location
not all information is collected
All the above risks can be minimised by having the correct processes and checks in place and rigorously enforcing them. However, experience has shown that this manual and semi-automatic process can break down over time, and quality – and reputation – can be impacted.
Figure 5 – KM Integration of Client Portal Information
All the client information that was previously managed manually has now been compiled into metadata tables from the Audit – Map – Classify – Assemble stages. We can now move to the Integrate stage. The metadata tables will hold the locations of all the information and data needed to be accessed by the client portal and the KMDB will use distributed queries to collect all the information and data from these locations. In practice these will be permitted areas within local repositories (or tool set databases) – known as source databases. See Figure 5.
For example, the Known Error database (KEDB) could supply diagnostic help and work-arounds for self-service customers for the most common errors. The KEDB will also collect Event and Incident Management data in support of the SLA reporting that is provided to the client business units via the portal. The Configuration Management database (CMDB) will also be another source database for the supply of data to the client on service configuration.
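The Integrate stage described above might look something like this in miniature. The source systems here are stand-in dictionaries, and the keys and item names are invented; in practice they would be the KEDB, CMDB and other tool-set databases, reached via distributed queries.

```python
# Stand-ins for permitted areas within the source databases.
source_databases = {
    "KEDB": {"KE-101": "Work-around for login time-out"},
    "CMDB": {"CI-web-01": "Web server configuration baseline"},
}

# Metadata rows: where each portal item lives and its key in that source.
metadata_table = [
    {"portal_item": "Self-help: login problems", "source": "KEDB", "key": "KE-101"},
    {"portal_item": "Service configuration", "source": "CMDB", "key": "CI-web-01"},
]

def collect_portal_content(metadata, sources):
    """Run one 'distributed query' per metadata row and collect the content."""
    content = {}
    for row in metadata:
        content[row["portal_item"]] = sources[row["source"]][row["key"]]
    return content

print(collect_portal_content(metadata_table, source_databases))
```

The manual upload risks listed earlier largely disappear in this shape: the metadata table is the single place that says what is collected, from where, and for which portal location.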
Cost of Litigation (Hot Spot 2)
My second scenario relates to the threat of litigation as a result of a breach of contract. Whilst this sounds dramatic it is important not to underestimate the legal and commercial requirements to hold and maintain all contractual material and associated business records.
Most service based agreements come with some form of service credit arrangement. However, a decrease in payment may not fully compensate a client for poor service particularly when a number of service failures occur in quick succession or a major outage lasting several days hits not just the client but the client’s customers. Such a scenario could be considered a breach of contract resulting in litigation to seek damages and a termination of the service contract.
Any move to litigation will result in a demand from the client’s legal team for all relevant information to be handed over. This is known as e-discovery, and the Service Operation team, along with the organisation’s legal department, will need to respond within a very short time frame.
Figure 6 – KM Integration of Legal Information
This is another example of how the KMDB can be used to store high business value information. Figure 6 shows how the KMDB can contain a Legal DB segment that is used to store in one location all contractual and historical SLA information relating to an individual client. As with Scenario 1, the metadata tables will hold the locations of all the information and data needed to be accessed by the Legal KMDB segment. Again, distributed queries are used to collect all the information and data from these source DB locations.
The information will include all versions of contracts, contract amendments, SLAs including email trails between the client and the IT Service Provider. This latter point of email capture is increasingly used to highlight any communication that might indicate an implied contract variation by either party. I would suggest the inclusion of a Message Record Management (MRM) system as part of the KM solution.
Also, it will be necessary to install an activity monitor to log and track activity of users of the KMDB segment. In reality, this would be good practice across all of the KMDB segments but essential in this instance.
One final point. Where the service provider is internal to an organisation, for example in the public sector, the risk of litigation is negligible. However, be aware that consistent underperformance against SLA targets could be a fast track to IT outsourcing.
I’ve seen this happen on a number of occasions. Although it is usually presented as an exercise in cost saving, invariably it is driven by long-term dissatisfaction with the performance of the internal service provider.
Cost of Compliance (Hot Spot 3)
Here is another example of the importance of a KM sub-set of material that can be assembled on the basis of a specific demand. During a compliance audit, ISO 27001 for example, there will be a specific document set that will need to be made available to the auditors for the certification process.
Without a rigorous KM approach there is the risk of auditors finding a shortfall in the control objectives and controls. This will result in low auditor marking and possible non-compliance. There is now a real cost involved with the remedial work needed for a re-run of the audit, particularly with the high daily rates charged by external auditors.
The material can range from Information Security Policies to Physical and Environmental Security. There is a wide range of different types of information and data, and the Audit and Map stages of the KM Framework will require a lot of research and agreement from the KM Stakeholders on what should be included in this KMDB Compliance segment. It is likely that some of the lower-level information may be located in Layer 2. If this is the case then it might make sense to leave it where it is and simply connect between the two layers. It is also true that the scope of ISO 27001 is such that the KM will need to connect to a wider range of tools and assets.
One particular example is software asset management (ISO 27001 – Clause A8: Asset Management). Under this heading auditors will check the number and validity of software contracts held and check that the licences cover all the users who actually use the software. This could be addressed by setting up a source DB within a SAM tool and extracting all the data needed for the audit (as a controlled set) and then sending it to the KMDB. This is actually a very common failure point.
Risk to Service Continuity (Hot Spot 4)
In this final scenario I want to look at how the KMDB can be used to support Service Continuity. This has a much broader scope than just KM and I’m not intending to cover the whole subject of Business Continuity Management (BCM). Again, there are multiple terms involved here – like Disaster Recovery, Business Recovery and Service Recovery. In the case of ITSM and KM, I’m going to describe how KM can be used in support of Service Recovery within the broader BCM that covers the end-to-end business of a client.
The dilemma facing an ITSM organisation is no one can really identify all the situations likely to occur. Certainly, the evacuation of a data centre due to fire and flood is an obvious scenario, but thankfully not one that occurs very often. Clearly you can’t prepare for every instance but it is possible to target some common ‘knowns’.
So, here is a possible starting point. In our Layer 1 (High Business Value) under the Service Continuity category, the sub-categories should be constructed to reflect various ‘threat scenarios’ – one per sub-category, such as cyber threat, data theft and denial of service to name a few. We could also add major software outages that can and do occur from time to time.
Each ‘threat scenario’ can then be structured along the scope and guidelines of ISO 22301. This will create a consistent framework for compiling all the recovery procedures, communication escalations and fall-back plans for each scenario. Clearly, there is much more to discuss here; a future article addressing these aspects of service recovery is planned for publication later in 2015.
What this guide attempts to outline is a number of possible solutions to common issues around both risk and cost control in an ITSM organisation. It is not intended to be prescriptive. The KM system described here should be considered an ‘entry level’ system, but with the capability of extension as time and budget permit. This KM system is also predicated on content being held within existing repositories, as well as a central KMDB, but extracted on demand. The success of implementing a KM system will always reside with the management and staff of an ITSM organisation and not the technology. Hence the emphasis must always be on developing a KM Framework as the starting point.
This quick guide has been contributed by Mike Simpson of CIH Solutions.
Organizations that are undertaking an ITSM initiative all too often leave out the centerpiece of success, or merely give lip service to it. Whether your organization is undertaking improvement of a single process, an entire transformational change, or even an ITSM tool replacement, Organizational Change Management (OCM) is that centerpiece to success.
In this article I will lay out some of the most important aspects and actions to consider for an OCM effort in your organization. These high-level topic areas will be further expounded upon in later articles.
Every project I’ve seen where OCM is a dedicated work stream, with thoughtful attention paid to it, has been extremely successful. Most of the failed projects I’ve encountered have either had no OCM component, or gave it a superficial nod in the beginning of the project, then quickly put such activities on the back burner.
At a high level, communication, training, and marketing are at the core of OCM, but there are other very important activities that should be considered.
Even though you know your organization, completing an organizational assessment can surface details that will benefit your initiative. Such assessments can determine your organization’s propensity for change at a detailed level. They will also reveal the largest barriers, which should be addressed through the OCM program.
The assessment will reveal how the people in your organization view the current state, the proposed future state as well as many measures to help you understand where issues could occur.
There is another very important output of the assessment, and that is to understand which changes you should make now, and which changes should wait for subsequent efforts. Change can only happen at a certain pace for a given organization and attempting too much change for the culture and current level of maturity will likely doom an effort to failure, regardless of how much care is put into OCM activities.
The Three Camps
In any change there will be three camps of people: two minority camps and one majority camp. The first minority camp will actively embrace the change and can be used to further the cause in the organization. The second minority camp will be very much against the change, and some will likely even actively attempt to undermine it. The majority camp (generally about half the population) will wait and watch to see which camp will win out. Target the majority camp with appropriate communications and marketing; the minority camp against the change is very unlikely to change their minds.
Assess the Change
A detailed assessment of the change should be completed to provide a rich understanding of how the change will affect the organization. Start by listing how the new state differs from current state. Then evaluate the following:
Who will have to be involved who wasn’t before?
Who was involved before and will not be now?
Who is more empowered or less empowered than before?
Which changes make things easier for people?
Which changes will be perceived to make things harder (for example, process or procedure where none existed before)?
For each item listed determine a high, medium or low level of impact. All items on the high list will be called the “Major Shifts”.
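The impact-rating exercise above can be sketched as a simple filter; the change items and their ratings below are invented examples.

```python
# Each difference from current state, rated high / medium / low impact.
change_items = [
    ("Change approvals now go through a CAB", "high"),
    ("New ticket categories on the service desk", "medium"),
    ("Updated report layout", "low"),
    ("Incident ownership moves to resolver groups", "high"),
]

# The "Major Shifts" are simply the high-impact items.
major_shifts = [item for item, impact in change_items if impact == "high"]
print(major_shifts)
```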
Create the Messages
The information provided from “Assess the Change” will be an input into creating the messages. The messages are the communication bullet points to the organization about what is changing, why it is changing and the benefit of the change to the organization. Creating this list of messages will be the basis of several forms of communications in the OCM communication plan. The focus here needs to be on the Major shifts for broad communication, and on a smaller scale addressing the more minor points.
Identify Champions
Identify a list of champions who will actively embrace the change and can help with the project, and with the organizational change itself. The champions should be very interested, involved parties who will clearly fall into the minority camp that embraces the change. From spreading the word in the halls to providing team-based versions of broader communications, the champions are the voice supporting the change.
Strongly consider some incentive and reward for your champions for their efforts in helping to sell and realize the change. This will help them stay engaged for longer running projects.
Top Down and Bottom Up
Everyone is aware how important it is to have executive support for ITSM improvement programs, however, organizational change efforts should be targeted to the different audiences. In addition to messages from the executive team defining the vision and providing support for the program (top down) there should also be bottom up efforts. The champions can play a key part in this messaging. As an example, think about doing lunch and learns hosted by the champions, with their peers, addressing what changes will be coming up, and explaining the benefits to them and to the organization.
Communication and Training
The very first piece of a communication and training plan should be a stakeholder analysis. Every level of the organization should be mapped (CIO, VP Level, Director Level, Mid Manager Level, Heavy Process Participant Level, Casual Process Participant Level, Customer Level).
Each of these levels should have specific training and communication plans tailored to them. These plans should include messaging to address the following areas:
The major shifts discussed above
Benefits of the change
What they do not need to do anymore
What they need to do in the future
How they should communicate upward, downward, and to peers
It is most beneficial to structure training to include the OCM messaging, process training, and if applicable, tool training together in a single session. Using this approach allows for the elements to be pulled together so major shifts can be related to process, and process elements can be related to any tool changes.
Good Organizational Change Management relies upon well-crafted messaging that delivers the right information in precisely the right way for the organization. Utilize your organization’s existing marketing and training departments, when available, as they have the needed expertise in these areas to provide the right experience. Consider looking outside for assistance on planning and execution of a complete organizational change program that is directly tied into your ITSM program.
I like to say that ITSM (and any other) initiatives are made up of at most 20% process and 20% tool components. The carbon based units involved represent the remaining 60% of the equation. This should highlight why initiatives with a strong OCM component are so much more successful than those where OCM becomes an afterthought.
Mike DePolis is a seasoned IT leader with a strong focus on business alignment and ITIL V3 Expert certification. As the ITSM Practice Lead at Fruition Partners, Mike has vast experience heading large segments of IT departments, and helping clients improve their operations.
A practical look at why some metrics programs fail while others are successful, along with some tips you can use to kick your metrics up a notch.
I was math-challenged as a child and hatred of anything having to do with numbers followed me into adulthood. This hatred remained with me until I became a manager and needed to begin proving the work my team was doing, or understanding where we were failing. Actually, the turning point may have been the now-overused adage “you can’t manage what you don’t measure,” a powerful concept that has a lot to do with the metrics programs I’ve created over the years. I’ve worked hard at this, mainly because of my math aversion. While Excel certainly helps, it’s all still “funny math,” and through practice I’ve learned how to justify any story I want to tell using the numbers available from the IT Management tools my organizations have used.
Ultimately, if you can tell any story with metrics, how do you decide which is the right story? That’s the focus of this blog: determining the story to tell and to whom.
Building a Business Oriented Metrics Framework
Ultimately, if you turn to ITIL for help with metrics, you can be led astray pretty easily unless you read all of the books (or at least Service Strategy (SS) and Continual Service Improvement (CSI)). This is because at the end of each process described there is a list of Critical Success Factors (CSF’s) and Key Performance Indicators (KPI’s). These are great sample metrics for the process you might be implementing and are critical for measuring that process’ success; however, providing them to business partners will have you producing the same type of metrics IT’s been producing for years, the type that are of no real interest to the business. You’ll also be lulled into a false sense of security because they came from ITIL, didn’t they?
While these process metrics are one of the three types of metrics ITIL recommends you produce (Process, Service and Technology) and while they are important metrics to produce, they’re of little or no interest to business partners outside of IT because they don’t tell you how well IT is doing at delivering on the key strategic initiatives of the business.
To craft a metrics program that is of interest to the business, you need to start with the business. To help you get started, you can use the informal framework for building business-based dashboards and scorecards presented here (If it seems familiar, it is. It’s based on ITIL’s Continual Service Improvement approach):
This framework is very simple:
Know the vision of the organization or line of business
Document the goals that support this vision
Discover those Critical Success Factors (CSF’s) the organization feels are needed to be successful
Create Key Performance Indicators (KPI’s) or measurable indicators of the Critical Success Factors. Include target levels for these, so success is clearly shown.
Organize them into dashboard views for each audience that may be viewed live (on-line).
Develop scorecards that may be used for trending, historical reporting.
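One possible way to hold the vision-to-KPI chain is as a nested structure, with a target attached to each KPI so success is clearly shown. This is only a sketch using the sample web-sales organization; the figures and field names are illustrative.

```python
# Vision -> goals -> CSF's -> KPI's, each KPI carrying a target and an actual.
metrics_framework = {
    "vision": "Provide the best shopping experience on the web",
    "goals": [
        {
            "goal": "Excellent web shopping experience",
            "csfs": [
                {
                    "csf": "Customers are satisfied with the website",
                    "kpis": [
                        {"kpi": "5-star exit ratings (%)", "target": 85, "actual": 88},
                    ],
                },
            ],
        },
    ],
}

def kpis_missing_target(framework):
    """Walk the chain and list every KPI that falls short of its target."""
    missed = []
    for goal in framework["goals"]:
        for csf in goal["csfs"]:
            for kpi in csf["kpis"]:
                if kpi["actual"] < kpi["target"]:
                    missed.append(kpi["kpi"])
    return missed

print(kpis_missing_target(metrics_framework))
```

A dashboard view is then just this walk rendered per audience, and a scorecard is the same walk run against historical actuals.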
Five Steps to Using this Framework
This framework can be delivered using five basic sets of activities or steps, which are described below.
In addition to these steps, however, some of this can only be demonstrated using examples. For these, let’s use a sample organization that is expanding into web-based sales. In this organization, the new Web Sales department and the Audit/Control group are tasked with delivering on three goals that support the organization’s vision of “providing the best shopping experience on the web.”
These goals include:
Providing Customers with an Excellent Web Shopping Experience
Giving Customers the ability to shop any time of day (or night)
Guaranteeing credit card security
With this in mind, let’s look at the five steps:
Step 1: Create a Focus Group
To ensure alignment, create a focus group consisting of key stakeholders from several lines of business and a few IT Managers. For the organization in the example, this would include managers from Web Sales, Audit/Control and the IT teams tasked to develop and deliver the website.
Step 2: Understand the vision and goals of the organization
With the focus group, take a look at the organization’s strategic plan. Typically the strategic plan includes a set of initiatives designed to support the organization’s vision, similar to the web sales initiative. These are often stated as goals so review the business goals associated with the initiatives and define the ways in which IT supports these goals. Think of the goals as the pillars that support the organization. This will ensure your program aligns with these goals and the strategic initiatives.
To move to the next step you will need the vision and goals, similar to the ones provided for the sample organization.
Step 3: Identify your audiences and their contribution
Next, working with the focus group, create a matrix to document the goals and critical success factors for each of the organizations to which you’ll be reporting. This matrix will be used to plan the dashboards and scorecard measures you need. Using the sample organization, the matrix would look like the one that follows.
Web Sales department – (1) Excellent Web Experience; (2) Ability to shop anytime
Audit/Control group – (1) Confidence when using credit cards
IT – (1) Service Operations Excellence; (2) “Fort Knox” security
Step 4: Make the goals measurable
To quantify the goals, you’ll need to work with your focus group to determine the Critical Success Factors that will demonstrate the fulfillment of their goals. The best Critical Success Factors (CSF’s) will be “SMART”: Specific, Measurable, Attainable, Realistic and Timely.
Once you and the focus group have agreed on the CSF’s, you’ll be able to develop Key Performance Indicators, or measures that support the CSF. It’s extremely beneficial to develop KPI’s along with targets, so you and your business partners are clear on whether you’re successful in delivering on each of the goals. The best part about this approach is that when IT and the business agree on measures and targets, it’s easy to tell when IT has delivered or when IT is not meeting the needs identified by the business.
The ITIL books demonstrate this process clearly at the end of each process documented. The last section of the process description includes a list of Critical Success Factors for the process and Key Performance Indicators that support them.
For example, the Incident Management process (ITIL Service Operation 2011, p. 109) has a Critical Success Factor to “minimize the impact to the business of incidents that cannot be prevented.”
This is not measurable by itself, but four Key Performance Indicators follow it:
The number of known errors added to the Known Error Database (KEDB)
The percentage of accuracy of the KEDB
Percentage of incidents closed by the service desk without reference to other levels of support, and
Average incident resolution time for those incidents linked to problem records
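As a sketch of how KPI’s like these might be computed from raw incident data – the record fields and sample values here are invented for illustration and don’t come from any particular ITSM tool:

```python
from datetime import timedelta

# Hypothetical incident records; field names are illustrative only.
incidents = [
    {"id": 1, "closed_by_service_desk": True,  "problem_linked": False, "resolution": timedelta(hours=2)},
    {"id": 2, "closed_by_service_desk": False, "problem_linked": True,  "resolution": timedelta(hours=8)},
    {"id": 3, "closed_by_service_desk": True,  "problem_linked": True,  "resolution": timedelta(hours=4)},
]

# KPI: percentage of incidents closed by the service desk without escalation.
first_contact = sum(i["closed_by_service_desk"] for i in incidents) / len(incidents) * 100

# KPI: average resolution time for incidents linked to problem records.
linked = [i for i in incidents if i["problem_linked"]]
avg_linked = sum((i["resolution"] for i in linked), timedelta()) / len(linked)

print(f"First-contact resolution: {first_contact:.0f}%")
print(f"Avg resolution (problem-linked): {avg_linked}")
```

The point is not the code itself but that each KPI is a simple, repeatable calculation over data your tool already holds – which is what makes a target meaningful.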
At the end of this stage, your matrix will be complete, similar to the one which follows for the sample organization:
Web Sales department
Goals: (1) Excellent Web Experience (2) Ability to shop anytime
CSFs: (1) Customers are satisfied with the website design and functionality (2) Web site is available 24×7
KPIs: (1) 85% of customers give the site a 5-star rating on exit (2) Web site is 100% available
Goals: (1) Confidence when using credit cards
CSFs: (1) Web site is PCI compliant (2) Security patches are up to date
KPIs: (1) 100% PCI Audit pass rate (2) 90% of patches applied within 24 hours
Goals: (3) Service Operations Excellence (4) “Fort Knox” security
CSFs: (1) Web site is available 24×7 (2) Web site is PCI compliant (3) Security patches are up to date
KPIs: (1) 100% site availability SLA (2) 99% performance SLA (3) 100% PCI Audit SLA (4) No Security Breach SLA (5) 90% on-time patch SLA
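To make the matrix concrete, here is a minimal sketch of how one row could be held as data and each KPI checked against its target; the names, targets and actual values are invented for illustration:

```python
# Hypothetical KPI row from the goals/CSF/KPI matrix, with targets attached.
kpis = [
    {"name": "5-star exit rating %", "target": 85.0,  "actual": 88.0,  "higher_is_better": True},
    {"name": "Site availability %",  "target": 100.0, "actual": 99.95, "higher_is_better": True},
    {"name": "Patches in 24h %",     "target": 90.0,  "actual": 92.0,  "higher_is_better": True},
]

def met(kpi):
    """True when the measured value meets or beats the target."""
    if kpi["higher_is_better"]:
        return kpi["actual"] >= kpi["target"]
    return kpi["actual"] <= kpi["target"]

for k in kpis:
    print(f'{k["name"]}: {"MET" if met(k) else "MISSED"}')
```

This is the property the article is after: once targets are agreed, “did IT deliver?” becomes a yes/no answer per KPI rather than a debate.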
You might notice several things when reading this list:
A qualitative measure (5-star rating by customers) is used to determine the customer’s view of the website. This is a critical measure as the CSF points to the customer’s experience.
The quantitative measures that sound like IT performance measures are translated to SLA’s for reporting purposes under the IT list of KPI’s. When creating the dashboards and scorecards in the IT Service Management tool, these SLA’s may be configured to demonstrate IT’s achievements against the business KPI’s.
Most of them sound like technology metrics. While this is true, this is the short list of technology metrics that these audiences actually care about. Notice some frequently reported measures that are missing: average speed of answer at the service desk, mean time to restore service, etc. These are IT metrics that support teams would need, but not IT management or the business, unless IT is failing to deliver on the metrics listed in the matrix and management wants to dig down to discover the reasons.
Step 5: Build the dashboards and scorecards
Once the matrix is agreed on and the method of measuring each KPI is defined, documented and agreed on by the focus group, the final step is to design dashboards and scorecards that represent these KPI’s. These are both graphical views of the Key Performance Indicators listed above, showing the result in comparison to the target. The main difference between the two is in the delivery:
Dashboards are dynamic: live representations of the data, often provided via a web portal that is integrated either to a measurement tool or directly into an ITSM tool.
Scorecards are static: they provide a historical look at the data including trending over a period of time.
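The distinction can be sketched in a few lines: a scorecard is essentially a historical series with a trend, computed once for a reporting period, while a dashboard would query the same data live. The monthly figures below are invented for illustration:

```python
# Hypothetical scorecard sketch: a static, historical view of one KPI.
# A dashboard would pull these numbers live from the ITSM tool instead.
availability_by_month = {"Jan": 99.2, "Feb": 99.6, "Mar": 99.9}

values = list(availability_by_month.values())
trend = "improving" if values[-1] > values[0] else "flat or declining"

for month, pct in availability_by_month.items():
    print(f"{month}: {pct}% available")
print(f"Quarter trend: {trend}")
```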
There are two final aspects of using this framework:
As these dashboards and scorecards are used by the business, it’s important to come back to the focus group to evaluate the results. This may lead to creating new KPI’s or tweaking the ways in which they are measured, depending upon the focus group’s satisfaction with performance. In the case of the sample organization, it’s possible that the business is not meeting their objectives and may initiate changes to their critical success factors that will drive a need to change the measures. The point here is that you should not build the dashboards and scorecards then forget about them. Rather, you should meet with the focus group quarterly to review the metrics programs and IT’s achievements. This is a great opportunity to talk about service improvements that the business might need to support the initiatives as well.
Knowing when to stop delivering a dashboard or scorecard report is the last critical piece to a successful program. Once IT is reliably meeting the targets set by the business for a particular goal, it’s a good idea to discuss this result with the focus group during the quarterly review. In this case, you’re not looking at changing CSF’s and KPI’s to address a business need, but rather you’re reviewing the KPI’s to see if the business still needs to see them continually and if any of the targets need adjustment.
Bear in mind that once you are achieving targets reliably, the business might want to work with IT to “up the game.” So in the sample organization, once the security patches and PCI audit result SLA’s are being met consistently, the business might want a shorter SLA for deployments of new features to the website. Thus, the matrix would be adjusted and the appropriate changes made to the dashboards and scorecards.
Benefits of the program
Providing metrics that are responsive to your business’ needs rather than the same stale set of IT metrics they don’t really care about will have a significant impact on the relationship between you and the rest of the business. Looking back at the reasons to measure, you can expect the following results:
Direct: Live dashboards provide the ability to determine the activities needed to drive the success of an initiative and whether these activities are producing the expected result,
Validate: You and your stakeholders are able to use the metrics you provide to validate whether IT’s performance is contributing to the business’ ability to meet their goals and objectives,
Justify: IT is able to produce metrics that support a business case for infrastructure or development projects related to the delivery of a service,
Intervene: Live dashboards let IT and the business know when there is a performance issue, so they can intervene immediately to turn the problem around.
This helps an organization move from a purely reactive mode to a more proactive approach that is integrated with the success of the business’ initiatives in mind.
What are the differences between Scrum and Kanban anyway?
When you’re studying two similar animals from different species – I don’t know, let’s use crocodiles and alligators – it’s easier to spot the similarities than the differences. I’ll give you one difference now and reward you with another for reading all the way to the end of my article.
Crocs can lift their bodies off of the ground, gators can’t. Did you know that?
This is a similar problem for those of us that are beginning to explore the world of Agile, Scrum and Kanban. Are they the same? From the same species? What are the differences?
It’s so easy to see the commonality because the distinctions are nuanced and harder to spot. If you’re not careful you can’t tell one from the other.
Let me help you explore some of the differences between Agile/Scrum and Kanban.
Back to the beginning…
Before examining Agile/Scrum and Kanban it is worth pointing out that there are many distinctions to be drawn between Agile and Scrum. They aren’t one and the same thing, and writing them up is probably a whole other article for a whole other day.
For the purposes of this article I want to draw your attention to the suitability of Kanban in IT Operations and to achieve that I can leave Agile and Scrum lumped together.
The first place to understand the contrast between Scrum and Kanban is to look back at the roots of each method.
Scrum was born out of a line of iterative software development methodologies stretching back to the 1960s and ’70s, coming into prominence in the 1990s as a pushback against heavyweight Waterfall project management practices. In the ’90s, methods such as Scrum and Extreme Programming became popular, and in 2001 the Agile Manifesto was written to bind these disparate practices under a common banner.
But remember that Agile/Scrum was initially formulated to solve a Software Development Lifecycle problem.
Kanban incorporates a number of practices codified by the automobile manufacturer Toyota as part of TPS – The Toyota Production System – a precursor to the wider Lean movement which emerged in 1990.
These roots are based in business process, in manufacturing, in the process of refining raw materials into a valuable product through manual and automated labour. In converting chunks of rubber, steel and glass into gleaming, shiny cars rolling off of the production line.
Whereas Agile/Scrum was formed to provide an alternative to heavyweight Software Development Lifecycle methodologies, Lean has been more aligned to core business processes – seeking efficiency gains and quality improvements.
My objective here is to speak to you as IT Professionals considering adopting a Lean or Agile approach to IT Operations. It’s worth pointing you towards the works of David J Anderson, who in 2010 wrote “Kanban: Successful Evolutionary Change for Your Technology Business”, informally known as “the blue book”. This is the specific variant of Kanban that you want to study and learn more about.
So wait… Kanban is not Agile?
If we are following strict definitions and examining Agile/Scrum and Kanban as if they were two separate animals… no, Kanban is not an Agile practice. It is a Lean practice.
But Kanban delivers a lot of the same benefits into an organisation that Agile promises to. And, as we’ll discover later in this post, does it in an evolutionary way rather than throwing the rule book out and introducing strange, new practices.
You could say that Agile, if done correctly, introduces Agility into an organisation. Notice the capital “A” there. Kanban introduces business agility with a small “a”.
More importantly where Agile/Scrum promotes product development agility, Kanban is positioned to make the whole organisation more agile.
Kanban practices can be used across the organisation, from marketing and sales to product development and customer support, and value chains can be found stretching between these departments. Best of all, heavyweight processes such as Waterfall and Change and Release Management can happily exist within the wider Kanban framework.
This is the “evolutionary” part of the description in David’s book: taking existing business processes, defining them as part of a value stream and finding ways to optimise the work.
OK – tell me more about Scrum
Scrum is a lightweight framework that defines roles (like Product Owner and ScrumMaster), artifacts (like Product Backlog, Release Backlog and Sprint Backlog) and practices (like Daily Standups, Sprint planning and Sprint review meetings).
Teams following Scrum take a body of work – typically a list of features that are required to build a software product – and break them down into discrete units of work (called User Stories) that can be re-prioritised according to business needs.
By taking a small section of those units of work and committing to finishing them before a short-term deadline (known as a sprint – often 10 working days) the team can focus on building the next small increment of the product before stopping, replanning and committing again.
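That commitment mechanism can be sketched simply: the team pulls stories from the top of a prioritised backlog until the sprint’s story-point capacity is reached. The stories, point values and capacity below are made up for illustration:

```python
# Hypothetical prioritised backlog: (story, story points), highest priority first.
backlog = [("Login page", 5), ("Checkout flow", 8), ("Search filter", 3), ("Admin report", 5)]
capacity = 13  # points the team can realistically finish in one sprint

sprint, used = [], 0
for story, points in backlog:   # walk the backlog in priority order
    if used + points <= capacity:
        sprint.append(story)
        used += points

print(sprint, used)
```

Everything that doesn’t fit stays on the backlog for re-prioritisation – which is exactly why interrupt-driven work sits so awkwardly inside a fixed sprint.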
Scrum is great! I’ve been successful with teams that have used Scrum to build products. But it is a fairly disruptive method and you won’t get 50% of the benefits by putting in 50% of the effort.
To be successful at Scrum you have to assign people to roles, train them and arrange your work according to the methodology. Expect to have backlog grooming sessions, to measure your work in story points and so on.
Scrum is a fairly prescriptive method that requires the team to bend to its rules in order to follow it correctly.
Much of the work in IT Operations is driven by external factors – servers experiencing hardware issues, ISPs having intermittent connectivity problems. Although it’s nice to plan around the stability that Scrum promises – with a fixed sprint backlog of work – the reality is that teams have to deal with interrupt-driven work, and absorbing this isn’t a strong characteristic of Scrum.
There is another characteristic of Scrum that appears to make this activity very similar to Kanban… that is if you didn’t understand the difference between crocodiles and alligators.
The last thing to mention is that Scrum teams often maintain a “Scrum board” visualising their work on cards into lanes.
OK! Tell me more about Kanban
Well, the last thing to mention is that Kanban teams often maintain a “Kanban board” visualising their work on cards into lanes.
Herein lies the difficulty in distinguishing between Scrum and Kanban when the most visible artifact for either method is exactly the same.
But there are significant differences with Kanban. Firstly, it is an evolutionary method for introducing change in an organisation, meaning that organisations adopting it introduce no additional roles or practices.
Existing roles and processes are kept but are wrapped into Kanban. Workflows are investigated and visualised to provide control around the work but we don’t change how people do their jobs or interact.
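One concrete mechanism behind that control is the work-in-progress (WIP) limit on each lane of the board. A minimal sketch, with invented lane names and limits:

```python
# Minimal sketch of a WIP-limited Kanban board. Pulling a card into a full
# lane is refused - the mechanism Kanban uses to control flow without
# changing anyone's role or job.
class Board:
    def __init__(self, limits):
        self.limits = limits                       # lane -> WIP limit
        self.lanes = {lane: [] for lane in limits}

    def pull(self, card, lane):
        """Move a card into a lane only if the lane has WIP headroom."""
        if len(self.lanes[lane]) >= self.limits[lane]:
            return False                           # lane full: card must wait
        for cards in self.lanes.values():          # remove card from old lane
            if card in cards:
                cards.remove(card)
        self.lanes[lane].append(card)
        return True

board = Board({"To do": 5, "In progress": 2, "Done": 99})
board.pull("Image laptop", "In progress")
board.pull("Patch server", "In progress")
print(board.pull("Fix printer", "In progress"))   # lane is at its limit
```

The team keeps working exactly as before; the limit just makes the bottleneck visible and forces a conversation about finishing work before starting more.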
Scrum deals with the problems associated with Product Development and introduces methods to increase Agility.
Kanban examines the value stream upstream (perhaps into the sales and marketing departments where leads are generated) through the manufacturing/development/technical departments down to the point where value is released to the customer – how products are shipped or released.
It’s similar to the way that the manufacturing process for a Toyota car is defined all the way from the raw steel arriving at one end of the factory, through the refinement process, until a car rolls off the other end. Kanban maps and provides controls throughout the whole value stream.
Imagine you are in control of new laptop builds in an IT department. Surely you have a value stream which starts with a request from HR notifying you of a new employee. Actions are taken – laptops ordered, imaged, configured, added to the various management systems. At some point later (much later??) the laptop is delivered to the employee. You’ve just described a value stream that can be visualised, managed and incrementally improved with Kanban.
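As a sketch, that laptop value stream can be captured as a handful of stage timestamps, with the lead time measured end to end – the stages and dates below are invented:

```python
from datetime import date

# Hypothetical laptop-build value stream: a timestamp per stage lets you
# measure lead time end to end, the first thing a Kanban adoption makes visible.
stages = [
    ("HR request received",   date(2024, 3, 1)),
    ("Laptop ordered",        date(2024, 3, 4)),
    ("Imaged & configured",   date(2024, 3, 12)),
    ("Delivered to employee", date(2024, 3, 15)),
]

lead_time = (stages[-1][1] - stages[0][1]).days
print(f"Total lead time: {lead_time} days")
for (stage, start), (_, end) in zip(stages, stages[1:]):
    print(f"{stage} -> next stage: {(end - start).days} days")
```

Once the stream is visualised like this, the slowest hop is obvious – and that is the step you improve first.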
Here is a visual outlining the differences between the two animals.
I promised to reward you with the last difference between crocodiles and alligators. Look at the snout – but presumably from a distance. Crocs have a longer, sharper “V shaped” snout. There you go!
But this isn’t the action that I want you to take away from this article. Your IT organisation should be investigating new ways of working and building a culture of high performance and continuous improvement.
Agile/Scrum and Kanban are both worth investigating. My call to action in this blog post – if you are in a position to suggest work improvements in your department – is to buy David J Anderson’s Kanban book and see how evolutionary change is possible in your corner of the world. (Amazon Link)
Most successful Kanban adoptions are led from the “middle out”. That is, junior managers taking the initiative and adopting Lean practices, influencing those that carry out the work. Their successes often influence upwards as senior managers identify the resulting service improvements.
Who knows – buy the book today and in a few months you could be blogging your IT transformation using Kanban on the ITSM Review. I’m looking forward to that!