Change Management: Responding to a Crisis

Keep Calm…

One of the things that isn’t covered as much as it should be is how to respond to a crisis directly linked to Change activity. This is one of those things where despite your Change process, despite your sparkly toolset and fab policies & work instructions, something has gone pear-shaped on a massive scale and you’re staring down the barrel of a Change Management related crisis.

Here is my guide to dealing with the fallout, without having to resort to mainlining chocolate or vodka.

 

Step 1

Keep calm! Easier said than done I know (and believe me, I know) but panicking or making reactive, snap decisions won’t help things and might actually make them worse. Take a deep breath, roll up your sleeves and get stuck in.

Don’t believe me about how panicking can make things worse? A couple of years ago, I was working on a client site in Milton Keynes. This client had a financial system for each EU country that was accessed by a command line interface. One fateful Wednesday, the German financial system experienced an issue and the tech support guy had to run a fix script and restart the system. The Service Delivery Manager for Germany, in her infinite wisdom, thought that standing over the poor techie (we’ll call him Bob) and shouting at him would speed things up. It didn’t. What actually happened was that poor Bob typed the command for rebooting the UK system rather than the German system into his console, taking out all transactions for the UK and doubling the impact of the Incident. Not our finest hour, I think you’ll agree.
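The moral scales down to tooling, too: a simple guard that makes the operator re-type the target hostname before anything destructive runs would have saved Bob. Here’s a minimal sketch in Python; the hostnames and return strings are invented for illustration, not from any real system.

```python
# Hypothetical sketch: force the operator to confirm the target host before a
# destructive action runs. Hostnames below are made up for the example.

def confirm_target(intended_host: str, typed_host: str) -> bool:
    """True only when the operator re-types exactly the host they intend to hit."""
    return typed_host.strip().lower() == intended_host.strip().lower()

def reboot_if_confirmed(intended_host: str, typed_host: str) -> str:
    if not confirm_target(intended_host, typed_host):
        return f"ABORTED: you typed '{typed_host}', not '{intended_host}'"
    return f"rebooting {intended_host}"  # the real reboot command would go here

# Bob's slip, caught before it takes out the UK system:
print(reboot_if_confirmed("fin-de-01", "fin-uk-01"))
```

It won’t stop every fat-fingered command, but it buys a pause at exactly the moment a shouting manager is removing one.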

Step 2

Get a handle on where you’re at. Is the Incident still on-going? First and foremost, look after your people. Is the environment safe? Are there any imminent hazards? Chances are Incident Management or Problem Management are running with the drama, or if it’s really serious, a Crisis Manager or IT Service Continuity Management. Regardless of who is running with the fallout, you need to play a supporting role in helping them understand what the Change was meant to do and what went wrong. Hopefully, you already have a documented process for invoking the help of Incident Management if a Change fails, with documented roles and responsibilities. If there’s nothing in the process for what happens if a Change fails spectacularly, then step up and help co-ordinate the fix effort. You can look at lessons learned, the chain of command and who sorts out what later on, once the immediate danger has been contained.

Step 3

Figure out who’s going to handle comms. Not just the generic Service Desk e-mail but depending on the magnitude of the issue, you may well have to communicate with:

Don’t let things go from bad to worse

(1) Angry customers

(2) Angry stakeholders within your business

(3) The press

(4) Regulatory bodies.

Make sure only authorised people speak to the relevant parties so that you don’t make an already bad situation worse.

As a general rule, I think that for internal reporting, the more detail the better. This will help you understand what went wrong, how it was fixed, whether anything could have been done differently and what can be done to stop it from happening again. Internal reporting will cover exactly what happened, the technical details and whether there was any human error. For external customers, all we need to cover is what went wrong, how we fixed the issue as quickly as possible, whether there were any opportunities for Continual Service Improvement (CSI) and that we have taken the appropriate action to prevent recurrence. Basically, saying that the engineer booted in daft mode isn’t something that will ever look acceptable in the eyes of most customers.

Step 4

Have we got a fix? Test, test and test again. Check and double check anything that goes out to your customers. First things first, have we got the right people working on the fix and do they have enough support? If not, work with your peers to get agreement that, as it’s a crisis situation, break-fix work must be prioritised. Support your technical teams by helping them get the Emergency Change raised and, if appropriate, setting up an E-CAB. Every organisation is different and for some, the paperwork can be raised retrospectively so there is no delay to implementing the fix. Either way, to save time, your Emergency Change probably won’t be as detailed or nicely formatted as a BAU Change request but should at least have:

• The nature of the work

• Who is carrying it out

• How it has been tested

• Linked Incidents

• How it has been verified and how we can prove it has fixed the issue

• Who’s authorised it (even if it’s a senior manager shouting JFDI, make a note of it on the Change)

• Any customer communications
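Those minimum fields can even be enforced with a few lines of scripting before the Emergency Change is filed. A hedged sketch in Python; the field names and record contents are illustrative, not taken from any particular toolset.

```python
# Illustrative check: refuse to file an Emergency Change with any of the
# minimum fields above missing. Field names are invented for the example.

REQUIRED_FIELDS = [
    "nature_of_work", "implementer", "testing_evidence",
    "linked_incidents", "verification", "authorised_by", "customer_comms",
]

def missing_fields(change: dict) -> list:
    """Return the names of required fields that are empty or absent."""
    return [f for f in REQUIRED_FIELDS if not change.get(f)]

emergency_change = {
    "nature_of_work": "Restart German financial system after failed patch",
    "implementer": "Bob",
    "testing_evidence": "Fix script run against a staging copy",
    "linked_incidents": ["INC001234"],
    "verification": "Transactions posting again; logs clean for 30 minutes",
    "authorised_by": "Head of IT Ops (verbal JFDI, noted on the Change)",
    "customer_comms": None,  # not yet sent - flagged before submission
}

print(missing_fields(emergency_change))  # → ['customer_comms']
```

A check like this won’t write the Change for you, but it stops a half-empty record slipping through in the heat of the moment.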

Let your customers & stakeholders know when the Change is due to be implemented then send out a second communication once the Change has gone in, the Incident has been resolved and all is well.

Step 5

Ensure safety and security…do not let the post mortem meeting turn into a witch hunt

The post mortem, aka the Incident Review, Change Review or drains-up session. This is not, I repeat, not a witch hunt. Co-ordinate your review activities with Incident & Problem Management. The last thing you want is for your guys to be stuck in 3 separate meetings, answering the same questions just asked in slightly different ways. Set ground rules and reassure everyone in the room that the meeting is to look at what happened and how it can be prevented from recurring, not to assign blame. At this point I use something called my umbrella speech to get everyone in the room to relax. I won’t bore you with a big waffly speech but the gist is something like the following:

“I just want to understand what happened. Think of me as your umbrella. I ask all the difficult/horrible/controversial questions so that you’re not getting interrogated by senior managers or irate customers. You can trust me because you know me, we’re all in the same team, plus we’ll all be going down the pub together for a stiff drink once this is over.”

Getting people to understand that they’re in a safe environment is hugely important. If your guys feel supported, then an honest, constructive conversation can take place to understand the root cause, short term fix actions, long term fix actions and anything else that could prevent recurrence.

Step 6

Follow up with customers and stakeholders. This isn’t a one-off activity; on the day of the crisis you may find yourself in hourly meetings to keep senior management or customers updated. Once the issue has been resolved, you can issue a short holding statement that explains what happened and the actions being taken to stop a repeat performance. The next step is important: if there are lots of post-Change/Incident/Crisis actions, commit to regular updates in an agreed format. These updates could take the form of updates to a Problem record, a weekly update e-mail or a Service Improvement Plan, depending on the magnitude of the failure.

 

Frazzled people require pizza

Step 7

Look after your people. Chances are, they’re tired, stressed out and frazzled from working all hours to put things right. Now is the time for that trip to the pub or a morale boost in the form of caffeine, chocolate or pizza.

Step 8

Look at your FSC (Forward Schedule of Change). Are there any similar pieces of work planned that need to be cancelled, reworked or rescheduled? How are you going to reassure your customers that lessons have been learned and the same mistakes will not happen again? As a Change Manager, you need to ensure that, if a similar piece of work is planned in the future, it has actions present in all stages of the implementation plan to ensure the issues you’ve just sorted out don’t come back to haunt you.

Sometimes, the most appropriate action to take is to delay the Change as a goodwill gesture, giving the customer additional time to communicate to any of their onward customers or to take additional steps externally to mitigate risks. I’m not saying it’s always the right thing to do, or the easiest thing to do, but sometimes a little time and breathing space can work wonders.

Step 9

Lessons learned…

Remember your lessons learned. When you had your post mortem you will have come away with a lovely long list of actions to make things better. I know it’s easier said than done, but don’t just file them away under “ugh, horrible day – glad it’s over”; add them to your lessons learned log so that those actions are documented, reviewed and acted on. If you don’t have a lessons learned log then start one up. You need to be able to refer back to it, to read your actions and to share them. Examples of sharing lessons learned could be with Availability Management for downtime issues or Capacity Management for performance glitches.

 

Step 10

If you don’t have the Service Desk and Incident Management on your CAB, invite them immediately. One of the things that never fails to surprise me out on consultancy gigs is how many organisations simply deploy Changes into the production environment without thinking to mention it to the people who have to pick up the pieces if it all goes horribly wrong. Ditto Problem Management and IT Service Continuity Management, as simply assuming that they will be able to swoop in and save the day should the Change fail is taking blind optimism a step too far.

 

Final thoughts

As someone who has seen and managed her fair share of own goals and Change-related failures, here are some things I find it helpful to refer back to.

(1) This too shall pass/Nothing endures. This quote and its connected story were recently referenced in “The Big Bang Theory”. The fable goes that there was once a king who assembled a group of wise men to create something that would make sad men happy and happy men sad. The result was a ring inscribed with the phrase, This Too Shall Pass. Take a deep breath…nothing lasts forever!

(2) It helps to have a sense of humour when dealing with the bad stuff. This is one of my favourite quotes from the author Terry Pratchett:

“Some humans would do anything to see if it was possible to do it. If you put a large switch in some cave somewhere, with a sign on it saying ‘End-of-the-World Switch. PLEASE DO NOT TOUCH’, the paint wouldn’t even have time to dry.”

In short, no matter how good your process is, and no matter how hard you try, things will go wrong sometimes. You can’t stop all problems; what matters is how you deal with them. So keep calm, take a deep breath and let’s sort things out.

 

Guardian News & Media: "Our SLA is to ensure the paper is published"

Guardian News & Media

Guardian News & Media (GNM) publishes theguardian.com, the third largest English-speaking newspaper website in the world. Since launching its US and Australia digital editions in 2011 and 2013 respectively, traffic from outside the UK now represents over two-thirds of GNM’s total digital audience. In the UK, GNM publishes the Guardian newspaper (first published in 1821) six days a week, and the world’s oldest Sunday newspaper, The Observer.

GNM is a dynamic and pioneering news organisation across all departments. Amongst all this cutting edge transformation, GNM’s IT service desk has been going through its own upheaval. Over the last year the team has experienced arguably the most transformative change any service desk is likely to face—that of insourcing from a third-party outsourcer and rebuilding from scratch.

So what is life like for IT service management (ITSM) folks at GNM? How do they handle delivery of IT services for one of the world’s leading brands?  How have they insourced the service desk? These are all questions I was keen to ask when I met the team in London.


Note: SysAid commissioned this case study. Thank you to Vicky, Louise, and Steve from Guardian News & Media for being so candid and sharing their views. 


Meet the team

Left to right – Steve Erskine, Louise Sandford and Vicky Cobbett, Guardian News and Media

Insourcing the service desk

GNM has around 1,700 staff working for them globally. Roughly half of GNM staff work in commercial teams, the other half are in editorial teams including worldwide journalists and bloggers. The 60-member IT team supports 1,200 Macs, 800 PCs and twin mirrored datacentres in London and Bracknell.

The service desk was insourced on 1 August 2013, when a team of six service desk analysts took over the front line of IT service and support.

The insource meant choosing a suitable solution to underpin its service management processes. Previously, the IT team was provided with technology as part of IT outsourcing contracts, such as Remedy or ServiceNow. With ITSM now firmly the responsibility of the in-house teams, there was a requirement for a smaller system that suited their needs. Flexibility and value for money were key drivers. Following a review of the market, the team chose SysAid.

A GNM version of ITIL

The IT support team at GNM records 600–650 incidents a week, working core hours of 8am until 6pm, with extended cover until 3am to support publication of the printed newspaper.

“We resolve as many calls as we can on the first line, not just log and flog, we try to do as much as we can and only escalate to second line if we get stuck,” said Vicky Cobbett, Service Desk Manager.

Incidents arrive by way of system monitoring, email, telephone and walk-ups. The team has not yet implemented any self-service options with SysAid, as they wanted to build up a reputation and confidence in existing channels first.

Third line teams are arranged by technology stack or competence area, such as business applications, networks, integrations, multimedia, AV, Oracle applications and so on.

“Our technology base is really quite broad,” says Steve Erskine, Technology Supplier Manager. “We are digital first. It’s a very different company than the newspaper I originally joined.”

“GNM is at the cutting edge of the media industry, it means we are constantly changing. We are constantly being brought new things to manage,” added Louise Sandford, Application Analyst.

Like most organizations that refer to best practice frameworks, GNM has cherry picked guidance from ITIL to suit its requirements.

“We’ve adopted a GNM version of ITIL,” says Steve.

“We have a Change Advisory Board (CAB) every Monday and use SysAid to manage all of our changes. If you look at the ITIL book, we’re not quite doing it the way ITIL suggests, we’ve taken the bits that are appropriate for us.”

“For example, we don’t have a change manager because of the diverse teams in our IT staff, but we make sure we follow a change management process and follow ITIL where appropriate.”

“Individual teams get direct calls too. We work in a deadline driven environment so things need to be resolved quickly. Sometimes you need to resolve the ticket before logging it,” said Louise. “We try not to get too caught up in process protocol – publishing the paper comes first.”

Our SLA is to ensure the paper is published each night and that our website remains online

Publishing the newspaper and keeping the website up, in total alignment with business requirements, was a recurring theme during our conversation. There is no time for navel gazing about service desk metrics at GNM. Its focus is on deadlines, and the key priorities of the business recall the old fable about President Kennedy visiting the Space Center.

It is said that the President approached a man sweeping and said “Hi, I’m Jack Kennedy. What are you doing?” to which the janitor replied “I’m helping put a man on the moon, Mr President.”

I found their customer focus refreshing. I asked the team: “How do you know if you’re doing a good job? How do you measure success?”

“The newspaper gets printed. The website is always up,” said Vicky.

The team monitors call volumes, call open times and escalates where appropriate – but the main focus of meeting customer requirements is via the personal relationships developed by Business Relationship Managers (BRMs) who go out to the business and listen for requirements, help prioritize projects and develop a medium term plan.

“Success for me is if we can put processes and procedures in place without slowing the business down,” said Steve.

“We don’t get too caught up with measuring statistics. The company knows we work hard to close all tickets as quickly as possible and are focussed on helping the company print the paper and keep the website up,” said Vicky.

“In terms of statistics and metrics and comparing this year with last year – that’s not what we’re about… and I don’t think we’ll ever get to that point,” added Steve.

“We work in a vocal environment, if we’re not doing the right thing people will soon tell us. We also have our BRM team who are going out to the business to ensure we are doing the right thing and meeting their requirements.”

“We don’t really work to formal Service Levels. We might be working on something quite important to one person, but if something happens, which means we can’t get the paper out, everything gets dropped to fix it and that person will have to wait. If we’re going to breach a Service Level Agreement (SLA), we’re going to breach it. We’ve got to get the paper out.”

“Everyone in the company has this focus. It’s our purpose for being here,” added Louise.

Why SysAid?

As Application Analyst, Louise is the main owner of SysAid. She has looked after the application since the insourcing back in August and works with the team’s account manager at SysAid, Yair Bortinger.

GNM learnt from working with previous tools that, despite all the bells and whistles on offer, they would only end up using a small fraction of the features available. So one reason for choosing SysAid was that it is a smaller system, easier to customize to their own requirements.

“We find it user friendly,” says Louise. “With other systems we’ve worked with you have to stick to the templates or labels issued by the software company. SysAid is a lot more flexible to customize to your own requirements so you can label things the way you want them and in a way the whole IT department will understand. We use the cloud version so we can use it anywhere, we can use it at home.”

A quirky bunch

I asked the GNM team about their experiences with SysAid as a company. They were extremely complimentary. Specifically, the team stated that customer service was their strongest asset.

“They’re a quirky bunch,” said Vicky, “very, very friendly.”

“They are amenable and get back to you quickly,” added Louise.

“Sometimes when you work with software companies, you’ll deal with the salesperson and they are the friendliest person in the world, but once you’ve signed the contract the relationship changes. With SysAid, when we phone them up, they’re as friendly as the day we signed the contract,” said Steve.

“…And that’s not just one person, that’s everyone you speak to, the account management team, professional services, senior management,” said Louise.

“We sometimes ask the professional services team to do something completely random and weird and they say, yeah ok, we’ll do that for you,” said Vicky.

“I hope they don’t get bought and stay as they are. We are doing this case study because they are good not because of some commercial arrangement. We want to give something back in exchange for their great product and great service,” added Steve.

IT Service desk bar

The IT service desk ‘walk up’ desk. Situated away from the main IT department in a central position within the business.

The GNM IT team has built an “IT service desk bar” as a concierge desk for walk-in IT support enquiries. It is situated adjacent to a main stairwell and thoroughfare of the business and is intentionally separate from the rest of the IT department. Three service desk analysts work at the service desk bar, which accounts for around 15% of all incidents.

“It’s meant that we’ve built better relationships within the company. They see IT as having a face rather than being a voice at the end of a phone,” says Vicky.

“But around 15%–20% of incidents come from the service desk bar. 50–60% come in via email and around 20–25% are phone calls.”

Customizing to requirements

Louise estimates that the split between in-house customization and development from SysAid is around 70/30.

“I do as much of the customization myself and liaise with Yair and the SysAid professional services team to do everything else,” said Louise.

“One of the great things we like about SysAid is that it’s so configurable and it’s very flexible. It is also quite user-friendly, so without a huge amount of configuration knowledge you can pick it up and use it quite effectively.”

User account creation, which was previously managed in Lotus Notes, is now handled by SysAid.

“That was a custom project they built for us. SysAid is used to automate the account creation of logins for new users. It’s completely out of scope for what SysAid is designed for but they’ve been very ‘can do’ about the whole project. It feels like a partnership,” said Vicky.

Future plans

Having embedded change management, the team aims to look at problem management in more detail and also plans to build an asset register to record laptops and desktops using SysAid. Knowledge management is also on the agenda, done at a steady pace with issues ironed out as they go.

“It’s such a small system in the grand scale of things in terms of all the systems we use. But it’s such an important one,” said Louise.


Guardian News & Media

  • The Guardian first published in 1821
  • Offices in UK, USA, Australia
  • Headquarters: King’s Cross, London, UK
  • Revenue (Guardian Media Group plc): £210M
  • Over 100 million monthly unique browsers for theguardian.com
  • 1,700 staff, 60 IT Team staff
  • www.theguardian.com

Overall Review of SysAid by Guardian News & Media

“It’s a great tool, with great service,” said Steve.

Strengths

  1. Customer service from the SysAid team
  2. Ease of use
  3. Customization

Weaknesses

  1. Reporting – doesn’t have the depth we’d like but SysAid is addressing this in Q4 2014.
  2. Reverse customization – when you’ve built something by configuring it and need to undo it, it is not always straightforward. Some elements aren’t as friendly as others. Some of the workflow elements could be improved.

Ratings

  • Customer Service 9.5/10
  • Product 8.5 / 10
  • Reporting 5/10

Back to basics: Why your change fell at the first hurdle

Stop
“If you don’t give us this information, we’ll make bad decisions which ultimately expose the business to unnecessary risk due to operational instability or sacrifice our responsiveness to changing business demands.”

Hands up if, as a change manager, you’ve seen some truly horrendous change requests?

Changes so mangled and broken that their only conceivable purpose could be to serve as a dreadful warning to other change requests to straighten up and get a job.

We are occasionally labelled as the ‘parking wardens of the ITSM world’. That’s not to say we’ll invent improbable and eye-watering fines, but we are on the lookout for likely offenders and we’ll be consciously (and sometimes subconsciously) assessing your change requests against a ‘bingo card’ of suspicious behaviour when giving each request an initial quality check.

Good quality changes present the right information to the right people to make the right decision – so what are common reasons for rejection at that first quality gate?

Dear Requestor – your change has been rejected because:

  • ‘N/A’. – Change forms are pretty generic, we get that – but the minimum we’re looking for here is a sensible reason why it isn’t applicable.
  • Risk/Impact = ‘None’ – If you’re touching production, I’d argue that the risk is never none. I’ll accept ‘negligible risk to production based on rehearsal test results’ or ‘no material impact to key business services; isolated on a separate vLAN’ but I’ve seen far too many ‘harmless’ changes killing production.
  • ‘TBC’. – OK, we’re not totally unreasonable; it takes time to get some information, and we know you only had 20 minutes after identifying the need for change to get it in before the weekly CAB cutoff. But when will it be confirmed? What information are you waiting for? Will you have it in time for CAB?
  • Leaving blanks. If you’ve submitted it with key information missing (why, what, where, when, who & how) then we’d find it difficult to ask CAB to make a good decision based on missing information. Cover the basics well enough, and you might get occasional slight offences overlooked (especially if you arrive at CAB with delicious pastries).
  • Suspiciously short answers – ‘Rollback’ is not a remediation plan. ‘Rollback to last snapshot taken at start of deployment. Takes 10 mins, will cause an outage and need 30 mins further checks afterwards by the DBAs. Rollback has been used in the past with no issues’ is a much better starting point.
  • Suspiciously long answers – Just like the overdue motorist who starts winding up a long, complex and improbable excuse, if you’ve copied and pasted 37 pages of vendor release notes into a text field we’re going to examine the rest of the change even more carefully. By all means attach supporting documentation and give a summary. This one leads me directly to:
  • Change descriptions comprising only code – Look, I get that you’re smarter than me when it comes to development, query plans, subnetting, or many other fields of specialty. I’ve even had a change request comprising only an algorithm for a Kalman filter – which even experienced statisticians regard as voodoo. I’m looking for reasons to trust that you know what you’re doing when you raise a request. If you can wrap your code snippet in plain English to describe the problem it fixes and (at a high level) how, then we’re good. I’ll also understand more about your change, which means I can help you make a case for it. We’d rather help than hinder.
  • “Step 1 – Do the change. The End.” If you look back at this in 6 months’ time, because a sneaky recurring problem started at about the same time and has been driving you insane trying to figure out what caused it, will you know what it was you did? And how do we trust that you really know what you’re doing? Even simple changes have more to them than ‘just do it’. (You’ll thank me for this at midnight on a Friday in about a year from now.)
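Several of these rejection reasons lend themselves to an automated first pass before a human ever reads the request. The Python sketch below flags ‘N/A’/‘TBC’/blank answers and suspiciously short or long free text; the field names and thresholds are made up for illustration and would need tuning to your own form.

```python
# Illustrative first-pass quality gate for change requests, applying the
# rejection reasons above. Thresholds are invented for the example.

SUSPECT_ANSWERS = {"", "n/a", "none", "tbc"}

def quality_issues(request: dict) -> list:
    """Return a human-readable list of reasons this request would bounce."""
    issues = []
    for field, value in request.items():
        text = (value or "").strip()
        if text.lower() in SUSPECT_ANSWERS:
            issues.append(f"{field}: unacceptable answer '{value}'")
        elif len(text) < 15:
            issues.append(f"{field}: suspiciously short - please expand")
        elif len(text) > 5000:
            issues.append(f"{field}: suspiciously long - attach docs, summarise here")
    return issues

request = {
    "risk_impact": "None",
    "remediation_plan": "Rollback",
    "implementation_plan": "Stop service, deploy patch 12345, check logs, restart",
}
for issue in quality_issues(request):
    print(issue)
```

A script like this bounces the obvious offenders automatically and saves the human quality gate for the judgement calls.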

Here’s an example of an implementation plan for a really simple patch:

  1. Obtain patch from vendor site at www.vendor.site/patch-id=12345
  2. Extract binaries, verify release notes and checksum
  3. Check service & server monitoring for unexpected issues which may impact release. Escalate to Duty Ops Mgr if in doubt.
  4. Stop application service
  5. Deploy patch following (attached) vendor instructions with the following deviations [xxxxxxx]
  6. Check logfiles for [xxxxxxxx] errors
  7. Restart application service
  8. Check service & server monitoring after change
  9. Close change record and hand back to ops.
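An ordered plan like the one above is also easy to turn into a small runbook script that logs each step and stops at the first failure, which is exactly the point where your remediation plan kicks in. A hedged Python sketch follows; the commands, service name and patch URL are placeholders from the example, not real.

```python
# Illustrative runbook: run each step of the plan in order, keep a log, and
# stop at the first non-zero exit code. All commands below are placeholders.

import subprocess

STEPS = [
    ("obtain patch",    ["curl", "-O", "https://www.vendor.site/patch-id=12345"]),
    ("stop service",    ["systemctl", "stop", "exampleapp"]),
    ("deploy patch",    ["./install-patch.sh"]),
    ("restart service", ["systemctl", "start", "exampleapp"]),
]

def run_plan(steps, runner=subprocess.run):
    """Execute steps in order; the log doubles as evidence on the change record."""
    log = []
    for name, cmd in steps:
        result = runner(cmd)
        log.append((name, result.returncode))
        if result.returncode != 0:
            break  # stop here and invoke the remediation plan
    return log

# Dry run with a fake runner so nothing actually executes:
class FakeResult:
    returncode = 0

print(run_plan(STEPS, runner=lambda cmd: FakeResult()))
```

Even for a manual change, writing the plan this explicitly means that in six months you can still answer “what exactly did we do?”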

Why is missing a few bits of information such a drama?

Because change management is a decision game. And the only way to consistently win decision games is to make decisions based on the best information possible. If you don’t give us this information, we’ll make bad decisions which ultimately expose the business to unnecessary risk due to operational instability or sacrifice our responsiveness to changing business demands. Or to be brutally direct: garbage in, garbage out.

The Remedy

It’s unlikely that poor change requests are the result of malicious individuals (unless you’re really unlucky). It’s also overly simplistic to call it laziness:

“Ordinary laziness was merely the absence of effort. Victor had passed through there a long time ago, had gone straight through commonplace idleness and out on the far side. He put more effort into avoiding work than most people put into hard labor.”
~ Terry Pratchett, Moving Pictures

I’ve witnessed requestors put more effort into arguing why they shouldn’t have to raise a change request than they have into following the process in the first place. Luckily, these events are the exception rather than the rule, but as change manager, ask yourself (or others for an objective view) these questions:

  1. Is the change process logical, efficient, intuitive and easy to understand? How about the form or tool changes are logged in?
  2. Are there any unnecessary bottlenecks that can be engineered out?
  3. Is your approach to managing change proportionate? Do you have lighter processes for simpler and less risky changes? Not just Standard (catalogue) changes, but minor technical one-off changes that may suffice with a peer review and change manager approval* and don’t need the full process?
  4. Do people know what a good change request looks like? Do you have ‘gold standard’ example change requests in the back of your policy/process document?
  5. In fact, do they know where to find the policy/process documents? (hint – put a link to them in your email signature)

(*for anyone aghast that the change manager can approve a change ex-CAB, check out the ITIL(R) (2011) Service Transition core publication in section 4.2.5.5 which in figure 4.5 shows an example of a graduated approval structure. Not all Normal changes necessarily have to go to CAB if that’s what you agree in your policy, based on risk & impact.)

If your processes, tools, forms and model changes are shining beacons of efficiency, clear simplicity and proportionate governance, then you likely have a training or cultural issue. Cultural issues are too complex for this article to deal with, but if you’ve studied frameworks such as ITIL, you’ll have an idea of how to sell the benefits of industry good practice to the people in charge, and you also have the option to create your own culture within Service Transition.

Training is an easier topic. Work out your most important message, stick to it and keep repeating it. Training problems will keep re-appearing as new staff arrive and as people forget, so keep your training materials handy and up to date. New staff induction training is one area to consider – you’ll get them whilst they’re still excited about their new job and keen to please. If this is impractical, then mandatory change training can be given before new users are allowed to raise change requests in the system. Weekly email tips and reminders are something else I’ve found useful in some situations.

If they still don’t get it

I’m often asked what to do about repeat offenders. An important message is that you are not here to knock down their changes or waste their time; you want to show them how they can create changes which can be processed quickly and efficiently.

I’ve seen HR policies and public shaming used to identify & punish people not following process. But except in cases of gross negligence, threatening to tell HR can have unpredictable results, and public shaming simply creates an unhealthy culture.

My graduated approach now is:

First Offence / Requests for help
Sit down with the requestor (geography permitting) and explain what needs to be improved and why. You can even help them (re)write it. It’s time well spent to show them the professional respect for their time that you’d want in return for the change process. If you can do this even before they submit their first change, so much the better. Prevention is, after all, better than cure.

A shot across the bows
Return it to the requestor with a short description of the gaps and a link to the process, policy and ‘gold standard’ example change. Offer to help if they’re struggling to articulate part of their change request; sometimes they might just need introducing to the relevant subject matter expert or someone who’s recently delivered a similar change.
Repeat this step at your discretion.

Defcon 3
Return it, copying the requestor’s line manager as a final warning and informing them that the next step will lead to:

Sanctions
Remove the requestor’s ability to raise changes in the system via access control. It can only be reinstated by someone senior enough to cause the requestor discomfort in having to ask them.

I’ve rarely had to apply sanctions. But if handled correctly and objectively, it’s a proportionate response which is within the power of most Change Managers as a last resort.

And finally…

The parking warden analogy I opened with tells only half the story. It bears repeating that this isn’t about stopping requestors or making their lives difficult, it’s ultimately about protecting the business and responding to their needs. To do that we need to be able to make good decisions based on accurate and complete change request information as efficiently as possible.

Perhaps a better metaphor would be that of an Air Traffic Controller. We want you to land safely, but if you can’t evidence that you know how, or if you list your destination as ‘Not Applicable’, then we’re not even going to let you start the engines, let alone get off the ground.
