
10 Tips for Better ITSM Performance Reporting and Metrics

Joe The IT Guy



IT service management (ITSM) is such a nuanced set of practices that you need the right support structure in place to better understand performance, outcomes, and improvement opportunities. Metrics might not be the most exciting subject, but the right reporting framework will help you to stay in control (and in your stakeholders’ good books).

No one automatically thinks about metrics at the beginning of a new endeavor, and all too often in ITSM we simply offer up whatever our ITSM tool can easily deliver. The issue with this tool-centric approach to metrics is that we get carried away with the wealth of available metrics. We get so excited by sparkly dashboards, balanced scorecards, and pivot tables that we lose focus. Then, instead of reporting on key business drivers and results, we produce bloated metric reports that take days to create and forever to get through in review meetings. And never mind whether people truly understand the content.

This blog looks at some key things to consider when looking at the overall health of your ITSM practices and outcomes, offering the first ten of twenty tips for better ITSM performance reporting and metrics.


1. Start at the beginning

Start with an appropriate set of goals or mission statement. Make it relevant and supportive of your organizational role and employed practices. Think of your goal(s) as a statement of intent – something that sets out what you want your practice to look like. Some examples include:

  • Change control (the ITIL 4 version of change management) – ensuring that all changes are deployed effectively, efficiently, and safely
  • IT asset management (ITAM) – managing, controlling, and protecting your IT asset estate
  • Incident management – resolving or mitigating all incidents as quickly as possible and with as little adverse business impact as possible.

A clear, concise, and on-message goal sets the tone for what you want to achieve and makes your process goals, and then the measurement mechanisms, transparent to all.

2. Make your CSFs count

Make sure your critical success factors (CSFs) directly translate to the mission statement and/or goals we’ve just talked about.

Taking change control as an example, you could have three critical success factors from the goal stated above, which will start breaking down your critical deliverables and outcomes into smaller chunks.

Example CSFs for change include:

  • Deliver change effectively
  • Deliver change efficiently
  • Deliver change safely.

When defining your CSFs, you’re documenting the factors that will support the desired outcome(s), the conditions needed to create the outcome(s), as well as the assets and capabilities needed to achieve the stated goals.

3. Make your KPIs manage business as usual (BAU)

Your key performance indicators (KPIs) are the next level down from CSFs in the metrics hierarchy, and they start getting into the granularity of measurements and metrics. KPIs focus on performance and can be used at an individual, team, and organizational level.

Some real-life examples of KPIs include:

  • 99.5% of all changes are closed as being successful (linking back to the effectiveness CSF)
  • 95% of changes are delivered on time (linking back to the efficiency CSF)
  • Less than 2% of major incidents are caused by change (linking back to the “deliver change safely” CSF mentioned above)

Make sure that all your KPIs translate directly to their related CSFs so you can see a clear line of progression. The idea is that you should be able to trace a path from lower-level metrics, through KPIs and CSFs, up to the mission statement.
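
To make that traceability concrete, here’s a minimal sketch in Python of how the change KPIs above could be calculated and rolled up under their CSFs. The change records, field names, and targets are illustrative assumptions rather than any particular tool’s data model.

from dataclasses import dataclass

@dataclass
class ChangeRecord:
    # Illustrative fields only - not any particular ITSM tool's schema
    closed_successfully: bool
    delivered_on_time: bool
    caused_major_incident: bool

# A hypothetical month of change records
changes = [
    ChangeRecord(True, True, False),
    ChangeRecord(True, False, False),
    ChangeRecord(True, True, False),
    ChangeRecord(False, True, True),
]

def pct(count, total):
    return 100.0 * count / total if total else 0.0

# KPIs grouped under their CSFs; targets mirror the examples above
kpis = {
    "Deliver change effectively": (pct(sum(c.closed_successfully for c in changes), len(changes)), ">=", 99.5),
    "Deliver change efficiently": (pct(sum(c.delivered_on_time for c in changes), len(changes)), ">=", 95.0),
    "Deliver change safely": (pct(sum(c.caused_major_incident for c in changes), len(changes)), "<", 2.0),
}

for csf, (value, op, target) in kpis.items():
    met = value >= target if op == ">=" else value < target
    print(f"{csf}: {value:.1f}% (target {op} {target}%) -> {'met' if met else 'missed'}")

However you actually produce the numbers, the point stands: every figure in the report should answer to a CSF, and every CSF to the mission statement.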


4. Make your SLAs work for everyone

We’ve all heard about or seen “watermelon” service level agreements (SLAs) – ones where everything performance-measurement-wise is just about green, but as soon as you scratch the surface there’s red everywhere.

Poorly crafted SLAs are bad news for everyone so make sure you consider both the service and the multiple stakeholders involved when drafting SLAs and targets so that you capture everyone’s requirements.

When drafting your SLA, look at which metrics can help you report an accurate state of affairs to the business. Things to look at include:

  • What will the service look like day-to-day? What are the expected levels of availability (uptime) and capacity (performance)?
  • How will availability be measured? And uptime during business hours? Have these business hours been defined? Is it 9-to-5, or is it more likely to be 24×7? Really understand what is needed before you add it to the SLA! (There’s a simple uptime calculation sketch after this list.)
  • How will performance be measured? What does good look like? What about transaction times or the time it takes for a screen to load? Sit down with everyone involved and ensure that everyone agrees on a measure and a target before it’s captured in the SLA.
  • How will maintenance windows be managed? And if the business requests that IT skips a previously-agreed maintenance window can the SLA be relaxed such that it can be rescheduled without incurring any fines or penalties?
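
On the availability point above, here’s a minimal sketch in Python of one way uptime could be measured against defined business hours. The 9-to-5, Monday-to-Friday window, the reporting period, and the outage times are all assumptions for illustration; your agreed service hours and monitoring data will differ.

from datetime import datetime, timedelta

# Assumed service hours: 09:00-17:00, Monday to Friday
BUSINESS_START, BUSINESS_END = 9, 17

def business_seconds(start, end):
    # Count the seconds between start and end that fall inside business hours
    # (a minute-level approximation is fine for a sketch)
    total, current = 0.0, start
    while current < end:
        step_end = min(end, current + timedelta(minutes=1))
        if current.weekday() < 5 and BUSINESS_START <= current.hour < BUSINESS_END:
            total += (step_end - current).total_seconds()
        current = step_end
    return total

# Hypothetical reporting period and outages (start, end)
period_start = datetime(2024, 6, 3)
period_end = datetime(2024, 6, 10)
outages = [
    (datetime(2024, 6, 4, 10, 0), datetime(2024, 6, 4, 10, 45)),  # during business hours
    (datetime(2024, 6, 8, 2, 0), datetime(2024, 6, 8, 6, 0)),     # weekend - outside the window
]

agreed = business_seconds(period_start, period_end)
lost = sum(business_seconds(s, e) for s, e in outages)
print(f"Availability during business hours: {100.0 * (agreed - lost) / agreed:.2f}%")

Agreeing the window first matters more than the calculation: a four-hour outage at the weekend costs nothing against a 9-to-5, Monday-to-Friday SLA but would dent a 24×7 one.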

5. Ensure your OLAs support your SLAs

Your SLA is only as strong as the operational level agreements (OLAs) that underpin it. Confused? I’ll walk you through it.

It’s all very well having an SLA that states that all P1 incidents will be resolved in under 4 hours, but what happens if the OLAs say something different? What if the OLA response time for the P1 is 30 minutes, the fix takes 4 hours, and then testing and validation checks take a further 30 minutes? Then factor in the time it takes the service desk to contact the affected users and confirm that all is well – and you’re easily 90 minutes or more over your SLA without even trying.

When creating SLAs, make sure that you have the correct OLAs in place such that you can actually deliver the agreed level of service. Try to add some extra cushioning – but don’t take advantage – in timelines for handoffs and for the service desk to check in with end users.
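
Here’s a minimal sketch in Python of that sanity check: total the worst-case OLA path for a P1 and compare it with the SLA resolution target. The stage durations, including a nominal 30 minutes for confirming with affected users, are assumed for illustration.

from datetime import timedelta

# Assumed OLA stage targets for a P1 incident (illustrative values only)
ola_stages = {
    "respond": timedelta(minutes=30),
    "fix": timedelta(hours=4),
    "test and validate": timedelta(minutes=30),
    "confirm with affected users": timedelta(minutes=30),
}
sla_resolution_target = timedelta(hours=4)

worst_case = sum(ola_stages.values(), timedelta())
print(f"Worst-case OLA path: {worst_case} vs SLA target: {sla_resolution_target}")
if worst_case > sla_resolution_target:
    print(f"These OLAs cannot support the SLA - built-in overrun of {worst_case - sla_resolution_target}.")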

6. Don’t forget your UCs to underpin both SLAs and OLAs

We’ve all seen incidents which look to be on course only for things to go sideways as soon as external parties need to get involved. Underpinning contracts (UCs) are the versions of SLAs that exist between different companies, for example between an IT organization and its telco vendor.

UCs have to be agreed by both parties and, as with OLAs, they need to be reviewed to ensure the third-party timelines will support the SLA that has been agreed with the business.

7. Use XLAs to represent the customer experience (CX)

We’ve all probably been to service reviews where technically the SLAs have been met but no one is feeling particularly good about the overall experience. As a frequent review meeting attendee, I can tell you that no one is comfortable when the SLA has been met but the actual end-user experience has tanked.

To prevent this (or at least minimize it), introduce eXperience level agreements (XLAs) into your reporting mix to capture the customer experience. This will demonstrate that you’re committed to both the customer and the end-user experience.

XLAs can be used to capture experience-related performance information such as the following (there’s a simple scoring sketch after this list):

  • A consistent experience across all service desk platforms – self-help, chat, email, or over the phone
  • Service desk analyst approach – how was the issue dealt with? And was the issue understood from a business perspective?
  • Clear, concise, and professional communication and updates
  • The knowledge level of support staff.
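
As a simple illustration of the first point, here’s a minimal sketch in Python that checks whether the experience is consistent across service desk channels. The ratings, the 4.0 target, and the 0.5 allowable gap between channels are all assumptions; real XLAs would be built on whatever experience data you actually collect.

from statistics import mean

# Hypothetical post-interaction experience ratings (1-5) by channel
ratings_by_channel = {
    "self-help": [4, 5, 3, 4],
    "chat": [5, 4, 4, 5],
    "email": [3, 2, 4, 3],
    "phone": [5, 5, 4, 4],
}
xla_target = 4.0       # assumed minimum average rating per channel
max_channel_gap = 0.5  # assumed limit on variation between channels

averages = {channel: mean(scores) for channel, scores in ratings_by_channel.items()}
for channel, avg in averages.items():
    print(f"{channel}: average {avg:.2f} ({'OK' if avg >= xla_target else 'below target'})")

gap = max(averages.values()) - min(averages.values())
print(f"Cross-channel gap: {gap:.2f} -> {'consistent' if gap <= max_channel_gap else 'inconsistent'} experience")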

8. Compliance matters – so ensure that it’s measured

Being transparent is important and, depending on your industry, having compliance metrics may even be a legal requirement.

Common compliance functions include risk management, internal audit, compliance training, and policy enforcement. Here, compliance measurements can act as important leading indicators of potential risk. Example compliance metrics include the following (there’s a small calculation sketch after this list):

  • Number of compliance items open
  • Number of compliance items closed
  • Number of outstanding audit issues
  • Percentage of internal audits completed on time
  • Percentage of internal audits completed in scope.
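
Here’s a minimal sketch in Python of how those example metrics could be produced from a compliance register and an audit log. The record structures and sample values are illustrative assumptions, not a real governance tool’s data model.

# Hypothetical compliance register, audit-issue log, and audit results
# (field names and values are illustrative only)
compliance_items = [{"status": "open"}, {"status": "closed"}, {"status": "closed"}]
audit_issues = [{"status": "open"}, {"status": "open"}, {"status": "closed"}]
audits = [
    {"on_time": True, "in_scope": True},
    {"on_time": False, "in_scope": True},
    {"on_time": True, "in_scope": False},
    {"on_time": True, "in_scope": True},
]

metrics = {
    "Number of compliance items open": sum(i["status"] == "open" for i in compliance_items),
    "Number of compliance items closed": sum(i["status"] == "closed" for i in compliance_items),
    "Number of outstanding audit issues": sum(i["status"] == "open" for i in audit_issues),
    "% of internal audits completed on time": 100.0 * sum(a["on_time"] for a in audits) / len(audits),
    "% of internal audits completed in scope": 100.0 * sum(a["in_scope"] for a in audits) / len(audits),
}

for name, value in metrics.items():
    print(f"{name}: {value:.1f}%" if name.startswith("%") else f"{name}: {value}")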

9. Keep improving

Build continual improvement into your metric portfolio. It demonstrates a commitment to quality and helps ensure that the overall levels of service and performance will improve over time.


Keep it simple to start with. For example, agree to put an improvement register in place and to add a certain number of improvement suggestions each month.
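
To show how little is needed to get started, here’s a minimal sketch in Python of an improvement register with a monthly intake check. The entries and the target of two suggestions per month are assumptions for illustration.

from collections import Counter
from datetime import date

# Hypothetical improvement register: (date raised, short description)
improvement_register = [
    (date(2024, 5, 14), "Automate password-reset requests"),
    (date(2024, 6, 3), "Add knowledge articles for the top ten incident types"),
    (date(2024, 6, 21), "Review change lead times with the network team"),
]
monthly_target = 2  # assumed number of new suggestions expected per month

per_month = Counter(raised.strftime("%Y-%m") for raised, _ in improvement_register)
for month, count in sorted(per_month.items()):
    print(f"{month}: {count} suggestion(s) logged ({'on target' if count >= monthly_target else 'below target'})")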

10. Tailor your performance reports to your audience

Don’t have a one-size-fits-all report. Instead, share different metrics with different stakeholders.

You wouldn’t make your service desk sit through a two-hour presentation on the financial health of the company. Nor should you expect your end user community to sit through multiple hours of technical reviews.

Understand what data each stakeholder needs and then tailor your approach accordingly. Don’t include reports or data just for the sake of it. The golden rule? If you’re not sure, then ask the person you’re reporting or presenting to. If your reports aren’t adding value, or supporting that person in getting their job done, then it’s time to rethink your reporting offering.

What tips would you offer to help people when pulling together and sharing measurements and metrics? Please let me know in the comments. 



About the Author

Joe The IT Guy

Native New Yorker. Loves everything IT-related (and hugs). Passionate blogger and Twitter addict. Oh…and resident IT Guy at SysAid Technologies (almost forgot the day job!).

