

What is MTTR and How Does It Impact Your Bottom Line?

Mean time to repair (MTTR), sometimes referred to as mean time to resolution, is a popular DevOps and site reliability engineering (SRE) team metric. MTTR reflects the overall availability and recoverability of your IT assets or application workloads.

As modern organizations increasingly rely on software to run their businesses, a clear understanding of this metric helps DevOps and SRE teams meet their service-level agreements (SLAs) and maintain healthy infrastructure and services. Ideally, teams should aim to keep their MTTR low, so they spend less time fixing problems and customers enjoy the best possible solution or service. A high MTTR can lead to customer frustration, possibly igniting the fire that causes them to switch to a more reliable competitor.

If you’re interested in learning about what MTTR is, how it impacts your bottom line, and how to maintain yours with industry-best tips and tricks, you’re in the right place.

What is MTTR?

MTTR stands for mean time to repair. This incident management metric enables businesses to measure the average time needed to troubleshoot and repair IT systems’ problems. MTTR tells us how much time it takes to return to a healthy and stable system.

The acronym MTTR can cause some confusion since it has different meanings across different industries. Sometimes, MTTR refers to mean time to respond: the amount of time needed to react to a problem. However, in this article, we look at how long it takes to reach a resolution since this is the more commonly accepted definition that deeply impacts customer satisfaction.

In DevOps and SRE practices, MTTR typically measures IT environment availability and reliability. Given its focus on the repair process, it can also reflect the quality of the DevOps and SRE teams’ service rather than that of the IT systems themselves.

To calculate MTTR, use the following formula:

MTTR = Total Hours of Unplanned Maintenance Time ÷ Total Number of Repairs

Imagine you run an application workload, and it fails three times. The total time to fix those three issues is six hours. In this example, the MTTR is 6 ÷ 3 = 2 hours.
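As a quick illustration, here’s a minimal Python sketch that applies the same formula to a list of repair durations. The repair_hours values are hypothetical stand-ins for whatever your incident tracker records.

```python
# Minimal sketch: compute MTTR from a list of repair durations (in hours).
# The repair_hours values are hypothetical examples.

def mean_time_to_repair(repair_hours):
    """Return the average repair time, or 0.0 if there were no repairs."""
    if not repair_hours:
        return 0.0
    return sum(repair_hours) / len(repair_hours)

# Three failures took 1, 2, and 3 hours to fix: 6 total hours / 3 repairs.
repair_hours = [1.0, 2.0, 3.0]
print(f"MTTR: {mean_time_to_repair(repair_hours):.1f} hours")  # MTTR: 2.0 hours
```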

Let’s examine a more concrete example from the IT industry. When Hurricane Katrina hit the US Gulf Coast in 2005, Axcent Networks owned and operated most of the telephone landline infrastructure there. They had just completed a merger of two large network infrastructures and had not yet fully entered the merged network data into their systems. Although this wouldn’t have caused a significant issue during normal operations, it did during the hurricane and the months that followed.

Since the provider couldn’t use regular routes, they went into network discovery mode. This slowed overall response and resolution times (MTTR). It also drove up running costs, because the operator couldn’t meet their SLAs with partners and customers, and it added severe expenses for re-inventorying all of the network information plus steep fees for quick break-and-fix work during the first few days after the hurricane.

MTTR eventually stabilized several months after the hurricane, when Axcent implemented brand new landline cabling infrastructure in the area. The new equipment provided a more stable network experience and increased availability, leading to far fewer outages and therefore far fewer repairs.

Causes and Consequences

MTTR starts when you detect a failure and stops when you repair the issue, returning the impacted workload to a running state. This time typically encompasses diagnosis, troubleshooting, developing a solution, and implementing it.

Causes of a high MTTR may include:

  • A major system failure
  • A delayed realization that there is an issue
  • A delayed or incorrect issue diagnosis
  • Lack of parts, knowledge, or expertise to fix the issue
  • Building a high-quality, long-lasting resolution

Causes of a low MTTR may include:

  • A minor issue
  • Quick notifications and response
  • Fast and accurate issue diagnosis
  • Access to parts, knowledge, and expertise to fix the issue
  • Implementing a solution that is quick but not long-lasting

You can interpret a low or high MTTR in different ways. A high MTTR sounds bad, but if it’s only calculated for large-scale incidents or complex outages, it may be reasonable. A longer MTTR could be a sign of poor response time, but it could also be a matter of outage severity. A severe outage may still have a long MTTR even if your responders start working on a solution immediately.

Also, consider the relationship between response time and resolution time. A fast response time is helpful, but if it takes the responders a long time to troubleshoot and develop a repair strategy, the overall MTTR is still high.

When the MTTR is too high, it can harm your company’s reputation as customers wait for you to fix issues and word spreads of your long downtime. High MTTR can also become expensive when you must compensate customers for SLA violations.

A short MTTR means that teams pick up any incident causing failures or outages fairly quickly and fix it within an acceptable time. The MTTR may be so short that users don’t even notice the outage; however, it still affects the application or system’s overall quality and stability. Although a low MTTR can be good, it could be a sign of a quick or temporary fix. There’s no guarantee the impacted system or workload will be more stable after the repair, just that the initial issue has been resolved.

It’s not good practice to look at MTTR as the only relevant incident metric. Doing so can lead you to make the wrong decisions. When the MTTR is vital to your overall business operations, you should also consider the mean time between failures (MTBF).

If you have a short MTTR, meaning you’re fixing issues quickly, but you have frequent outages (low MTBF), this low MTBF still harms your business. You gain a reputation for being unreliable. Existing customers complain, potential new customers won’t consider your products or services, and it becomes challenging to remedy your reputation. If you have a long MTBF, meaning infrequent outages, customers tend to be more forgiving of resolution time.
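To make that interplay concrete, here’s a minimal Python sketch that computes MTTR and MTBF from the same hypothetical incident log. The downtime figures and the one-month observation window are assumptions for illustration only.

```python
# Minimal sketch: MTTR and MTBF from the same hypothetical incident log.
# The downtime figures and the observation window are assumed for illustration.

OBSERVATION_WINDOW_HOURS = 30 * 24  # one month of operation

downtime_hours = [0.5, 0.75, 1.0, 0.25]  # four outages, each fixed quickly

total_downtime = sum(downtime_hours)
failures = len(downtime_hours)

mttr = total_downtime / failures                               # average repair time
mtbf = (OBSERVATION_WINDOW_HOURS - total_downtime) / failures  # average uptime between failures

print(f"MTTR: {mttr:.2f} hours, MTBF: {mtbf:.1f} hours")
# Fast repairs (low MTTR) can still mean frequent interruptions if MTBF is low.
```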

How to Improve MTTR

As you release software deployments more frequently, performance and reliability issues are likely to increase. Reducing MTTR is more important than ever. Let’s take a look at some of the ways to improve this metric.

Extensive Monitoring

Having an adequate monitoring solution in place is one way to improve MTTR. After all, you can’t start fixing a problem if you don’t know it exists. By having a continuous stream of real-time data about your system’s performance, you’ll know immediately when something goes wrong. Rely on metrics and logs, and also integrate tracing into your monitoring practice. If equipment or applications fail, the team can often quickly determine the incident’s root cause through observability, as they use the system’s external outputs to understand its internal state.

Imagine customers encounter a bug when they attempt to log in to your service using a mobile device. With proper monitoring in place, your team already has detailed logs of all failed login attempts and alerts for unexpected issues before customers complain. Detailed logs can tell you if all users have this problem or only certain users, when this issue started, and if it shares any characteristics with previous problems. Having this information handy helps reduce your MTTR because you don’t have to spend time searching for it, and it helps you trace the issue.
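As a sketch of what such monitoring might look like, the snippet below counts failed mobile logins in a structured log and raises an alert past a threshold. The JSON log format, field names, threshold, and notification hook are all assumptions, not any specific product’s API.

```python
# Minimal sketch: alert when failed mobile logins spike.
# The log format, field names, threshold, and notifier are hypothetical.
import json

FAILED_LOGIN_THRESHOLD = 50  # assumed: alert past 50 failures per window

def count_failed_mobile_logins(log_lines):
    """Count failed login events that originated from mobile clients."""
    failures = 0
    for line in log_lines:
        event = json.loads(line)
        if event.get("event") == "login_failed" and event.get("client") == "mobile":
            failures += 1
    return failures

def check_and_alert(log_lines, notify):
    """Run one monitoring pass and notify if the threshold is exceeded."""
    failures = count_failed_mobile_logins(log_lines)
    if failures > FAILED_LOGIN_THRESHOLD:
        notify(f"{failures} failed mobile logins in the last window")

# Example usage with a stand-in notifier:
# with open("auth.log") as logs:
#     check_and_alert(logs, notify=print)
```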

Without proper monitoring, you probably lack critical information. Your team must rely on unreliable accounts of the user experience or troubleshoot issues manually. The problem’s root cause could be larger than just mobile devices. Maybe there are connectivity issues over a cellular connection but not over a strong Wi-Fi or wired connection. In that case, apart from the failed logins, teams should also monitor all the other components that make up the application stack.

Without observability, troubleshooting mobile device login issues is challenging. If you lack a clear view of the network layer, how can you identify the incident’s root cause? Monitoring is a key component of optimizing MTTR for your workloads.

Automated Incident Management

Having automation on your side is one of the most effective ways to shorten resolution time and ensure that the process is as smooth as possible. Automation can appear in many places in the resolution process, from incident recognition to alerting and communicating with resolvers, and in some cases it can simplify resolution entirely with automated deployment rollbacks. xMatters Flow Designer is one of many innovative tools that helps businesses include automation from the start of their incident resolution processes. Users can create workflows that help with, or fully handle, any part of the resolution process, with simple drag-and-drop functionality that makes automation easy and accessible for everyone.
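As a rough illustration of the idea (not the xMatters Flow Designer API), here’s a minimal Python sketch of an automated workflow that notifies the on-call responder and triggers a rollback when an alert points to a bad deploy. The alert payload fields, notifier, and rollback step are hypothetical placeholders.

```python
# Generic sketch of an automated incident workflow: notify on-call, then
# roll back if the alert points to a bad deploy. All fields and helpers
# are hypothetical placeholders, not a specific product's API.

def notify_on_call(message):
    # Placeholder: swap in your paging or notification integration.
    print(f"[page on-call] {message}")

def rollback(service):
    # Placeholder: replace with your deployment tooling's rollback command.
    print(f"[rollback] reverting latest deploy of {service}")

def handle_alert(alert):
    notify_on_call(f"{alert['service']}: {alert['summary']}")
    if alert.get("cause") == "bad_deploy":
        rollback(alert["service"])

# Example usage with a hypothetical alert payload:
handle_alert({"service": "checkout", "summary": "error rate spike", "cause": "bad_deploy"})
```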

Post-Incident Review

Once an incident is resolved, a postmortem is a critical part of truly completing the resolution process and setting yourself up for future success. Postmortems can be uncomfortable; nobody likes to be in the hot seat. But if everyone on the team can resist the urge to point fingers and instead look at the situation neutrally, you can reduce the impact and likelihood of a recurrence. Each postmortem is different from the next, but the general agenda is the same. To learn how to run effective and painless postmortems, see the xMatters blog post Best Practices for Painless Incident Postmortems.

Next Steps

Maintaining a healthy MTTR is essential for maintaining a healthy bottom line. Keeping your MTTR low ensures that you meet your SLAs and continue to provide your customers the best service possible. When your MTTR begins to creep upward, though, you run the risk of frustrating customers, who can quickly turn to your competition and leave. You may even risk further financial losses if you need to compensate customers for SLA violations.

However, it’s also important to keep in mind that although MTTR is an important metric, so too are the mean time between failures (MTBF) and mean time to failure (MTTF). Improving just one of these metrics is of limited use because they all interact. Quick resolutions are great, but if your systems constantly fail, customers may become frustrated with frequent interruptions. They may be more forgiving of a long resolution time if it is a rare occurrence. For the best user experience, you should aim to improve all three metrics in unison.

Good incident management, including detailed postmortems, helps you improve all three metrics. You can identify which components fail frequently and replace them proactively, keep backups in place for unrepairable systems, and have the information you need on hand for a quick resolution.

Now that you know more about MTTR, you can start improving this metric for more reliable services and happier customers. To learn more about workflow automation to shorten your MTTR, request an xMatters demo today!
