When monitoring mission-critical applications, it’s easy to miss the big picture. The editors at Solutions review break down the key points on what true visibility in application performance monitoring should look like.
In our increasingly digital, interconnected world, businesses run on their mission-critical applications. Business grinds to a halt if customers or employees can’t get the services they need, if core transactions can’t be processed, or if the user experience becomes sluggish or unreliable. Customers get frustrated. Deadlines get missed. Sales targets fall short. Stock prices can plunge. And in some cases, companies never recover.
The problem is that even the most conscientious IT leaders are dealing with a fundamental disconnect: the way conventional monitoring tools model a workflow or application experience can be very different from the real-world user experience. To get true visibility in monitoring mission-critical applications, we need to see the big picture.
What True Visibility in Monitoring Mission-Critical Applications Looks Like
A number of factors shape how users experience an application or transaction journey. Users may be working with different hardware and device types, different OS versions, or different security solutions, and each user journey takes a unique path with unique network characteristics. Add to that the diversity of skillsets and behaviors: you're monitoring users across the full spectrum of computer skill levels and understanding.
Some people are just bad at entering data and make frequent errors. Others get distracted or step away at critical points in a workflow, hanging up transactions. Some use dynamic login variables for multifactor authentication (MFA), or different OAuth tokens, in ways that can be extremely difficult for workflows to account for. In large enterprise applications, there's a good chance at least some aspects of a mission-critical workflow will interface with legacy systems of record: applications that are not HTTP-based and therefore beyond the reach of most monitoring and APM tools. You also need to contend with API dependencies on all sorts of internal and external services, as well as insertions (pixels, ads, tracking cookies, etc.) from a wide range of third-party providers. If your monitoring doesn't account for all those variables, it can't give you a realistic picture of performance over time.
So what does true visibility in monitoring mission-critical applications look like? What should it look like?
It Looks Like Full Active Application Monitoring
You should be able to measure and test against all aspects of the application, not just a subset. That includes third-party data, non-HTTP functionality, MFA/OAuth integration, and more. You should be able to test anywhere and everywhere—from any location to any cloud, remote data center, or on-premises system. And you should be able to conduct multiple checks continually so that you can view application health as an actual percentage of baseline, not just an aggregate.
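To make "health as a percentage of baseline" concrete, here is a minimal sketch of how continual checks might be rolled into a single health score. All names (`CheckResult`, `health_vs_baseline`) and the proportional-latency scoring rule are illustrative assumptions, not a description of any particular product's algorithm.

```python
from dataclasses import dataclass

@dataclass
class CheckResult:
    name: str          # which synthetic check ran (e.g. "login", "checkout")
    passed: bool       # did the check complete successfully?
    latency_ms: float  # observed response time for this run

def health_vs_baseline(results: list[CheckResult], baseline_latency_ms: float) -> float:
    """Score application health as a percentage of a known-good baseline.

    A check contributes fully if it passed and responded within the baseline
    latency; slower checks contribute proportionally less; failed checks
    contribute nothing. This is one simple scoring rule among many.
    """
    if not results:
        return 0.0
    score = 0.0
    for r in results:
        if not r.passed:
            continue  # a failed check adds zero to the health score
        # Cap each check's contribution at 100% of baseline performance.
        score += min(1.0, baseline_latency_ms / r.latency_ms)
    return 100.0 * score / len(results)
```

With a 300 ms baseline, a fast passing check, a slow passing check, and a failed check together yield a health figure well below 100%, which is far more informative than a pass/fail aggregate.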
It Looks Like Full Application Lifecycle Testing
If you're not using the same consistent testing methodology everywhere, you won't have consistent metrics to provide true service-level assurance. You should be able to test applications and workflows across the full software development lifecycle, ideally using the same tests in monitoring as you use in QA and load testing. Those tests should integrate into your continuous integration/continuous delivery (CI/CD) pipeline across development, blue/green deployment environments, QA, and support. And you need to be able to deploy tests quickly, scaling to simulate millions of users in minutes on an ad-hoc basis. If you have to test a new critical security patch for a live exploit, for example, you don't have hours to wait for results.
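One way to picture "the same tests everywhere" is to define the workflow check once and drive it from QA, load testing, and monitoring alike. The sketch below is a hypothetical illustration of that pattern; the function names and the three-step checkout workflow are invented for the example.

```python
import concurrent.futures
import time

def checkout_check(run_step) -> float:
    """One canonical checkout workflow check, timed end to end.

    `run_step` is injected so the identical check can be pointed at a QA
    environment, a blue/green deployment, or production, and all of them
    report directly comparable metrics.
    """
    t0 = time.perf_counter()
    for step in ("login", "add_to_cart", "pay"):
        run_step(step)  # execute one workflow step against the target
    return time.perf_counter() - t0

def load_test(n_users: int, run_step) -> list[float]:
    """Fan the same canonical check out across simulated concurrent users."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=n_users) as pool:
        return list(pool.map(lambda _: checkout_check(run_step), range(n_users)))
```

The same `checkout_check` could run once per minute as a monitor, once per commit in CI, or thousands of times concurrently as a load test, keeping the metric definition identical across all three.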
It Looks Like Full Ecosystem Integration
Your monitoring and assurance systems aren't beneficial if they live on an island. They should fully integrate with the broader ecosystem of tools and partners you use for visibility (e.g., Grafana, Splunk, Tableau, Amazon QuickSight), support (e.g., ServiceNow, PagerDuty, and others), and APM/NPM capabilities. For example, to support faster remediation times, you should be able to run a test, detect a problem in one part of a workflow, and open a ServiceNow ticket, including pulling detailed log data to support that ticket, as part of a single, automated process.
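As a sketch of what that check-to-ticket automation might look like: the `/api/now/table/incident` endpoint below is ServiceNow's standard Table API, but the helper names, field choices, and log-tail size are illustrative assumptions.

```python
import base64
import json
import urllib.request

def build_incident_payload(check_name: str, error_detail: str, log_lines: list[str]) -> dict:
    """Assemble a ServiceNow incident body from a failed synthetic check,
    attaching the recent log tail so the ticket arrives with evidence."""
    return {
        "short_description": f"Synthetic check failed: {check_name}",
        "description": error_detail,
        "work_notes": "\n".join(log_lines[-50:]),  # last 50 log lines
        "urgency": "2",
    }

def open_incident(instance_url: str, user: str, password: str, payload: dict) -> str:
    """POST the incident to the ServiceNow Table API and return the new
    record's sys_id. Credentials are basic-auth for simplicity here."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    req = urllib.request.Request(
        f"{instance_url}/api/now/table/incident",
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Accept": "application/json",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["result"]["sys_id"]
```

Chained after a failed check, `build_incident_payload` plus `open_incident` turn detection, evidence gathering, and ticket creation into the single automated process described above.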
It Looks Like Visibility Based on Real Business Metrics
Your monitoring and assurance should map to real business value, not abstract metrics that need to be translated and interpreted. That is, you should be able to dimension your data and add business logic on top of it, so you can understand how the performance of workflows (or lack of it) affects the overall business, not just technical functions. You should be able, for example, to calculate service loss/gain based on real dollar value. You should also be able to visualize key performance indicators (KPIs) as a percentage of overall service-level health. And you should be able to apply geographic- and department-level awareness to your real-time monitoring and reporting.
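Two of those business mappings can be sketched in a few lines. The linear revenue model and the weighted KPI rollup below are simplifying assumptions for illustration; real models would weight peak hours, regions, and departments.

```python
def service_loss_dollars(availability_pct: float, window_hours: float,
                         revenue_per_hour: float) -> float:
    """Translate a service-level shortfall into an estimated dollar loss.

    Assumes revenue scales linearly with availability over the window,
    which is a deliberate simplification.
    """
    shortfall = max(0.0, 100.0 - availability_pct) / 100.0
    return shortfall * window_hours * revenue_per_hour

def weighted_service_health(kpis: dict[str, tuple[float, float]]) -> float:
    """Roll individual KPI scores (0-100) into one overall service-level
    health percentage, using business-assigned weights per KPI."""
    total_weight = sum(weight for _, weight in kpis.values())
    if total_weight == 0:
        return 0.0
    return sum(score * weight for score, weight in kpis.values()) / total_weight
```

For example, 99.5% availability over a 24-hour window at $10,000/hour of revenue implies roughly $1,200 of service loss, a figure an executive can act on without translation.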
True visibility in mission-critical application monitoring looks like seeing the big picture. It looks like seeing the workflow through the eyes of the user, not through "one size fits all" metrics that don't account for nuances such as the age of the hardware being used, the skill level of the user, or even geographical location. True visibility looks like accounting for all variables and mapping performance to actual business metrics that can be easily understood and addressed.
Read the full report “Mission-Critical Applications through the Eyes of Your Users” from Apica for free.