What Should Your Network Performance Monitor Do? (With AppNeta)
The world of networks isn’t static; it evolves constantly as new standards are laid down. With innovations such as the Internet of Things, BYOD, and the upcoming Wi-Fi 6, network technology continues to grow. Still, one idea remains the same: networks need to perform effectively. To keep that goal on track, network performance monitors (NPMs) let IT teams see how their network is doing in real time.
With so many NPMs on the market, however, how do you know if an NPM is worth the time and money to implement? What should any NPM be capable of, and how can you distinguish a good NPM from a great one?
We spoke with Matt Stevens, CEO and co-founder of NPM provider AppNeta, about what NPMs should do for you, how they can adapt as network technology grows, and the shortcomings they still need to address.
At its most basic level, what should any NPM be able to do? In other words, what’s the baseline for any NPM solution provider to reach?
MS: Any NPM solution worth its salt should be able to arm IT teams with the data they need to ensure the best end-user experience possible for the business-critical apps employees need most from the locations where they do their jobs.
In the modern enterprise, most knowledge workers sit in remote offices, where the majority of NPM tools are blind, robbing centralized IT of the local perspective it needs to ensure app performance end to end. Being able to tie the end-user experience to the underlying end-to-end network is crucial, since organizations no longer own all of the pathways their traffic crosses. IT needs monitoring tools that can see across all network pathways and past the old network borders.
What are some NPM functions or features that currently aren’t standard, but that should be included in every basic NPM?
MS: A comprehensive solution should collect and analyze four dimensions of network and app data to deliver a complete picture. This includes data related to network paths, flows, packets, and synthetic web/URL information to ensure all performance metrics are delivering a complete end-to-end picture of performance.
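To make the synthetic web/URL dimension concrete, here is a minimal sketch of what a scripted check might look like, assuming a simple timed HTTP GET with Python's standard library. The target URL and the threshold are hypothetical placeholders for illustration, not AppNeta's implementation.

```python
# Minimal sketch of a synthetic web check: time one HTTP GET and flag it
# if it exceeds a threshold. URL and threshold are illustrative only.
import time
import urllib.request

TARGET_URL = "https://example.com/"   # hypothetical app endpoint
SLOW_THRESHOLD_S = 2.0                # illustrative response-time budget

def synthetic_check(url: str) -> float:
    """Fetch the URL once and return the total response time in seconds."""
    start = time.monotonic()
    with urllib.request.urlopen(url, timeout=10) as resp:
        resp.read()                   # include transfer time, not just first byte
        status = resp.status
    elapsed = time.monotonic() - start
    print(f"{url} -> HTTP {status} in {elapsed:.3f}s")
    return elapsed

if __name__ == "__main__":
    if synthetic_check(TARGET_URL) > SLOW_THRESHOLD_S:
        print("Synthetic check exceeded threshold; flag for investigation.")
```

Run on a schedule from each office, checks like this give a rough user-facing view that the path, flow, and packet data can then explain.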
Many solutions, including most standard SD-WAN offerings, approach the problem from one end of the delivery network only, with little or no context on the route traffic takes along the way, including the ISPs and cloud vendors it traverses. Furthermore, these solutions often give centralized IT almost no insight into the local perspective of remote users, leaving IT blind to issues that may impact app delivery on a specific LAN.
Typical network metrics such as loss and jitter are usually driven by a lack of available end-to-end capacity, which few solutions can actually measure in a production environment. Even worse, these standard network performance KPIs are second-order conditions. Without knowing how much capacity the network is actually delivering, and comparing that against what the business’s apps need to function well, IT is left chasing its tail.
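To illustrate the “second-order” point, here is a minimal sketch, assuming probe round-trip times have already been collected, of how loss and jitter fall out of those samples rather than being measured directly. The sample data and the simplified jitter estimator are illustrative only.

```python
# Sketch: loss and jitter are derived (second-order) metrics computed from
# probe samples. Given round-trip times in ms (None = lost probe), compute
# loss percentage and a simple jitter estimate.
from statistics import mean
from typing import Optional, Sequence, Tuple

def loss_and_jitter(rtts_ms: Sequence[Optional[float]]) -> Tuple[float, float]:
    received = [r for r in rtts_ms if r is not None]
    loss_pct = 100.0 * (len(rtts_ms) - len(received)) / len(rtts_ms)
    # Jitter here = mean absolute difference between consecutive RTTs,
    # a common simplification of the RFC 3550 estimator.
    diffs = [abs(b - a) for a, b in zip(received, received[1:])]
    jitter_ms = mean(diffs) if diffs else 0.0
    return loss_pct, jitter_ms

# Example: 10 probes, one lost, RTTs climbing as available capacity shrinks.
samples = [20.1, 20.4, 21.0, None, 24.8, 30.2, 41.5, 55.0, 70.3, 88.1]
loss, jitter = loss_and_jitter(samples)
print(f"loss={loss:.1f}%  jitter={jitter:.1f} ms")
```

The numbers only become actionable once they are compared against the capacity the apps actually need, which is the point Stevens is making.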
Often, teams will employ tools that, for one reason or another, only deliver partial insight into all of these key areas (e.g., limiting views to devices on the LAN). This leaves IT with even more applications to manage, and a jumbled, incomplete view of the network when all is said and done.
Are there any performance metrics that users tend to ignore or pay little attention to that should be a higher priority?
MS: It’s not so much the case that IT is ignoring or undervaluing certain critical metrics, but rather that the NPM solutions many teams employ simply don’t make certain data available to customers. That’s because the vast majority of solutions don’t take the comprehensive, four-dimensional approach to network monitoring that’s necessary in the cloud era.
For instance, many solutions will only monitor the applications in use that are deemed “business critical” as opposed to all of the apps leveraging network capacity. This leaves IT teams blind to how non-essential tools are using capacity and the related impact those apps are having on the solutions teams rely on most.
Many NPM solutions simply fail to provide real context on how an app is performing in a given location. Does anyone care whether an app’s latency is 20 ms vs. 45 ms if the app is performing well overall? We think IT needs to start with the end-user experience of the app, providing business-level answers and insight rather than IT-centric metrics, while still keeping the fine-grained underlying metrics on hand to support further drill-down as needed.
When searching for an NPM solution, teams should be mindful of terms like “hop-by-hop” and “end-to-end,” as these indicate the solution monitors every step traffic takes across the WAN, capturing the precise detail IT really needs to ensure optimal performance. This deep insight, coupled with the ability to account for all apps, devices, and technology using the network, gives IT the clear picture it needs.
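For a rough sense of what hop-by-hop visibility means, the sketch below wraps the standard Unix traceroute command to list the routers between a monitoring point and a target. This is purely illustrative and is not how AppNeta (or any particular NPM) gathers path data; it simply shows the kind of per-hop detail the term implies.

```python
# Sketch: "hop-by-hop" means seeing each router along the path, not just
# endpoint-to-endpoint numbers. This wraps the Unix traceroute CLI for a
# quick look; a real NPM would probe paths continuously, not on demand.
import subprocess

def trace_path(target: str, max_hops: int = 30) -> list[str]:
    """Return traceroute's per-hop output lines for quick inspection."""
    result = subprocess.run(
        ["traceroute", "-m", str(max_hops), target],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.splitlines()

if __name__ == "__main__":
    for hop in trace_path("example.com"):
        print(hop)
```

Each hop line exposes where latency accumulates across ISP and cloud segments that the organization doesn’t own, which is exactly the blind spot single-ended tools leave.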
What can NPM users and solution providers do to adapt to new innovations in network technology in order to maintain network efficiency?
MS: Users can start by baselining current performance from where the users actually are (the remote office, for instance) for the apps most critical to the business, to understand the underlying network performance that drives that experience. Teams can then use those baselines to shape the policies that control and manage the new technology rollout, focusing on the biggest problems first and making sure they don’t break something by accident through a poorly considered policy. Finally, teams can confirm and optimize the results, finding and addressing gaps or new apps that emerge on the network so they stay current with users’ habits and application changes.
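As a rough sketch of that baselining step, assuming per-app latency samples have already been gathered from a remote office, the example below boils them down to simple percentile baselines that later policy changes can be checked against. The app names and numbers are illustrative only.

```python
# Sketch of baselining: summarize observed per-app latency from a remote
# office into percentile baselines for later comparison. Data is made up.
from statistics import quantiles

def baseline(samples_ms: list[float]) -> dict[str, float]:
    """Return median and 95th-percentile latency for one app/location."""
    cuts = quantiles(samples_ms, n=100)          # 99 percentile cut points
    return {"p50_ms": cuts[49], "p95_ms": cuts[94]}

observed = {
    # app name -> latency samples (ms) gathered from the remote office
    "crm":  [42, 45, 44, 47, 52, 43, 41, 90, 46, 44],
    "voip": [20, 22, 21, 19, 23, 25, 21, 20, 22, 24],
}

for app, samples in observed.items():
    b = baseline(samples)
    print(f"{app}: p50={b['p50_ms']:.0f} ms  p95={b['p95_ms']:.0f} ms")
```

Re-running the same summary after a rollout shows at a glance whether a new policy moved the needle or quietly broke something.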
Thanks to Matt for sharing his perspective!
Check us out on Twitter for the latest in NetMon news and developments!