
Why Monitoring Flow Data Means You’re Only Seeing Half the Picture


As part of Solutions Review’s Premium Content Series, a collection of contributed columns written by industry experts in maturing software categories, David Ratner of HYAS illustrates why monitoring flow data means you’re only seeing half the picture, and how DNS telemetry can help you see the forest for the trees.

The role of visibility in detecting and addressing potential network problems (whether that be an attack or something more benign, such as a misconfiguration) is certainly not a new concept. In the past, network administrators and security teams often kept track of what was happening on their networks by using data from Cisco NetFlow or one of its many third-party variations (collectively referred to as “flow data” in this article).

However, over time, companies’ operating environments have decentralized and expanded exponentially, turning the ingestion and monitoring of flow data into a big-data problem. Spotting small anomalies within these reams of data is like finding a needle in a haystack, and it can also be prohibitively expensive, since the cost of querying samples of flow data quickly adds up depending on how much you request. In addition, companies have begun migrating their operations to the cloud in droves. Together, these developments have robbed administrators and security teams of the insight they once relied on to notice anomalies that could alert them to potential issues before they become costly problems.


Flow Data Is Only Half the Picture


In the Beginning, There Was NetFlow…

Cisco introduced NetFlow in 1996, and its chief purpose initially was to help network architects understand how their networks functioned. Over time, however, administrators began using it for security purposes: it gave them a reliable record of the IP, TCP, and UDP transactions occurring on their networks, letting them identify traffic patterns across their infrastructure. NetFlow became a de facto industry standard, and other router manufacturers began shipping it, or variations with similar features, on their products as well.
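To make the idea concrete, here is a minimal sketch, in Python, of the kind of per-conversation summary a flow record provides. The field names are illustrative; real NetFlow v5/v9 records carry additional fields such as interface indexes, TCP flags, and type of service.

```python
from dataclasses import dataclass

# A simplified, hypothetical flow record. Real NetFlow records carry more
# fields, but the core idea is the same: one summary row per observed
# conversation, not a copy of the packets themselves.
@dataclass
class FlowRecord:
    src_ip: str      # source address of the conversation
    dst_ip: str      # destination address
    src_port: int    # source transport port
    dst_port: int    # destination transport port
    protocol: str    # "TCP", "UDP", ...
    packets: int     # packets observed in this flow
    octets: int      # bytes observed in this flow (NetFlow calls them octets)
    start_ts: float  # flow start, epoch seconds
    end_ts: float    # flow end, epoch seconds

# One outbound HTTPS conversation, summarized as a single record.
print(FlowRecord("10.0.0.15", "203.0.113.7", 49152, 443,
                 "TCP", 42, 51_300, 1_700_000_000.0, 1_700_000_012.5))
```

Multiply one such row by every conversation on a busy network and the big-data problem described above follows directly.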

Unfortunately, the technology has not been able to grow in sync with the industry. There is simply too much information being collected to effectively monitor it in a way that lets administrators stop problems in a timely fashion. Also, as mentioned before, flow data was originally a tool for network architects to understand how traffic was flowing within their networks and alert them to changes. With many companies now operating entirely in the cloud, this use case for flow data has changed — further disincentivizing teams from paying close attention to it.

That said, ask any company whether they are capturing their flow data (or at least their DNS transactions) and they may respond with a confident “yes.” But is this data actually being looked at? And if it is, how effectively is it being used? In a contemporary network environment, if you are just querying large samples of flow data, you are not getting the whole picture, and you are spending a pretty penny for the privilege. You need a canary in the coal mine that will alert you when something strange is happening and tell you exactly where it is happening.

A Friend in DNS Telemetry

This is where DNS telemetry can complete the picture. By monitoring DNS traffic, you can be alerted to anomalies in real time. Using this information, you can then query your flow data to confirm whether a transaction actually took place. If no transaction took place, you can rest easy; if one did, you now know exactly which devices were involved and who they were communicating with. To borrow a phone-call metaphor: DNS monitoring tells you a call was placed, which points you to the flow data that provides the details, such as whether someone answered, who they talked to, and how long the call lasted. Essentially, DNS monitoring solves the big-data problem by helping security and DevOps teams zero in on the exact transactions that raise a red flag, rather than hoping to stumble over the needle in the haystack by querying and analyzing large samples of flow data.
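Here is a minimal sketch of that two-step workflow, assuming DNS query logs and flow records have already been collected into queryable form. Everything in it is illustrative: the field names, the KNOWN_DOMAINS baseline standing in for a real anomaly model, and the five-minute correlation window.

```python
from dataclasses import dataclass

@dataclass
class DnsEvent:
    client_ip: str  # device that issued the query
    domain: str     # name that was looked up
    answer_ip: str  # address the name resolved to
    ts: float       # query time, epoch seconds

@dataclass
class Flow:
    src_ip: str
    dst_ip: str
    dst_port: int
    octets: int
    ts: float

# Illustrative baseline: domains this environment is expected to contact.
# A real deployment would use reputation feeds or a learned model instead.
KNOWN_DOMAINS = {"example.com", "updates.example.net"}

def flag_anomalies(dns_events):
    """Step 1: DNS telemetry is the canary -- surface lookups of
    never-before-seen domains as they happen."""
    return [e for e in dns_events if e.domain not in KNOWN_DOMAINS]

def confirm_in_flow_data(alert, flows, window=300.0):
    """Step 2: query the flow data narrowly -- did the device actually
    connect to the address the suspicious name resolved to?"""
    return [f for f in flows
            if f.src_ip == alert.client_ip
            and f.dst_ip == alert.answer_ip
            and abs(f.ts - alert.ts) <= window]

dns_events = [DnsEvent("10.0.0.15", "badsite.test", "198.51.100.9", 1000.0)]
flows = [Flow("10.0.0.15", "198.51.100.9", 443, 20_480, 1004.2)]

for alert in flag_anomalies(dns_events):
    matches = confirm_in_flow_data(alert, flows)
    if matches:
        # The call was placed *and* answered: device, peer, and volume known.
        print(f"{alert.client_ip} contacted {alert.domain} "
              f"({alert.answer_ip}); {matches[0].octets} bytes transferred")
    else:
        print(f"Lookup of {alert.domain} never became a connection")
```

Note how the flow query is tiny and targeted: one client, one destination, one time window, instead of a broad sample of everything the network did.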

With these two solutions working in tandem, you can respond to incidents efficiently and in real time. This speed is vital: it lets you block malicious transactions before they have had a chance to cause any real damage. A threat that infiltrates your systems doesn’t make itself known immediately; it needs time to spread through the network and locate vital resources. If you can stop the attack before it enters those later stages, you’re cutting the fuse on a lit timebomb.
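One illustrative way to act on such an alert at machine speed, continuing the sketch above; the enforcement step here (printing an iptables rule) is a stand-in for whatever firewall, EDR, or DNS-filtering control plane you actually use.

```python
def block_destination(ip: str) -> None:
    """Containment sketch: in practice this would call your firewall or
    EDR API. Here we just emit the rule a Linux host might apply."""
    print(f"iptables -A OUTPUT -d {ip} -j DROP")

# Blocking the resolved address the moment DNS telemetry raises the flag,
# before follow-on connections complete, is what cuts the fuse.
block_destination("198.51.100.9")
```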

Flow data’s usefulness may have diminished in the age of big data, but when paired with DNS telemetry, it still plays a vital role in the modern security landscape. More importantly, the pairing restores the visibility that your DevOps and security teams have been lacking to fulfill their roles effectively. By cutting through the noise, you can take a more proactive posture against cyber threats and increase your organization’s overall resiliency, all while reducing your costs. And in the end, those savings are a drop in the bucket compared to the revenue you safeguard by averting a cyber-attack before it lands.

