The Age of Virtualization: Cybersecurity Strategy Evolved

Solutions Review’s Contributed Content Series is a collection of contributed articles written by thought leaders in enterprise software categories. Scott Aken of Axellio talks cybersecurity strategy and network traffic analysis in the age of virtualization.

The virtualization strategies that have swept most other sectors of the IT industry over the past couple of decades are still largely missing from traffic monitoring and analysis systems. The processing power that analyzes network traffic and generates monitoring insight is still based on a heavy-iron approach: live traffic is processed by pricey, complicated applications running on often proprietary hardware. Even in the hardware-centric networking industry, switches and routers are being virtualized, yet many network analysis systems still depend on proprietary appliances.

But times are changing. Next-generation network visibility platforms now make it possible to virtualize traffic analysis applications, opening the door to a new era of software-defined analysis for network and cybersecurity operations. The security industry needs more flexible analysis, deeper insight, and, most importantly, a more affordable and scalable strategy, and virtualizing traffic analysis is helping to make this a reality.

Traffic Analysis in the Age of Virtualization


Present Traffic Analysis is Hardware-Heavy

Today, monitoring network traffic requires a lot of hardware as network speeds and volumes continue to increase. Scaling across the many delivery platforms and locations used by any enterprise makes it even more challenging. With the sophistication and frequency of attacks increasing, organizations need better visibility into all traffic, not just at select points in the network. And with attackers exploring networks for months before the actual attack happens, visibility into internal network activity is becoming increasingly important, not just for detection but also for determining the impacted resources (some call this the “blast radius”) and for swift incident response.

Given the varied analysis goals across network and security operations, most organizations juggle many self-contained solutions, each capturing and analyzing the same traffic on its own proprietary hardware for a different purpose. This model produces narrow “stovepipe” solutions that require duplicating the traffic to multiple analysis applications, limiting interoperability and the exchange of vital data. Beyond hindering effective collaboration between network and security operations, it further increases the need for hardware to process and store the growing amount of traffic. These systems often rely on custom NICs to capture traffic, and on custom analysis engines running on FPGAs or across several CPUs to keep up with incoming traffic in real time, store the findings, and then discard the original traffic.

This is one of the reasons many organizations have turned to monitoring the logs, alarms, and flow data generated by network elements and endpoint devices, reducing the need for traffic analysis. Because this metadata creates far less volume than the actual network traffic, it can be processed by applications such as SIEMs and SOARs, which are more easily virtualized. The primary drawback of relying on metadata is that it only provides indicators of attacks or emerging issues; it cannot determine an event’s severity. That requires a view into the network traffic itself, which is rarely, if ever, available.
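To make the distinction concrete, here is a minimal, hypothetical sketch contrasting what flow metadata can tell an analyst with what the packet payload reveals. The record layouts, field names, and triage rules are illustrative assumptions, not any vendor’s schema.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    """NetFlow/IPFIX-style metadata: endpoints and counters only (illustrative fields)."""
    src: str
    dst: str
    dst_port: int
    bytes_sent: int

@dataclass
class Packet:
    """A fully captured packet, payload included (illustrative fields)."""
    src: str
    dst: str
    dst_port: int
    payload: bytes

def triage_from_metadata(flow: FlowRecord) -> str:
    # Metadata can flag an indicator (unusual volume to a rare port)...
    if flow.dst_port not in (80, 443) and flow.bytes_sent > 10_000_000:
        return "suspicious - severity unknown"
    return "no indicator"

def triage_from_packet(pkt: Packet) -> str:
    # ...but only the payload shows what actually left the network.
    if b"BEGIN RSA PRIVATE KEY" in pkt.payload:
        return "critical - key material exfiltrated"
    return "benign"

flow = FlowRecord("10.0.0.5", "203.0.113.9", 4444, 25_000_000)
pkt = Packet("10.0.0.5", "203.0.113.9", 4444, b"-----BEGIN RSA PRIVATE KEY-----...")
print(triage_from_metadata(flow))   # indicator only
print(triage_from_packet(pkt))      # severity determined from the traffic itself
```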

How network and security operations can build a scalable, economical system to decrypt and analyze growing network traffic has long been a topic of discussion. Server and storage infrastructure has been successfully virtualized for many years, and even in networking, a trend toward virtualizing switches, routers, and firewalls has taken hold over the last several years. But virtualizing traffic monitoring and analysis is still in its infancy. As a result, traffic monitoring solutions concentrate on key aggregation points, such as network ingress-egress locations, usually in the form of IDS or IPS systems or Network Detection and Response solutions. Due to the cost and complexity of those systems, internal core networks and other high-value internal infrastructure are often left unmonitored. Visibility in those areas is typically left to metadata supplied by network or endpoint devices, producing more abstracted information and flow data: more data, less insight.

Managing the Traffic Load

The IT industry learned some time ago that tapping network connections separately for each application was wasteful and created more failure points in the network. As a result, traffic aggregators and packet brokers evolved. These combine traffic from several network monitoring points, filter out unnecessary traffic, timestamp packets, duplicate and load-balance traffic for each tool, and decrypt the communication. However, filtering traffic to reduce the overall volume requires a lot of foresight and can increase the risk that an exploit goes unseen. Therefore, many organizations still want to analyze most of their traffic, and even extend this approach from the network edge to the internal core network, which is increasingly under attack as well.
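As a rough illustration of what a packet broker does, the sketch below aggregates packets from taps, filters and timestamps them, duplicates them to each tool group, and load-balances within a group. The class and function names are assumptions made for illustration, not a real product’s API, and decryption is omitted for brevity.

```python
import itertools
import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class Packet:
    tap: str            # which monitoring point the packet came from
    data: bytes
    ts: float = 0.0     # timestamp assigned by the broker

class PacketBroker:
    def __init__(self, keep: Callable[[Packet], bool],
                 tool_groups: dict[str, list[Callable[[Packet], None]]]):
        self.keep = keep                  # filter rule: drop traffic no tool needs
        self.tool_groups = tool_groups    # tool name -> instances to load-balance across
        self._rr = {name: itertools.cycle(range(len(instances)))
                    for name, instances in tool_groups.items()}

    def ingest(self, pkt: Packet) -> None:
        if not self.keep(pkt):            # filtering needs foresight: dropped traffic is gone for good
            return
        pkt.ts = time.time()              # timestamp on ingest
        for name, instances in self.tool_groups.items():
            idx = next(self._rr[name])    # round-robin load balancing within a tool group
            instances[idx](pkt)           # duplication: every tool group sees the packet

# Two IDS instances share the load; a single recorder receives everything that passes the filter.
ids = [lambda p: print(f"IDS-1 got packet at {p.ts:.3f}"),
       lambda p: print(f"IDS-2 got packet at {p.ts:.3f}")]
recorder = [lambda p: print(f"recorded {len(p.data)} bytes")]
broker = PacketBroker(keep=lambda p: p.tap != "backup-vlan",
                      tool_groups={"ids": ids, "recorder": recorder})
broker.ingest(Packet("core-tap", b"\x00" * 64))
```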

This has brought network traffic analysis in cybersecurity to a tipping point. We have reached a stage where we need to virtualize network traffic analysis to benefit from the scalability of virtualized environments and create economical solutions that expand visibility across the network. In essence, the traffic analysis architecture has to be updated: every traffic analysis tool available today captures traffic, analyzes it in real time, and retains only the results as metadata, and it is this pattern that drives the massive hardware requirements.

Taking Network Analysis in an Entirely Different Direction

Moving ahead, we must reconsider this strategy by centralizing traffic collection, decryption, and storage on a single platform and distributing traffic to analysis tools through software APIs. These distribution APIs are key because they enable virtualized analysis applications. They also control the traffic flow, adjusting the volume to the maximum that each analysis application can safely consume. In today’s approach, if an analysis application is overwhelmed by the traffic volume, traffic is dropped and remains unanalyzed, creating vulnerabilities. With this new approach, traffic delivery is slowed so that all traffic is analyzed, albeit with a slight delay, but no traffic is ever lost. This can also buy the time needed to spin up additional virtual analysis engines to handle the increase in traffic.
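A minimal sketch of that pull-based distribution model follows, assuming a simple in-memory buffer stands in for the platform’s packet store. Each virtual analysis engine requests only what it can safely consume, so a burst builds a backlog instead of dropping packets, and a growing backlog is the trigger to spin up another engine. All class, method, and threshold names here are illustrative assumptions.

```python
import queue
import threading
import time

class VisibilityPlatform:
    """Stand-in for a centralized capture-and-store platform (illustrative, not a real API)."""
    def __init__(self):
        self.buffer = queue.Queue()        # captured traffic waiting to be analyzed

    def capture(self, pkt: bytes) -> None:
        self.buffer.put(pkt)               # never dropped, only queued

    def next_batch(self, max_items: int) -> list[bytes]:
        # Pull-based API: the consumer declares how much it can handle right now.
        batch = []
        while len(batch) < max_items and not self.buffer.empty():
            batch.append(self.buffer.get())
        return batch

    def backlog(self) -> int:
        return self.buffer.qsize()         # used to decide when to add analysis engines

def analysis_engine(platform: VisibilityPlatform, rate_per_sec: int, stop: threading.Event) -> None:
    while not stop.is_set():
        for pkt in platform.next_batch(rate_per_sec):
            pass                           # placeholder for real inspection work
        time.sleep(1.0)

platform = VisibilityPlatform()
stop = threading.Event()
engines = [threading.Thread(target=analysis_engine, args=(platform, 1000, stop), daemon=True)]
engines[0].start()

for _ in range(5000):                      # a burst arrives faster than one engine can consume
    platform.capture(b"pkt")

if platform.backlog() > 2000:              # scale out instead of dropping traffic
    engines.append(threading.Thread(target=analysis_engine, args=(platform, 1000, stop), daemon=True))
    engines[-1].start()

time.sleep(3)
stop.set()
print("remaining backlog:", platform.backlog())   # the backlog drains; nothing was lost
```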

The additional advantage of storing traffic is that for any event, alarm, or finding, the original traffic is still available for pre- and post-event forensic analysis, which provides the insights needed to quickly and reliably address the issue.
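As a simple illustration of that forensic workflow, the hypothetical sketch below pulls stored packets from a window around an alert timestamp. The storage structure and query function are assumptions for illustration, not a specific product’s interface.

```python
from dataclasses import dataclass

@dataclass
class StoredPacket:
    ts: float     # capture timestamp (epoch seconds)
    src: str
    dst: str
    data: bytes

def packets_around(store: list[StoredPacket], event_ts: float,
                   before_s: float = 3600, after_s: float = 3600) -> list[StoredPacket]:
    """Return the full traffic from one hour before to one hour after the event."""
    lo, hi = event_ts - before_s, event_ts + after_s
    return [p for p in store if lo <= p.ts <= hi]

# An alert fires at t = 1_700_000_000; retrieve the surrounding traffic to see what the
# attacker touched before the alert and what happened afterwards.
store = [StoredPacket(1_700_000_000 + dt, "10.0.0.5", "203.0.113.9", b"...")
         for dt in (-4000, -120, 30, 5000)]
window = packets_around(store, 1_700_000_000)
print(len(window), "packets in the forensic window")   # -> 2
```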

This fundamentally changes the overall monitoring strategy. Analysis applications can now run in real time or near real time without losing traffic when they cannot keep up with demand, and without requiring large analysis and computing infrastructure. The additional benefit is that a thorough forensic examination of the actual network traffic around any incident is possible, providing more knowledge about pre- and post-event activity. This historical analysis is especially pertinent to cybersecurity, where new attack techniques may have been present for some time but gone unnoticed.

This method of further centralizing traffic collection, with the capacity to store and distribute traffic at extremely high rates, makes it possible to virtualize resource-intensive traffic analysis on less costly servers and eliminate expensive proprietary hardware.

The network visibility platform absorbs traffic spikes and surges, so designing for average traffic consumption becomes the norm, extending the lifespan of the existing monitoring and analysis infrastructure.
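A back-of-the-envelope sketch of that sizing logic, using purely illustrative numbers rather than measurements, shows how a platform provisioned for average consumption rides out a burst and then drains the backlog:

```python
avg_capacity_gbps = 4.0      # what the virtualized analysis engines are provisioned for (illustrative)
burst_rate_gbps = 9.0        # short-lived spike on the monitored links (illustrative)
burst_minutes = 20

# During the burst, the platform's storage absorbs the gap between arrival and analysis rates.
backlog_gb = (burst_rate_gbps - avg_capacity_gbps) * burst_minutes * 60 / 8
print(f"buffer needed to absorb the burst: ~{backlog_gb:.0f} GB")            # ~750 GB

# Once traffic drops back below the engines' capacity, the backlog is analyzed, just slightly late.
post_burst_rate_gbps = 2.0
drain_minutes = backlog_gb * 8 / (avg_capacity_gbps - post_burst_rate_gbps) / 60
print(f"backlog fully analyzed after another ~{drain_minutes:.0f} minutes")  # ~50 minutes
```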

By being able to access all packets around an incident, both before and after it occurs, the impacted organization has the context needed to assess its severity and priority. This analysis is critical given the speed at which attackers target networks today. Virtualizing this analysis to deliver software-defined traffic monitoring and analysis provides greater network visibility and more insight, while costing less time and money overall.
