
The State of Container Security and What to Expect in The Future

Kubernetes and containers give users a portable, fast, and flexible way to build and ship software. Containers run directly on the host OS kernel, eliminating many operational headaches. However, security issues often hold containers back from their full potential. Proper management and tooling can resolve this, and understanding new threats and vulnerabilities allows users to maintain consistent security. We strive to understand the state of container security for our readers, and that sometimes leads to a valuable interview.

To gain some key industry insights, we chatted with Kris Raney from Ixia Solutions Group, Keysight Technologies. Kris is a distinguished engineer who dives into new areas for Keysight, finding new tools and use cases to develop business strategies.

What responsibility do users have when it comes to securing containers?

The contents of the container and the behavior of the container are the two security-critical attributes that fall squarely within the responsibility of the user.

For the contents of the container, it’s incumbent upon the user to be careful about what containers are selected, and also to keep rolling forward to new versions to stay current with all the latest patches. Exploits have a shelf life from the time they’re discovered until the time they’re patched, and the lightweight nature of containers helps keep that shelf life short. To really take advantage of that, you need to build a bias for updates right into your automation.
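
To make that update bias concrete, here is a minimal sketch (not from the interview) of a pipeline step that re-pulls a pinned base tag and flags when the registry has published a newer build under it. It assumes the docker Python SDK and a reachable Docker daemon; the nginx:1.25 base image is a hypothetical example.

```python
# Minimal "update bias" sketch, assuming the docker Python SDK is installed
# and a Docker daemon is reachable. Image name and tag are hypothetical.
import docker
from docker.errors import ImageNotFound

IMAGE = "nginx"   # hypothetical base image your service builds on
TAG = "1.25"      # the tag your build currently tracks

client = docker.from_env()

# Local image ID your pipeline last built against (if present locally).
try:
    current = client.images.get(f"{IMAGE}:{TAG}").id
except ImageNotFound:
    current = None

# Pull the same tag again; registries re-publish patched builds under the same tag.
latest = client.images.pull(IMAGE, tag=TAG).id

if current != latest:
    # In a real pipeline this would trigger a rebuild/redeploy rather than just print.
    print(f"{IMAGE}:{TAG} has been updated upstream; rebuild needed")
else:
    print(f"{IMAGE}:{TAG} is current")
```

Run on a schedule or on every build, a check like this turns "stay current with patches" from a manual chore into a default behavior of the pipeline.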

When it comes to behavior, it’s network communication that is most critical, since it’s what provides a vector for one compromised component to infect others or to do damage. Users have a responsibility to apply a “least privilege” philosophy to limit the scope of what a component can do, and they have a responsibility to provide some transparency and visibility so that there can be oversight of the behaviors from outside the container.
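
One concrete way to apply “least privilege” to network behavior is a Kubernetes NetworkPolicy that restricts a pod to only the traffic it actually needs. The sketch below builds such a manifest in Python; the namespace, labels, and port are hypothetical placeholders, not something prescribed in the interview.

```python
# Least-privilege NetworkPolicy sketch: only pods labeled app=frontend may
# reach pods labeled app=orders, and only on TCP port 8080. The namespace,
# labels, and port are hypothetical placeholders.
import yaml  # PyYAML, assumed available

policy = {
    "apiVersion": "networking.k8s.io/v1",
    "kind": "NetworkPolicy",
    "metadata": {"name": "orders-least-privilege", "namespace": "shop"},
    "spec": {
        "podSelector": {"matchLabels": {"app": "orders"}},
        "policyTypes": ["Ingress"],
        "ingress": [
            {
                "from": [{"podSelector": {"matchLabels": {"app": "frontend"}}}],
                "ports": [{"protocol": "TCP", "port": 8080}],
            }
        ],
    },
}

# Write the manifest out so it can be reviewed and applied with kubectl.
print(yaml.safe_dump(policy, sort_keys=False))
```

Once applied, all other ingress to the selected pods is denied by default, which is the “limit the scope of what a component can do” half of the advice; separate visibility tooling then provides the oversight from outside the container.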

Is there anything currently being overlooked in container security?

The biggest omission I see today is the detrimental effect of false positives or overly restrictive policy. You want your security to be tight, of course, but you’ll never achieve perfection, and also rolling forward with new versions of the thing you’re protecting means that there will be subtle changes to what it needs over time. One possible outcome of mistakes or changes is that you leave something open that can be exploited. People put lots of thought into that.

The other possibility is you restrict something that is legitimate and necessary. The question becomes, how does this affect the behavior of the system? Unless you’re careful, it will manifest as inscrutable, hard-to-diagnose failures. You don’t see a “the security system blocked me” error. You just see a failure to communicate. It may seem intermittent since the specific situation security is interfering with may only come up under particular circumstances. This is especially true in a microservices architecture where you have zillions of tiny components interacting with each other. Your sincere attempt to keep security tight can turn into a debugging nightmare. It looks like a buggy app when it’s really an overzealous security layer.

How can developers stay secure while using a public repository?

First, you need to be confident in what you’re getting. And don’t just do these checks up front; they need to be built into your automation so that they keep happening as updates happen and things change.
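
One simple check that can run on every commit is verifying that manifests reference images by immutable digest rather than by a floating tag. The sketch below is illustrative only; the file paths and the policy itself are assumptions, not something Kris prescribed.

```python
# Rough CI sketch: fail the build if any image reference in the given
# Kubernetes manifests is not pinned to a sha256 digest.
import re
import sys

IMAGE_RE = re.compile(r"^\s*image:\s*(\S+)", re.MULTILINE)

def unpinned_images(manifest_text: str) -> list[str]:
    """Return image references that lack an @sha256: digest."""
    refs = IMAGE_RE.findall(manifest_text)
    return [ref for ref in refs if "@sha256:" not in ref]

if __name__ == "__main__":
    bad = []
    for path in sys.argv[1:]:
        with open(path) as f:
            bad += [(path, ref) for ref in unpinned_images(f.read())]
    for path, ref in bad:
        print(f"{path}: image not pinned by digest: {ref}")
    sys.exit(1 if bad else 0)
```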

Second, you should trust but verify. There have been occurrences throughout the history of the industry where, despite all the checks, a compromised executable made it into a “verified” payload.

You don’t want to think of your checks in binary terms. “This passed the checks,” dust off your hands, “done deal.” Instead, you should think of it in layers and probabilities. We verified the image, so that lowers the probability of compromise. Then we’re going to watch the behavior, and that lowers the probability some more. You never get it completely to zero. But the more things you do, the lower the probability gets.
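
As a back-of-the-envelope illustration of that layered thinking (the numbers below are invented for the sake of the arithmetic, not figures from the interview): if each layer catches most of what slips past the previous one, the residual probability shrinks multiplicatively but never reaches zero.

```python
# Illustrative only: assumed per-layer "miss rates" (the fraction of
# compromises a layer fails to catch). These numbers are invented.
layers = {
    "image source / signature verification": 0.10,
    "vulnerability scanning in CI": 0.30,
    "runtime behavior monitoring": 0.20,
}

residual = 1.0
for name, miss_rate in layers.items():
    residual *= miss_rate
    print(f"after {name}: residual probability ~ {residual:.3f}")

# Each layer lowers the probability of a compromise going unnoticed,
# but the product never reaches exactly zero.
```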

What new threats do you expect as containers grow in popularity?

One thing I expect to see is published, compromised container images. Effectively, a Trojan horse. This could be a deliberate act or just an honest mistake. But it’ll definitely happen from time to time.

The second thing I expect is techniques to weaponize innocent containers. An example of the concept is a DNS-based DDoS amplification attack: you spoof a very small request to a bunch of DNS servers, and each one sends a much larger response to the victim address you spoofed as the source. The DNS server becomes an unwitting party in the attack. The same concept applies to microservices: “If I make this request to the service, it causes it to spam the database.”

It’s a specific case of a general class of threat I call “illegitimate uses of legitimate channels.” Superficially, the request comes in looking like any other, so you can’t block it at a firewall or based on some generic rule. But hidden within it is a malicious intent, and that’s only revealed by behavior. Quite possibly, it’s only apparent by looking at behavior holistically across many services. The DNS-based DDoS case is an example of this: one spoofed request to one DNS server isn’t noticeable and really isn’t a concern. Thousands of the same request distributed across thousands of servers add up to a DDoS.
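
A behavior-level check for that kind of abuse might look at fan-out: how many downstream calls a single inbound request triggers, aggregated across services. The sketch below is a toy illustration of the idea; the record format, the sample data, and the threshold are all assumptions.

```python
# Toy sketch of holistic behavior analysis: flag callers whose requests trigger
# an unusually large number of downstream calls across services. The record
# format, sample data, and threshold are assumptions for illustration.
from collections import defaultdict

# Each record: (caller_id, inbound_requests, downstream_calls_observed),
# as it might be aggregated from per-service telemetry.
records = [
    ("frontend", 1000, 1200),
    ("batch-job", 10, 9000),   # 900x fan-out: suspicious amplification
    ("mobile-api", 500, 700),
]

FANOUT_THRESHOLD = 50  # assumed: flag anything amplifying traffic 50x or more

totals = defaultdict(lambda: [0, 0])
for caller, inbound, downstream in records:
    totals[caller][0] += inbound
    totals[caller][1] += downstream

for caller, (inbound, downstream) in totals.items():
    fanout = downstream / max(inbound, 1)
    if fanout >= FANOUT_THRESHOLD:
        print(f"suspicious amplification from {caller}: {fanout:.0f}x fan-out")
```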

What changes will serverless introduce to the container market?

Serverless really has two different aspects. The first is, “pay someone else to manage the infrastructure.” In that aspect, it somewhat competes with containers.

The other aspect of it is “small, transient one-shot tasks”. This is the nuts-and-bolts of how Functions as a Service is structured. That part goes with containers very well. OK, you run your own infrastructure, but you do it using these ephemeral, short-lived, single-function-call-sized compute resources. I think Serverless is going to train people to think about solving problems in this way, and as a result, you’ll see that philosophy influencing container-based architectures.

That will strongly influence how monitoring is done, for one. When containers are intended to just do one brief thing and then go away, container uptime is no longer a meaningful measure of the stability of your system. Queue length and latency become more meaningful. The result code from the exiting container becomes more critical. Did it finish, or did it crash? How often is this particular kind of task crashing?
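
As a small example of that shift, the sketch below uses the official Kubernetes Python client to tally exit codes per kind of task instead of watching uptime. The “jobs” namespace and the “task-kind” label are hypothetical; it assumes access to a cluster.

```python
# Sketch: tally exit codes of finished pods per task kind, rather than watching
# uptime. Assumes the official `kubernetes` Python client and cluster access;
# the "jobs" namespace and the "task-kind" label are hypothetical.
from collections import Counter
from kubernetes import client, config

config.load_kube_config()   # or config.load_incluster_config() when run in-cluster
v1 = client.CoreV1Api()

crashes = Counter()
finished = Counter()

for pod in v1.list_namespaced_pod("jobs").items:
    kind = (pod.metadata.labels or {}).get("task-kind", "unknown")
    for cs in pod.status.container_statuses or []:
        term = cs.state.terminated
        if term is None:
            continue            # still running; uptime isn't the signal here
        finished[kind] += 1
        if term.exit_code != 0:
            crashes[kind] += 1

for kind in finished:
    rate = crashes[kind] / finished[kind]
    print(f"{kind}: {finished[kind]} runs, {crashes[kind]} crashes ({rate:.0%})")
```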

Monitoring for security will, therefore, change too. You won’t have very long to ponder over the behavior of one particular task. By the time you decide it was doing something odd, it’s already gone. You’ll have to be able to categorize tasks and relate them. “This looks like another of those suspicious ones.” And you’ll need to be able to trace back from them. What triggered this? How far do I have to trace back to get to a user-initiated action? Did somebody do something malicious, or is this just a bug, or is it some transient condition in the underlying infrastructure getting in the way?
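
Relating tasks and tracing them back implies keeping a correlation or trace ID on every invocation. The sketch below walks a chain of task records back to the user-initiated action at the root; the record structure and sample data are hypothetical.

```python
# Sketch: walk a chain of short-lived task records back to the root trigger.
# The record structure and the sample data are hypothetical, for illustration.
tasks = {
    "t3": {"name": "resize-image", "parent": "t2", "suspicious": True},
    "t2": {"name": "process-upload", "parent": "t1", "suspicious": False},
    "t1": {"name": "user-upload-request", "parent": None, "suspicious": False},
}

def trace_back(task_id: str) -> list[str]:
    """Follow parent links from a task back to the user-initiated action."""
    chain = []
    current = task_id
    while current is not None:
        record = tasks[current]
        chain.append(record["name"])
        current = record["parent"]
    return chain

# "This looks like another of those suspicious ones" -- trace it to its origin.
print(" <- ".join(trace_back("t3")))
# resize-image <- process-upload <- user-upload-request
```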

In other words, as you inevitably build FaaS into your container hosting, you’ll have to put some effort into building adequate oversight of those functions to even make a judgment about whether things are going right, or things are going wrong.

This article is a part of our interview series about container security. Check out more here.
