Kubernetes and containers have grown in popularity thanks to their powerful capabilities. Containers supplanted virtual machines by running directly on a shared OS kernel, which makes them easier to move and partition. Operations headaches are minimal, so developers have more time to focus on pressing tasks. However, container security can suffer without the right tools and approach.
At the AWS Summit New York, I met with Mark Brooks, Alert Logic’s global VP of solution engineering, to discuss the state of container security. Alert Logic provides protection across all layers of web applications and the infrastructure stack. I followed up with Mark to ask him some questions for the site, and he provided great insights into container security and the user’s role in it. Due to the depth of Mark’s answers, this interview will be posted in two parts.
What responsibility do users have when it comes to securing containers?
Container-based virtualization platforms provide a way for organizations to run multiple applications in separate instances. There are several benefits to this approach, including increased scalability, resource efficiency, and resiliency. However, security challenges specific to containerized environments can arise. When considering containers, organizations need to think about all of the different layers of the container stack and the security challenges each layer presents.
Using a layered defense model with containers is a common best practice. That means considering security across the following components: container stacks, container images, container registries, operational containers, and the container daemon itself. Applying the principle of least privilege, continuous monitoring, scanning, logging, and alerting are all steps that organizations should take to keep these virtualized environments secure. Organizations also have a choice: they can manage and pay for the infrastructure to provide these layered defenses themselves, or rely on a managed service provider to assist them.
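To make "least privilege" concrete, the sketch below audits a hypothetical container spec for common over-privileged settings. The spec format and field names loosely mirror Docker and Kubernetes conventions, but they are illustrative assumptions, not any vendor's actual tooling:

```python
# Minimal least-privilege audit sketch for a container spec.
# The spec format is hypothetical; field names loosely mirror
# Docker/Kubernetes conventions (privileged mode, run-as user,
# read-only root filesystem, dropped Linux capabilities).

def audit_least_privilege(spec):
    """Return findings for settings that grant more privilege
    than most workloads need."""
    findings = []
    if spec.get("privileged"):
        findings.append("container runs in privileged mode")
    if spec.get("run_as_user", 0) == 0:
        findings.append("container runs as root (UID 0)")
    if not spec.get("read_only_root_fs", False):
        findings.append("root filesystem is writable")
    if "ALL" not in spec.get("cap_drop", []):
        findings.append("Linux capabilities are not dropped")
    return findings

# A spec that violates several least-privilege guidelines.
risky = {"privileged": True, "run_as_user": 0, "cap_drop": []}
for finding in audit_least_privilege(risky):
    print(finding)
```

A real scanner would of course cover far more checks (host mounts, seccomp profiles, network policy), but the shape is the same: each layer of the stack gets its own set of rules.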
With a managed service, customers typically deploy an agent to the container host that’s been specifically designed to monitor network traffic between containers, as well as traffic between the container and the local network. Agents can operate in a listen-only mode, so there is no bottleneck to communications or container performance. When indicators of compromise (IOCs) are found, they are processed through an analytics platform that correlates IOC-related events into one or more incidents. Ideally, these incidents should be reviewed 24×7 by a GIAC-certified SOC analyst before being escalated, when severity warrants and in keeping with the customer’s escalation preferences. In this scenario, the managed service provider is responsible for maintaining the detection ruleset and correlation logic for the container environment, and for following customer escalation requests, to protect organizations from attack.
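A toy version of the correlation step described above might look like the following: IOC events reported by host agents are grouped by source container, and any container with multiple related observations becomes a candidate incident for analyst review. The event shape and grouping key here are assumptions for illustration, not Alert Logic's actual platform:

```python
from collections import defaultdict

# Toy correlation sketch: group IOC events emitted by per-host
# agents into candidate incidents (one per container) for SOC
# review. Event shape and grouping key are illustrative.

def correlate_iocs(events, min_events=2):
    """Group IOC events by container ID; emit an incident for any
    container with at least `min_events` related observations."""
    by_container = defaultdict(list)
    for event in events:
        by_container[event["container_id"]].append(event)
    incidents = []
    for container_id, related in by_container.items():
        if len(related) >= min_events:
            incidents.append({
                "container_id": container_id,
                "event_count": len(related),
                "iocs": sorted({e["ioc"] for e in related}),
            })
    return incidents

events = [
    {"container_id": "web-1", "ioc": "suspicious-outbound"},
    {"container_id": "web-1", "ioc": "port-scan"},
    {"container_id": "db-1", "ioc": "port-scan"},
]
print(correlate_iocs(events))
```

Production platforms use far richer correlation logic (time windows, kill-chain stages, cross-host patterns), which is part of why the ruleset maintenance tends to land with the provider rather than the customer.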
These considerations are not new. There are numerous parallels that exist today. Consider compliance as an example:
With compliance, there are different requirements, like log review and threat detection. Managing these different approaches and requirements quickly becomes a complicated balancing act, often with real trade-offs. When considering how to address these areas, organizations can take care of all those requirements themselves or work with a managed service partner. A managed security provider can help organizations navigate the complexity of these security requirements. Containers present a similar parallel: they increase the level of complexity, and there are options in how they can be secured. Users may find that the security model for containers is difficult to manage on their own.