Container Control: Experts Weigh in on Docker’s Drawbacks

If you work in IT and have a pulse, then you’ve heard the hype surrounding Docker and its Linux containers. Chances are you’re up to your ears in it. Lately, it seems like there’s a new article hyping Docker’s shiny new containers every day.

This “lightweight virtualization” is DevOps’ silver bullet, or so we’re told. But as with all good things, containerization comes with its own set of drawbacks and compromises, and before you dive head-first into Docker, it’s important to take those drawbacks into account.

For the uninitiated, a Linux container is a virtualization instance in which the kernel of an operating system allows for multiple isolated user-space instances. Unlike virtual machines (VMs), containers do not need to run a full-blown operating system (OS) image for each instance. Instead, containers are able to run separate instances of an application within a single shared OS.
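
A quick way to see that sharing in practice (assuming a Linux host with Docker installed and the public alpine and debian images): each container gets its own user space, yet all of them report the same kernel release as the host, because none of them boots a kernel of its own.

    # Two different user spaces...
    docker run --rm alpine cat /etc/os-release
    docker run --rm debian cat /etc/os-release

    # ...but one shared kernel: all three commands print the same release
    docker run --rm alpine uname -r
    docker run --rm debian uname -r
    uname -r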

This new tech gives developers the flexibility to build once and move applications without the need to rewrite or redeploy their code, which advocates say makes for faster integration and access to analytics, big data and services.

But, as mentioned above, anyone who’s been around the block a few times knows that new tech doesn’t just bring new advantages, it brings new ways of doing things, and often, new complications.

To identify some of those complications we reached out to IT industry experts working with Docker and Linux Containers with one question: In your experience, what are some of the main drawbacks to using Linux containers?

As it turns out, we’d unknowingly opened the floodgates—we received dozens of expert responses on a wide range of issues, from “container sprawl” to unstable ecosystems and the lack of tooling and data management. Read on for some of our favorite responses.

Please note: This article isn’t meant to be a ‘takedown’ of containers; rather, it’s meant to give our readers a sense of perspective, and a guide to the challenges inherent in adopting this new technology.


An Unstable Ecosystem Causes Apprehension 

“The container ecosystem is moving rapidly, which is causing turmoil and confusion. New specifications and tools are constantly emerging, and there is very little backwards compatibility or interoperability. Additionally, the vendor landscape is constantly shifting with new entrants and companies being scooped up. In the future, you can expect to see even more mergers and acquisitions. This instability in the ecosystem makes it hard for any enterprise buyer or senior decision maker to make a confident bet on suppliers or a specific container technology.” – Tom Drummond, CEO, Heavybit Industries

“Docker faces some of the same adoption issues that virtual machines did 10 years ago. Conversations at DockerCon 2015 remind me of the same conversations from VMworld 2008: ‘How much of your infrastructure is virtualized? Everything but the databases.’ Early adopters are still wary of running their production databases inside of containers. The tools are getting better, but they’re not there yet.” – Aaron Brongersma, Senior Infrastructure Manager at Modulus

Complicated Vulnerability Management

“While Docker provides some awesome advantages for development and operations, as with any new technology, there are new risks and threats to defend against. First, managing vulnerabilities across many containers deployed across many places isn’t a simple process. Second, organizations must ensure their containers and the fabrics they run on are hardened using the open CIS Docker benchmark recommendations. Finally, and especially in larger enterprises, customers want more fine-grained authorization capabilities and integration with existing directories and authentication protocols.” – John Morello, CTO, TwistLock
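
For teams that want to act on the CIS recommendations Morello mentions, Docker’s open-source docker-bench-security project automates many of those checks. A typical invocation looks roughly like the following, though the exact mounts and flags vary by version, so treat this as a sketch and defer to the project’s README:

    # Run the CIS Docker Benchmark checks against the local host and daemon;
    # the container needs read access to the Docker socket and host namespaces
    # in order to inspect the daemon's configuration.
    docker run --rm --net host --pid host \
      -v /var/run/docker.sock:/var/run/docker.sock:ro \
      -v /etc:/etc:ro \
      docker/docker-bench-security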

Rapid Deployment Can Lead to ‘Container Sprawl’

“When virtualization enabled us to create more workloads, it became like the subprime lending market. VM-to-host ratios rose rapidly, often hit 40:1, and began to impact performance, because the feeling was that VM real estate was free. As containers make deployment more rapid, immutable, and simple, the potential for container sprawl is strong, with the risk of losing control of the container environment within the organization quite quickly. Understanding the true effect of increased container density will be a must in order to get the best application performance from a containerized deployment.” – Eric Wright, Principal Solutions Engineer and Technology Evangelist at VMTurbo
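
There’s no single fix for sprawl, but two habits help: know how many containers each host is actually running, and give every container explicit resource limits so density stays deliberate rather than accidental. A minimal sketch, assuming a reasonably recent Docker release (the nginx image and the specific limits are only placeholders):

    # How many containers is this host running right now?
    docker ps -q | wc -l

    # Include stopped containers that were never cleaned up
    docker ps -aq | wc -l

    # Start new workloads with explicit memory and CPU ceilings
    docker run -d --name web --memory=256m --cpus=0.5 nginx

    # Periodically reclaim exited containers and unused images
    docker system prune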

…Complicated Data Management…

“Most applications have state which has to be saved to storage systems outside of the container. Application developers have to architect and design their services to leverage external storage mechanisms, which may not always be available and can be quite difficult to configure. The net result is that while containers are consumable nuggets of greatness, the overall application architecture has its complexity increased by requiring new storage (and networking) backplanes to create application fidelity. While this sort of architecture makes an application more cloud native, there is a large class of applications where this is unnecessary complexity introduced by the dependence on containers.” – Tyler Jewell, CEO and Founder at Codenvy
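
The most common way to keep that state outside the container itself is a named volume (possibly backed by an external storage driver). A minimal sketch using the stock postgres image to show the idea:

    # Create a named volume that outlives any individual container
    docker volume create app-data

    # Mount it where the application keeps its state; the container can now
    # be destroyed and recreated without losing the data
    docker run -d --name db -v app-data:/var/lib/postgresql/data postgres

    # The data survives this cycle because it lives in the volume, not the container
    docker rm -f db
    docker run -d --name db -v app-data:/var/lib/postgresql/data postgres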

Repetitive Troubleshooting…

“Everything is temporary in Docker and not saved by default—we always seem to be at a clean state. This is good and convenient when everything is working and configured correctly, but can be a real pain when developing the containers or analyzing problems. You find yourself at the command line, running and configuring the same things again and again to get to the root cause of issues.” – Muly Oved, Co-founder & CTO, testRTC (testrtc.com)
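
One way to blunt that repetition is to snapshot a container once it’s part-way configured, so the next debugging session starts from that state rather than from scratch. A rough sketch; the container and image names here are purely illustrative:

    # Poke around inside the running container
    docker exec -it my-app sh

    # Freeze the container's current filesystem as a reusable image
    docker commit my-app my-app:debug-snapshot

    # Next time, start from the snapshot instead of reconfiguring from zero
    docker run -it my-app:debug-snapshot sh

    # And capture logs so they survive after the container is gone
    docker logs my-app > my-app.log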

…And Complicated Build Processes

“The build process can become complex, as some of it requires pulling software packages from the internet. At times, these packages cannot be found, or there are broken dependencies. This eats into your development time.

This all boils down to Docker being a new technology. It isn’t battle tested in all of its edge cases, and as we push the envelope, we reach these points: things like streaming data to and from stdin and stdout. The fact that it is new means that there aren’t enough resources or code examples out there, and we end up troubleshooting a lot of the issues on our own. We love Docker and see a lot of value in it, but it isn’t a silver bullet.” – Muly Oved, Co-founder & CTO, testRTC (testrtc.com)
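
One common way to tame those network-dependent builds is to pin the base image and install packages in a single, cache-friendly layer, so a transient upstream change is less likely to break the build silently. A hedged Dockerfile sketch (the base image tag is only an example; pinning exact package versions, e.g. curl=<version>, tightens this further):

    # Pin the base image to an exact tag rather than "latest" so rebuilds are repeatable
    FROM ubuntu:14.04

    # Fetch packages in one layer and clean up afterwards; pinning exact package
    # versions here guards against upstream repository changes breaking the build
    RUN apt-get update && \
        apt-get install -y curl && \
        rm -rf /var/lib/apt/lists/*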

