
Docker Shrinks Size of Container Images by 95 Percent

Containers take the components of a traditional application and break them down into separate, self-contained elements: smaller, more manageable, and more agile pieces of code. As small as they are, though, container images can still affect performance, especially when we’re dealing with cloud-based applications and services that have to work across an Internet connection. Docker is addressing that challenge by moving its container images to a much smaller Linux base.

DevOps and microservices are changing the way organizations develop apps and manage IT, and Docker is one of the most recognizable names in the DevOps revolution. Although Docker is established as the de facto leader among container technologies, there is a rising swell of competition out there as well, which means Docker can’t afford to be stagnant. Lately, it has made some moves to put containers on a bit of a diet in an effort to offer customers a unique advantage.

The primary benefit of containers is to package an application’s entire runtime environment into modular units that can be moved between environments and platforms. Large applications can be broken down into an array of processes and elements, each running as a separate container. Container images can be created, multiplied, managed, and destroyed at the push of a button.
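To make that lifecycle concrete, here is a minimal sketch using the Docker SDK for Python (the docker package); it assumes the SDK is installed, a local Docker daemon is running, and the image name is just an illustrative choice.

    import docker

    # Connect to the local Docker daemon using environment defaults.
    client = docker.from_env()

    # Create and start a container from a small example image.
    container = client.containers.run("alpine", "echo hello from a container", detach=True)

    # Wait for it to finish, read its output, then destroy it.
    container.wait()
    print(container.logs().decode().strip())
    container.remove()

The same handful of calls, scripted or wired into an orchestrator, is what makes containers so easy to multiply and tear down.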

One problem organizations face with containers, however, is size, and the burden it places on performance. Docker’s default container images are built on a Debian Linux foundation. Debian is a full-featured OS, which means those images come in at 100MB or more.
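You can check that size claim for yourself. The sketch below, again using the Docker SDK for Python, pulls a stock Debian image alongside Alpine Linux, the minimal distribution commonly cited as the slimmer alternative; the tags and the size comments are assumptions for illustration, not figures from the article.

    import docker

    client = docker.from_env()

    # Pull a full-featured base image and a minimal one for comparison.
    debian = client.images.pull("debian", tag="latest")  # typically well over 100MB
    alpine = client.images.pull("alpine", tag="latest")  # typically only a few MB

    for image in (debian, alpine):
        size_mb = image.attrs["Size"] / (1024 * 1024)
        print(f"{image.tags[0]}: {size_mb:.1f} MB")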

That may not sound like a lot. I still remember when hard drives were first becoming a thing and I paid nearly $300 for an 85MB hard drive. No, that isn’t a typo—I mean 85MB, not 85GB. When you can buy terabytes of storage capacity for under $100, 100MB container images seem almost trivial. However, most microservices and container environments function in the cloud. When you view it from the perspective of transferring 100MB across the Internet every time a container is created or accessed, it can be a significant and unnecessary burden on performance.
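A quick back-of-the-envelope calculation makes the point; the 50 Mbps link speed here is an assumed figure for illustration, not anything from the article.

    # Rough transfer-time estimate for pulling an image over the network.
    link_mbps = 50  # assumed connection speed
    for image_mb in (100, 5):
        seconds = (image_mb * 8) / link_mbps
        print(f"{image_mb}MB image over a {link_mbps} Mbps link: ~{seconds:.0f} seconds")

Pulling a 100MB image over that link takes roughly 16 seconds; repeated across many hosts and deployments, that delay compounds quickly.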

Docker recognizes this challenge and is actively working to address it. It recently acquired Unikernel Systems. A recent ContainerJournal post explains that unikernels compile source code into a custom operating system that includes only the functionality required by the application logic. The idea, ostensibly, is to reduce the footprint of container images by stripping them down to the bare essentials.

Unikernels are a new concept being applied to a nascent technology, though, and some organizations are reluctant to embrace cutting-edge technology in production. Unikernels will most likely gain mainstream momentum, but in the meantime Docker is working on alternative solutions as well, such as switching which Linux build it uses as the core of Docker images.

Check out the full story on ContainerJournal: Docker Puts Containers on a Diet.
