These days containerization of workloads, applications and storage has become a hot topic. Not to say it wasn’t before, but it’s gotten a boost from the cloud computing segment of the industry. With that I felt the need to write up what I’ve discovered of the history of this technology so far. I’d love feedback and corrections if I’ve got anything out of order here or if, heaven forbid, I’ve got something wrong.
What are Containers?
Before I get into what a container is, it is best to define what operating system-level virtualization is. Sometimes this is referred to as jailed services or apps running in a jail.
This level of virtualization often provides functionality very similar to what a VMware, VirtualBox or Hyper-V virtual machine would provide. The difference, however, is that with operating system-level virtualization there is no hypervisor emulating hardware; the host kernel itself isolates groups of processes, and each isolated group runs its apps as if it had the operating system to itself.
So what’s a container?
Linux Containers is a feature that allows Linux to run one or more isolated virtual systems, each with its own network interfaces, process trees, user namespaces and resource state.
One of the common abbreviations for Linux Containers you’ll see is LXC. There are, however, many distinct operating system-level virtualization solutions (a minimal example of driving LXC follows the list below).
- OpenVZ – this technology uses a single patched Linux kernel shared by all containers, which means every container uses the architecture and kernel version of the host system that is executing it.
- Linux-VServer – this technology is a virtual private server implementation that was created by adding operating system-level virtualization to the Linux kernel. The project was started by Jacques Gélinas and is now maintained by Herbert Pötzl of Austria; it is not related to the Linux Virtual Server project. It partitions the system into units called security contexts, and within each context runs a virtual private server.
- FreeBSD Jail – this container technology partitions apps and services into isolated environments called jails, each with its own files, processes and network access.
- Workload Partitions – this is a technology built for AIX, introduced in AIX 6.1. Workload Partitions breaks things into WPARs: software partitions created from the resources of a single AIX OS instance. WPARs can be created on any System p (the new old thing, formerly the RS/6000 line) hardware that supports AIX 6.1 or higher. There are two kinds of WPARs, System WPARs and Application WPARs.
- Solaris Containers – this is a container technology for x86 and SPARC systems. It was first released in February 2004 for Solaris 10 and is also available in OpenSolaris, SmartOS and others, as well as Oracle Solaris 11. A Solaris container combines resource controls with boundaries referred to as zones. These zones act as completely isolated virtual servers within a single OS instance.
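To make the LXC flavor of this a bit more concrete, here’s a minimal sketch of driving the LXC command-line tools from Python. It assumes a Linux host with the lxc userspace tools installed and root privileges; the container name “demo” and the download-template arguments are just illustrative values, not anything from the write-up above.

```python
# Minimal sketch of driving the LXC command-line tools from Python.
# Assumes a Linux host with the lxc userspace tools installed and root
# privileges; "demo" and the download-template arguments are illustrative.
import subprocess
import time

NAME = "demo"

def run(cmd):
    """Echo a command and run it, raising if it fails."""
    print("$ " + " ".join(cmd))
    subprocess.check_call(cmd)

# Create a container from the generic "download" template.
run(["lxc-create", "-n", NAME, "-t", "download", "--",
     "-d", "ubuntu", "-r", "trusty", "-a", "amd64"])

# Start it in the background and see how quickly it comes up.
started = time.time()
run(["lxc-start", "-n", NAME, "-d"])
run(["lxc-wait", "-n", NAME, "-s", "RUNNING"])
print("container running after %.2f seconds" % (time.time() - started))

# Inspect it, then stop it and throw it away.
run(["lxc-info", "-n", NAME])
run(["lxc-stop", "-n", NAME])
run(["lxc-destroy", "-n", NAME])
```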
What is so great about a container?
Ok, so I’ve covered what a container is. You’re probably asking, “so what do I do with these containers?” There are a number of things; for starters, speed is a huge advantage with containers. You can spool up entire functional application or service systems, like an API facade or something, in seconds. Oftentimes a container will spool up and be ready in less than a second. This provides a huge amount of power for building flexible, resilient, self-healing distributed systems that are otherwise just impossible to build with slow-booting traditional virtual machine technology.
Soft memory is another capability that most containers have. This is the capability of being allocated, and even running, entirely in memory. As one may already know, if you run something purely out of memory it is extremely fast, often 2-10x faster than running something that has to swap to a physical drive.
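As a rough illustration of the in-memory idea, here’s a sketch that backs a container’s directory with a tmpfs (RAM-backed) filesystem before creating it, so its rootfs never touches the physical drive. This is only one way to approximate it, assuming root privileges, the lxc tools and the default /var/lib/lxc layout; the name “fastbox” and the 512 MB size are arbitrary assumptions.

```python
# Sketch: put a container's files on a tmpfs (RAM-backed) filesystem so it
# runs without touching the physical drive. Assumes root, the lxc tools,
# and the default /var/lib/lxc container path; "fastbox" and the 512 MB
# size are arbitrary.
import os
import subprocess

NAME = "fastbox"
PATH = "/var/lib/lxc/" + NAME

os.makedirs(PATH)
# Mount a RAM-backed filesystem where LXC will place this container.
subprocess.check_call(["mount", "-t", "tmpfs", "-o", "size=512m", "tmpfs", PATH])
# Create the container as usual; its rootfs now lives in memory.
subprocess.check_call(["lxc-create", "-n", NAME, "-t", "download", "--",
                       "-d", "ubuntu", "-r", "trusty", "-a", "amd64"])
```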
Managing crashing services or damaged ecosystem elements gets easier too. If the containers are running and one gets hit with an overloaded compute task, software crashes on it, or it ends up in some type of blocking state, like a DDoS of sorts, just reboot it. Another option is to kill it and spool up an entirely new instance of the app or service in a container (a rough sketch of that kind of watchdog follows below). This ability is really amplified in any cloud environment like AWS, where a server instance may crash with some containers on it, but having another instance running with multiple containers on it is easy, and restarting those containers on running instances is easy and extremely fast.
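Here’s that kill-and-respawn watchdog idea as a sketch, again assuming the lxc tools and a hypothetical container named “api-facade”: poll the container’s state and start a fresh instance whenever it is no longer running.

```python
# Rough watchdog sketch: poll a container's state with lxc-info and restart
# it if it has stopped. Assumes the lxc tools; "api-facade" is hypothetical.
import subprocess
import time

NAME = "api-facade"
POLL_SECONDS = 5

def state(name):
    """Return the container's state (e.g. RUNNING, STOPPED) via lxc-info."""
    out = subprocess.check_output(["lxc-info", "-n", name, "-s"])
    # Output looks like "State:          RUNNING"
    return out.decode().split(":", 1)[1].strip()

while True:
    if state(NAME) != "RUNNING":
        print("container %s is down, starting a fresh instance" % NAME)
        subprocess.check_call(["lxc-start", "-n", NAME, "-d"])
    time.sleep(POLL_SECONDS)
```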
Security is another element that can be assisted with container technology. As I alluded to in the previous point, if a container gets taken over or otherwise compromised, it’s very easy to just kill it and resume one that is not compromised, often buying more time to resolve the security concern. Also, because each container is isolated from every other container, compromising a container does not by itself result in a compromised physical machine and operating system. This is a huge saving grace when security is breached.
Container Summary
Containers are a hot ticket topic, for good reason. They provide increased manageability of apps and services, can utilize soft memory, increase security, and they’re blazing fast. The technology, albeit having been around for a good decade, is starting to grow in new ways. Containers are also starting to become a mainstay of cloud technology, almost a requirement for effective management of distributed environments.
Next up, I’ll hit on Docker tech from dotCloud and Solomon Hykes @solomonstre.
For now, anybody got some additions or corrections for this short history and definitions of containers? 🙂
Your OSCON story made containers’ ability to effect reactive business easy for me to grasp. The schematic was priceless http://cliveboulton.com/search/container
Hey Adron, as you know Red Hat has been working on the idea of containers for a long time now. OpenShift is one of the PaaS platforms which has taken the idea of containers in the right direction to enable developer productivity. Our partnership with Docker is all about doing containers right. I would love to chat with you on the topic or bring anyone from OpenShift to discuss the topic in detail. Containers are going to play a significant role in the future and we are excited about where things are headed. Thanks for blogging on the topic of containers. I feel that we need to do more to highlight the value offered by containers.
Hey Krish,
I’m right there with you! Containers are a huge part of the speed, power, scaling and modularity that is more than a promise for future computing. The things that are possible grow exponentially when building with containers. I’ll loop back around and line up a conversation, maybe even do a Google Hangout & we can post it?
Nice rundown of the history. One interesting twist on AIX WPARs is that they have Live Application Mobility which lets containers be moved around. AFAIK, they’re the only example (today) of this capability.
It’s probably worth noting that containers have historically been most associated with service providers because they depend on a single operating system kernel. That actually has a lot of benefits in a homogeneous environment, but it means they’re much less useful with the heterogeneous OSs and OS versions that have been the norm for enterprises, which is a big reason why hypervisor-based virtualization took off rather than containers in that role.
But modern cloud-style environments, including but not limited to platform-as-a-service, are much more homogeneous, and if you’re homogeneous, container benefits like density, efficiency, and speed of spinning up and down tend to win out. (That’s why we use a container approach and are working with Docker with Red Hat OpenShift PaaS, for example.)
There were good reasons why hypervisor-based virtualization largely won out a decade or so back but there are just as many good reasons for why containers are likely to become very important for a lot of cloud environments.
Want to work on a list of the key differences that enabled hypervisor virtualization to win out over container virtualization years ago? Then another that puts together a list of why containers are becoming a vital new element in building systems, designs and architecture today?
Hi Adron,
I touched on a lot of that in this piece: http://bitmason.blogspot.com/2013/11/why-it-not-about-containers-or.html IMO hypervisor-based virtualization largely won out because OS heterogeneity was an enterprise requirement and, even before virtualization hardware assists, it delivered really impressive efficiency compared to 10%-utilized Windows servers. (Which is another reason: containers were largely a *nix phenomenon, a sort-of kludgy and belated version of Virtuozzo for Windows notwithstanding.)