Linux containers are an extremely powerful tool. They sit midway between running a program on bare metal and running it on a virtual machine. A Linux container provides a separate userspace installation on top of the same kernel used by the host machine. As a result, containers offer most of the features of a full virtual machine with greatly reduced overhead.
Process Isolation On Linux
So what are Linux containers? To answer this question, we first need to discuss two approaches to application isolation. Both containerization and virtual machines trace their roots to the time-sharing mainframes of the 1960s. The main difference between the two is the level at which the isolation occurs: virtual machines work at the hardware level, while containers perform isolation within the Linux kernel itself.
A virtual machine system relies on a piece of software called a hypervisor. The job of this program is to create a virtual representation of an entire computer. When set up correctly, the guest operating system is unaware that it is being emulated. This provides a great deal of flexibility, but it also comes with a high level of overhead.
Originally, hypervisors handled all virtualization in software. While this worked, it was very slow. Running a VM meant having the hypervisor recreate work the processor was already doing, so each instruction could require many more clock cycles than normal. Techniques such as dynamic recompilation eased the problem but did not eliminate it.
In response, processor manufacturers began to redesign their processors to better handle virtualization. In 2005 and 2006, Intel and AMD introduced their respective offerings, VT-x and AMD-V. The instruction set extensions developed by the two companies did more or less the same thing: many of the functions traditionally handled by the hypervisor became part of the processor, allowing hypervisor code to become much simpler. Both companies later extended these instruction sets to further improve performance.
Virtualization takes several different forms on modern Linux systems. Hypervisors running on top of Linux include Oracle’s VirtualBox and VMware’s products. The Linux kernel itself can also act as a hypervisor, which is the approach KVM takes; OpenVZ achieves a similar result through operating-system-level virtualization in a modified kernel.
KVM is incorporated into Linux as a kernel module, and its development occurs alongside the mainline kernel. OpenVZ is an older offering whose development takes place separately, as a set of patches against the kernel.
Compared to virtual machines, containers are closer to mandatory access controls like SELinux and AppArmor in that they work primarily off functionality in the kernel. They do not use the virtualization instructions included in modern x86-64 processors. Like KVM, the features LXC depends on, namespaces and control groups, are developed directly in the mainline kernel repository. These two facts mean that LXC is available anywhere your kernel has not been explicitly compiled with those features removed.
Which Linux Features Enable Containers?
Linux containers are a form of access control. On x86, all applications on Linux run in processor ring 3. This alone provides a great deal of security: no software at this level can directly access the computer’s hardware. Rather, the kernel, running in ring 0, fields all requests and passes them to its device drivers as needed.
Linux containers work within the kernel to allow greater control over this process. Traditionally, the kernel presents a more or less complete view of the current state of the system to any program requesting it. On a stock Linux system, discretionary access controls and the program’s root status are the only things preventing complete access. If the requesting program runs in a container, however, the kernel presents only a limited view of the system. Internally, this functionality relies on control groups and namespaces, two kernel features that allow fine-grained control over what a process can see and use.
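To make that per-process view concrete, here is a minimal sketch (Linux-only, using the standard procfs interface) that lists the namespaces the current process belongs to:

```python
import os

# Each entry under /proc/self/ns is a symlink whose target names the
# namespace type and an inode number, e.g. "pid:[4026531836]".
# Two processes share a namespace exactly when those inode numbers match.
for ns in sorted(os.listdir("/proc/self/ns")):
    target = os.readlink("/proc/self/ns/" + ns)
    print(f"{ns:12s} -> {target}")
```

Comparing this output against another process’s /proc/&lt;pid&gt;/ns directory shows which namespaces the two share; a containerized process reports different inode numbers from the host.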
Linux kernel 2.6.24 introduced control groups (cgroups). They limit the resources available to a group of processes. In the case of LXC, each container belongs to several cgroups, each managing a different aspect of its access to the host system.
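As a hedged illustration of what that membership looks like from inside a process, the kernel exposes it through /proc/self/cgroup (on a cgroup-v2 host this is a single `0::/path` line; on v1, one line per controller):

```python
# Print this process's cgroup membership. Each line has the form
# hierarchy-ID:controller-list:path; cgroup v2 collapses this to "0::/...".
with open("/proc/self/cgroup") as f:
    for line in f:
        print(line.rstrip())
```

Limits themselves are imposed by writing to control files such as `memory.max` under /sys/fs/cgroup, a step that requires root privileges and is omitted here.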
Namespaces and cgroups generally go together, but they differ in function: cgroups limit how much of a resource is accessible, while namespaces hide resources entirely. For example, a program running within a file system (mount) namespace cannot see any files other than those in its namespace. The same goes for other resources such as process IDs, network interfaces, and hostnames.
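This hiding can be demonstrated without LXC at all. The sketch below (Linux-only; the flag constants come from &lt;linux/sched.h&gt;, and unprivileged user namespaces must be enabled, which some distributions disable) uses the unshare(2) system call to give the process a private hostname:

```python
import ctypes
import os
import socket

# Flag values from <linux/sched.h>.
CLONE_NEWUSER = 0x10000000  # new user namespace (allows unprivileged use)
CLONE_NEWUTS = 0x04000000   # new UTS namespace (private hostname)

libc = ctypes.CDLL(None, use_errno=True)
if libc.unshare(CLONE_NEWUSER | CLONE_NEWUTS) == 0:
    # Inside the new UTS namespace we may change the hostname freely;
    # the change is invisible to every process outside it.
    socket.sethostname("sandbox")
    print("hostname inside namespace:", socket.gethostname())
else:
    # Typically EPERM when unprivileged user namespaces are disabled.
    print("unshare failed:", os.strerror(ctypes.get_errno()))
```

The host’s own hostname is untouched: the rename exists only within the new namespace and disappears when the process exits.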
What are Linux containers used for?
Linux containers often fill the same use cases as virtual machines. A container is, more or less, an entirely separate computer living inside your host machine. Within it you can install programs as if you were on a brand-new system, entirely separate from the host.
How Do Linux Containers Compare to Docker?
Linux containers and Docker share much in common. Both use cgroups and namespaces to isolate processes. The biggest difference between them is the role each fills. Linux containers act like virtual machines: they closely replicate an entire Linux environment. Docker containers, on the other hand, isolate individual processes. The point is to separate the program completely from the hosting system, preventing modification of any unauthorized files. It also lets individual programs ship their own set of libraries, which are part of the Docker container and managed independently of the host system’s package manager.
Overall, LXC represents a major leap forward for software isolation on Linux systems. System administrators can use containerization to better organize their servers. Containers add a strong degree of separation of concerns that can pay off greatly in the right hands.