Containers are the future, but where did they come from and why should you care?
Before we describe what a container is, let’s start by explaining what a container IS NOT. A container is NOT a virtual machine. A virtual machine is a hardware-level isolation technology that allows a hypervisor, such as VMware ESX or Microsoft Hyper-V, to segment a physical server into many smaller parts, each of which is used to create a virtualized abstraction of a server.
A standard server can be divided into multiple VMs (the industry average is 12), giving a sizeable reduction in TCO as a result. VMs have served IT well for the past 20 years; however, even VMs are inefficient. A VM that “idles” with just the OS installed still consumes at least 10GB of storage and upwards of 500MB of RAM (a Windows OS dramatically more). Additionally, because a VM runs a full OS that “thinks” it’s on a physical server, the OS needs “head room” to operate. Generally, an admin would assign 1.5x the resources their app needs just to stop the OS from running out of resources and crashing. Multiply this idle capacity plus the over-allocation by the number of VMs you have running and you can very quickly see where the inefficiency comes from.
To learn more about the technical differences between containers and VMs, read this blog article.
Containers, on the other hand, take the principles of server virtualization, and move the abstraction layer up out of the hardware layer into the OS. In that regard, containers can be thought of as operating system virtualization, or even application virtualization. Containers do not run their own operating system, nor do they have statically assigned virtual hardware to manage. As a result, containers have dramatically reduced overhead, both from a hardware resource and a capacity management perspective. A standard server can be divided into dramatically more containers than VMs (industry average is 7-9x).
Containers comprise only the application executables, the application’s dependencies (runtimes), and the OS binaries necessary for the application to run. By their very nature, containers “trick” an application into believing it is the only application running on a server, even though there may be many other containers sharing the same container host. Containers, by default, have no visibility into the underlying host OS, so a container that runs on, say, an Ubuntu host could actually contain CentOS binaries, and that container would “think” it’s running CentOS. Neat trick, eh? There are limits to this magic though: all containers on a host must share a common OS kernel, so the container binaries must be interoperable with the base kernel version.
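As an illustrative sketch of what goes into a container image (the base image, package, and file names here are hypothetical, not taken from any particular project), a minimal Dockerfile bundles only the base OS binaries, the application’s runtime dependency, and the application itself — no full OS install:

```dockerfile
# Base image supplies the userland binaries (CentOS-derived here),
# even if the host runs Ubuntu -- only the kernel is shared with the host.
FROM quay.io/centos/centos:stream9

# Install just the runtime the app needs (hypothetical dependency).
RUN dnf install -y python3 && dnf clean all

# Copy in the application executable only.
COPY app.py /opt/app/app.py

# Command the container runs when it starts.
CMD ["python3", "/opt/app/app.py"]
```

The resulting image carries everything the application needs, which is what makes the same image runnable on any compatible container host.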
Containers enable standardisation, which makes the container image itself 100% portable. As a self-contained runtime, anywhere you can run the container, you can stand up your application in a predictable way. This makes the premise of “multi-cloud” or “hybrid-cloud” a reality, and it’s also what allows developers to create containers on their laptops and have those same containers execute perfectly, every time, in production, without modification.
Containers can be used for encapsulating anything from a monolithic “single tier” application through to a full “twelve-factor” microservice. It’s this flexibility, coupled with the portability and efficiency, that has made containers so popular today.
Docker was founded in 2008 (originally called dotCloud) and set out to create a platform that would enable developers to build and ship applications quickly and easily. Today, Docker is the de facto industry-standard runtime environment for building software inside containers. It allows developers to create Docker images with which to deploy containerized applications. Docker is built on open standards and works inside most common operating environments, including Linux and Windows. Today, more than 3.5 million developers use Docker to create images, making it the mainstay of the modern container environment.
Containers give developers a simple and effective way to reliably build, test, deploy, and redeploy applications across a range of environments, from a local laptop to a managed cloud environment.
Organizations use containers to achieve a range of different outcomes:
- Reduced time to market. Because the behaviour of containers is predictable regardless of hosting environment, hosting variability is removed and applications can be delivered more quickly. This has significant cost and ROI benefits for organizations.
A business application landscape can comprise hundreds (even thousands) of individual containers running in production, which can make containerization difficult to manage. As a runtime, Docker is designed to run on a single host, so running hundreds of containers requires careful orchestration to ensure containers get access to the resources they need and applications perform as expected.
Organizations can choose to run multiple standalone Docker hosts, managing the distribution of application containers across these hosts manually, or they can delegate this heavy lifting to an orchestration system. Kubernetes is the primary orchestrator in the market today; however, a number of organizations still rely on the lesser-used Docker Swarm or HashiCorp Nomad orchestrators, which are fundamentally simpler to deploy and use but limited in the scale they can ultimately reach.
Kubernetes orchestrates the operation of multiple containers, running across multiple hosts, in harmony. It manages the underlying infrastructure resources for containerized applications, such as the amount of compute, network, and storage each requires. Kubernetes makes it easier to automate and scale container-based workloads for live production environments.
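As a minimal sketch of how this looks in practice (the application name and image are hypothetical), a Kubernetes Deployment declares a desired replica count and per-container resource requests, and Kubernetes schedules the containers across the cluster’s hosts to match:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical application name
spec:
  replicas: 3               # run three copies, spread across hosts
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.example.com/web:1.0   # hypothetical image
          resources:
            requests:
              cpu: "250m"      # compute the scheduler reserves per container
              memory: "128Mi"  # memory the scheduler reserves per container
```

The operator declares the desired state; Kubernetes continuously reconciles the running containers against it, restarting or rescheduling them as hosts come and go.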
Kubernetes (or K8s, as it’s often known) is complicated. As a system, it was never designed to be manipulated by engineers directly through the command line; rather, it was designed to be driven machine-to-machine via an automation API. Although Kubernetes underpins a lot of CI/CD processes that are fully automated, there are many organizations trying to manage Kubernetes manually.
It is, of course, quite possible for platform engineers and developers to learn Kubernetes - but it is hard, it is time-consuming, and it’s not actually necessary.
Portainer exists to help organizations adopt Kubernetes and Docker Swarm by giving them an easy-to-use Kubernetes GUI that does much of the heavy lifting for them. With Portainer, not everyone needs to know everything about Kubernetes, which significantly reduces the time to value.