Mission Talks: AWS Containers Session Re:Invent 2020 Insights [Video]
In this blog, Mission cloud consulting experts dive into some of the highlights from the Containers leadership session at re:Invent.
Containerization, or the act of building a ‘container,’ is a major trend in software development and exists as a counterpart to virtualization. Containerization is the packaging or encapsulation of code (and its related dependencies) so that it can run uniformly on any infrastructure.
Containerization and microservices are often used interchangeably in conversation, but the terms are distinct. Microservices refers to an architectural style in which a large application is broken into loosely coupled services that can be deployed, updated, and refactored independently of one another. Containerization is the actual act of bundling up code and libraries for use with a container platform (such as Docker).
Companies that move their applications to the cloud often do so in a “lift and shift” pattern, where resources are duplicated 1:1 (as much as possible) in their cloud provider of choice. An app running on a server with 2 GiB of RAM and 2 vCPUs will initially be given the same resources in the cloud, whether they are fully utilized or not. This is usually done for speed to market, but it leaves leaders wondering what their next optimization steps should be. Containerization is a key optimization avenue, and Docker is (arguably) the king of containerization. This article breaks down what Docker is, why it matters to organizations, and what it can do for cloud-first companies.
Docker is a technology that allows for the use of containers on hosts (or virtual machines, instances, droplets, etc.). A container is a standard unit of software that packages up code and all associated dependencies into a deployable, portable image, so the application runs quickly and reliably from one computing environment to another. Docker allows developers to separate code (applications) from infrastructure, allowing a single host to run many containers. At a high level, it runs a client-server process (a container runtime) on top of the host OS, called the “Docker Engine,” to coordinate and provide access to resources for the containers it controls. Docker is not a programming language, it is not an XaaS (though Docker, as a company, does have services offerings), and it is open source (no licensing costs for the core engine).
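To make the packaging idea concrete, here is a minimal sketch of building and running a container image. The app, file names, and image tag are purely illustrative (a real project would keep the Dockerfile in source control rather than generating it inline), and the commands assume Docker Engine is installed and running:

```shell
# Hypothetical example: package a tiny Python app into an image.
# The Dockerfile declares the base image, the code, and how to start it.
cat > Dockerfile <<'EOF'
FROM python:3.11-slim
WORKDIR /app
COPY app.py .
CMD ["python", "app.py"]
EOF

echo 'print("hello from a container")' > app.py

# Build the image once, then run it anywhere Docker Engine exists.
docker build -t my-app .
docker run --rm my-app
```

The same `my-app` image can then be pushed to a registry and run, unchanged, on any other host with a container runtime — that portability is the core value proposition described above.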
In the days of yore, when company-owned data centers dotted our lands and the sun went ‘round the earth, organizations overwhelmingly opted to use virtual machines whenever it made sense. In the typical VM architecture diagram, you will notice the guest OS takes up a large portion of the individual “column” for each app.
There are a number of technical reasons why this may be necessary: each app may need full access to the guest OS for compliance or regulatory purposes, or you may have a specific requirement for a server OS that doesn’t support containerization.
In many cases, however, this is excessive. Using a restaurant analogy, it is the equivalent of each table (application) in a restaurant (server) having access to not only a waiter, but also an individual kitchen and support staff. Sure, it’s convenient for the consumer to have that much power on demand (just in case), but it’s highly inefficient for the resource provider.
This was the standard practice until Docker exploded onto the scene in 2013. Docker was not the first software containerization technology, but it is the most popular, well-known, and widely adopted.
When we replace machine virtualization with application containerization, individual operating systems for each application are gone. In their place, we have a Docker engine running on top of a single host OS, while identical app containers share necessary libraries.
Now you have a single, fully stocked and manned kitchen (server) which can easily horizontally scale to accommodate individual tables (applications) with more efficiency and speed than was possible with the vertical approach.
Additionally, the Docker Engine conducts all the resource coordination, letting administrators set hard and soft limits, programmatically, for each container - ensuring resource usage doesn’t spiral out of control. Gone are the days when a single developer’s application eats an entire test server because of bad memory management. We end up with a far more economical use of resources, which provides a perfect segue into...
Beyond just being efficient for physical computing resources, Docker can be more economical to run than traditional servers, even in the cloud.
With traditional deployments, it is common to find applications not utilizing the lion’s share of the hardware they rest on. Containerization reclaims what was once wasted computing capacity by allowing multiple containers to run on a single server.
Additionally, Docker saves human capital hours with its write-once, run-anywhere mentality: containerization removes much of the lag between code authoring and integration. It speeds up testing and reduces configuration errors between development and production environments. Properly configured groups of containers often demand far fewer resources than traditional multi-instance deployment models. Services such as AWS Fargate are an excellent way to start using containers, as they give users flexibility in hardware and granular control over deployment.
Organizationally, you shorten time-to-market, as code can be shipped more rapidly, as well as time-to-surge (TTS) when meeting unanticipated demand on resources, thanks to the flexibility inherent in the cloud. You save time by having the necessary resources available in seconds, with minimal (if any) hands-on-keyboard intervention.
To learn more about how to leverage Docker and move toward an agile, cloud-native environment, reach out to Mission.
Deliver better service to customers and keep pace in a competitive landscape.