Author: Aaren DeJong


Let’s imagine for a minute that you are an Ops or Systems Administrator type of person. You may even be a DevOps type of person by practice, and you use VMs at work. Perhaps you’re curious about using containers, or you may already have a facility for containers in your lab or at work but aren’t sure where to begin. You may need to reshape how you think about containers, much like I had to after growing up in the age of VMs. Containers were new to me until recently, but now I need to understand them for my day-to-day. This blog lays out the basics of what needs to be done differently with containers as opposed to VMs, and will help you start thinking differently about how resources get provisioned for applications and services.


I encourage a lot of discussion on this topic, because people draw their own parallels to help them understand containers in contrast to VMs, and the more voices in the conversation, the better. Keep reading as I illustrate some of the ways I reached that understanding myself. To get started, it’s important to know this fundamental distinction: a container is an instanced process that provides only the dependencies its single application needs, while a VM provides ‘all or nothing’ of those resources and dependencies to any number of applications.
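To make that distinction concrete, here’s a minimal sketch (assuming Docker is installed; the image name and port mapping are just examples):

```sh
# Run a single-purpose web server container; it carries only the libraries
# its one application needs, not a full operating system:
docker run --detach --name web --publish 8080:80 nginx:1.25

# List the processes inside: one application, no init system, no OS services.
docker top web
```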

One of the most defining features that contrasts containers with VMs is the create/destroy/create life cycle. Containers are by nature minuscule compared to VMs, as there is no surrounding operating system; only the necessary abstractions and libraries are presented to a program that needs just those few things to function. As such, it is preferable to destroy and create containers on an as-needed basis. A good example is an update scenario: with a VM we would run a backup, then update the OS, then verify all is well (we have automation tools for this now, but it was once a fully manual process). In the container world, by contrast, we destroy the container and recreate it with the necessary updates already applied, rather than updating the existing container. You would do this quickly and efficiently with an automation tool such as Ansible, which gives us infrastructure-defining code that lets containers be created and re-created on a whim. Doing the same thing with a VM isn’t as simple and usually means more heavy lifting and OS knowledge.
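As a rough sketch of that destroy-and-recreate update (the container name and image tag are assumptions for illustration; in practice you’d drive these steps from a playbook or pipeline rather than by hand):

```sh
# Pull the image that already has the updates baked in:
docker pull nginx:1.25.4

# Destroy the running instance rather than patching it in place...
docker stop web && docker rm web

# ...and recreate it from the updated image. No OS upgrade, no reboot.
docker run --detach --name web --publish 8080:80 nginx:1.25.4
```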

Coming from a VM-dominant world, I found it especially helpful to map the analogous terms and concepts between VMs and containers. The language may be different or unfamiliar to begin with, but there are clear parallels we can draw between the two technologies.

| Thing or Task | VMs | Containers |
| --- | --- | --- |
| storage | mounted disks | any defined path |
| storage life | persists within VM | gone when container is destroyed (unless mapped to a persistent volume) |
| parent environment | hypervisor/OS | Linux-based OS with a container facility |
| memory | static/dynamic allocation by the hypervisor | as-needed shared consumption (doled out by the parent OS) |
| CPU count | static allocation | as-needed shared consumption (doled out by the parent OS) |
| CLI access | remote SSH | shell access (rarely if ever in prod) |
| typical life-span | months to years | minutes to days |
| typical updates | backup -> update -> repeat | destroy -> pull new -> re-deploy |
| OS resource | ISO image/disc | container image (Docker image, LXC tarball, etc.) |
| resource overhead | very heavy (compared to containers) | very light |
| networking | addresses per VM | ports per container application |
| binary sources | OS repositories | public or private registries |
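To make the storage rows concrete, here’s a minimal sketch (volume name, paths, and image are illustrative) of mapping a persistent volume so data outlives the container:

```sh
# Create a named volume and mount it into the container; anything written
# to the mounted path survives the container itself:
docker volume create webdata
docker run --detach --name web \
  --mount source=webdata,target=/usr/share/nginx/html nginx:1.25

# Destroy and recreate the container; the volume and its data persist:
docker rm -f web
docker run --detach --name web \
  --mount source=webdata,target=/usr/share/nginx/html nginx:1.25
```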

We can also draw parallels in the management piece between containers and VMs. We’re all familiar with building and managing VMs from a hypervisor UI such as the oVirt/RHV engine, VMware vCenter, or Virt-Manager. The container equivalent isn’t quite as direct, but the closest parallel is something like Kubernetes. Put into comparative perspective, we expect the UI for our VM hypervisor to handle things like creating VMs, doling out virtual NICs, and provisioning storage, while containers require a somewhat different set of services. OpenShift is a tool that leverages Kubernetes for the orchestration of containers, and what we get in the end is a full package that, in effect, contains the containers. Without an orchestrator, containers have too many dangling loose ends: “where does my storage come from?”, “what network address can I show myself on?”, “what project am I serving?”

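As a hedged sketch of how an orchestrator ties up those loose ends (assuming a working Kubernetes cluster and kubectl; all names here are illustrative):

```sh
# "What project am I serving?": a namespace groups related resources.
kubectl create namespace web-team

# "Where do I run, and how many of me?": a deployment manages the pods.
kubectl create deployment web --image=nginx:1.25 --replicas=2 --namespace=web-team

# "What network address can I show myself on?": a service exposes the pods.
kubectl expose deployment web --port=80 --namespace=web-team

# (Storage would come from a PersistentVolumeClaim against cluster storage.)
kubectl get all --namespace=web-team
```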

All of that and more is what makes OpenShift or Kubernetes the container realm’s equivalent of our hypervisor’s management UI in the VM realm. These tools logically separate container instances into pods, akin to VM pools (or similar terms), where application-relevant resources are grouped together. The whole concept is much easier to grasp once you’ve built your own K8s (Kubernetes) or OCP (OpenShift) cluster to test with, though I hope the above is a start for those who still operate mostly with VMs in their day-to-day.

Let’s consider what these new tools have the potential to do differently from VMs:

  • Containers in a cluster aren’t the single point of failure that VMs can be; achieving HA with VMs requires MUCH more footwork.
  • Containers are not reliant on a parent OS for packages (no more dealing with OS ops); some see this as an opportunity for typical sysadmins to become infra-code experts instead.
  • Removing the burden of VM OS management from application deployments is a blessing for developers, who otherwise have to wrestle with OS compatibility and similar lifecycle issues.
  • The same is true for system administrators, who can spend more time on important things rather than babysitting applications deployed within VMs.
  • We gain much more density of running applications per hardware OR virtual node, since containers carry far less resource overhead.
  • The unique life cycle of containers speeds up development through automated continuous-integration pipelines (aka: something like Jenkins auto-testing and reporting on iterations/branches of your code without much babysitting; see the sketch after this list). In the same vein, operations tools can be improved or updated in the same automated manner.
  • Container environments provide a reliable layer of environment predictability, which can’t always truly be said of full OS environments.
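As a rough illustration of that CI point, the per-branch steps can reduce to something like the following (a minimal sketch; the registry, image, and test script names are all assumptions):

```sh
#!/bin/sh
# Minimal CI sketch: build an image for the current branch, test it in a
# throwaway container, and publish it only if the tests pass.
set -e

BRANCH=$(git rev-parse --abbrev-ref HEAD)
IMAGE="registry.example.com/myapp:${BRANCH}"

docker build --tag "$IMAGE" .            # bake this branch's code into an image
docker run --rm "$IMAGE" ./run-tests.sh  # run the tests in a disposable container
docker push "$IMAGE"                     # only reached if the tests succeeded
```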

I will add that while an OCP cluster adds some complexity to setting up a functional container environment, the pay-off in both accomplishment and usable tooling is well worth the investigation. For those still wary about containers versus VMs, just remember that VMs as a consolidation of physical resources aren’t going away: we still need VMs for many unique workloads. Containers, however, are where developers, and increasingly operations teams, are choosing to put their code and tools. Not to mention that container clusters often run on VMs themselves, and that doesn’t seem to be going anywhere either.

At Arctiq, we’ve built OpenShift architectures for many of our clients, and the momentum continues. If it’s something you’re curious about, feel free to comment below or get in touch with us on social media (@arctiqteam), and keep an eye out for our future meetups. This discussion will only continue to evolve as the technology grows and expands, both in Arctiq’s ecosystem and in the industry at large, and I encourage everyone to keep the conversation going.
