A guide to container orchestration with Kubernetes

Seth Kenlon
Thu, 06/09/2022 – 03:00


The term orchestration is relatively new to the IT industry, and it still has nuance that eludes or confuses people who don’t spend all day orchestrating. When I describe orchestration to someone, it usually sounds like I’m just describing automation. That’s not quite right. In fact, I wrote a whole article differentiating automation and orchestration.

An easy way to think about it is that orchestration is just a form of automation. To understand how you can benefit from orchestration, it helps to understand what specifically it automates.

Understanding containers

A container is an image of a file system containing only what’s required to run a specific task. Most people don’t build containers from scratch, although reading about how it’s done can be elucidating. Instead, it’s more common to pull an existing image from a public container hub.
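
If you want to see what that looks like, you can search a public registry and pull an image before you ever run it. This is just a quick illustration, assuming Podman and that docker.io is among your configured registries:

$ podman search nginx
$ podman pull docker.io/library/nginx
$ podman images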

A container engine is an application that runs a container. When a container is run, it’s launched with a kernel mechanism called a cgroup, which keeps processes within the container separate from processes running outside the container.
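
You can observe that separation for yourself once a container is running. For example, compare the cgroup a containerized process reports with the one your own shell reports:

$ podman run --rm nginx cat /proc/self/cgroup
$ cat /proc/self/cgroup

On a cgroups v2 host with rootless Podman, the containerized process typically reports a nearly empty path such as 0::/, because it gets its own cgroup namespace, while your shell reports a much longer path inside the host's user slice. The exact paths vary by distribution.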

Run a container

You can run a container on your own Linux computer easily with Podman, Docker, or LXC. They all use similar commands. I recommend Podman, as it’s daemonless, meaning a process doesn’t have to be running all the time for a container to launch. With Podman, your container engine runs only when necessary. Assuming you have a container engine installed, you can run a container just by referring to a container image you know to exist on a public container hub.

For instance, to run an Nginx web server:

$ podman run -p 8080:80 nginx
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
[...]

Open a separate terminal to test it using curl:

$ curl --no-progress-meter localhost:8080 | html2text
# Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to
[nginx.org](http://nginx.org/).  
Commercial support is available at [nginx.com](http://nginx.com/).

_Thank you for using nginx._

As web server installs go, that’s pretty easy.

Now imagine that the website you’ve just deployed gets an unexpected spike in traffic. You hadn’t planned for that, and even though Nginx is a very resilient web server, everything has its limits. With enough simultaneous traffic, even Nginx can crash. Now what?

Sustaining containers

Containers are cheap. In other words, as you’ve just experienced, they’re trivial to launch.

You can use systemd to make a container resilient, too, so that a container automatically relaunches even in the event of a crash. This is where using Podman comes in handy. Podman has a command to generate a systemd service file based on an existing container:

$ podman create --name mynginx -p 8080:80 nginx
$ podman generate systemd mynginx \
    --restart-policy=always -t 5 -f -n

You can launch your container service as a regular user:

$ mkdir -p ~/.config/systemd/user
$ mv ./container-mynginx.service ~/.config/systemd/user/
$ systemctl enable --now --user container-mynginx.service
$ curl --head localhost:8080 | head -n1
HTTP/1.1 200 OK
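
One caveat when running the service as a regular user: by default, user services only run while you're logged in. If you want the container to start at boot and keep running after you log out, enable lingering for your account:

$ loginctl enable-linger $USER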

Run pods of containers

Because containers are cheap, you can readily launch more than one container to meet the demand for your service. With two (or more) containers offering the same service, you spread the workload and improve the odds that incoming requests are handled successfully.

You can group containers together in pods, which Podman (as its name suggests) can create:

$ systemctl stop --user container-mynginx.service
$ podman run -dt --pod new:mypod -p 8080:80 nginx
$ podman pod ps
POD ID     NAME   STATUS  CREATED  INFRA ID  # OF CONTAINERS
26424cc... mypod  Running 22m ago  e25b3...   2

This can also be automated using systemd:

$ podman generate systemd mypod \
    --restart-policy=always -t 5 -f -n
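
As before, you can install and enable the generated units as a regular user. This is a sketch assuming the defaults above: with the -n flag, the pod unit is named after the pod, and matching container unit files are generated alongside it:

$ mv ./pod-mypod.service ./container-*.service ~/.config/systemd/user/
$ systemctl enable --now --user pod-mypod.service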

Clusters of pods and containers

It’s probably clear that containers offer diverse options for how you deploy networked applications and services, especially when you use the right tools to manage them. Both Podman and systemd integrate with containers very effectively, and they can help ensure that your containers are available when they’re needed.

But you don’t really want to sit in front of your servers all day and all night just so you can manually add containers to pods any time the whole internet decides to pay you a visit. Even if you could do that, containers are only as robust as the computer they run on. Eventually, containers running on a single server do exhaust that server’s bandwidth and memory.

The solution is a Kubernetes cluster: lots of servers, with one acting as a “control plane” where all configuration is entered and many, many others acting as compute nodes to ensure your containers have all the resources they need. Kubernetes is a big project, and there are many other projects, like Terraform, Helm, and Ansible, that interface with Kubernetes to make common tasks scriptable and easy. It’s an important topic for all levels of systems administrators, architects, and developers.
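
To make that concrete, here is roughly what the Nginx example looks like once a cluster is involved. This is only a sketch, assuming you already have a cluster and a configured kubectl; the deployment name mynginx and the replica count are arbitrary:

$ kubectl create deployment mynginx --image=nginx
$ kubectl scale deployment mynginx --replicas=3
$ kubectl expose deployment mynginx --port=80
$ kubectl get pods

The control plane schedules those replicas onto whatever compute nodes have capacity and replaces them if they fail, which is the same resilience you configured by hand with systemd, handled for you at cluster scale.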

To learn all about container orchestration with Kubernetes, download our free eBook: A guide to orchestration with Kubernetes. The guide teaches you how to set up a local virtual cluster, deploy an application, set up a graphical interface, understand the YAML files used to configure Kubernetes, and more.


Image by: William Kenlon, CC BY-SA 4.0 (http://www.williamkenlon.com)


This work is licensed under a Creative Commons Attribution-Share Alike 4.0 International License.


