Kirill Kazakov

Kubernetes Cookbook



Docker has practically become synonymous with containers, and this reputation is well-deserved. Docker was the first tool to introduce many users to the concept of containers, and it made managing the container lifecycle, communication, and orchestration easier.

      What is Docker?

      The term “Docker” encompasses various meanings. At a broad level, Docker refers to a collection of containerization tools, including Docker Desktop and Docker Compose. At a more detailed level, Docker represents a container image format, a container runtime library, and a suite of command-line tools. Additionally, Docker, Inc. is the company that develops and maintains these tools. Finally, Docker, Inc. founded the Open Container Initiative (OCI), a critical governance structure for container standards.

      Docker Engine vs. Docker Desktop

      As of today, Docker, Inc. offers two primary methods to use Docker: Docker Engine and Docker Desktop.

      If you run a popular Linux distribution, you can install Docker Engine directly: run the official installation script or use your package manager. The Docker Engine installation includes the Docker daemon (dockerd) and the Docker client (docker). Docker Engine is highly regarded for its simplicity and ease of use.
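      For example, on most Linux distributions a minimal installation with the official convenience script looks roughly like this (shown as an illustration; check the documentation for your distribution before running it):

      # Download and run Docker's official installation script
      curl -fsSL https://get.docker.com -o get-docker.sh
      sudo sh get-docker.sh

      # Optionally, allow running docker without sudo
      sudo usermod -aG docker $USER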

      Docker Desktop helps you use Docker Engine through a graphical interface and a set of useful tools. If you are on macOS or Windows, the official way to get Docker is through Docker Desktop. To run Docker Engine directly on these operating systems, you would need to run it inside a Linux virtual machine, using tools like VirtualBox, Hyper-V, or Vagrant to set up and manage the VMs.

      Docker Desktop itself uses a virtual machine that runs a Linux environment with Docker Engine as its core component. The choice of virtualization technology depends on the host operating system: on Windows it can use the Windows Subsystem for Linux (WSL) or Hyper-V, and on macOS it may use HyperKit or QEMU. You don’t have to know how Docker Desktop’s virtualization works; you can use it just like Docker Engine.
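      One way to peek at this indirection is to list your Docker contexts: recent Docker Desktop versions register the VM-backed endpoint as its own context (typically named desktop-linux, though names may differ between versions):

      # List the Docker endpoints known to the client
      docker context ls

      # Inspect the engine running inside the VM
      docker info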

      Exploring Podman

      Podman (the POD Manager) is a more recent container engine, initially released by Red Hat in 2018. Podman differs from Docker in that it doesn’t need a separate daemon to run containers. It uses the libpod library to run OCI-based containers on Linux. On macOS, each Podman machine is backed by QEMU, and on Windows, by WSL. Unlike Docker, Podman can run rootless containers by default, without any prerequisites.

      Podman makes it easy to migrate your project to Kubernetes: it can generate Kubernetes manifests and quickly deploy them to your cluster. This chapter will delve deeper into Podman and its migration capabilities.
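      As a preview, the workflow looks roughly like this (the container name auth-app is hypothetical here; the exact steps are covered later in the chapter):

      # Generate a Kubernetes manifest from a running container
      podman generate kube auth-app > auth-app-pod.yaml

      # Replay the manifest locally with Podman...
      podman play kube auth-app-pod.yaml

      # ...or apply it to a real cluster
      kubectl apply -f auth-app-pod.yaml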

      Colima: The Newcomer

      Colima is a relatively new development tool, released in 2021. It is built on Lima, which provisions Linux virtual machines that ship with the containerd runtime and nerdctl installed. On top of that, Colima adds support for the Docker and Kubernetes runtimes. Colima’s virtual machines use QEMU with the HVF accelerator. Colima works on macOS and Linux and is easier to use than Docker Desktop. The good part is that it’s completely free. Still, it’s important to note that Colima is in its early stages and has a few limitations.
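      To give a feel for it, a typical Colima setup looks roughly like this (assuming Homebrew on macOS; package names may vary on Linux):

      # Install Colima and the Docker CLI it will drive
      brew install colima docker

      # Start a VM with the default Docker runtime...
      colima start

      # ...or with containerd and nerdctl instead
      colima start --runtime containerd

      # Check the VM status
      colima status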

      Docker, Podman, Colima: Distinctions and Considerations

      In most cases, you can simply interchange Docker, Podman, and Colima. However, there are some critical distinctions between them. Podman’s command-line interface is so close to Docker’s that many users get by with a simple alias:

      alias docker=podman

      To use Colima, you must install the Docker or Podman command-line tools.

      When switching from Docker to Podman, users may face minor problems. Podman offers a compatibility mode with Docker, which lets you use the same commands. Still, caution is crucial when switching between these tools in a production environment.
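      In practice, this means most everyday commands are identical. For example, both of the following run the same image (nginx is used here purely as an illustration):

      docker run --rm -p 8080:80 docker.io/library/nginx

      podman run --rm -p 8080:80 docker.io/library/nginx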

      If you prefer a GUI, you can use Docker Desktop or Podman Desktop. Podman Desktop is a multi-engine tool compatible with both the Docker and Podman APIs, which means you can see the containers and images of both engines at once.

      All the container tools mentioned above can run Kubernetes, but their support is not as complete as that of dedicated tools like Rancher, Kind, or Kubespray. The Kubernetes server runs inside the container engine, is less customizable, and is designed for single-node setups, so it is primarily used for local testing.
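      For example, Colima can start a small single-node Kubernetes cluster alongside its container runtime (a sketch for local testing only; Docker Desktop exposes a similar toggle in its settings):

      # Start Colima with a built-in single-node Kubernetes (k3s)
      colima start --kubernetes

      # Verify the node is up
      kubectl get nodes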

      Recipe: Wrap Your Application into a Container

      In this part, we’ll learn how to use various tools to put your application into a container. Assume we have a web microservice called auth-app that handles authorization, written in Rust. We will begin with Docker, then move on to Podman, and finally Colima. Along the way, we will improve our containerized application step by step.

      Containerizing with Docker

      To start with Docker, you need to have Docker Desktop installed. Use the official website (https://docs.docker.com/engine/install/) to get it done, then check the Docker version by using this command:

      docker --version

      You should see something like this:

      Docker version 20.10.7, build f0df350

      We won’t dive deep into our application’s code. You can find it in the GitHub repository. For now, assume that we have the following project structure:

      auth-app/
      ├── src/
      │   ├── main.rs
      ├── Cargo.toml
      ├── .env
      ├── .gitignore
      ├── README.md

      The `main.rs` file serves as the entry point for our Rust application. If we were to run it outside a container, we would first need to build it and install all the Rust dependencies specified in Cargo.toml. We can do this with Cargo, the Rust package manager, which is similar to npm in the JavaScript world or pip in the Python world:

      cargo build

      Then, we can run the application by using the following command:

      cargo run

      And that’s it. The application will keep running, thanks to the Actix framework’s event loop, until you stop it manually. You can verify this by making a curl request to the /health endpoint:

      curl http://localhost:8000/health

      You should see the following response:

      {"status": "OK"}

      Running the application in Docker isn’t significantly different. We need a Dockerfile to build an image. A Dockerfile is a text document containing the instructions used to assemble a container image, and its syntax is straightforward to learn. Let’s create a Dockerfile in the root directory of our project:

      FROM rust:1.73-bookworm as builder
      WORKDIR /app
      COPY . .
      RUN --mount=type=cache,target=$CARGO_HOME/registry/cache \
          cargo build --release --bins

      FROM gcr.io/distroless/cc-debian12
      ENV RUST_LOG=info
      COPY --from=builder /app/target/release/auth-app .
      CMD ["./auth-app", "-a", "0.0.0.0", "-p", "8080"]
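      With this Dockerfile in place, building and running the container would look roughly like this (the image tag auth-app is our own choice, not something the Dockerfile prescribes):

      # Build the image and tag it
      docker build -t auth-app .

      # Run it, mapping the container port to the host
      docker run --rm -p 8080:8080 auth-app

      # Verify the service responds
      curl http://localhost:8080/health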

      Let’s go through this Dockerfile line by line:

      FROM rust:1.73-bookworm as builder

      This line tells Docker to use the official Rust image as a base image. The 1.73-bookworm tag means Rust 1.73 on the Debian 12 Bookworm distribution. We also name this build stage builder; we will refer to it later.

      Many base images from various vendors exist on the Docker Hub public registry. You can find any programming