"colima",
"DockerEndpoint": "unix:///Users/m_muravyev/.colima/default/docker.sock",
"KubernetesEndpoint": "",
"ContextType": "moby",
"Name": "colima",
"StackOrchestrator": ""
},
{
"Current": false,
"Description": "",
"DockerEndpoint": "unix:///Users/m_muravyev/.docker/run/docker.sock",
"KubernetesEndpoint": "",
"ContextType": "moby",
"Name": "desktop-linux",
"StackOrchestrator": ""
}
]
The colima context is the default one, pointing to the Docker daemon inside the Colima virtual machine. The desktop-linux context is Docker Desktop's default context. You can switch between them at any time.
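Switching contexts is a single CLI call; for example, to make the Colima context active and confirm the change:

```shell
# Switch the active Docker context to Colima
docker context use colima

# List contexts; the active one is marked with an asterisk
docker context ls
```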
Building Multi-Architecture Docker Images
Docker, Podman, and Colima support multi-architecture images, a powerful feature that lets you create and share container images that run on different hardware. This section briefly covers the concept of multi-arch images and how to build them.
Let's refresh our memory about computer architecture. The Rust compiler can build an application for different architectures; by default it targets the host architecture. For example, if you compile on a modern Mac with an M-series chip, you get a binary for that machine, because the M-series chips use the "arm64" architecture. This differs from the common "amd64" architecture found on most regular Windows and Linux systems.
You can use Rust's cross-compilation support to compile a project for any target architecture, regardless of the host platform. A couple of extra flags are enough to produce a binary for Apple's M-series chips even on a regular Linux machine. No matter what our host system is, the Rust compiler will produce an M-chip-compatible binary:
rustup target add aarch64-apple-darwin # add the M-series target triple
cargo build --release --target aarch64-apple-darwin # build the binary using that target
To build the application for Linux, we can use the target triple "x86_64-unknown-linux-gnu". Don't worry about the "unknown" part; it is just a placeholder for the vendor, meaning any vendor. The "gnu" part means the GNU C library (glibc) is used, the most common C library on Linux.
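As a sketch, the same two-step recipe applies for the Linux target (note that when cross-compiling from macOS, linking may additionally require a cross-linker, for example via the `cross` tool):

```shell
# Add the 64-bit Linux (glibc) target triple
rustup target add x86_64-unknown-linux-gnu

# Build a release binary for that target
cargo build --release --target x86_64-unknown-linux-gnu

# The resulting binary lands in target/x86_64-unknown-linux-gnu/release/
```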
It is important to note that this method has drawbacks compared to building images that support multiple architectures natively:
– Cross-compilation adds complexity and overhead to the build process because it works differently for each programming language.
– Building an image takes more time because of installing and configuring the cross-compilation toolchains.
– Creating distinct Dockerfiles for each architecture becomes necessary, leading to a less maintainable and scalable approach.
– The image's architecture has to be encoded in its tag or name, whereas a multi-arch image keeps a single tag for all architectures.
Let’s create a multi-arch image for our application. We will use the Dockerfile we created earlier.
docker buildx create --use --name multi-arch # create a builder instance
docker buildx build --platform linux/amd64,linux/arm64 -t auth-app:latest .
Buildx is a Docker CLI plugin that extends the Docker build command with BuildKit, Docker's next-generation build engine. Because we are using Colima with the Docker runtime inside, we can use Buildx; Podman also supports it. The `--platform` flag specifies the target platforms: "linux/amd64" is the default platform, and "linux/arm64" is the platform for Apple's M-series chips.
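On Linux hosts, emulated cross-platform builds may additionally require QEMU binfmt handlers to be registered first. A common way to do this, using the `tonistiigi/binfmt` helper image published by the BuildKit maintainers, is:

```shell
# Register QEMU emulators for foreign architectures (one-time setup)
docker run --privileged --rm tonistiigi/binfmt --install all
```

Docker Desktop and Colima typically ship with this emulation already configured, so this step is usually only needed on plain Linux machines.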
Under the hood, Buildx uses QEMU to emulate the target architectures. The build can take longer than usual because each foreign architecture is emulated rather than run natively. After the build completes, you can check the image's architecture with the following command:
docker inspect auth-app | jq '.[].Architecture'
You need to install the "jq" tool to run this and the following commands. It is a command-line JSON processor that helps you parse and manipulate JSON data.
brew install jq
You will get the following output:
“amd64”
You might notice that only one architecture is listed. This is because Buildx uses the `--output=docker` exporter by default, which cannot export multi-platform images. Instead, multi-platform images must be pushed to a registry, either with `--output=oci` or simply with the `--push` flag. When you use this flag, Docker creates a manifest listing all available architectures for the image and attaches it to the image in the registry it is pushed to. When you later pull the image, the variant matching your architecture is chosen automatically. Let's check the manifest for the [official Rust image](https://hub.docker.com/_/rust) on the Docker Hub registry:
docker manifest inspect rust:1.73-bookworm | jq '.manifests[].platform'
Why don't we need to specify a URL for the remote Docker Hub registry? Because the Docker CLI has a default registry, so the command above is actually equivalent to this fully qualified form:
docker manifest inspect docker.io/rust:1.73-bookworm | jq '.manifests[].platform'
You will see output like so:
{
"architecture": "amd64",
"os": "linux"
}
{
"architecture": "arm",
"os": "linux",
"variant": "v7"
}
{
"architecture": "arm64",
"os": "linux",
"variant": "v8"
}
{
"architecture": "386",
"os": "linux"
}
You can see that the Rust image supports four architectures. Roughly speaking, the "arm" architecture is for devices like the Raspberry Pi, the "386" architecture is for 32-bit x86 systems, the "amd64" architecture is for 64-bit x86 systems, and the "arm64" architecture is for Apple's M-series chips and other 64-bit ARM systems.
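To publish our own multi-arch auth-app image with a manifest like the one above, the earlier build command can be rerun with `--push`. As a sketch (the `myuser` namespace is a placeholder; substitute your own Docker Hub account):

```shell
# Build for both platforms and push straight to the registry,
# which creates and attaches the multi-arch manifest
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  -t myuser/auth-app:latest \
  --push .

# Verify that the pushed manifest lists both architectures
docker manifest inspect myuser/auth-app:latest | jq '.manifests[].platform'
```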
The Role of Docker in Modern Development
Docker has transformed modern software development by providing a standardized approach through containerization. This approach has made software development, testing, and operations more efficient. Docker builds container images for various hardware configurations, including traditional x86-64 and ARM architectures, and it integrates with multiple programming languages, making development and deployment more accessible and versatile for developers.
Docker is helpful for individual development environments and container orchestration and management. Organizations use Docker to streamline their software delivery pipelines, making them more efficient and reliable. Docker provides a comprehensive tool suite for containerization,