Kirill Kazakov

Kubernetes Cookbook



image to the registry by using the following command:

      docker tag auth-app:latest <username>/auth-app:latest

      docker push <username>/auth-app:latest

      Imperative Deployment with kubectl run

      The fastest way to deploy an application instantly is to use the "kubectl run" command.

      This command creates a pod Kubernetes object. A pod is the smallest and simplest unit of deployment in Kubernetes. At this point, let’s assume that it is a group of one or more containers that share storage, network, and specification. Also, it is the basic building block of Kubernetes.
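      For reference, the pod that "kubectl run" creates corresponds roughly to the following declarative manifest. This is a sketch, not the exact object the cluster stores; the image name reuses the placeholder from the build step, and the `run: auth-app` label is what current kubectl versions apply by default:

```yaml
# Approximate declarative equivalent of `kubectl run auth-app --image=<username>/auth-app:latest --port=8080`
apiVersion: v1
kind: Pod
metadata:
  name: auth-app
  labels:
    run: auth-app          # label kubectl run sets automatically
spec:
  containers:
  - name: auth-app
    image: <username>/auth-app:latest
    ports:
    - containerPort: 8080  # the port the application listens on
```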

      Let’s start Minikube and create a deployment. Use the following command:

      kubectl run auth-app --image=<username>/auth-app:latest --port=8080

      Then check the pod status by using the following command:

      kubectl get pods

      You will get the following output:

      NAME       READY   STATUS    RESTARTS   AGE

      auth-app   1/1     Running   0          4m55s

      To see the events that brought the pod to the Running state, use the following command:

      kubectl get events --field-selector involvedObject.name=auth-app

      You will get the following output:

      LAST SEEN TYPE REASON OBJECT MESSAGE

      10m Normal Scheduled pod/auth-app Successfully assigned default/auth-app to minikube

      10m Normal Pulling pod/auth-app Pulling image "<username>/auth-app:latest"

      10m Normal Pulled pod/auth-app Successfully pulled image "<username>/auth-app:latest" in 7.158188757s

      10m Normal Created pod/auth-app Created container auth-app

      10m Normal Started pod/auth-app Started container auth-app

      The pod reached the Running state in four steps. First, it was scheduled to a node. Then the node pulled the image from the registry. After that, the container was created and started. We now have a running pod, but we cannot access it from outside the cluster. To do that, we need to expose the pod's port.

      Exposing Your Application with Port Forwarding

      To expose the pod to the outside world, we need to use the "kubectl port-forward" command. It forwards a local port to a port on the pod. Use the following command to make the pod accessible on port 8080:

      kubectl port-forward pod/auth-app 8080:8080

      After that, you can request the "/health" endpoint by using the following command:

      curl http://localhost:8080/health

      You will get the following output:

      {"status": "OK"}

      Also, we can check the pod's access log by using the following command:

      kubectl logs -f pod/auth-app

      You will get the following line, logged specifically for our request:

      [2023-11-11T12:58:01Z INFO actix_web::middleware::logger] 127.0.0.1 "GET /health HTTP/1.1" 200 15 "-" "curl/8.1.2" 0.000163

      Port forwarding exposes the pod, but it is not advised for production-like infrastructure. It is not scalable, forwards only one port at a time, and is insecure. It is also unreliable, because it has no retry mechanism. And it remains an imperative, less convenient command.

      You can use port forwarding with complete confidence in a local development environment, for example, when you need to debug or test the application manually. Sometimes it also makes sense in CI/CD pipelines: when you only need to run integration or system tests against the application, a full declarative description looks redundant compared to a simple command.
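      For a more durable, declarative way to expose the application, Kubernetes provides the Service object. As a hedged sketch (it assumes the `run: auth-app` label that current kubectl versions apply with "kubectl run"; a ClusterIP Service only exposes the pod inside the cluster):

```yaml
# Minimal Service routing cluster traffic to the auth-app pod (sketch).
apiVersion: v1
kind: Service
metadata:
  name: auth-app
spec:
  selector:
    run: auth-app      # must match the pod's labels
  ports:
  - port: 8080         # port the Service listens on
    targetPort: 8080   # container port the traffic is forwarded to
```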

      Conclusion

      In this section, we introduced Minikube as a local Kubernetes environment, outlined its installation and usage, and demonstrated deploying and managing an application imperatively, emphasizing Minikube's capabilities for local development, testing, and learning Kubernetes fundamentals.

      Preparing Your Project for Kubernetes Migration

      Architectural Redesign for Kubernetes Optimization

      This section will discuss principles and patterns to help you scale and manage your workloads on Kubernetes. Kubernetes can handle many kinds of workloads, but your design choices determine how easy they are to operate and what is possible. The Twelve-Factor App philosophy is a popular methodology for creating cloud-ready web apps. It helps you focus on the most essential characteristics.

      Although checking out the Twelve-Factor App philosophy is highly recommended, we will discuss only some factors here. We will also discuss the most common anti-patterns and how to avoid them.

      Choosing Between Stateless and Stateful Applications

      The first factor that is on everyone's lips is the application's state. Kubernetes has robust mechanisms to handle both stateless and stateful applications. To make applications easier to scale and manage, it is essential to strive for statelessness, keeping containers as ephemeral as possible. You can also move the state to a separate service, like a database. This could be a cloud service like Amazon RDS or Google Cloud SQL. Scaling managed databases and other storage services independently from your application is simple. Lastly, running stateful applications directly on Kubernetes takes extra effort and expertise. In the long term, however, it will give you great flexibility and operational efficiency.

      Embracing Decoupling

      The next point is that decoupling applications into multiple containers makes scaling horizontally and reusing containers easier. The ideal is one process per container, but that is not always possible. Microservice design is something to strive for, but it is worth noting that microservices are not a silver bullet. They have their drawbacks, such as increased complexity and overhead. You should use them only when it makes sense.
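      One common form of decoupling is pairing the application container with a helper container in the same pod. As a hedged sketch (the sidecar here is illustrative; the shared-volume path and container names are assumptions, not something from the application above):

```yaml
# Sketch: one pod, two decoupled containers sharing a log volume.
apiVersion: v1
kind: Pod
metadata:
  name: auth-app
spec:
  containers:
  - name: auth-app
    image: <username>/auth-app:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/app     # app writes its logs here (assumed path)
  - name: log-shipper             # illustrative sidecar container
    image: fluent/fluent-bit:latest
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
      readOnly: true              # sidecar only reads the logs
  volumes:
  - name: logs
    emptyDir: {}                  # ephemeral volume shared by both containers
```

      Each container scales, fails, and gets replaced with the pod as a unit, while the images themselves stay single-purpose and reusable.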

      Managing Application Configuration

      The third factor is configuration. It means that the application’s configuration must be stored separately. To keep the configuration for Kubernetes applications, you should use ConfigMap or Secret Kubernetes objects, mapping its data to your application’s environment variables or configuration files. You can always use third-party security storage like HashiCorp Vault or AWS Secrets Manager with or without Kubernetes integration.

      Storing configuration in the code is a common beginner mistake that conflicts with the image-immutability concept: you have to rebuild the image to change the configuration. It is also insecure, hurts scalability, and is inflexible.
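      As a minimal sketch of this factor, the manifest below keeps configuration in a ConfigMap and maps it into the container's environment variables. The ConfigMap name and the `LOG_LEVEL` key are illustrative assumptions, not settings the application above actually reads:

```yaml
# Sketch: configuration stored outside the image, injected as env vars.
apiVersion: v1
kind: ConfigMap
metadata:
  name: auth-app-config
data:
  LOG_LEVEL: "info"        # example key; your app defines its own
---
apiVersion: v1
kind: Pod
metadata:
  name: auth-app
spec:
  containers:
  - name: auth-app
    image: <username>/auth-app:latest
    envFrom:
    - configMapRef:
        name: auth-app-config   # every key becomes an env var
```

      Changing the configuration now means updating the ConfigMap and restarting the pod, with no image rebuild. Sensitive values would go into a Secret object instead, which is mapped the same way.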

      Centralizing Logging

      The fourth crucial aspect is how logging is implemented within the application. The application should write its logs to standard output (stdout) and standard error (stderr). Agents such as Filebeat or Fluent Bit are instrumental in collecting these logs and transmitting them to processors like Fluentd or Vector, which should be configured with log pipelines matching the log format. It is advisable to store the logs in a database such as Elasticsearch or Loki and later access them using visualization tools like Kibana or Grafana.

      In Chapter 14, we'll talk about logging aspects related to K8s. But logging in general is complicated and would require its own book to cover all the nuances.

      Implementing Health and Readiness Probes

      The next important