
      kubectl exec -it .... -c .... -- bash -c "rm -f healt"

      controlplane $ for i in {1..5}; do echo $i; done
      1
      2
      3
      4
      5

      controlplane $ kubectl delete deploy readiness

      deployment.apps "readiness" deleted

      Consider a situation where a container becomes temporarily unavailable:

      (hostname > health) && (python -m http.server 9000 &) && sleep 60 && rm health && sleep 60 && (hostname > health) && sleep 6000

      /bin/sh -c 'sleep 60; python -m http.server 9000 & PID=$!; sleep 60; kill -9 $PID'
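      To see how Kubernetes reacts to such a temporary outage, the first command can be wrapped in a Pod with an HTTP readiness check against the same port. This is a minimal sketch, not from the original: the pod name unavailability-demo and the python image are assumptions, and the probe reads the same health file that the command creates and removes:

      apiVersion: v1
      kind: Pod
      metadata:
        name: unavailability-demo  # hypothetical name for this sketch
      spec:
        containers:
        - name: web
          image: python  # assumed image, matching the later example
          command: ['sh', '-c', '(hostname > health) && (python -m http.server 9000 &) && sleep 60 && rm health && sleep 60 && (hostname > health) && sleep 6000']
          readinessProbe:
            httpGet:
              path: /health  # http.server serves the working directory, where the health file is written
              port: 9000
            periodSeconds: 5

      While the health file is absent, the GET /health check returns 404 and the pod is shown as 0/1 READY, without being restarted.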

      By default, a container enters the Running state once the scripts in the Dockerfile have finished executing and the script specified in the CMD instruction (or in the Command section of the configuration, if CMD is overridden there) has been launched. But in practice, if we have a database, it still needs to come up (read its data, load it into RAM, and perform other actions), and this can take a lot of time, during which it does not respond to connections, so other applications, although started and ready to accept connections, cannot do so. The container also transitions to the Failed state when its main process crashes. In the case of a database, it may endlessly try to execute an incorrect request and be unable to respond to incoming requests, yet the container will not be restarted, since formally the database daemon (server) has not crashed. For these cases, two probes were introduced: readinessProbe and livenessProbe, which check the container's transition to a working state or its failure using a custom script or an HTTP request.

      esschtolts@cloudshell:~/bitrix (essch)$ cat health_check.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        labels:
          test: healtcheck
        name: healtcheck
      spec:
        containers:
        - name: healtcheck
          image: alpine:3.5
          args:
          - /bin/sh
          - -c
          - sleep 12; touch /tmp/healthy; sleep 10; rm -rf /tmp/healthy; sleep 60
          readinessProbe:
            exec:
              command:
              - cat
              - /tmp/healthy
            initialDelaySeconds: 5
            periodSeconds: 5
          livenessProbe:
            exec:
              command:
              - cat
              - /tmp/healthy
            initialDelaySeconds: 15
            periodSeconds: 5

      After the container starts, the readiness check begins after 5 seconds and repeats every 5 seconds. At the check at 15 seconds of the pod's life, the cat /tmp/healthy readiness check succeeds, since the file is created at the 12th second. At that time the livenessProbe check begins (after its initial delay of 15 seconds), and at the check at 25 seconds it fails, because the file is removed at the 22nd second; after the default three consecutive failures (failureThreshold) the container is recognized as not working and is recreated, which is consistent with the restart appearing closer to the one-minute mark in the output below.
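      Instead of re-running kubectl get pods with pauses, as in the transcript below, the same transitions can be observed continuously with the watch flag (an alternative command, not part of the original session):

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods -w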

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl create -f health_check.yaml && sleep 4 && kubectl get pods && sleep 10 && kubectl get pods && sleep 10 && kubectl get pods

      pod "liveness-exec" created

      NAME READY STATUS RESTARTS AGE

      liveness-exec 0/1 Running 0 5s

      NAME READY STATUS RESTARTS AGE

      liveness-exec 0/1 Running 0 15s

      NAME READY STATUS RESTARTS AGE

      liveness-exec 1/1 Running 0 26s

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods

      NAME READY STATUS RESTARTS AGE

      liveness-exec 0/1 Running 0 53s

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods

      NAME READY STATUS RESTARTS AGE

      liveness-exec 0/1 Running 0 1m

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl get pods

      NAME READY STATUS RESTARTS AGE

      liveness-exec 1/1 Running 1 1m
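      The RESTARTS counter increasing to 1 is the livenessProbe at work. The reason can be confirmed from the pod's event log (an additional command, not part of the original session; the Events section of its output normally contains entries about the failed liveness check and the container being killed):

      esschtolts@cloudshell:~/bitrix (essch)$ kubectl describe pod liveness-exec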

      Kubernetes also provides a startupProbe, which defers the moment when the readiness and liveness probes are allowed to start working. This is useful if, for example, we are loading an application that takes a long time to start. Let's consider it in more detail. Let's take www.katacoda.com/courses/Kubernetes/playground and Python for the experiment. There are TCP, EXEC and HTTP probes, but HTTP is better, as EXEC spawns processes and can leave them behind as "zombie processes". In addition, if the server provides interaction via HTTP, then it is HTTP that needs to be checked (https://www.katacoda.com/courses/kubernetes/playground):

      controlplane $ kubectl version --short

      Client Version: v1.18.0

      Server Version: v1.18.0

      cat << EOF > job.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: healt
      spec:
        containers:
        - name: python
          image: python
          command: ['sh', '-c', 'sleep 60 && (echo "work" > health) && sleep 60 && python -m http.server 9000']
          readinessProbe:
            httpGet:
              path: /health
              port: 9000
            initialDelaySeconds: 3
            periodSeconds: 3
          livenessProbe:
            httpGet:
              path: /health
              port: 9000
            initialDelaySeconds: 3
            periodSeconds: 3
          startupProbe:
            exec:
              command:
              - cat
              - /health
            initialDelaySeconds: 3
            periodSeconds: 3
        restartPolicy: OnFailure
      EOF

      controlplane $ kubectl create -f job.yaml

      pod/healt created

      controlplane $ kubectl get pods # not loaded yet

      NAME READY STATUS RESTARTS AGE

      healt 0/1 Running 0 11s

      controlplane $ sleep 30 && kubectl get pods # not loaded yet, but the image has already been pulled

      NAME READY STATUS RESTARTS AGE

      healt 0/1 Running 0 51s

      controlplane $ sleep 60 && kubectl get pods

      NAME READY STATUS RESTARTS AGE

      healt 0/1 Running 1 116s

      controlplane $ kubectl delete -f job.yaml

      pod "healt" deleted

      Self-diagnosis of a microservice application

      Let's consider how the