Eugeny Shtoltc

IT Cloud


gcloud config list project

      [core]

      project = node-cluster-243923

      Your active configuration is: [default]

      essh@kubernetes-master:~/node-cluster$ gcloud config set project node-cluster-243923

      Updated property [core/project].

      essh@kubernetes-master:~/node-cluster$ gcloud compute instances list

      NAME ZONE INTERNAL_IP EXTERNAL_IP STATUS

      gke-node-ks-default-pool-2e5073d4-csmg europe-north1-a 10.166.0.2 35.228.96.97 RUNNING

      gke-node-ks-node-ks-pool-ccbaf5c6-4xgc europe-north1-a 10.166.15.233 35.228.82.222 RUNNING

      gke-node-ks-default-pool-72a6d4a3-ldzg europe-north1-b 10.166.15.231 35.228.143.7 RUNNING

      gke-node-ks-node-ks-pool-9ee6a401-ngfn europe-north1-b 10.166.15.234 35.228.129.224 RUNNING

      gke-node-ks-default-pool-d370036c-kbg6 europe-north1-c 10.166.15.232 35.228.117.98 RUNNING

      gke-node-ks-node-ks-pool-d7b09e63-q8r2 europe-north1-c 10.166.15.235 35.228.85.157 RUNNING

Switch gcloud to the second project and see that it is empty:

      essh@kubernetes-master:~/node-cluster$ gcloud config set project node-cluster-prod-244519

      Updated property [core/project].

      essh@kubernetes-master:~/node-cluster$ gcloud config list project

      [core]

      project = node-cluster-prod-244519

      Your active configuration is: [default]

      essh@kubernetes-master:~/node-cluster$ gcloud compute instances list

      Listed 0 items.

      Last time, for node-cluster-243923, we created a service account and created the cluster on its behalf. To work with multiple accounts from Terraform, we create a service account for the new project through IAM & Admin -> Service Accounts. We will need two separate folders to run Terraform in, so as to separate the SSH connections, which use different authorization keys. If we put both providers with their different keys into one configuration, the connection for the first project succeeds, but when Terraform proceeds to create the cluster for the second project it is rejected, because the key from the first project is not valid for the second. There is another possibility: activate the account as an organization account (this requires a website and an email address, both verified by Google); then projects can be created from code without using the admin panel. Now for the dev environment:

      essh@kubernetes-master:~/node-cluster$ ./terraform destroy

      essh@kubernetes-master:~/node-cluster$ mkdir dev

      essh@kubernetes-master:~/node-cluster$ cd dev/

      essh@kubernetes-master:~/node-cluster/dev$ gcloud config set project node-cluster-243923

      Updated property [core/project].

      essh@kubernetes-master:~/node-cluster/dev$ gcloud config list project

      [core]

      project = node-cluster-243923

      Your active configuration is: [default]

      essh@kubernetes-master:~/node-cluster/dev$ cp ../kubernetes_key.json ../main.tf .

      essh@kubernetes-master:~/node-cluster/dev$ cat main.tf

      provider "google" {
        alias = "dev"
        credentials = file("./kubernetes_key.json")
        project = "node-cluster-243923"
        region = "europe-west2"
      }

      module "kubernetes_dev" {
        source = "../Kubernetes"
        node_pull = false
        providers = {
          google = google.dev
        }
      }

      data "google_client_config" "default" {}

      module "Nginx" {
        source = "../nodejs"
        providers = {
          google = google.dev
        }
        image = "gcr.io/node-cluster-243923/nodejs_cluster:latest"
        endpoint = module.kubernetes_dev.endpoint
        access_token = data.google_client_config.default.access_token
        cluster_ca_certificate = module.kubernetes_dev.cluster_ca_certificate
      }
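The prod folder would get an analogous configuration pointed at the second project. A sketch of what it could look like, assuming the key of the service account created for node-cluster-prod-244519 is saved next to it as kubernetes_prod_key.json (the file name and the module body are assumptions, mirroring the dev setup):

```hcl
# ./prod/main.tf — hypothetical mirror of the dev configuration
provider "google" {
  alias = "prod"
  # key of the service account created for the second project (assumed file name)
  credentials = file("./kubernetes_prod_key.json")
  project = "node-cluster-prod-244519"
  region = "europe-west2"
}

module "kubernetes_prod" {
  source = "../Kubernetes"
  node_pull = false
  providers = {
    google = google.prod
  }
}
```

Because each folder holds exactly one key and one provider, Terraform never tries to reach the second project with the first project's credentials.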

      essh@kubernetes-master:~/node-cluster/dev$ ../terraform init

      essh@kubernetes-master:~/node-cluster/dev$ ../terraform apply

      essh@kubernetes-master:~/node-cluster/dev$ gcloud compute instances list

      NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS

      gke-node-ks-default-pool-71afadb8-4t39 europe-north1-a n1-standard-1 10.166.0.60 35.228.96.97 RUNNING

      gke-node-ks-node-ks-pool-134dada1-3cdf europe-north1-a n1-standard-1 10.166.0.61 35.228.117.98 RUNNING

      gke-node-ks-node-ks-pool-134dada1-c476 europe-north1-a n1-standard-1 10.166.15.194 35.228.82.222 RUNNING

      essh@kubernetes-master:~/node-cluster/dev$ gcloud container clusters get-credentials node-ks

      Fetching cluster endpoint and auth data.

      kubeconfig entry generated for node-ks.

      essh@kubernetes-master:~/node-cluster/dev$ kubectl get pods -o wide

      NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE

      terraform-nodejs-6fd8498cb5-29dzx 1/1 Running 0 2m57s 10.12.3.2 gke-node-ks-node-ks-pool-134dada1-c476 <none>

      terraform-nodejs-6fd8498cb5-jcbj6 0/1 Pending 0 2m58s <none> <none> <none>

      terraform-nodejs-6fd8498cb5-lvfjf 1/1 Running 0 2m58s 10.12.1.3 gke-node-ks-node-ks-pool-134dada1-3cdf <none>

      As you can see, the PODs were distributed across the node pool, without landing on the node with the Kubernetes system services due to a lack of free resources. It is important to note that the number of nodes in the pool was increased automatically, and only the specified limit prevented a third node from being created in the pool. If we set remove_default_node_pool to true, we merge the Kubernetes system PODs with our own PODs. By resource requests, Kubernetes takes up a little more than one core and our POD takes half a core, so the remaining PODs were not created, but we saved on resources:

      essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud compute instances list

      NAME ZONE MACHINE_TYPE PREEMPTIBLE INTERNAL_IP EXTERNAL_IP STATUS

      gke-node-ks-node-ks-pool-495b75fa-08q2 europe-north1-a n1-standard-1 10.166.0.57 35.228.117.98 RUNNING

      gke-node-ks-node-ks-pool-495b75fa-wsf5 europe-north1-a n1-standard-1 10.166.0.59 35.228.96.97 RUNNING

      essh@kubernetes-master:~/node-cluster/Kubernetes$ gcloud container clusters get-credentials node-ks

      Fetching cluster endpoint and auth data.

      kubeconfig entry generated for node-ks.

      essh@kubernetes-master:~/node-cluster/Kubernetes$ kubectl get pods -o wide

      NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE

      terraform-nodejs-6fd8498cb5-97svs 1/1 Running 0 14m 10.12.2.2 gke-node-ks-node-ks-pool-495b75fa-wsf5 <none>

      terraform-nodejs-6fd8498cb5-d9zkr 0/1 Pending 0 14m <none> <none> <none>

      terraform-nodejs-6fd8498cb5-phk8x 0/1 Pending 0 14m <none> <none> <none>
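The remove_default_node_pool change that produced this layout can be sketched in the cluster resource. A minimal fragment using the Terraform Google provider's attributes; the resource layout is an assumption, not taken verbatim from the Kubernetes module:

```hcl
# Sketch: dropping the default pool so system PODs and our PODs share one pool
resource "google_container_cluster" "node-ks" {
  name                     = "node-ks"
  location                 = "europe-north1-a"
  remove_default_node_pool = true  # delete the default pool right after cluster creation
  initial_node_count       = 1     # required, but its node is removed with the pool
}

resource "google_container_node_pool" "node-ks-pool" {
  name       = "node-ks-pool"
  cluster    = google_container_cluster.node-ks.name
  location   = "europe-north1-a"
  node_count = 2
  node_config {
    machine_type = "n1-standard-1"
  }
}
```

With only the custom pool left, the system PODs compete with our PODs for the same two n1-standard-1 nodes, which is exactly why two of the three application PODs stayed Pending above.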

      After creating a service account, add the key and check it:

      essh