As a cloud engineer, you are tasked with maintaining a critical application to which new features are added frequently. You need to roll out new features and updates while minimizing user impact and application downtime. The application is containerized and deployed on Kubernetes. Sometimes, you need to scale up and down in response to changes in user demand. You also want to control traffic to the application from the internet. As you maintain and update the application, some of your considerations include the following:
How do you create a deployment manifest for your cluster?
How can you manually scale the number of pods in your deployment?
How can you create a service that controls inbound traffic to your application?
How can you roll out updates with minimal disruption to users?
How can you execute canary deployments to test new features before completing a full rollout?
As a cloud professional familiar with using Azure Kubernetes Service (AKS), you probably have used YAML manifest files to execute Kubernetes deployments on AKS. You have probably used DevOps pipelines to make application code and containers available to your Kubernetes clusters.
To scale the number of pods in response to demand, you can manually alter the manifest file or use kubectl commands. To control inbound traffic to your application, you would build the plan for load balancer deployment in the DevOps pipeline, then execute. When you need to update the container image, you would run the DevOps pipeline to make the container image available, then use kubectl commands to trigger a deployment rollout.
To execute a canary update, you would need to install Prometheus, configure the pipeline, add the manifest file, and execute the deployment pipeline.
With this in mind, now you will explore how to create deployment manifests for Google Kubernetes Engine (GKE) to create, scale, and update deployments.
Overview
In this lab, you explore the basics of using deployment manifests. Manifests are files that contain the configuration required for a deployment, which can be reused across different Pods. Manifests are easy to change.
Objectives
In this lab, you learn how to perform the following tasks:
Create deployment manifests, deploy them to the cluster, and verify Pod rescheduling as nodes are disabled
Trigger manual scaling up and down of Pods in deployments
Trigger deployment rollout (rolling update to new version) and rollbacks
Perform a canary deployment
Lab setup
Access the lab
For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.
Sign in to Qwiklabs using an incognito window.
Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time.
There is no pause feature. You can restart if needed, but you have to start at the beginning.
When ready, click Start lab.
Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.
Click Open Google Console.
Click Use another account and copy/paste credentials for this lab into the prompts.
If you use other credentials, you'll receive errors or incur charges.
Accept the terms and skip the recovery resource page.
After you complete the initial sign-in steps, the project dashboard appears.
Activate Google Cloud Shell
Google Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5 GB home directory and runs on Google Cloud.
Google Cloud Shell provides command-line access to your Google Cloud resources.
In Cloud console, on the top right toolbar, click the Open Cloud Shell button.
Click Continue.
It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID.
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
You can list the active account name with this command:
gcloud auth list
Change to the directory that contains the sample files for this lab:
cd ~/ak8s/Deployments/
Task 1. Create a deployment manifest
You will create a deployment using a sample deployment manifest called nginx-deployment.yaml that has been provided for you. This deployment is configured to run three Pod replicas with a single nginx container in each Pod listening on TCP port 80:
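The provided file is not reproduced here, but based on the description above, a minimal sketch of such a Deployment manifest (the image tag is an assumption, inferred from the rollback to nginx:1.7.9 later in this lab) might look like this:

```yaml
# Hypothetical sketch of nginx-deployment.yaml: three Pod replicas,
# each running a single nginx container listening on TCP port 80.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9   # assumed starting version
        ports:
        - containerPort: 80
```

The `app: nginx` label on the Pod template is what the Service you create later in this lab uses to select these Pods.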
To deploy your manifest, execute the following command:
kubectl apply -f ./nginx-deployment.yaml
To view a list of deployments, execute the following command:
kubectl get deployments
The output should look like this example.
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 0/3 3 0 3s
Wait a few seconds, and repeat the command until the number of AVAILABLE replicas reported by the command matches the number of desired replicas (READY shows 3/3).
The final output should look like the example.
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 42s
Click Check my progress to verify the objective.
Create and deploy manifest nginx deployment
Task 2. Manually scale up and down the number of Pods in deployments
Sometimes, you want to shut down a Pod instance. Other times, you want ten Pods running. In Kubernetes, you can scale a deployment to the desired number of Pod replicas. To shut the Pods down, you scale to zero.
In this task, you scale Pods up and down in the Google Cloud Console and Cloud Shell.
Scale Pods up and down in the console
Switch to the Google Cloud Console tab.
On the Navigation menu, click Kubernetes Engine > Workloads.
Click nginx-deployment (your deployment) to open the Deployment details page.
At the top, click ACTIONS > Scale > Edit Replicas.
Type 1 and click SCALE.
This action scales down your cluster. You should see the Pod status being updated under Managed Pods. You might have to click Refresh.
Scale Pods up and down in the shell
Switch back to the Cloud Shell browser tab.
In the Cloud Shell, to view a list of Pods in the deployments, execute the following command:
kubectl get deployments
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 1/1 1 1 3m
To scale the Pods back up to three replicas, execute the following command:
kubectl scale --replicas=3 deployment nginx-deployment
To view a list of Pods in the deployments, execute the following command:
kubectl get deployments
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 4m
Task 3. Trigger a deployment rollout and a deployment rollback
A deployment's rollout is triggered if and only if the deployment's Pod template (that is, .spec.template) is changed, for example, if the labels or container images of the template are updated. Other updates, such as scaling the deployment, do not trigger a rollout.
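As an illustration (a fragment only, with names assumed from this lab), editing the container image inside the Pod template and re-applying the manifest triggers a rollout, while editing the replica count does not:

```yaml
# Fragment of a Deployment spec. Changing `image` below and re-running
# `kubectl apply` triggers a rollout, because it modifies .spec.template.
# Changing `replicas` only rescales the existing ReplicaSet: no rollout.
spec:
  replicas: 3              # scaling change: no rollout
  template:
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1 # Pod template change: triggers a rollout
```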
In this task, you trigger deployment rollout, and then you trigger deployment rollback.
Trigger a deployment rollout
To update the version of nginx in the deployment, execute the following command:
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record
This updates the container image in your Deployment to nginx v1.9.1.
To view the rollout status, execute the following command:
kubectl rollout status deployment.v1.apps/nginx-deployment
The output should look like the example.
Output:
Waiting for rollout to finish: 1 out of 3 new replicas updated...
Waiting for rollout to finish: 1 out of 3 new replicas updated...
Waiting for rollout to finish: 1 out of 3 new replicas updated...
Waiting for rollout to finish: 2 out of 3 new replicas updated...
Waiting for rollout to finish: 2 out of 3 new replicas updated...
Waiting for rollout to finish: 2 out of 3 new replicas updated...
Waiting for rollout to finish: 1 old replicas pending termination...
Waiting for rollout to finish: 1 old replicas pending termination...
deployment "nginx-deployment" successfully rolled out
To verify the change, get the list of deployments:
kubectl get deployments
The output should look like the example.
Output:
NAME READY UP-TO-DATE AVAILABLE AGE
nginx-deployment 3/3 3 3 6m
Click Check my progress to verify the objective.
Update version of nginx in the deployment
View the rollout history of the deployment:
kubectl rollout history deployment nginx-deployment
The output should look like the example. Your output might not be an exact match.
Trigger a deployment rollback
To roll back to the previous version of the nginx deployment, execute the following command:
kubectl rollout undo deployments nginx-deployment
View the updated rollout history of the deployment:
kubectl rollout history deployment nginx-deployment
View the details of the latest deployment revision:
kubectl rollout history deployment/nginx-deployment --revision=3
The output should look like the example. Your output might not be an exact match but it will show that the current revision has rolled back to nginx:1.7.9.
Task 4. Define service types in the manifest
In this task, you create and verify a service that controls inbound traffic to an application. Services can be configured as ClusterIP, NodePort, or LoadBalancer types. In this lab, you configure a LoadBalancer.
A manifest file called service-nginx.yaml that deploys a LoadBalancer service type has been provided for you. This service is configured to distribute inbound traffic on TCP port 60000 to port 80 on any containers that have the label app: nginx.
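The provided file is not shown here, but based on the description above, a sketch of such a Service manifest (names assumed from this lab) would look like this:

```yaml
# Hypothetical sketch of service-nginx.yaml: a LoadBalancer Service
# that forwards inbound TCP port 60000 to port 80 on any Pod
# carrying the app: nginx label.
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 60000
    targetPort: 80
```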
In the Cloud Shell, to deploy your manifest, execute the following command:
kubectl apply -f ./service-nginx.yaml
This manifest defines a service and applies it to Pods that correspond to the selector. In this case, the manifest is applied to the nginx container that you deployed in task 1. This service also applies to any other Pods with the app: nginx label, including any that are created after the service.
Verify the LoadBalancer creation
To view the details of the nginx service, execute the following command:
kubectl get service nginx
The output should look like the example.
Output:
NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
nginx 10.X.X.X X.X.X.X 60000/TCP app=nginx 1m
When the external IP appears, open http://[EXTERNAL_IP]:60000/ in a new browser tab to see the nginx welcome page served through network load balancing.
Note: It may take a few seconds before the EXTERNAL_IP field is populated for your service. This is normal. Just re-run the kubectl get service nginx command every few seconds until the field is populated.
Click Check my progress to verify the objective.
Deploy manifest file that deploys LoadBalancer service type
Task 5. Perform a canary deployment
A canary deployment is a separate deployment used to test a new version of your application. A single service targets both the canary and the normal deployments, and it can direct a subset of users to the canary version to mitigate the risk of new releases.
The manifest file nginx-canary.yaml that is provided for you deploys a single pod running a newer version of nginx than your main deployment. In this task, you create a canary deployment using this new deployment file:
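The provided file is not reproduced here, but a sketch of such a canary Deployment (the image tag and the extra `track: canary` label are assumptions for illustration) might look like this:

```yaml
# Hypothetical sketch of nginx-canary.yaml: a single replica running a
# newer nginx image. It shares the app: nginx label with the main
# deployment, so the existing Service also routes traffic to it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-canary
  labels:
    app: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
      track: canary       # assumed label distinguishing the canary Pods
  template:
    metadata:
      labels:
        app: nginx
        track: canary
    spec:
      containers:
      - name: nginx
        image: nginx:1.9.1  # assumed newer version
        ports:
        - containerPort: 80
```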
The manifest for the nginx Service you deployed in the previous task uses a label selector to target the Pods with the app: nginx label. Both the normal deployment and this new canary deployment have the app: nginx label. Inbound connections will be distributed by the service to both the normal and canary deployment Pods. The canary deployment has fewer replicas (Pods) than the normal deployment, and thus it is available to fewer users than the normal deployment.
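Because the Service spreads requests roughly evenly across all matching Pods, you can estimate the canary's share of traffic from the replica counts. A quick sketch, assuming the three normal replicas and one canary replica used in this lab:

```shell
# Rough traffic-split estimate, assuming the Service balances requests
# evenly across every Pod carrying the app: nginx label.
NORMAL=3
CANARY=1
TOTAL=$((NORMAL + CANARY))
echo "canary share: $((100 * CANARY / TOTAL))%"
# → canary share: 25%
```

So with this configuration, roughly one in four requests reaches the canary Pod.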
Create the canary deployment based on the configuration file:
kubectl apply -f nginx-canary.yaml
When the deployment is complete, verify that both the nginx and the nginx-canary deployments are present:
kubectl get deployments
Switch back to the browser tab that is connected to the external LoadBalancer service IP and refresh the page. You should continue to see the standard Welcome to nginx page.
Switch back to the Cloud Shell and scale down the primary deployment to 0 replicas:
kubectl scale --replicas=0 deployment nginx-deployment
Verify that the only running replica is now the Canary deployment:
kubectl get deployments
Switch back to the browser tab that is connected to the external LoadBalancer service IP and refresh the page. You should continue to see the standard Welcome to nginx page, showing that the Service is automatically balancing traffic to the canary deployment.
Click Check my progress to verify the objective.
Create a Canary Deployment
Session affinity
The service configuration used in the lab does not ensure that all requests from a single client will always connect to the same Pod. Each request is treated separately and can connect to either the normal nginx deployment or to the nginx-canary deployment.
This potential to switch between different versions may cause problems if there are significant changes in functionality in the canary release. To prevent this, you can set the sessionAffinity field to ClientIP in the specification of the service if you need a client's first request to determine which Pod is used for all subsequent connections.
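A sketch of that change, shown as a fragment of a Service spec such as the one used in this lab:

```yaml
# Fragment of a Service spec: with ClientIP affinity, all requests
# from a given client IP are routed to the same Pod.
spec:
  sessionAffinity: ClientIP
```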
In this lab, you discovered how GKE uses the manifest file to deploy, scale, and execute a canary update to your application. Here are some of the key similarities and differences between GKE and AKS.
Similarities:
GKE and AKS are both managed Kubernetes services that allow developers to deploy, manage, and scale containerized applications.
The basic structure and syntax of YAML files for Kubernetes deployments is the same for GKE as for AKS.
AKS and GKE both use kubectl commands.
Differences:
In GKE, you can specify the attributes of your clusters, Pods, and management without needing to create a DevOps pipeline as you would for AKS deployments. This is because GKE has the backend infrastructure needed to perform the provisioning tasks using the manifest file.
End your lab
When you have completed your lab, click End Lab. Google Cloud Skills Boost removes the resources you’ve used and cleans the account for you.
You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.
The number of stars indicates the following:
1 star = Very dissatisfied
2 stars = Dissatisfied
3 stars = Neutral
4 stars = Satisfied
5 stars = Very satisfied
You can close the dialog box if you don't want to provide feedback.
For feedback, suggestions, or corrections, please use the Support tab.
Copyright 2022 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
Architecting with Google Kubernetes Engine: Creating Kubernetes Engine Deployments