
Manage and Secure Distributed Services on GKE with Cloud Service Mesh

Lab · 1 hour 30 minutes · 7 Credits · Advanced
Note: This lab may incorporate AI tools to support your learning.
This content is not optimized for mobile devices.
For the best experience, use a desktop computer with the link sent to you by email.

GSP1242

Google Cloud Self-Paced Labs

Overview

Cloud Service Mesh is based on the open source Istio technology. A distributed service is a Kubernetes Service deployed to multiple Kubernetes clusters in the same namespace, where it acts as a single logical service. Distributed services are more resilient than single-cluster Kubernetes Services: a distributed service remains operational even if one or more GKE clusters are down, as long as the healthy clusters can serve the expected load.

GKE private clusters allow you to configure the nodes and API server as private resources available only on the Virtual Private Cloud (VPC) network. Running distributed services in GKE private clusters gives enterprises secure and reliable services.

This lab teaches you how to run distributed services on multiple Google Kubernetes Engine (GKE) clusters in Google Cloud. In addition, you learn how to expose a distributed service using Multi Cluster Ingress and Cloud Service Mesh.

Objectives

In this lab, you learn how to perform the following tasks:

  • Create three GKE clusters.
  • Configure two of the GKE clusters as private clusters.
  • Configure one GKE cluster (gke-ingress) as the central configuration cluster.
  • Configure networking (NAT Gateways, Cloud Router, and firewall rules) to allow inter-cluster and egress traffic from the two private GKE clusters.
  • Configure authorized networks to allow API service access from Cloud Shell to the two private GKE clusters.
  • Deploy and configure multi-cluster Cloud Service Mesh to the two private clusters in multi-primary mode.
  • Deploy the Cymbal Bank application on the two private clusters.

Scenario

In this lab, you explore how to deploy the Cymbal Bank sample application on two GKE private clusters. Cymbal Bank is a sample microservices application that consists of multiple microservices and SQL databases that simulate an online banking app. The application consists of a web frontend that clients can access, and several backend services such as balance, ledger, and account services that simulate a bank.

The application includes two PostgreSQL databases that are installed in Kubernetes as StatefulSets. One database is used for transactions, while the other database is used for user accounts. All services except the two databases run as distributed services. This means that Pods for all services run in both application clusters (in the same namespace), and Cloud Service Mesh is configured so that each service appears as a single logical service.

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.

This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito (recommended) or private browser window to run this lab. This prevents conflicts between your personal account and the student account, which may cause extra charges to be incurred to your personal account.
  • Time to complete the lab—remember, once you start, you cannot pause a lab.
Note: Use only the student account for this lab. If you use a different Google Cloud account, you may incur charges to that account.

How to start your lab and sign in to the Google Cloud Console

  1. Click the Start Lab button. If you need to pay for the lab, a pop-up opens for you to select your payment method. On the left is a panel populated with the temporary credentials that you must use for this lab.

    Open Google Console

  2. Copy the username, and then click Open Google Console. The lab spins up resources, and then opens another tab that shows the Sign in page.

    Sign in

    Tip: Open the tabs in separate windows, side-by-side.

  3. In the Sign in page, paste the username that you copied from the Connection Details panel. Then copy and paste the password.

    Important: You must use the credentials from the Connection Details panel. Do not use your Qwiklabs credentials. If you have your own Google Cloud account, do not use it for this lab (this avoids incurring charges to your account).

  4. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Cloud Console opens in this tab.

Activate Cloud Shell

Cloud Shell is a virtual machine loaded with development tools. It offers a persistent 5 GB home directory and runs on Google Cloud, providing command-line access to your Google Cloud resources.

In the Cloud Console, in the top right toolbar, click the Activate Cloud Shell button.

Cloud Shell icon

Click Continue.


It takes a few moments to provision and connect to the environment. When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. For example:

Cloud Shell Terminal

gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.

You can list the active account name with this command:

gcloud auth list

(Output)

Credentialed accounts:
 - <myaccount>@<mydomain>.com (active)

(Example output)

Credentialed accounts:
 - google1623327_student@qwiklabs.net

You can list the project ID with this command:

gcloud config list project

(Output)

[core]
project = <project_ID>

(Example output)

[core]
project = qwiklabs-gcp-44776a13dea667a6

Task 1. Enable the required APIs

Start by enabling the required APIs and the Cloud Service Mesh fleet feature for your project.

  1. Enable the required GKE Hub and Cloud Service Mesh APIs using the following command:
gcloud services enable \
  --project={{{primary_project.project_id|PROJECT_ID}}} \
  container.googleapis.com \
  mesh.googleapis.com \
  gkehub.googleapis.com
  2. Enable the Cloud Service Mesh fleet feature for your project using the following command:
gcloud container fleet mesh enable --project={{{primary_project.project_id|PROJECT_ID}}}

Click Check my progress to verify the objective. Enable the required APIs

Task 2. Create the private GKE clusters

Now, you prepare your environment and create the private GKE clusters before installing Cloud Service Mesh.

Prepare networking for private GKE clusters

  1. In Cloud Shell, create and reserve an external IP address for the NAT gateway using the following command:
gcloud compute addresses create {{{primary_project.default_region | REGION}}}-nat-ip \
  --project={{{primary_project.project_id|PROJECT_ID}}} \
  --region={{{primary_project.default_region | REGION}}}
  2. Store the IP address and its resource name in variables using the following commands:
export NAT_REGION_1_IP_ADDR=$(gcloud compute addresses describe {{{primary_project.default_region | REGION}}}-nat-ip \
  --project={{{primary_project.project_id|PROJECT_ID}}} \
  --region={{{primary_project.default_region | REGION}}} \
  --format='value(address)')

export NAT_REGION_1_IP_NAME=$(gcloud compute addresses describe {{{primary_project.default_region | REGION}}}-nat-ip \
  --project={{{primary_project.project_id|PROJECT_ID}}} \
  --region={{{primary_project.default_region | REGION}}} \
  --format='value(name)')
  3. Create a Cloud NAT gateway in the region of the private GKE clusters using the following commands:
gcloud compute routers create rtr-{{{primary_project.default_region | REGION}}} \
  --network=default \
  --region {{{primary_project.default_region | REGION}}}

gcloud compute routers nats create nat-gw-{{{primary_project.default_region | REGION}}} \
  --router=rtr-{{{primary_project.default_region | REGION}}} \
  --region {{{primary_project.default_region | REGION}}} \
  --nat-external-ip-pool=${NAT_REGION_1_IP_NAME} \
  --nat-all-subnet-ip-ranges \
  --enable-logging
  4. Create a firewall rule that allows Pod-to-Pod communication and Pod-to-API server communication using the following command:
gcloud compute firewall-rules create all-pods-and-master-ipv4-cidrs \
  --project {{{primary_project.project_id|PROJECT_ID}}} \
  --network default \
  --allow all \
  --direction INGRESS \
  --source-ranges 172.16.0.0/28,172.16.1.0/28,172.16.2.0/28,0.0.0.0/0

Pod-to-Pod communication allows the distributed services to communicate with each other across GKE clusters. Pod-to-API server communication lets the Cloud Service Mesh control plane query GKE clusters for service discovery.
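As an aside, each /28 control-plane range above covers only 16 addresses. If you want to reason about which range a given address belongs to, you can check CIDR membership with plain Bash. The helper functions below are illustrative only and not part of the lab:

```shell
# Illustrative helpers (not part of the lab): check whether an address
# falls inside one of the /28 control-plane CIDRs listed above.
ip_to_int() {
  local IFS ip
  ip=$1
  IFS=.
  set -- $ip
  echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}

# usage: in_cidr <ip> <network> <prefix-length>; exit status 0 when inside
in_cidr() {
  local ip net mask
  ip=$(ip_to_int "$1")
  net=$(ip_to_int "$2")
  mask=$(( (0xFFFFFFFF << (32 - $3)) & 0xFFFFFFFF ))
  [ $(( ip & mask )) -eq $(( net & mask )) ]
}

# A /28 holds 16 addresses, so 172.16.0.0/28 spans 172.16.0.0-172.16.0.15.
in_cidr 172.16.0.5  172.16.0.0 28 && echo "172.16.0.5 is inside 172.16.0.0/28"
in_cidr 172.16.0.20 172.16.0.0 28 || echo "172.16.0.20 is outside 172.16.0.0/28"
```

This is why the lab can use three non-overlapping /28 ranges side by side: each cluster's control plane gets its own small, disjoint block.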

  5. Retrieve the public IP addresses of Cloud Shell and the lab-setup VM using the following commands:
export CLOUDSHELL_IP=$(dig +short myip.opendns.com @resolver1.opendns.com)

export LAB_VM_IP=$(gcloud compute instances describe lab-setup \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)' \
  --zone={{{primary_project.default_zone | ZONE}}})

Create two private GKE clusters

In this section, you create two private clusters with authorized networks. You configure the clusters to allow API server access from the Pod IP CIDR ranges (used by the Cloud Service Mesh control plane) and from Cloud Shell, so that you can manage the clusters from your terminal.

  1. Create the first GKE cluster with authorized networks (the --async flag avoids waiting for the first cluster to provision) using the following command:
gcloud container clusters create cluster1 \
  --project {{{primary_project.project_id|PROJECT_ID}}} \
  --zone={{{primary_project.default_zone | ZONE}}} \
  --machine-type "e2-standard-4" \
  --num-nodes "2" --min-nodes "2" --max-nodes "2" \
  --enable-ip-alias --enable-autoscaling \
  --workload-pool={{{primary_project.project_id|PROJECT_ID}}}.svc.id.goog \
  --enable-private-nodes \
  --master-ipv4-cidr=172.16.0.0/28 \
  --enable-master-authorized-networks \
  --master-authorized-networks $NAT_REGION_1_IP_ADDR/32,$CLOUDSHELL_IP/32,$LAB_VM_IP/32 --async
  2. Create the second GKE cluster with authorized networks using the following command:
gcloud container clusters create cluster2 \
  --project {{{primary_project.project_id|PROJECT_ID}}} \
  --zone={{{primary_project.default_zone | ZONE}}} \
  --machine-type "e2-standard-4" \
  --num-nodes "2" --min-nodes "2" --max-nodes "2" \
  --enable-ip-alias --enable-autoscaling \
  --workload-pool={{{primary_project.project_id|PROJECT_ID}}}.svc.id.goog \
  --enable-private-nodes \
  --master-ipv4-cidr=172.16.1.0/28 \
  --enable-master-authorized-networks \
  --master-authorized-networks $NAT_REGION_1_IP_ADDR/32,$CLOUDSHELL_IP/32,$LAB_VM_IP/32

Note: It can take up to 10 minutes to provision the GKE clusters.
  3. Verify that both clusters are in a running state using the following command:
gcloud container clusters list
  4. Run the following commands to connect to both clusters and generate entries in the kubeconfig file:
touch ~/asm-kubeconfig && export KUBECONFIG=~/asm-kubeconfig
gcloud container clusters get-credentials cluster1 --zone {{{primary_project.default_zone | ZONE}}}
gcloud container clusters get-credentials cluster2 --zone {{{primary_project.default_zone | ZONE}}}
  5. Rename the cluster contexts for convenience using the following commands:
kubectl config rename-context \
  gke_{{{primary_project.project_id|PROJECT_ID}}}_{{{primary_project.default_zone | ZONE}}}_cluster1 cluster1

kubectl config rename-context \
  gke_{{{primary_project.project_id|PROJECT_ID}}}_{{{primary_project.default_zone | ZONE}}}_cluster2 cluster2
  6. Confirm that both cluster contexts are properly renamed and configured using the following command:
kubectl config get-contexts --output="name"
  7. Register your clusters to a fleet using the following commands:
gcloud container fleet memberships register cluster1 \
  --gke-cluster={{{primary_project.default_zone | ZONE}}}/cluster1 \
  --enable-workload-identity

gcloud container fleet memberships register cluster2 \
  --gke-cluster={{{primary_project.default_zone | ZONE}}}/cluster2 \
  --enable-workload-identity

Update the authorized networks for the clusters

Note: The Cloud Shell IP address is also part of the authorized networks, which allows you to access and manage clusters from your Cloud Shell terminal.

Cloud Shell public-facing IP addresses are dynamic, so each time you start Cloud Shell, you might get a different public IP address. When you get a new IP address, you lose access to the clusters, as the new IP address isn't part of the authorized networks for the two clusters.

You need to perform the following steps every time you start a new Cloud Shell session during this lab.

If you lose access to the clusters, update the clusters' authorized networks to include the new Cloud Shell IP address:

  1. Set the environment variables using the following commands:
export CLOUDSHELL_IP=$(dig +short myip.opendns.com @resolver1.opendns.com)

export LAB_VM_IP=$(gcloud compute instances describe lab-setup \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)' \
  --zone={{{primary_project.default_zone | ZONE}}})

export NAT_REGION_1_IP_ADDR=$(gcloud compute addresses describe {{{primary_project.default_region | REGION}}}-nat-ip \
  --project={{{primary_project.project_id|PROJECT_ID}}} \
  --region={{{primary_project.default_region | REGION}}} \
  --format='value(address)')
  2. Update the authorized networks for the two clusters using the following commands:
gcloud container clusters update cluster1 \
  --zone={{{primary_project.default_zone | ZONE}}} \
  --enable-master-authorized-networks \
  --master-authorized-networks $NAT_REGION_1_IP_ADDR/32,$CLOUDSHELL_IP/32,$LAB_VM_IP/32

gcloud container clusters update cluster2 \
  --zone={{{primary_project.default_zone | ZONE}}} \
  --enable-master-authorized-networks \
  --master-authorized-networks $NAT_REGION_1_IP_ADDR/32,$CLOUDSHELL_IP/32,$LAB_VM_IP/32

Click Check my progress to verify the objective. Prepare your environment

Task 3. Install Cloud Service Mesh

In this task, you install Cloud Service Mesh on the two GKE clusters and configure the clusters for cross-cluster service discovery.

  1. Install Cloud Service Mesh on both clusters with the fleet API using the following command:
gcloud container fleet mesh update --management automatic --memberships cluster1,cluster2
  2. After managed Cloud Service Mesh is enabled on the clusters, set a watch for the mesh to be installed using the following command:
watch -g "gcloud container fleet mesh describe | grep 'code: REVISION_READY'"

This command automatically exits and returns to the command prompt once REVISION_READY is detected in the output.

Note: It can take up to 10 minutes to install Cloud Service Mesh on both clusters.
  3. Verify the status by running the following command:
gcloud container fleet mesh describe
  4. Install Cloud Service Mesh ingress gateways for both clusters using the following commands:
kubectl --context=cluster1 create namespace asm-ingress
kubectl --context=cluster1 label namespace asm-ingress istio-injection=enabled --overwrite
kubectl --context=cluster2 create namespace asm-ingress
kubectl --context=cluster2 label namespace asm-ingress istio-injection=enabled --overwrite

cat <<'EOF' > asm-ingress.yaml
apiVersion: v1
kind: Service
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
spec:
  type: LoadBalancer
  selector:
    asm: ingressgateway
  ports:
  - port: 80
    name: http
  - port: 443
    name: https
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
spec:
  selector:
    matchLabels:
      asm: ingressgateway
  template:
    metadata:
      annotations:
        # This is required to tell GKE Service Mesh to inject the gateway with the
        # required configuration.
        inject.istio.io/templates: gateway
      labels:
        asm: ingressgateway
    spec:
      containers:
      - name: istio-proxy
        image: auto # The image will automatically update each time the pod starts.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: asm-ingressgateway-sds
  namespace: asm-ingress
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: asm-ingressgateway-sds
  namespace: asm-ingress
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: asm-ingressgateway-sds
subjects:
- kind: ServiceAccount
  name: default
EOF

kubectl --context=cluster1 apply -f asm-ingress.yaml
kubectl --context=cluster2 apply -f asm-ingress.yaml
  1. Verify that the Cloud Service Mesh ingress gateways are deployed using the following commands:
kubectl --context=cluster1 get pod,service -n asm-ingress kubectl --context=cluster2 get pod,service -n asm-ingress

The output for both clusters should look as follows.

Output:

NAME                                      READY   STATUS    RESTARTS   AGE
pod/asm-ingressgateway-5894744dbd-zxlgc   1/1     Running   0          84s

NAME                         TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)                      AGE
service/asm-ingressgateway   LoadBalancer   10.16.2.131   34.102.100.138   80:30432/TCP,443:30537/TCP   92s

Note: The ingress gateway Pods may not immediately display a READY status of 1/1. It can take around 15–20 minutes for full initialization. Wait and proceed only once all Pods in both clusters show READY 1/1.

Click Check my progress to verify the objective. Install Cloud Service Mesh

After the Cloud Service Mesh control plane and ingress gateways are installed for both clusters, cross-cluster service discovery is enabled with the fleet API. Cross-cluster service discovery allows the two clusters to discover service endpoints from the remote cluster. Distributed services run on multiple clusters in the same namespace.

The clusters and Cloud Service Mesh are now configured.

Task 4. Deploy the Cymbal Bank application

Now, it's time to deploy the application and access Cymbal Bank.

  1. Clone the Cymbal Bank GitHub repository (called "Bank of Anthos" in the repository) using the following command:
git clone https://github.com/GoogleCloudPlatform/bank-of-anthos.git ${HOME}/bank-of-anthos
  2. Create and label a bank-of-anthos namespace in both clusters using the following commands. The label enables automatic injection of the sidecar Envoy proxies in every Pod within the labeled namespace:
kubectl create --context=cluster1 namespace bank-of-anthos
kubectl label --context=cluster1 namespace bank-of-anthos istio-injection=enabled
kubectl create --context=cluster2 namespace bank-of-anthos
kubectl label --context=cluster2 namespace bank-of-anthos istio-injection=enabled
  3. Deploy the Cymbal Bank application to both clusters in the bank-of-anthos namespace using the following commands:
kubectl --context=cluster1 -n bank-of-anthos apply -f ${HOME}/bank-of-anthos/extras/jwt/jwt-secret.yaml
kubectl --context=cluster2 -n bank-of-anthos apply -f ${HOME}/bank-of-anthos/extras/jwt/jwt-secret.yaml
kubectl --context=cluster1 -n bank-of-anthos apply -f ${HOME}/bank-of-anthos/kubernetes-manifests
kubectl --context=cluster2 -n bank-of-anthos apply -f ${HOME}/bank-of-anthos/kubernetes-manifests

The Kubernetes Services must exist in both clusters for service discovery to work. When a service in one of the clusters makes a request, it first performs a DNS lookup for the hostname to get an IP address. In GKE, the kube-dns server running in the cluster handles this lookup, so a configured Service definition is required in each cluster.
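For reference, the name behind such a lookup follows the standard Kubernetes Service DNS convention, `<service>.<namespace>.svc.<cluster-domain>`, with `cluster.local` as the default cluster domain. A quick sketch using the lab's own names:

```shell
# Standard Kubernetes Service DNS naming; cluster.local is the default
# cluster domain suffix that kube-dns serves records under.
ns=bank-of-anthos
svc=frontend
echo "${svc}.${ns}.svc.cluster.local"
# → frontend.bank-of-anthos.svc.cluster.local
```

Inside the namespace, Pods can use the short name `frontend`; the DNS search path expands it to the fully qualified name shown above.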

  4. Delete the StatefulSets from cluster2 so that the two PostgreSQL databases exist in only one of the clusters, using the following commands:
kubectl --context=cluster2 -n bank-of-anthos delete statefulset accounts-db
kubectl --context=cluster2 -n bank-of-anthos delete statefulset ledger-db

Make sure that all pods are running in both clusters.

  5. Get the Pods from cluster1 using the following command:
kubectl --context=cluster1 -n bank-of-anthos get pod

The output should resemble the following.

Output:

NAME                                  READY   STATUS    RESTARTS   AGE
accounts-db-0                         2/2     Running   0          9m54s
balancereader-c5d664b4c-xmkrr         2/2     Running   0          9m54s
contacts-7fd8c5fb6-wg9xn              2/2     Running   1          9m53s
frontend-7b7fb9b665-m7cw7             2/2     Running   1          9m53s
ledger-db-0                           2/2     Running   0          9m53s
ledgerwriter-7b5b6db66f-xhbp4         2/2     Running   0          9m53s
loadgenerator-7fb54d57f8-g5lz5        2/2     Running   0          9m52s
transactionhistory-7fdb998c5f-vqh5w   2/2     Running   1          9m52s
userservice-76996974f5-4wlpf          2/2     Running   1          9m52s

Note: Wait to proceed until all Pods show a Running status in both clusters.
  6. Get the Pods from cluster2 using the following command:
kubectl --context=cluster2 -n bank-of-anthos get pod

The output should resemble the following.

Output:

NAME                                  READY   STATUS    RESTARTS   AGE
balancereader-c5d664b4c-bn2pl         2/2     Running   0          9m54s
contacts-7fd8c5fb6-kv8cp              2/2     Running   0          9m53s
frontend-7b7fb9b665-bdpp4             2/2     Running   0          9m53s
ledgerwriter-7b5b6db66f-297c2         2/2     Running   0          9m52s
loadgenerator-7fb54d57f8-tj44v        2/2     Running   0          9m52s
transactionhistory-7fdb998c5f-xvmtn   2/2     Running   0          9m52s
userservice-76996974f5-mg7t6          2/2     Running   0          9m51s

Make sure that all Pods are running in both clusters.

  7. Deploy the Cloud Service Mesh configs to both clusters using the following commands:
cat <<'EOF' > asm-vs-gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: asm-ingressgateway
  namespace: asm-ingress
spec:
  selector:
    asm: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: frontend
  namespace: bank-of-anthos
spec:
  hosts:
  - "*"
  gateways:
  - asm-ingress/asm-ingressgateway
  http:
  - route:
    - destination:
        host: frontend
        port:
          number: 80
EOF

kubectl --context=cluster1 apply -f asm-vs-gateway.yaml
kubectl --context=cluster2 apply -f asm-vs-gateway.yaml

This command creates a Gateway in the asm-ingress namespace and a VirtualService in the bank-of-anthos namespace for the frontend service, which allows ingress traffic to reach the frontend service.

Gateways are generally owned by the platform admins or the network admins team. Therefore, this Gateway resource is created in the Ingress Gateway namespace owned by the platform admin and could be used in other namespaces via their own VirtualService entries. This is known as a Shared Gateway model.
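To illustrate the Shared Gateway model, a hypothetical second team could reference the platform-owned Gateway from its own namespace using the `<namespace>/<gateway-name>` form in the `gateways` field. Everything below (the payments namespace, hostname, and service) is invented for this sketch and is not deployed in the lab:

```shell
# Illustration only -- do not apply this in the lab. The "payments"
# namespace, hostname, and service below are hypothetical.
cat <<'EOF' > shared-gateway-example.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: payments
  namespace: payments
spec:
  hosts:
  - "payments.example.com"
  gateways:
  - asm-ingress/asm-ingressgateway  # shared Gateway, referenced cross-namespace
  http:
  - route:
    - destination:
        host: payments
        port:
          number: 80
EOF
```

The application team owns only its VirtualService; the Gateway (and its LoadBalancer Service) stays under the platform team's control in the asm-ingress namespace.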

Access Cymbal Bank

To access the Cymbal Bank application, use the asm-ingressgateway service public IP address from either cluster.

  1. Retrieve the asm-ingressgateway IP addresses from both clusters using the following commands:
kubectl --context cluster1 \
  --namespace asm-ingress get svc asm-ingressgateway \
  -o jsonpath='{.status.loadBalancer}' | grep "ingress"

kubectl --context cluster2 \
  --namespace asm-ingress get svc asm-ingressgateway \
  -o jsonpath='{.status.loadBalancer}' | grep "ingress"
  1. Open a new web browser tab and go to either IP address from the previous output. The Cymbal Bank frontend should be displayed.

    If you want to log in, deposit funds to your account, or transfer funds to other accounts, you do so from this page.

The application should now be fully functional.

Click Check my progress to verify the objective. Deploy the Cymbal Bank application

Task 5. Visualize distributed services

Now, discover how you can visualize the requests from the clusters within both regions.

  1. To view your services, from the console, go to Kubernetes Engine, and select Service Mesh.

You can view services in List view or in a Topology view.

  • The List view shows all of your distributed services running in a tabular format.
  • The Topology view allows you to explore a service topology graph visualization showing your mesh's services and their relationships.
  2. In the List view, click the frontend distributed service. When you click an individual service, a detailed view of the service along with its connected services is displayed. In the service details view, you can create SLOs and view a historical timeline of the service by clicking Show Timeline.

  3. To view golden signals, go to Metrics in the left-hand navigation menu.

  4. In the Traffic chart, click Breakdown By, and then select Cluster.

The results display the requests per second from both clusters in the two regions. The distributed service is healthy and both endpoints are serving traffic.

  5. To view the topology of your Cloud Service Mesh, go to Kubernetes Engine in the left-hand navigation menu, and select Service Mesh.

  6. To view additional data, hover your mouse pointer over the frontend service in the Topology view. This displays information such as requests per second to and from the frontend to other services.

  7. To view more details, click Expand on the frontend service. A Service and a Workload are displayed. You can further expand the workload into two Deployments:

    • Expand the deployments into ReplicaSets.
    • Expand the ReplicaSets into Pods.

When you expand all elements of the frontend service in the Topology view, the distributed frontend service is shown as a single Service backed by two Pods, as indicated by the Legend.

Congratulations!

You have successfully run distributed services on multiple GKE clusters in Google Cloud and observed the services using Cloud Service Mesh.

Next steps / Learn more

Manual Last Updated March 31, 2026

Lab Last Tested March 31, 2026

Copyright 2026 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
