GSP1241

Overview
Google Kubernetes Engine (GKE) Enterprise edition comes with two features to help administrators streamline and automate the GKE Enterprise resource management process:
- Config Sync is a GitOps-driven service that automates the synchronization of configurations stored in a Git repository with the Kubernetes cluster.
- Policy Controller checks, audits, and enforces your clusters' compliance with policies related to security, regulations, or business rules.
Using Config Sync and Policy Controller together allows for automated management of Kubernetes cluster configuration and policy enforcement. This integrated approach simplifies cluster management, strengthens security posture, and ensures continuous compliance, allowing you to confidently manage Kubernetes deployments across your fleet.
In this lab, you will use Config Sync to automate configuration and Policy Controller to enforce policies on GKE Enterprise resources. This provides an efficient and secure way to maintain Kubernetes infrastructure.
Objectives
In this lab, you learn how to perform the following tasks:
- Configure Policy Controller and Config Sync
- Deploy a sample application on two GKE clusters using Policy Controller and Config Sync
- Configure Policy Controller constraints on the clusters and view violations
- Create a custom constraint template and constraint
- Resolve the constraint violations by creating the required resources on the GKE clusters
Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
- Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito (recommended) or private browser window to run this lab. This prevents conflicts between your personal account and the student account, which could cause extra charges to be incurred on your personal account.
- Time to complete the lab—remember, once you start, you cannot pause a lab.
Note: Use only the student account for this lab. If you use a different Google Cloud account, you may incur charges to that account.
How to start your lab and sign in to the Google Cloud console
- Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method.
On the left is the Lab Details pane with the following:
- The Open Google Cloud console button
- Time remaining
- The temporary credentials that you must use for this lab
- Other information, if needed, to step through this lab
- Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
- If necessary, copy the Username below and paste it into the Sign in dialog.
{{{user_0.username | "Username"}}}
You can also find the Username in the Lab Details pane.
- Click Next.
- Copy the Password below and paste it into the Welcome dialog.
{{{user_0.password | "Password"}}}
You can also find the Password in the Lab Details pane.
- Click Next.
Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
- Click through the subsequent pages:
- Accept the terms and conditions.
- Do not add recovery options or two-factor authentication (because this is a temporary account).
- Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Note: To access Google Cloud products and services, click the Navigation menu or type the service or product name in the Search field.
Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
- Click Activate Cloud Shell at the top of the Google Cloud console.
- Click through the following windows:
- Continue through the Cloud Shell information window.
- Authorize Cloud Shell to use your credentials to make Google Cloud API calls.
When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
Your Cloud Platform project in this session is set to {{{project_0.project_id | "PROJECT_ID"}}}
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
- (Optional) You can list the active account name with this command:
gcloud auth list
- Click Authorize.
Output:
ACTIVE: *
ACCOUNT: {{{user_0.username | "ACCOUNT"}}}
To set the active account, run:
$ gcloud config set account `ACCOUNT`
- (Optional) You can list the project ID with this command:
gcloud config list project
Output:
[core]
project = {{{project_0.project_id | "PROJECT_ID"}}}
Note: For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.
Task 1. Create GKE clusters and enable the GKE Service Mesh
In this task, you complete some prework to make the subsequent sections easier to work through. This includes setting environment variables, copying the necessary lab files, and creating contexts for both of the GKE clusters.
Enable the required GKE Enterprise APIs
- Enable the required APIs:
gcloud services enable \
--project={{{primary_project.project_id|PROJECT_ID}}} \
anthos.googleapis.com \
anthosconfigmanagement.googleapis.com \
container.googleapis.com \
stackdriver.googleapis.com \
monitoring.googleapis.com \
cloudtrace.googleapis.com \
logging.googleapis.com \
meshca.googleapis.com \
meshtelemetry.googleapis.com \
meshconfig.googleapis.com \
multiclustermetering.googleapis.com \
multiclusteringress.googleapis.com \
multiclusterservicediscovery.googleapis.com \
iamcredentials.googleapis.com \
iam.googleapis.com \
gkeconnect.googleapis.com \
gkehub.googleapis.com \
compute.googleapis.com \
sourcerepo.googleapis.com \
osconfig.googleapis.com
gcloud services enable \
--project={{{primary_project.project_id|PROJECT_ID}}} \
trafficdirector.googleapis.com \
networkservices.googleapis.com \
mesh.googleapis.com \
cloudresourcemanager.googleapis.com
Create two GKE clusters
- Create the first GKE cluster (using the --async flag so you don't have to wait for the cluster to provision):
gcloud container clusters create "gke-cluster-1" \
--node-locations {{{primary_project.default_zone|ZONE}}} \
--location {{{primary_project.default_region|REGION}}} \
--num-nodes "2" --min-nodes "2" --max-nodes "2" \
--workload-pool "{{{primary_project.project_id|PROJECT_ID}}}.svc.id.goog" \
--enable-ip-alias \
--machine-type "e2-standard-4" \
--node-labels mesh_id=proj-{{{primary_project.startup_script.project_number | PROJECT_NUMBER}}} \
--labels mesh_id=proj-{{{primary_project.startup_script.project_number | PROJECT_NUMBER}}} \
--fleet-project={{{primary_project.project_id|PROJECT_ID}}} --async
- Create a second cluster named gke-cluster-2:
gcloud container clusters create "gke-cluster-2" \
--node-locations {{{primary_project.default_zone|ZONE}}} \
--location {{{primary_project.default_region|REGION}}} \
--num-nodes "2" --min-nodes "2" --max-nodes "2" \
--workload-pool "{{{primary_project.project_id|PROJECT_ID}}}.svc.id.goog" \
--enable-ip-alias \
--machine-type "e2-standard-4" \
--node-labels mesh_id=proj-{{{primary_project.startup_script.project_number | PROJECT_NUMBER}}} \
--labels mesh_id=proj-{{{primary_project.startup_script.project_number | PROJECT_NUMBER}}} \
--fleet-project={{{primary_project.project_id|PROJECT_ID}}}
Note: It can take up to 10 minutes to provision the GKE clusters.
- Verify that both clusters are in the RUNNING state:
gcloud container clusters list
- Create a WORKDIR to store all associated files for this tutorial:
mkdir -p secure-gke && cd secure-gke && export WORKDIR=$(pwd)
Enable the GKE Service Mesh fleet feature
In this section, you install GKE Service Mesh on the two GKE clusters and configure the clusters for cross-cluster service discovery.
For gke-cluster-1
- Copy the lab's Kubernetes manifests and enable the mesh fleet feature:
gcloud storage cp -r gs://spls/gsp1241/k8s/ ~
gcloud beta container hub mesh enable --project={{{primary_project.project_id|PROJECT_ID}}}
- Get cluster credentials:
gcloud container clusters get-credentials gke-cluster-1 --region {{{primary_project.default_region|REGION}}}
- Verify CRD is established in the cluster:
for NUM in {1..60} ; do
kubectl get crd | grep controlplanerevisions.mesh.cloud.google.com && break
sleep 10
done
kubectl wait --for=condition=established crd controlplanerevisions.mesh.cloud.google.com --timeout=10m
The output should be similar to the following:
controlplanerevisions.mesh.cloud.google.com 2024-03-18T16:03:10Z
customresourcedefinition.apiextensions.k8s.io/controlplanerevisions.mesh.cloud.google.com condition met
Note: It can take up to 10 minutes for the CRD to be established.
- Apply the mesh_id label:
gcloud container clusters update gke-cluster-1 \
--project {{{primary_project.project_id|PROJECT_ID}}} \
--region {{{primary_project.default_region|REGION}}} \
--update-labels=mesh_id=proj-{{{primary_project.startup_script.project_number | PROJECT_NUMBER}}}
- Create the istio-system namespace and apply the control plane CR:
kubectl apply -f ~/k8s/namespace-istio-system.yaml
kubectl apply -f ~/k8s/controlplanerevision-asm-managed.yaml
- Verify that the control plane is provisioned:
kubectl wait --for=condition=ProvisioningFinished controlplanerevision asm-managed -n istio-system --timeout 600s
The output should be similar to the following:
controlplanerevision.mesh.cloud.google.com/asm-managed condition met
Note: It can take up to 10 minutes to meet the condition. Re-run the command if you do not get the expected output.
- Create the ASM gateway namespace and apply the ASM ingress gateway:
kubectl apply -f ~/k8s/namespace-asm-gateways.yaml
kubectl apply -f ~/k8s/asm-ingressgateway.yaml
For gke-cluster-2
- Get the cluster credentials:
gcloud container clusters get-credentials gke-cluster-2 --region {{{primary_project.default_region|REGION}}}
- Verify CRD is established in the cluster:
for NUM in {1..60} ; do
kubectl get crd | grep controlplanerevisions.mesh.cloud.google.com && break
sleep 10
done
kubectl wait --for=condition=established crd controlplanerevisions.mesh.cloud.google.com --timeout=10m
The output should be similar to the following:
controlplanerevisions.mesh.cloud.google.com 2024-03-18T16:03:10Z
customresourcedefinition.apiextensions.k8s.io/controlplanerevisions.mesh.cloud.google.com condition met
Note: It can take up to 10 minutes for the CRD to be established.
- Apply the mesh_id label:
gcloud container clusters update gke-cluster-2 \
--project {{{primary_project.project_id|PROJECT_ID}}} \
--region {{{primary_project.default_region|REGION}}} \
--update-labels=mesh_id=proj-{{{primary_project.startup_script.project_number | PROJECT_NUMBER}}}
- Create the istio-system namespace and apply the control plane CR:
kubectl apply -f ~/k8s/namespace-istio-system.yaml
kubectl apply -f ~/k8s/controlplanerevision-asm-managed.yaml
- Verify that the control plane is provisioned:
kubectl wait --for=condition=ProvisioningFinished controlplanerevision asm-managed -n istio-system --timeout 600s
The output should be similar to the following:
controlplanerevision.mesh.cloud.google.com/asm-managed condition met
Note: It can take up to 10 minutes to meet the condition. Re-run the command if you do not get the expected output.
- Create the ASM gateway namespace and apply the ASM ingress gateway:
kubectl apply -f ~/k8s/namespace-asm-gateways.yaml
kubectl apply -f ~/k8s/asm-ingressgateway.yaml
Prepare the Config Sync Git repository
- Create an IAM policy binding between the Kubernetes service account and the Google service account:
gcloud --project={{{primary_project.project_id|PROJECT_ID}}} iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:{{{primary_project.project_id|PROJECT_ID}}}.svc.id.goog[config-management-system/root-reconciler]" \
asm-reader-sa@{{{primary_project.project_id|PROJECT_ID}}}.iam.gserviceaccount.com
The Kubernetes service account is not created until you configure Config Sync for the first time. This binding lets the Config Sync Kubernetes service account act as the Google service account.
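The member string in this binding follows the Workload Identity naming convention serviceAccount:PROJECT_ID.svc.id.goog[KSA_NAMESPACE/KSA_NAME]. As a quick sketch (using a hypothetical project ID, not the lab's), the string is assembled like this:

```shell
# Hypothetical project ID for illustration; the lab substitutes your real project ID.
PROJECT_ID="example-project"
KSA_NAMESPACE="config-management-system"  # namespace of the Config Sync root reconciler
KSA_NAME="root-reconciler"                # Kubernetes service account created by Config Sync
MEMBER="serviceAccount:${PROJECT_ID}.svc.id.goog[${KSA_NAMESPACE}/${KSA_NAME}]"
echo "${MEMBER}"
```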
- Configure your Git client:
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
Click Check my progress to verify the objective.
Perform initial set up task
Task 2. Install Config Sync
With Config Sync, you can manage Kubernetes resources with configuration files stored in a source of truth. Config Sync supports Git repositories, OCI images, and Helm charts as a source of truth. This task shows you how to enable and configure Config Sync so that it syncs from your root repository.
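For reference, the console flow in this task creates a RootSync object on each cluster. A rough sketch of the equivalent manifest is shown below, written to a local file here rather than applied (PROJECT_ID is a placeholder; the field names follow the Config Sync RootSync API, but the console-generated object may differ in details):

```shell
# Sketch only: the console flow creates the equivalent RootSync object for you.
cat > /tmp/root-sync.yaml <<'EOF'
apiVersion: configsync.gke.io/v1beta1
kind: RootSync
metadata:
  name: root-sync
  namespace: config-management-system
spec:
  sourceFormat: hierarchy        # the "Hierarchy" source format setting
  git:
    repo: https://source.developers.google.com/p/PROJECT_ID/r/acm-repo
    branch: main
    auth: gcpserviceaccount      # Workload Identity authentication
    gcpServiceAccountEmail: asm-reader-sa@PROJECT_ID.iam.gserviceaccount.com
EOF
```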
- In the Google Cloud console, go to Kubernetes Engine > Config.
- On the Dashboard tabbed page, click Install Config Sync.
- In Config Sync, leave all the fields with their default values.
- In the Available clusters table, select both clusters and click Install Config Sync.
After a few minutes, go to the Settings tab. You should see Status is enabled for both clusters.
- On the Dashboard tabbed page, click Deploy Package.
- In the Select clusters for package deployment table, select both clusters, then click Continue.
- Leave Package hosted on Git selected, then click Continue.
- In the Package name field, enter root-sync, and leave the Sync type as Cluster scoped sync.
- In the Repository URL field, enter the following URL:
https://source.developers.google.com/p/{{{primary_project.project_id|PROJECT_ID}}}/r/acm-repo
- In the Branch field, enter main.
- In Advanced settings, enter the following values:
| Parameter | Value |
| --- | --- |
| Authentication type | Workload Identity |
| GCP service account email | asm-reader-sa@{{{primary_project.project_id|PROJECT_ID}}}.iam.gserviceaccount.com |
| Source format | Hierarchy |
Leave all other fields with their default values.
- Click Deploy Package.
Note: Upon deployment of the package, the Sync status column displays Error. This is expected behavior because the repository is empty until you push configuration files to it in Task 3.
- Get cluster credentials for your GKE Clusters:
touch ~/secure-gke/asm-kubeconfig && export KUBECONFIG=~/secure-gke/asm-kubeconfig
gcloud container clusters get-credentials gke-cluster-1 --region {{{primary_project.default_region|REGION}}}
gcloud container clusters get-credentials gke-cluster-2 --region {{{primary_project.default_region|REGION}}}
kubectl config rename-context gke_{{{primary_project.project_id|PROJECT_ID}}}_{{{primary_project.default_region|REGION}}}_gke-cluster-1 gke-cluster-1
kubectl config rename-context gke_{{{primary_project.project_id|PROJECT_ID}}}_{{{primary_project.default_region|REGION}}}_gke-cluster-2 gke-cluster-2
kubectl config get-contexts
- Download the latest version of the nomos client:
gsutil cp gs://config-management-release/released/latest/linux_amd64/nomos ~/secure-gke/nomos && chmod +x ~/secure-gke/nomos
export NOMOS=~/secure-gke/nomos
$NOMOS version
You may see an error message stating that the cluster cannot be contacted. If so, just rerun the $NOMOS version command, as this is usually a timing issue.
The output should be similar to the following:
CURRENT CLUSTER_CONTEXT_NAME COMPONENT VERSION
v1.17.2-rc.1
gke-cluster-1 config-management v1.17.2-rc.1
* gke-cluster-2 config-management v1.17.2-rc.1
Click Check my progress to verify the objective.
Install Config Sync
Task 3. Deploy an app via Config Sync
In this task, you use Config Sync to deploy an application. Note that most customers have their own preferred CI/CD tools for deploying applications; Config Sync is recommended and commonly used for GitOps-driven configuration management of Kubernetes resources. In the following tasks, you learn how to use Config Sync to deploy Kubernetes resources and policies.
- Deploy the Cymbal Bank application (it is called "Bank of Anthos" in the repository at the moment):
gcloud source repos clone acm-repo --project={{{primary_project.project_id|PROJECT_ID}}}
gcloud storage cp -r gs://spls/gsp1241/acm-repo/ ~/secure-gke/
cd ~/secure-gke/acm-repo/
- Push the code to the main branch:
git checkout -b main
git add .
git status
git commit -am "Cymbal Bank application deployment"
git push -u origin main
Note: It can take up to 2 minutes to create a namespace in a cluster. Re-run the following service account commands if you get a namespace not found error.
- Create the bank-of-anthos service account in each application namespace on both clusters.
- For cluster gke-cluster-1:
gcloud container clusters get-credentials gke-cluster-1 --region {{{primary_project.default_region|REGION}}} --project {{{primary_project.project_id|PROJECT_ID}}}
kubectl create serviceaccount bank-of-anthos --namespace balance-reader
kubectl create serviceaccount bank-of-anthos --namespace contacts
kubectl create serviceaccount bank-of-anthos --namespace frontend
kubectl create serviceaccount bank-of-anthos --namespace ledger-writer
kubectl create serviceaccount bank-of-anthos --namespace transaction-history
kubectl create serviceaccount bank-of-anthos --namespace userservice
- For cluster gke-cluster-2:
gcloud container clusters get-credentials gke-cluster-2 --region {{{primary_project.default_region|REGION}}} --project {{{primary_project.project_id|PROJECT_ID}}}
kubectl create serviceaccount bank-of-anthos --namespace balance-reader
kubectl create serviceaccount bank-of-anthos --namespace contacts
kubectl create serviceaccount bank-of-anthos --namespace frontend
kubectl create serviceaccount bank-of-anthos --namespace ledger-writer
kubectl create serviceaccount bank-of-anthos --namespace transaction-history
kubectl create serviceaccount bank-of-anthos --namespace userservice
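The six kubectl commands above are identical except for the namespace, so they can also be generated in a loop. This sketch only echoes the commands so it runs without cluster access; remove the echo to execute them against the current kubectl context:

```shell
# Echo the service-account creation command for each application namespace;
# drop "echo" to actually create the accounts in the current kubectl context.
for ns in balance-reader contacts frontend ledger-writer transaction-history userservice; do
  echo kubectl create serviceaccount bank-of-anthos --namespace "$ns"
done
```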
- In the console, go to the Config Sync Dashboard.
- Click the Packages tabbed page.
In the root-sync row, you should see the Sync status and Reconcile status are Synced and Current.
Note: It can take up to 2-5 minutes to sync the status.
- Access the application by browsing to the ASM ingressgateway external IP address:
export ASM_INGRESS_IP_CLUSTER_1=$(kubectl --context=gke-cluster-1 -n asm-gateways get svc asm-ingressgateway -ojsonpath='{.status.loadBalancer.ingress[].ip}')
echo -e "ASM_INGRESS_IP_CLUSTER_1 is ${ASM_INGRESS_IP_CLUSTER_1}"
export ASM_INGRESS_IP_CLUSTER_2=$(kubectl --context=gke-cluster-2 -n asm-gateways get svc asm-ingressgateway -ojsonpath='{.status.loadBalancer.ingress[].ip}')
echo -e "ASM_INGRESS_IP_CLUSTER_2 is ${ASM_INGRESS_IP_CLUSTER_2}"
- Test each of these IPs in your web browser; you should see the login screen.
Now that Config Sync is synced to a repository, it continuously reconciles the state of your clusters with the configs in the repository.
Click Check my progress to verify the objective.
Deploy an app via Config Sync
Task 4. Install Policy Controller in the fleet
In this task, you install the Policy Controller. Policy Controller checks, audits, and enforces your clusters' compliance with policies related to security, regulations, or business rules.
- In the console, go to the Kubernetes Engine > Policy page under the Posture Management section.
- Click Configure policy controller.
- Click Configure, then Confirm.
- In Settings, click Sync with fleet settings.
- Select both clusters.
- Click Sync to fleet settings, then Confirm.
On the Policy Controller Settings tab, you can see that Policy Controller is installed and configured on your clusters.
Note: The Installed status can take several minutes to display.
Click Check my progress to verify the objective.
Install policy controller in the fleet
Task 5. Deploying policies in dry-run mode
While you used Config Sync to deploy a sample application to your GKE clusters, the application is running in an insecure manner: there is no encryption (or authentication) configured between the services, no NetworkPolicies between namespaces, and no authorization policies.
In this task, you deploy policies on the GKE clusters in dry-run mode to see how they could help increase your clusters' security posture. Applying constraints in dry-run mode enables Policy Controller to report violations in the status.violations field without blocking any resources.
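Dry-run mode is set through the constraint's spec.enforcementAction field. The lab's actual constraint files are copied from Cloud Storage and are not reproduced here, but a minimal sketch of a dry-run constraint (using the PolicyStrictOnly kind you inspect later in this task) looks like this, written to a local file:

```shell
# Sketch of a Policy Controller constraint in dry-run mode; the lab's real
# constraint file may differ in details.
cat > /tmp/constraint-dryrun.yaml <<'EOF'
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: PolicyStrictOnly
metadata:
  name: policy-strict-constraint
spec:
  enforcementAction: dryrun   # report violations in status.violations instead of denying requests
EOF
```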
Configuring Strict mTLS
Strict mTLS: Achieve end-to-end encryption between all services in your cluster by requiring mesh-wide strict mTLS using ASM.
This constraint requires that the ASM authentication policy specify peers with STRICT mutual TLS.
- Create the constraint and commit it to the ACM repo:
mkdir ~/secure-gke/acm-repo/cluster
gcloud storage cp -r gs://spls/gsp1241/cluster/constraint-gke-1-mtls-strict.yaml ~/secure-gke/acm-repo/cluster
cd ~/secure-gke/acm-repo/
git add . && git commit -am "constraint strict mtls dry-run on cluster 2"
git push -u origin main
- Navigate to the Config Dashboard and click the Packages tabbed page to see the Sync status.
- Click the Refresh button a few times to check for the latest sync status.
In the root-sync row, you should see the Sync status and Reconcile status are Synced and Current.
- Once the clusters have synchronized, inspect the violation:
kubectl config use-context gke-cluster-2
kubectl --context=gke-cluster-2 get policystrictonly policy-strict-constraint -ojsonpath='{.status.violations}' | jq
The output should be similar to the following:
[
{
"enforcementAction": "dryrun",
"group": "security.istio.io",
"kind": "PeerAuthentication",
"message": "spec.mtls.mode must be set to `STRICT`",
"name": "default",
"namespace": "istio-system",
"version": "v1beta1"
}
]
As mentioned previously, since the policy is in dry-run mode, it provides the violation in the resource status.
Configuring Destination Rule TLS
Destination Rule mTLS: Achieve enforcement of per-Service encryption by enforcing ASM Destination Rules to have STRICT mTLS configured.
This constraint prohibits disabling TLS for all hosts and host subsets in Istio DestinationRules.
- Copy the constraint template (updated for the new API) and the constraint that uses it into the repo, then commit and push. The Cymbal Bank deployment already includes a DestinationRule that disables TLS, which violates this new constraint:
gcloud storage cp gs://spls/gsp1241/cluster/constraint-gke-1-mtls-destinationrule.yaml gs://spls/gsp1241/cluster/constrainttemplate-destinationruletlsenabledbeta.yaml ~/secure-gke/acm-repo/cluster
cd ~/secure-gke/acm-repo/
git add . && git commit -am "constraint destinationrule enabled dry-run on cluster 1"
git push -u origin main
- Go to the Config Dashboard and click the Packages tabbed page to view the Sync status.
- Click the Refresh button a few times to check for the latest sync status.
In the root-sync row, you should see the Sync status and Reconcile status are Synced and Current.
You may see a temporary error stating that no CustomResourceDefinition is defined for the type DestinationRuleTLSEnabledBeta.constraints.gatekeeper.sh.
This error should resolve itself after a few minutes.
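The constraint template file itself is not reproduced in the lab. In general, a Gatekeeper ConstraintTemplate pairs a CRD definition with Rego policy logic; the following is a simplified, hypothetical sketch of that shape (not the lab's actual Rego), written to a local file:

```shell
# Simplified ConstraintTemplate shape; the real DestinationRuleTLSEnabledBeta
# template from the lab bucket contains more complete Rego logic.
cat > /tmp/constrainttemplate-sketch.yaml <<'EOF'
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: destinationruletlsenabledbeta
spec:
  crd:
    spec:
      names:
        kind: DestinationRuleTLSEnabledBeta   # kind that constraints of this type use
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package destinationruletlsenabledbeta
        violation[{"msg": msg}] {
          input.review.object.spec.trafficPolicy.tls.mode == "DISABLE"
          msg := "spec.trafficPolicy.tls.mode == DISABLE"
        }
EOF
```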
- Inspect the DestinationRule in the frontend namespace:
kubectl config use-context gke-cluster-1
kubectl --context=gke-cluster-1 -n frontend get destinationrule destrule-mtls-disable -ojsonpath={.spec} | jq
When you deployed Cymbal Bank, you deployed a DestinationRule with mTLS disabled.
The output should be similar to the following:
{
"host": "frontend",
"trafficPolicy": {
"loadBalancer": {
"simple": "LEAST_CONN"
},
"tls": {
"mode": "DISABLE"
}
}
}
Note that TLS mode is set to DISABLE. This should trigger a violation of the policy you just implemented.
- View the violation:
kubectl --context=gke-cluster-1 get destinationruletlsenabledbeta destinationrule-mtls-enabled -ojsonpath='{.status.violations}' | jq
The output should be similar to the following:
[
{
"enforcementAction": "dryrun",
"group": "networking.istio.io",
"kind": "DestinationRule",
"message": "spec.trafficPolicy.tls.mode == DISABLE for host(s): frontend",
"name": "destrule-mtls-disable",
"namespace": "frontend",
"version": "v1beta1"
}
]
Click Check my progress to verify the objective.
Deploying policies in Dry Run mode
Task 6. Resolving the policy controller violations
There are currently two policy constraints on both clusters. These are as follows:
- Strict mTLS: Achieve end-to-end encryption between all services in your cluster by requiring mesh-wide strict mTLS using ASM.
- Destination Rule mTLS: Achieve enforcement of per-Service encryption by enforcing ASM Destination Rules to have STRICT mTLS configured.
In this section, you resolve all the violations.
Enabling end-to-end mTLS
- Inspect the violation:
kubectl --context=gke-cluster-2 get policystrictonly policy-strict-constraint -ojsonpath='{.status.violations}' | jq
The output should be similar to the following:
[
{
"enforcementAction": "dryrun",
"group": "security.istio.io",
"kind": "PeerAuthentication",
"message": "spec.mtls.mode must be set to `STRICT`",
"name": "default",
"namespace": "istio-system",
"version": "v1beta1"
}
]
- Resolve the violation by creating a STRICT mTLS PeerAuthentication resource in the istio-system namespace:
rm -rf ~/secure-gke/acm-repo/namespaces/asm/istio-system/peerauthentication-mtls-disable.yaml
gcloud storage cp gs://spls/gsp1241/cluster/peerauthentication-mtls-strict.yaml ~/secure-gke/acm-repo/namespaces/asm/istio-system/
cd ~/secure-gke/acm-repo/
git add . && git commit -am "enable STRICT mTLS mesh wide"
git push -u origin main
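The peerauthentication-mtls-strict.yaml file copied above is not shown in the lab. Based on the spec you verify in the next step, a mesh-wide STRICT PeerAuthentication generally looks like the following sketch (written to a local file here rather than applied):

```shell
# Sketch of a mesh-wide STRICT PeerAuthentication; the lab's actual file may
# differ in labels or annotations.
cat > /tmp/peerauthentication-strict.yaml <<'EOF'
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system   # the root namespace, so the policy applies mesh-wide
spec:
  mtls:
    mode: STRICT
EOF
```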
- Go to the Config Dashboard and click the Packages tabbed page to see the Sync status.
- Click the Refresh button a few times to check for the latest sync status.
- Once the root-sync row shows the Sync status and Reconcile status as Synced and Current, verify the PeerAuthentication resource:
kubectl --context=gke-cluster-2 -n istio-system get peerauthentication default -ojsonpath='{.spec}' | jq
The output should be similar to the following:
{
"mtls": {
"mode": "STRICT"
}
}
The mode will change from DISABLE to STRICT.
- Inspect the constraint for violations:
kubectl --context=gke-cluster-2 get policystrictonly policy-strict-constraint -ojsonpath='{.status.violations}' | jq
The output is empty. This means this constraint is no longer in violation.
Note: It can take a few minutes for changes to reflect.
Enforcing per-Service encryption
- Inspect the violation:
kubectl --context=gke-cluster-1 get destinationruletlsenabledbeta destinationrule-mtls-enabled -ojsonpath='{.status.violations}' | jq
The output should be similar to the following:
[
{
"enforcementAction": "dryrun",
"group": "networking.istio.io",
"kind": "DestinationRule",
"message": "spec.trafficPolicy.tls.mode == DISABLE for host(s): frontend",
"name": "destrule-mtls-disable",
"namespace": "frontend",
"version": "v1beta1"
}
]
- Resolve the violation by replacing the DestinationRule in the frontend namespace with one that enables mTLS:
rm -rf ~/secure-gke/acm-repo/namespaces/bank-of-anthos/frontend/destinationrule-mtls-disabled.yaml
gcloud storage cp gs://spls/gsp1241/cluster/destinationrule-mtls-istio-mutual.yaml ~/secure-gke/acm-repo/namespaces/bank-of-anthos/frontend/
cd ~/secure-gke/acm-repo/
git add . && git commit -am "enable ISTIO_MUTUAL TLS for frontend Service"
git push -u origin main
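The destinationrule-mtls-istio-mutual.yaml file is likewise not reproduced in the lab; based on the spec output you verify in the next step, it is approximately the following (sketch written to a local file here rather than applied):

```shell
# Sketch of the mTLS-enabled DestinationRule; the lab's actual file may differ
# in labels or annotations.
cat > /tmp/destinationrule-istio-mutual.yaml <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: destrule-mtls-istio-mutual
  namespace: frontend
spec:
  host: frontend
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN
    tls:
      mode: ISTIO_MUTUAL   # sidecars present Istio-issued certificates for mTLS
EOF
```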
- Go to the Config Dashboard and click the Packages tabbed page to see the sync status.
- Click the Refresh button a few times to check for the latest sync status.
- Once the root-sync row shows the Sync status and Reconcile status as Synced and Current, verify the DestinationRule resource:
kubectl --context=gke-cluster-1 -n frontend get destinationrule destrule-mtls-istio-mutual -ojsonpath='{.spec}' | jq
The output should be similar to the following:
{
"host": "frontend",
"trafficPolicy": {
"loadBalancer": {
"simple": "LEAST_CONN"
},
"tls": {
"mode": "ISTIO_MUTUAL"
}
}
}
You may initially see an error stating that the DestinationRule destrule-mtls-istio-mutual was not found. This error will resolve once the repo syncs to the cluster.
- Inspect the constraint for violations:
kubectl --context=gke-cluster-1 get destinationruletlsenabledbeta destinationrule-mtls-enabled -ojsonpath='{.status.violations}' | jq
The output is empty, meaning this constraint is no longer in violation.
Note: It can take a couple of minutes for changes to reflect.
Click Check my progress to verify the objective.
Resolving the policy controller violations
Congratulations!
In this lab, you learned to automate configuration with Config Sync, and enforce policies with Policy Controller.
Manual Last Updated February 4, 2026
Lab Last Tested February 4, 2026
Copyright 2026 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.