Understanding and Combining GKE Autoscaling Strategies Reviews
18,903 reviews
HARI MEGHANA K. · Reviewed over 1 year ago
Clear as the sky.
Michael Pintanta Dwides P. · Reviewed over 1 year ago
Sree Ramachandra G. · Reviewed over 1 year ago
Ashwinder S. · Reviewed over 1 year ago
Aleix H. · Reviewed over 1 year ago
Yashpalsinh J. · Reviewed over 1 year ago
Michał B. · Reviewed over 1 year ago
Aniket K. · Reviewed over 1 year ago
Manuel Eiras C. · Reviewed over 1 year ago
Internet and resource issues.
Dave H. · Reviewed over 1 year ago
Justin K. · Reviewed over 1 year ago
Eduardo A. · Reviewed over 1 year ago
A lot of these commands take much longer to finish than the lab recommends. For example, it's taking 5-8 full minutes for things that this lab says should take "a minute or two". It makes the lab feel very choppy.
Michelle C. · Reviewed over 1 year ago
EL MEHDI A. · Reviewed over 1 year ago
Maciej C. · Reviewed over 1 year ago
mreza f. · Reviewed over 1 year ago
Fatokunbo S. · Reviewed over 1 year ago
Alexander L. · Reviewed over 1 year ago
Muhammad Umer R. · Reviewed over 1 year ago
Dinesh J. · Reviewed over 1 year ago
Ravi S. · Reviewed over 1 year ago
Andrew Borg Ning C. · Reviewed over 1 year ago
Pod is blocking scale down because it doesn't have enough Pod Disruption Budget (PDB).

Details: Scale down of an underutilized node is blocked because it has a Pod running on it which doesn't have enough PDB to allow eviction of the Pod. Refer to the logs for more details.

Recommended actions: Review the PDB rules of the Pod in the log event and update the rules if necessary.

{
  "insertId": "c92d7e13-7e0d-4cad-841d-acc2a404551d@a1",
  "jsonPayload": {
    "noDecisionStatus": {
      "noScaleDown": {
        "nodes": [
          {
            "reason": {
              "parameters": [
                "event-exporter-gke-7d996c57bf-vs6fl"
              ],
              "messageId": "no.scale.down.node.pod.not.enough.pdb"
            },
            "node": {
              "name": "gke-scaling-demo-default-pool-32fbf8c9-vkks",
              "cpuRatio": 40,
              "memRatio": 7,
              "mig": {
                "name": "gke-scaling-demo-default-pool-32fbf8c9-grp",
                "nodepool": "default-pool",
                "zone": "us-east1-d"
              }
            }
          }
        ],
        "nodesTotalCount": 1
      },
      "measureTime": "1715246108"
    }
  },
  "resource": {
    "type": "k8s_cluster",
    "labels": {
      "cluster_name": "scaling-demo",
      "location": "us-east1-d",
      "project_id": "qwiklabs-gcp-01-fb3dcc713c69"
    }
  },
  "timestamp": "2024-05-09T09:15:08.242073284Z",
  "logName": "projects/qwiklabs-gcp-01-fb3dcc713c69/logs/container.googleapis.com%2Fcluster-autoscaler-visibility",
  "receiveTimestamp": "2024-05-09T09:15:09.270697750Z"
}

I needed to change maxUnavailable to 3, and then the node count went down by one.
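For anyone hitting the same no.scale.down.node.pod.not.enough.pdb event: below is a minimal sketch of a loosened PodDisruptionBudget matching the fix this reviewer describes. The PDB name and the k8s-app: event-exporter selector label are assumptions for illustration; check the actual labels on the blocking Pod before applying (kubectl get pods -n kube-system --show-labels).

    # Sketch only: a PDB permissive enough for the cluster autoscaler to
    # evict the event-exporter Pod and drain the underutilized node.
    apiVersion: policy/v1
    kind: PodDisruptionBudget
    metadata:
      name: event-exporter-pdb      # hypothetical name
      namespace: kube-system
    spec:
      maxUnavailable: 3             # the value the reviewer used
      selector:
        matchLabels:
          k8s-app: event-exporter   # assumed label; verify on your Pod

Apply it with kubectl apply -f event-exporter-pdb.yaml; on the next autoscaler scan, the noScaleDown reason should clear and the node can be removed.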
Naveen V. · Reviewed over 1 year ago
Arvind Kumar M. · Reviewed over 1 year ago
Guda M. · Reviewed over 1 year ago