Understanding and Combining GKE Autoscaling Strategies Reviews
18903 reviews
HARI MEGHANA K. · Reviewed more than a year ago
Clear as the sky.
Michael Pintanta Dwides P. · Reviewed more than a year ago
Sree Ramachandra G. · Reviewed more than a year ago
Ashwinder S. · Reviewed more than a year ago
Aleix H. · Reviewed more than a year ago
Yashpalsinh J. · Reviewed more than a year ago
Michał B. · Reviewed more than a year ago
Aniket K. · Reviewed more than a year ago
Manuel Eiras C. · Reviewed more than a year ago
Internet and resource issues.
Dave H. · Reviewed more than a year ago
Justin K. · Reviewed more than a year ago
Eduardo A. · Reviewed more than a year ago
A lot of these commands take much longer to finish than the lab recommends. For example, it's taking 5-8 full minutes for things that this lab says should take "a minute or two". It makes the lab feel very choppy.
Michelle C. · Reviewed more than a year ago
EL MEHDI A. · Reviewed more than a year ago
Maciej C. · Reviewed more than a year ago
mreza f. · Reviewed more than a year ago
Fatokunbo S. · Reviewed more than a year ago
Alexander L. · Reviewed more than a year ago
Muhammad Umer R. · Reviewed more than a year ago
Dinesh J. · Reviewed more than a year ago
Ravi S. · Reviewed more than a year ago
Andrew Borg Ning C. · Reviewed more than a year ago
Pod is blocking scale down because it doesn’t have enough Pod Disruption Budget (PDB).

Details: Scale down of an underutilized node is blocked because it has a Pod running on it which doesn’t have enough PDB to allow eviction of the Pod. Refer to logs for more details.

Recommended actions: Review the PDB rules of the Pod in the log event and update the rules if necessary.

    {
      "insertId": "c92d7e13-7e0d-4cad-841d-acc2a404551d@a1",
      "jsonPayload": {
        "noDecisionStatus": {
          "noScaleDown": {
            "nodes": [
              {
                "reason": {
                  "parameters": [
                    "event-exporter-gke-7d996c57bf-vs6fl"
                  ],
                  "messageId": "no.scale.down.node.pod.not.enough.pdb"
                },
                "node": {
                  "name": "gke-scaling-demo-default-pool-32fbf8c9-vkks",
                  "cpuRatio": 40,
                  "memRatio": 7,
                  "mig": {
                    "name": "gke-scaling-demo-default-pool-32fbf8c9-grp",
                    "nodepool": "default-pool",
                    "zone": "us-east1-d"
                  }
                }
              }
            ],
            "nodesTotalCount": 1
          },
          "measureTime": "1715246108"
        }
      },
      "resource": {
        "type": "k8s_cluster",
        "labels": {
          "cluster_name": "scaling-demo",
          "location": "us-east1-d",
          "project_id": "qwiklabs-gcp-01-fb3dcc713c69"
        }
      },
      "timestamp": "2024-05-09T09:15:08.242073284Z",
      "logName": "projects/qwiklabs-gcp-01-fb3dcc713c69/logs/container.googleapis.com%2Fcluster-autoscaler-visibility",
      "receiveTimestamp": "2024-05-09T09:15:09.270697750Z"
    }

I needed to change maxUnavailable to 3, and then the node count went down by one.
Naveen V. · Reviewed more than a year ago
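For anyone who hits the same no.scale.down.node.pod.not.enough.pdb event, here is a minimal sketch of the fix described in the review above, using kubectl. The PDB name event-exporter-pdb and the idea that a single PDB covers the blocked Pod are assumptions for illustration only, not values confirmed by the lab or the review; substitute whatever PDB actually matches the Pod named in the log event.

    # List the PodDisruptionBudgets in kube-system, where the blocked Pod
    # (event-exporter-gke-...) from the autoscaler log event is running.
    kubectl get pdb -n kube-system

    # Hypothetical: raise maxUnavailable on the PDB that covers the Pod so the
    # cluster autoscaler is allowed to evict it and remove the underutilized node.
    # "event-exporter-pdb" is an assumed name; use the PDB found in the step above.
    kubectl patch pdb event-exporter-pdb -n kube-system --type=merge -p '{"spec":{"maxUnavailable":3}}'

    # Confirm the change, then watch the node count drop as scale down proceeds.
    kubectl get pdb event-exporter-pdb -n kube-system -o yaml
    kubectl get nodes --watch

Note that raising maxUnavailable loosens the availability guarantee for that workload, so it is a trade-off to make deliberately rather than a universal fix.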
Arvind Kumar M. · Reviewed more than a year ago
Guda M. · Reviewed more than a year ago