Reviews for "Understanding and Combining GKE Autoscaling Strategies"

Reviews

HARI MEGHANA K. · reviewed over 1 year ago

clear as sky

Michael Pintanta Dwides P. · reviewed over 1 year ago

Sree Ramachandra G. · reviewed over 1 year ago

Ashwinder S. · reviewed over 1 year ago

Aleix H. · reviewed over 1 year ago

Yashpalsinh J. · reviewed over 1 year ago

Michał B. · reviewed over 1 year ago

Aniket K. · reviewed over 1 year ago

Manuel Eiras C. · reviewed over 1 year ago

Internet and resource issues.

Dave H. · reviewed over 1 year ago

Justin K. · reviewed over 1 year ago

Eduardo A. · reviewed over 1 year ago

A lot of these commands take much longer to finish than the lab recommends. For example, it's taking 5-8 full minutes for things that this lab says should take "a minute or two". It makes the lab feel very choppy.

Michelle C. · reviewed over 1 year ago

EL MEHDI A. · reviewed over 1 year ago

Maciej C. · reviewed over 1 year ago

mreza f. · reviewed over 1 year ago

Fatokunbo S. · reviewed over 1 year ago

Alexander L. · reviewed over 1 year ago

Muhammad Umer R. · reviewed over 1 year ago

Dinesh J. · reviewed over 1 year ago

Ravi S. · reviewed over 1 year ago

Andrew Borg Ning C. · reviewed over 1 year ago

I hit this event: "Pod is blocking scale down because it doesn't have enough Pod Disruption Budget (PDB)."

Details: Scale-down of an underutilized node is blocked because a Pod running on it does not have enough PDB headroom to allow eviction. Refer to the logs for more details.

Recommended actions: Review the PDB rules of the Pod named in the log event and update the rules if necessary.

The log entry:

{
  "insertId": "c92d7e13-7e0d-4cad-841d-acc2a404551d@a1",
  "jsonPayload": {
    "noDecisionStatus": {
      "noScaleDown": {
        "nodes": [
          {
            "reason": {
              "parameters": [
                "event-exporter-gke-7d996c57bf-vs6fl"
              ],
              "messageId": "no.scale.down.node.pod.not.enough.pdb"
            },
            "node": {
              "name": "gke-scaling-demo-default-pool-32fbf8c9-vkks",
              "cpuRatio": 40,
              "memRatio": 7,
              "mig": {
                "name": "gke-scaling-demo-default-pool-32fbf8c9-grp",
                "nodepool": "default-pool",
                "zone": "us-east1-d"
              }
            }
          }
        ],
        "nodesTotalCount": 1
      },
      "measureTime": "1715246108"
    }
  },
  "resource": {
    "type": "k8s_cluster",
    "labels": {
      "cluster_name": "scaling-demo",
      "location": "us-east1-d",
      "project_id": "qwiklabs-gcp-01-fb3dcc713c69"
    }
  },
  "timestamp": "2024-05-09T09:15:08.242073284Z",
  "logName": "projects/qwiklabs-gcp-01-fb3dcc713c69/logs/container.googleapis.com%2Fcluster-autoscaler-visibility",
  "receiveTimestamp": "2024-05-09T09:15:09.270697750Z"
}

I needed to change maxUnavailable to 3, and then the node count went down by one.
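For readers hitting the same no.scale.down.node.pod.not.enough.pdb event: the fix described above amounts to editing the PodDisruptionBudget that guards the pod so the autoscaler is allowed to evict it. A minimal sketch of such a PDB is below; the object name and label selector here are illustrative assumptions, not the exact manifest from the lab, so match them to what kubectl get pdb --all-namespaces shows in your cluster.

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  # Hypothetical name/namespace -- use the PDB that actually selects the pod
  # named in the log event (event-exporter-gke-... in this case).
  name: event-exporter-pdb
  namespace: kube-system
spec:
  # Raising maxUnavailable (here to 3, as in the review above) gives the
  # cluster autoscaler enough disruption headroom to evict the pod and
  # scale down the underutilized node.
  maxUnavailable: 3
  selector:
    matchLabels:
      app: event-exporter

After applying the change (kubectl apply -f pdb.yaml), the autoscaler should stop logging the noScaleDown decision and remove the node on its next evaluation pass.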

Naveen V. · reviewed over 1 year ago

Arvind Kumar M. · reviewed over 1 year ago

Guda M. · reviewed over 1 year ago

We cannot guarantee that published reviews come from consumers who have purchased or used the product. Reviews are not verified by Google.