Serverless Data Processing with Dataflow: Writing an ETL Pipeline using Apache Beam and Dataflow (Python) reviews
11,472 reviews
Luis Antonio C. · Reviewed 6 days ago
Jyoti S. · Reviewed 6 days ago
Luis Antonio C. · Reviewed 6 days ago
Luis Antonio C. · Reviewed 6 days ago
Could not get the pipeline to run due to multiple errors; even the solution would not run. Very disappointing to be kicked out of the lab and lose progress before being able to resolve/troubleshoot. The errors were: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1017)'))': /computeMetadata/v1/instance/service-accounts/default/?recursive=true and Compute Engine Metadata server unavailable.
Mariette D. · Reviewed 7 days ago
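Several reviews below report the same SSL failure and a workaround: setting GCE_METADATA_MTLS_MODE=none before launching the pipeline. A minimal sketch in Python, assuming the variable takes effect as long as it is set before google-auth first contacts the metadata server (exporting it in the shell before launch works the same way):

```python
import os

# Reviewer-reported workaround for the CERTIFICATE_VERIFY_FAILED error when
# Beam fetches credentials from the Compute Engine metadata server.
# Assumption: this runs before apache_beam / google-auth make any metadata call.
os.environ["GCE_METADATA_MTLS_MODE"] = "none"

import apache_beam as beam  # imported only after the env var is in place
```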
too complex, needs more direction on what to do
Oscar O. · Reviewed 8 days ago
Daniela L. · Reviewed 8 days ago
Problem with the infrastructure (lack of space)
Gabriela C. · Reviewed 8 days ago
Luis Antonio C. · Reviewed 9 days ago
Could not complete Part 1, Task 6 (run the pipeline), even using the provided solution code, due to the following error: raise BeamIOError("Match operation failed", exceptions) apache_beam.io.filesystem.BeamIOError: Match operation failed with exceptions {'gs://qwiklabs-gcp-00-f5855126f119/events.json': RefreshError(TransportError("Failed to retrieve https://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable. Last exception: HTTPSConnectionPool(host='metadata.google.internal', port=443): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1017)')))"))}
Lingmin M. · Reviewed 9 days ago
Qwiklabs Dataflow Lab Fix Summary

Problems:
- SSL/Metadata auth failure — google-auth 2.44.0+ broke the default HTTP transport, causing CERTIFICATE_VERIFY_FAILED errors when the notebook tried to reach GCP's metadata server
- Zone resource exhaustion — n1-standard-1 (Dataflow's default) was completely unavailable across all us-central1 zones for Qwiklabs accounts
- Pipeline exits early — p.run() doesn't wait for completion, causing silent failures
- argparse rejects extra flags — parse_args() blocks passing extra Beam arguments like --worker_machine_type

Fixes:

1. Fix the SSL auth error:
```bash
export GCE_METADATA_MTLS_MODE=none
```

2. Use the e2-standard-2 machine type. Add to your run command:
```bash
--worker_machine_type=e2-standard-2
```

3. Fix the pipeline code in my_pipeline.py:
```python
# Change this:
opts = parser.parse_args()
options = PipelineOptions()
p.run()

# To this:
opts, pipeline_args = parser.parse_known_args()
options = PipelineOptions(pipeline_args)
p.run().wait_until_finish()
```

4. Full working command:
```bash
cd $BASE_DIR
export PROJECT_ID=$(gcloud config get-value project)
export GCE_METADATA_MTLS_MODE=none
python3 my_pipeline.py \
  --project=${PROJECT_ID} \
  --region=us-central1 \
  --stagingLocation=gs://$PROJECT_ID/staging/ \
  --tempLocation=gs://$PROJECT_ID/temp/ \
  --runner=DataflowRunner \
  --worker_machine_type=e2-standard-2
```

5. For the Dataflow Template UI: under Optional Parameters → uncheck "Use default machine type" → Series: E2 → Machine type: e2-standard-2
David O. · Reviewed 9 days ago
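Pulling the pieces of the fix summary above together, a minimal self-contained version of the corrected script might look like the sketch below. The --input flag and the Count step are illustrative assumptions, not the lab's actual code:

```python
import argparse

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--input", required=True, help="Path or GCS URI to read")
    # parse_known_args() keeps unrecognized flags (e.g. --runner,
    # --worker_machine_type) so they can be forwarded to Beam instead of
    # tripping an argparse error.
    opts, pipeline_args = parser.parse_known_args()
    options = PipelineOptions(pipeline_args)

    p = beam.Pipeline(options=options)
    (p
     | "Read" >> beam.io.ReadFromText(opts.input)
     | "Count" >> beam.combiners.Count.Globally()
     | "Print" >> beam.Map(print))

    # wait_until_finish() blocks until the job completes, so a worker-pool
    # failure surfaces here instead of the script exiting silently.
    p.run().wait_until_finish()


if __name__ == "__main__":
    main()
```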
console did not open
Nihal Hussain M. · Reviewed 9 days ago
Mara Malina F. · Reviewed 9 days ago
VERY BAD >> I am unable to run my jobs with DataflowRunner, as I am always getting resource constraint errors. The job is not able to spin up in the us-central1 region. I am getting the same error in all labs that require submitting jobs on Dataflow. I am able to run with DirectRunner. Please help with this, as I have spent too many hours but end up getting the same error again and again.
Mallikarjunarao G. · Reviewed 10 days ago
Couldn't finish it: lots of errors in the step-by-step instructions, and no resources available.
Sebastián P. · Reviewed 10 days ago
Couldn't finish because the Dataflow job couldn't get the resources to actually run. Stupid and a waste of my time. Still can't run due to limited resources.
Tyler W. · Reviewed 10 days ago
There are 2 issues running the lab:
- Lab 1: apache_beam.io.filesystem.BeamIOError: Match operation failed with exceptions {'gs://qwiklabs-gcp-04-9d1f7241cd59/events.json': RefreshError(TransportError("Failed to retrieve https://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable. Last exception: HTTPSConnectionPool(host='metadata.google.internal', port=443): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate
- Lab 2: Startup of the worker pool in us-central1 failed to bring up any of the desired 1 workers
P.S. Not working: "http.client transport only supports the http scheme, https was specified"
P.P.S. export GCE_METADATA_MTLS_MODE=none resolves the certificate issue, but there are still insufficient resources to run the job.
Igor P. · Reviewed 10 days ago
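The worker-pool failures reported above and below suggest trying a different zone or machine type. A sketch of how those knobs map onto Beam's Python worker options; the specific zone and machine type here are assumptions, not values the lab prescribes:

```python
from apache_beam.options.pipeline_options import PipelineOptions

# Worker placement knobs from Beam's WorkerOptions; values are examples only.
flags = [
    "--runner=DataflowRunner",
    "--region=us-central1",
    "--worker_zone=us-central1-f",          # try a zone with free capacity
    "--worker_machine_type=e2-standard-2",  # avoid the exhausted n1 default
]
options = PipelineOptions(flags)
```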
Luis Antonio C. · Reviewed 11 days ago
Lingmin M. · Reviewed 11 days ago
Lingmin M. · Reviewed 11 days ago
Poor instructions - terrible!
Leighton C. · Reviewed 12 days ago
My conclusion from this lab is that I should not use or depend on this type of platform for production loads. If it does not even work in a controlled lab, then production is a no-go! ERROR:apache_beam.runners.dataflow.dataflow_runner:2026-04-01T09:25:07.346Z: JOB_MESSAGE_ERROR: Startup of the worker pool in us-central1 failed to bring up any of the desired 1 workers. This is likely a quota issue or a Compute Engine stockout. The service will retry. For troubleshooting steps, see https://cloud.google.com/dataflow/docs/guides/common-errors#worker-pool-failure for help troubleshooting. ZONE_RESOURCE_POOL_EXHAUSTED: Instance 'my-pipeline-1775035379230-04010223-mm24-harness-86c2' creation failed: The zone 'projects/qwiklabs-gcp-02-e142b19585b8/zones/us-central1-a' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
Mikael W. · Reviewed 12 days ago
Luis Antonio C. · Reviewed 12 days ago
In the Python script, allow users to pass alternative machine types, so there is no risk of being unable to complete the lab because no capacity is available (for N1?).
Felix V. · Reviewed 12 days ago
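One way the script could honor this suggestion, sketched under the assumption that my_pipeline.py parses its flags with argparse; the --machine_type flag name and its default are hypothetical:

```python
import argparse

from apache_beam.options.pipeline_options import PipelineOptions

parser = argparse.ArgumentParser()
# Hypothetical flag so learners can pick a machine series with available
# capacity instead of being stuck on Dataflow's n1-standard-1 default.
parser.add_argument("--machine_type", default="e2-standard-2",
                    help="Dataflow worker machine type")
opts, pipeline_args = parser.parse_known_args()

# Forward the choice to Dataflow via Beam's worker option.
pipeline_args.append(f"--worker_machine_type={opts.machine_type}")
options = PipelineOptions(pipeline_args)
```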
We cannot guarantee that published reviews come from consumers who have purchased or used the products. Reviews are not verified by Google.