Reviews of Serverless Data Processing with Dataflow: Writing an ETL Pipeline with Apache Beam and Dataflow (Python)

11,472 reviews

Luis Antonio C. · Reviewed 6 days ago

Jyoti S. · Reviewed 6 days ago

Luis Antonio C. · Reviewed 6 days ago

Luis Antonio C. · Reviewed 6 days ago

Could not get the pipeline to run due to multiple errors. Even the solution would not run. Very disappointing to be kicked out of the lab and lose progress before being able to resolve or troubleshoot. Errors were around: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1017)'))': /computeMetadata/v1/instance/service-accounts/default/?recursive=true, and "Compute Engine Metadata server unavailable".
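
Editor's note: several later reviews on this page report that setting GCE_METADATA_MTLS_MODE=none works around this certificate error; the variable name and its effect are taken from those reviews, not from the lab guide. A minimal Python sketch:

    import os

    # Workaround reported by other reviewers (not part of the official lab steps):
    # relax the mTLS mode used when contacting the Compute Engine metadata server,
    # before any Google Cloud credentials are resolved.
    os.environ["GCE_METADATA_MTLS_MODE"] = "none"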

Mariette D. · Reviewed 7 days ago

Too complex; needs more direction on what to do.

Oscar O. · Reviewed 8 days ago

Daniela L. · Reviewed 8 days ago

Problem with the infrastructure (lack of capacity).

Gabriela C. · Reviewed 8 days ago

Luis Antonio C. · Reviewed 9 days ago

Could not complete Part 1, Task 6 (run the pipeline), even using the provided solution code, due to the following error:

raise BeamIOError("Match operation failed", exceptions)
apache_beam.io.filesystem.BeamIOError: Match operation failed with exceptions {'gs://qwiklabs-gcp-00-f5855126f119/events.json': RefreshError(TransportError("Failed to retrieve https://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable. Last exception: HTTPSConnectionPool(host='metadata.google.internal', port=443): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1017)')))"))}

Lingmin M. · Reviewed 9 days ago

Qwiklabs Dataflow Lab Fix Summary

Problems:
- SSL/Metadata auth failure — google-auth 2.44.0+ broke the default HTTP transport, causing CERTIFICATE_VERIFY_FAILED errors when the notebook tried to reach GCP's metadata server.
- Zone resource exhaustion — n1-standard-1 (Dataflow's default) was completely unavailable across all us-central1 zones for Qwiklabs accounts.
- Pipeline exits early — p.run() doesn't wait for completion, causing silent failures.
- argparse rejects extra flags — parse_args() blocks passing extra Beam arguments like --worker_machine_type.

Fixes:

1. Fix the SSL auth error:

    export GCE_METADATA_MTLS_MODE=none

2. Use the e2-standard-2 machine type. Add to your run command:

    --worker_machine_type=e2-standard-2

3. Fix the pipeline code in my_pipeline.py:

    # Change this:
    opts = parser.parse_args()
    options = PipelineOptions()
    p.run()

    # To this:
    opts, pipeline_args = parser.parse_known_args()
    options = PipelineOptions(pipeline_args)
    p.run().wait_until_finish()

4. Full working command:

    cd $BASE_DIR
    export PROJECT_ID=$(gcloud config get-value project)
    export GCE_METADATA_MTLS_MODE=none
    python3 my_pipeline.py \
      --project=${PROJECT_ID} \
      --region=us-central1 \
      --stagingLocation=gs://$PROJECT_ID/staging/ \
      --tempLocation=gs://$PROJECT_ID/temp/ \
      --runner=DataflowRunner \
      --worker_machine_type=e2-standard-2

5. For the Dataflow Template UI: under Optional Parameters, uncheck "Use default machine type", then set Series: E2 and Machine type: e2-standard-2.
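
Editor's note: a minimal sketch of how the pipeline-code fix in step 3 above could look in a complete script. The argument names, file paths, and the simple read/write transform are illustrative assumptions, not the lab's actual my_pipeline.py.

    import argparse
    import logging

    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions


    def run():
        parser = argparse.ArgumentParser(description='Run a simple ETL pipeline')
        parser.add_argument('--input_path', required=True, help='gs:// path to events.json')
        parser.add_argument('--output_path', required=True, help='gs:// output prefix')

        # parse_known_args() keeps flags it does not recognize (such as
        # --worker_machine_type) instead of rejecting them, and forwards them to Beam.
        opts, pipeline_args = parser.parse_known_args()
        options = PipelineOptions(pipeline_args)

        p = beam.Pipeline(options=options)
        (p
         | 'ReadEvents' >> beam.io.ReadFromText(opts.input_path)
         | 'WriteEvents' >> beam.io.WriteToText(opts.output_path))

        # wait_until_finish() blocks until the Dataflow job completes, so failures
        # surface in the terminal instead of the script exiting early.
        p.run().wait_until_finish()


    if __name__ == '__main__':
        logging.basicConfig(level=logging.INFO)
        run()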

David O. · Reviewed 9 days ago

console did not open

Nihal Hussain M. · Reviewed 9 days ago

Mara Malina F. · Reviewed 9 days ago

VERY BAD. I am unable to run my jobs with DataflowRunner, as I always get resource-constraint errors; the job is not able to spin up in the us-central1 region. I get the same error in all labs that require submitting jobs to Dataflow. I am able to run with DirectRunner. Please help with this, as I have spent too many hours and keep getting the same error again and again.

Mallikarjunarao G. · Reviewed 10 days ago

Couldn't finish it; lots of errors in the step-by-step instructions, or resources not available.

Sebastián P. · Reviewed 10 days ago

Couldn't finish because the Dataflow job couldn't get the resources to actually run. Stupid and a waste of my time. Still can't run due to limited resources.

Tyler W. · Reviewed 10 days ago

Couldn't finish because the Dataflow job couldn't get the resources to actually run. Stupid and a waste of my time.

Tyler W. · Reviewed 10 days ago

There are 2 issues running the lab:

- Lab 1: apache_beam.io.filesystem.BeamIOError: Match operation failed with exceptions {'gs://qwiklabs-gcp-04-9d1f7241cd59/events.json': RefreshError(TransportError("Failed to retrieve https://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true from the Google Compute Engine metadata service. Compute Engine Metadata server unavailable. Last exception: HTTPSConnectionPool(host='metadata.google.internal', port=443): Max retries exceeded with url: /computeMetadata/v1/instance/service-accounts/default/?recursive=true (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate
- Lab 2: Startup of the worker pool in us-central1 failed to bring up any of the desired 1 workers.

P.S. Also not working: "http.client transport only supports the http scheme, https was specified".
P.P.S. export GCE_METADATA_MTLS_MODE=none resolves the certificate issue, but there are still insufficient resources to run the job.

Igor P. · Reviewed 10 days ago

Luis Antonio C. · Reviewed 11 days ago

Lingmin M. · Reviewed 11 days ago

Lingmin M. · Reviewed 11 days ago

Poor instructions - terrible!

Leighton C. · Reviewed 12 days ago

My conclusion from this lab is that I should not use or depend on this type of platform for production loads. If it does not even work in a controlled lab, then production is a no-go!

ERROR:apache_beam.runners.dataflow.dataflow_runner:2026-04-01T09:25:07.346Z: JOB_MESSAGE_ERROR: Startup of the worker pool in us-central1 failed to bring up any of the desired 1 workers. This is likely a quota issue or a Compute Engine stockout. The service will retry. For troubleshooting steps, see https://cloud.google.com/dataflow/docs/guides/common-errors#worker-pool-failure for help troubleshooting. ZONE_RESOURCE_POOL_EXHAUSTED: Instance 'my-pipeline-1775035379230-04010223-mm24-harness-86c2' creation failed: The zone 'projects/qwiklabs-gcp-02-e142b19585b8/zones/us-central1-a' does not have enough resources available to fulfill the request. Try a different zone, or try again later.
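
Editor's note: ZONE_RESOURCE_POOL_EXHAUSTED is a Compute Engine capacity error rather than a pipeline bug. A sketch of steering the job away from the exhausted pool using standard Beam worker options; the e2-standard-2 machine type comes from the fix summary earlier on this page, and the zone value is purely illustrative.

    from apache_beam.options.pipeline_options import PipelineOptions, WorkerOptions

    options = PipelineOptions()
    worker_opts = options.view_as(WorkerOptions)
    # Avoid Dataflow's default n1-standard-1 workers, which reviewers report as
    # stocked out in us-central1 for these lab projects.
    worker_opts.machine_type = "e2-standard-2"
    # Optionally pin workers to a specific zone that still has capacity (illustrative value).
    worker_opts.worker_zone = "us-central1-f"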

Mikael W. · Reviewed 12 days ago

Luis Antonio C. · Reviewed 12 days ago

In the Python script, allow users to pass alternative machine types, so there is no possibility of being unable to complete the lab due to insufficient resources being available (for N1?)
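
Editor's note: the suggestion above is essentially what parse_known_args() enables: any machine type passed on the command line reaches Dataflow if the leftover arguments are forwarded to PipelineOptions. A small sketch (the --input_path flag is illustrative):

    import argparse

    from apache_beam.options.pipeline_options import PipelineOptions, WorkerOptions

    parser = argparse.ArgumentParser()
    parser.add_argument('--input_path')         # lab-specific flag (illustrative)
    _, beam_args = parser.parse_known_args()    # keeps e.g. --worker_machine_type intact
    options = PipelineOptions(beam_args)

    # If the user passed --worker_machine_type=e2-standard-2, it shows up here.
    print(options.view_as(WorkerOptions).machine_type)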

Felix V. · Reviewed 12 days ago

We do not guarantee that published reviews come from consumers who have purchased or used the products. Google does not verify reviews.