Running Apache Spark jobs on Cloud Dataproc Reviews

45988 reviews

Leonardo H. · Reviewed over 1 year ago

Hitesh K. · Reviewed over 1 year ago

Vishnuvardhan P. · Reviewed over 1 year ago

Mallikarjun S. · Reviewed over 1 year ago

kishore m. · Reviewed over 1 year ago

Leandro G. · Reviewed over 1 year ago

Digvijay P. · Reviewed over 1 year ago

Thomas G. · Reviewed over 1 year ago

Olivier M. · Reviewed over 1 year ago

Sotiria S. · Reviewed over 1 year ago

Sushrut A. · Reviewed over 1 year ago

Yuliia M. · Reviewed over 1 year ago

Apporv D. · Reviewed over 1 year ago

ok

Phoutthakone B. · Reviewed over 1 year ago

There were some errors in the last part.

SERGIO ENRIQUE Y. · Reviewed over 1 year ago

excellent experience

Diego C. · Reviewed over 1 year ago

Luigino N. · Reviewed over 1 year ago

Hello, I found an issue where I had to replace the name "sparktodp" with the correct name of my cluster ("cluster-e93e") in the command and script below:

export DP_STORAGE="gs://$(gcloud dataproc clusters describe sparktodp --region=us-east1 --format=json | jq -r '.config.configBucket')"

#!/bin/bash
gcloud dataproc jobs submit pyspark \
  --cluster sparktodp \
  --region us-east1 \
  spark_analysis.py \
  -- --bucket=$1
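[Editor's note] The fix described in this review can be sketched as a small wrapper that takes the cluster name as an argument instead of hardcoding "sparktodp". This is a hypothetical sketch, not part of the lab: the default cluster name "cluster-e93e" comes from the review, and the script only prints the composed gcloud command so it can be checked before running.

```shell
#!/bin/bash
# Parameterize the cluster name (assumption: your cluster is not named "sparktodp").
# First argument: cluster name; defaults to the reviewer's example "cluster-e93e".
CLUSTER="${1:-cluster-e93e}"
REGION="us-east1"

# Compose the same job-submit command from the lab, with the name substituted.
# Printed rather than executed, so it can be reviewed first.
CMD="gcloud dataproc jobs submit pyspark --cluster ${CLUSTER} --region ${REGION} spark_analysis.py -- --bucket=${2}"
echo "${CMD}"
```

Running the script without arguments prints the command with the default cluster name substituted in; pass your actual cluster name as the first argument and the staging bucket as the second.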

Crhistian S. · Reviewed over 1 year ago

Great learning!

Jitendra J. · Reviewed over 1 year ago

Sandra C. · Reviewed over 1 year ago

Himanshu S. · Reviewed over 1 year ago

Nayan P. · Reviewed over 1 year ago

Dominik F. · Reviewed over 1 year ago

Pankaj G. · Reviewed over 1 year ago

R S. · Reviewed over 1 year ago

We do not ensure the published reviews originate from consumers who have purchased or used the products. Reviews are not verified by Google.