Machine Learning with Spark on Google Cloud Dataproc Reviews
10348 reviews
Thanks
Andres M. · Reviewed over 6 years ago
Luz M. · Reviewed over 6 years ago
Sirley Cristina R. · Reviewed over 6 years ago
Ibrahim A. · Reviewed over 6 years ago
The Datalab part of the lab didn't work:

  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/mllib/__init__.py", line 28, in <module>
ImportError: No module named numpy
Tomasz W. · Reviewed over 6 years ago
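Several reviews hit this ImportError because pyspark.mllib imports numpy on both the driver and the executors. A minimal sketch, assuming numpy has already been installed on every Dataproc node (for example via a pip-based initialization action at cluster creation), to verify the import works before running the lab's mllib cells:

    from pyspark import SparkContext

    sc = SparkContext.getOrCreate()

    import numpy  # fails here if numpy is missing on the driver
    print('driver numpy:', numpy.__version__)

    # Each task imports numpy inside an executor, so a missing worker-side
    # install surfaces here instead of deep inside pyspark.mllib.
    versions = (sc.parallelize(range(2), 2)
                  .map(lambda _: __import__('numpy').__version__)
                  .collect())
    print('executor numpy:', versions)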
Amine Riad R. · Reviewed over 6 years ago
Mohamed Z. · Reviewed over 6 years ago
There are many errors.
康史 三. · Reviewed over 6 years ago
wijaya p. · Reviewed over 6 years ago
An error occurs at:

    >>> sc = SparkContext('local', 'logistic')
Jing X. · Reviewed over 6 years ago
THIS LAB NEEDS TO BE FIXED!!! THE NOTEBOOK OPENS IN PYTHON 3 THEN THROWS AN ERROR - WHEN YOU TRY TO RERUN YOU GET A SPARK CALL ERROR - THEN YOU CANNOT GET CREDIT FOR A WORKING NOTEBOOK! HAS HAPPENED MULTIPLE TIMES! PLEASE FIX THIS FOR THE SAKE OF EVERYONE ELSE!

ValueErrorTraceback (most recent call last)
<ipython-input-2-0ad623086190> in <module>()
      4 from pyspark import SparkContext
      5
----> 6 sc = SparkContext('local', 'logistic')
      7 spark = SparkSession.builder.appName("Logistic regression w/ Spark ML").getOrCreate()
      8

/usr/lib/spark/python/lib/pyspark.zip/pyspark/context.py in __init__(self, master, appName, sparkHome, pyFiles, environment, batchSize, serializer, conf, gateway, jsc, profiler_cls)
    127                 " note this option will be removed in Spark 3.0")
    128
--> 129         SparkContext._ensure_initialized(self, gateway=gateway, conf=conf)
    130         try:
    131             self._do_init(master, appName, sparkHome, pyFiles, environment, batchSize, serializer,

/usr/lib/spark/python/lib/pyspark.zip/pyspark/context.py in _ensure_initialized(cls, instance, gateway, conf)
    326                                         " created by %s at %s:%s "
    327                                         % (currentAppName, currentMaster,
--> 328                                            callsite.function, callsite.file, callsite.linenum))
    329                 else:
    330                     SparkContext._active_spark_context = instance

ValueError: Cannot run multiple SparkContexts at once; existing SparkContext(app=pyspark-shell, master=yarn) created by getOrCreate at /usr/lib/spark/python/pyspark/shell.py:45
Daniel L. · Reviewed over 6 years ago
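The ValueError in this review comes from the notebook constructing a second SparkContext while the kernel's pyspark shell already started one (app=pyspark-shell, master=yarn). A minimal sketch of a workaround, reusing the existing context instead of creating a new one:

    from pyspark.sql import SparkSession

    # getOrCreate() returns a session backed by the already-running
    # SparkContext instead of raising "Cannot run multiple SparkContexts".
    spark = (SparkSession.builder
             .appName("Logistic regression w/ Spark ML")
             .getOrCreate())
    sc = spark.sparkContext  # replaces sc = SparkContext('local', 'logistic')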
The lab failed to recognize completion of the Jupyter notebook checkpoints and the updates set for scoring.
Christian S. · Reviewed over 6 years ago
Omkar S. · Reviewed over 6 years ago
The logistic_regression.ipynb has A LOT OF ERRORS (no parentheses for the print function; I tried fixing that tiny error and found MORE, bigger errors that time), and the scoring system didn't work when I created a new Jupyter notebook. Kindly please fix these errors and the scoring system connected with Datalab.
Shrouk A. · Reviewed over 6 years ago
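The missing parentheses this review mentions are Python 2 print statements running under the notebook's Python 3 kernel. A minimal sketch of the fix, with a placeholder value standing in for the lab's actual variables:

    # Python 2 form in the notebook (a SyntaxError under Python 3):
    #   print 'accuracy', accuracy
    accuracy = 0.95  # placeholder value, not from the lab
    print('accuracy', accuracy)  # Python 3 form: print is a function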
Balaji R. · Reviewed over 6 years ago
There is an error in the notebook's path. Rename the 07 spark directory; it should be: /datalab/notebooks/data-science-on-gcp/07_sparkml
Osvaldo B. · Reviewed over 6 years ago
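A minimal sketch of the rename suggested above, assuming the misnamed directory is literally called "07 spark" as the review implies; verify both paths against your own clone before running it:

    import os

    # Hypothetical paths taken from the review; adjust if your clone differs.
    src = '/datalab/notebooks/data-science-on-gcp/07 spark'
    dst = '/datalab/notebooks/data-science-on-gcp/07_sparkml'
    if os.path.isdir(src) and not os.path.exists(dst):
        os.rename(src, dst)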
taewan k. · Reviewed over 6 years ago
David M. · Reviewed over 6 years ago
PRATIK KUMAR B. · Reviewed over 6 years ago
I have an issue with this lab. This is the third time I have worked on it and I still face the issue. When I reach this cell in the Jupyter notebook:

    lrmodel = LogisticRegressionWithLBFGS.train(examples, intercept=True)

I get an error:

Py4JJavaError: An error occurred while calling z:org.apache.spark.api.python.PythonRDD.runJob.
: org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 14.0 failed 1 times, most recent failure: Lost task 0.0 in stage 14.0 (TID 14, localhost, executor driver): org.apache.spark.api.python.PythonException: Traceback (most recent call last):
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 240, in main
    func, profiler, deserializer, serializer = read_command(pickleSer, infile)
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/worker.py", line 60, in read_command
    command = serializer._read_with_length(file)
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 171, in _read_with_length
    return self.loads(obj)
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/serializers.py", line 569, in loads
    return pickle.loads(obj)
  File "/usr/lib/spark/python/lib/pyspark.zip/pyspark/mllib/__init__.py", line 28, in <module>
Mohammad H. · Reviewed over 6 years ago
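The stage failure above ends inside pyspark/mllib/__init__.py at the same import other reviews report, so the train() call is failing on workers that lack numpy. A minimal sketch of the failing step with toy data, runnable once numpy is present on every node (the lab builds 'examples' from its flights dataset; the LabeledPoints below are placeholders):

    from pyspark import SparkContext
    from pyspark.mllib.classification import LogisticRegressionWithLBFGS
    from pyspark.mllib.regression import LabeledPoint

    sc = SparkContext.getOrCreate()

    # Placeholder training data, not the lab's flights-derived features.
    examples = sc.parallelize([
        LabeledPoint(0.0, [0.0, 1.0]),
        LabeledPoint(1.0, [1.0, 0.0]),
    ])

    # Executors deserialize this job via pyspark.mllib, which imports numpy,
    # hence the worker-side ImportError when numpy is absent.
    lrmodel = LogisticRegressionWithLBFGS.train(examples, intercept=True)
    print(lrmodel.weights)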
nice
Iulian J. · Reviewed over 6 years ago