[DEPRECATED] TFX on Cloud AI Platform Pipelines Reviews
4231 reviews
Jabilli G. · Reviewed over 4 years ago
Pipeline didn't run
Polina T. · Reviewed over 4 years ago
Swaroop D. · Reviewed over 4 years ago
Felix F. · Reviewed over 4 years ago
XIN Z. · Reviewed over 4 years ago
Found errors
Takashi a S. · Reviewed over 4 years ago
David m. · Reviewed over 4 years ago
kubeflow-pipelines-1 does not get detected as completed
Andrew H. · Reviewed over 4 years ago
First, there is a small bug: the notebook tells me to create an AI Platform Pipelines instance with an application name such as "tfx" or "mlops"; however, Qwiklabs checks for a pipeline instance called "kubeflow-pipelines-1". Second, the job we have to start via the UI takes about 50 minutes to finish. In my opinion it doesn't make sense to wait for that. I was able to start a run via the UI, which was the objective, so why not check for just that? If you really want to check for a successful run, then please use an example that doesn't take 50 minutes to complete.
Leonard P. · Reviewed over 4 years ago
Elbby S. · Reviewed over 4 years ago
Elbby S. · Reviewed over 4 years ago
Suruchi P. · Reviewed over 4 years ago
Suruchi P. · Reviewed over 4 years ago
SD SAD SAD
Gopal p. · Reviewed over 4 years ago
Abhiraam E. · Reviewed over 4 years ago
Hasan M. · Reviewed over 4 years ago
Bruno V. · Reviewed over 4 years ago
Беха Т. · Reviewed over 4 years ago
Marta S. · Reviewed over 4 years ago
Didn't expect the last step to take so long
Vaida S. · Reviewed over 4 years ago
Couldn't finish the last step because of this:
2021-08-02 13:22:34.508304: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /usr/local/lib
2021-08-02 13:22:34.508349: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
INFO:absl:tensorflow_ranking is not available: No module named 'tensorflow_ranking'
INFO:absl:tensorflow_text is not available: No module named 'tensorflow_text'
INFO:absl:Running driver for CsvExampleGen
INFO:absl:MetadataStore with gRPC connection initialized
INFO:absl:select span and version = (0, None)
INFO:absl:latest span and version = (0, None)
INFO:absl:Adding KFP pod name tfx-covertype-continuous-training-b57qj-1354349148 to execution
INFO:absl:Running executor for CsvExampleGen
INFO:absl:Attempting to infer TFX Python dependency for beam
INFO:absl:Copying all content from install dir /tfx-src/tfx to temp dir /tmp/tmpxzziyhoo/build/tfx
INFO:absl:Generating a temp setup file at /tmp/tmpxzziyhoo/build/tfx/setup.py
INFO:absl:Creating temporary sdist package, logs available at /tmp/tmpxzziyhoo/build/tfx/setup.log
INFO:absl:Added --extra_package=/tmp/tmpxzziyhoo/build/tfx/dist/tfx_ephemeral-0.25.0.tar.gz to beam args
INFO:absl:Generating examples.
INFO:absl:Processing input csv data gs://workshop-datasets/covertype/small/* to TFExample.
INFO:apache_beam.internal.gcp.auth:Setting socket default timeout to 60 seconds.
INFO:apache_beam.internal.gcp.auth:socket default timeout is 60.0 seconds.
INFO:apache_beam.io.gcp.gcsio:Starting the size estimation of the input
INFO:oauth2client.transport:Attempting refresh to obtain initial access_token
INFO:apache_beam.io.gcp.gcsio:Finished listing 1 files in 0.09226322174072266 seconds.
INFO:apache_beam.runners.portability.stager:Downloading source distribution of the SDK from PyPi
INFO:apache_beam.runners.portability.stager:Executing command: ['/usr/bin/python', '-m', 'pip', 'download', '--dest', '/tmp/tmppn8fpdem', 'apache-beam==2.25.0', '--no-deps', '--no-binary', ':all:']
WARNING: You are using pip version 20.2; however, version 21.2.2 is available. You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
INFO:apache_beam.runners.portability.stager:Staging SDK sources from PyPI: dataflow_python_sdk.tar
INFO:apache_beam.runners.portability.stager:Downloading binary distribution of the SDK from PyPi
INFO:apache_beam.runners.portability.stager:Executing command: ['/usr/bin/python', '-m', 'pip', 'download', '--dest', '/tmp/tmppn8fpdem', 'apache-beam==2.25.0', '--no-deps', '--only-binary', ':all:', '--python-version', '37', '--implementation', 'cp', '--abi', 'cp37m', '--platform', 'manylinux1_x86_64']
WARNING: You are using pip version 20.2; however, version 21.2.2 is available. You should consider upgrading via the '/usr/bin/python -m pip install --upgrade pip' command.
INFO:apache_beam.runners.portability.stager:Staging binary distribution of the SDK from PyPI: apache_beam-2.25.0-cp37-cp37m-manylinux1_x86_64.whl
WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter.
INFO:root:Using Python SDK docker image: apache/beam_python3.7_sdk:2.25.0. If the image is not available at local, we will try to pull from hub.docker.com
INFO:apache_beam.runners.dataflow.internal.apiclient:Defaulting to the temp_location as staging_location: gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default//beam/tmp
INFO:apache_beam.io.gcp.gcsio:Starting the size estimation of the input
INFO:apache_beam.io.gcp.gcsio:Finished listing 1 files in 0.07517623901367188 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default//beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/pipeline.pb...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default//beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/pipeline.pb in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default//beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/tfx_ephemeral-0.25.0.tar.gz...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default//beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/tfx_ephemeral-0.25.0.tar.gz in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default//beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/extra_packages.txt...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default//beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/extra_packages.txt in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default//beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/dataflow_python_sdk.tar...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default//beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/dataflow_python_sdk.tar in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Starting GCS upload to gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default//beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/apache_beam-2.25.0-cp37-cp37m-manylinux1_x86_64.whl...
INFO:apache_beam.runners.dataflow.internal.apiclient:Completed GCS upload to gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default//beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/apache_beam-2.25.0-cp37-cp37m-manylinux1_x86_64.whl in 0 seconds.
INFO:apache_beam.runners.dataflow.internal.apiclient:Create job: <Job createTime: '2021-08-02T13:22:51.968914Z' currentStateTime: '1970-01-01T00:00:00Z' id: '2021-08-02_06_22_51-2891285387010563265' location: 'us-central1' name: 'beamapp-root-0802132249-290815' projectId: 'qwiklabs-gcp-00-f66e604c32e7' stageStates: [] startTime: '2021-08-02T13:22:51.968914Z' steps: [] tempFiles: [] type: TypeValueValuesEnum(JOB_TYPE_BATCH, 1)>
INFO:apache_beam.runners.dataflow.internal.apiclient:Created job with id: [2021-08-02_06_22_51-2891285387010563265]
INFO:apache_beam.runners.dataflow.internal.apiclient:Submitted job: 2021-08-02_06_22_51-2891285387010563265
INFO:apache_beam.runners.dataflow.internal.apiclient:To access the Dataflow monitoring console, please navigate to https://console.cloud.google.com/dataflow/jobs/us-central1/2021-08-02_06_22_51-2891285387010563265?project=qwiklabs-gcp-00-f66e604c32e7
INFO:apache_beam.runners.dataflow.dataflow_runner:Job 2021-08-02_06_22_51-2891285387010563265 is in state JOB_STATE_PENDING
INFO:apache_beam.runners.dataflow.dataflow_runner:2021-08-02T13:22:54.639Z: JOB_MESSAGE_DETAILED: Autoscaling is enabled for job 2021-08-02_06_22_51-2891285387010563265. The number of workers will be between 1 and 1000.
INFO:apache_beam.runners.dataflow.dataflow_runner:2021-08-02T13:22:54.693Z: JOB_MESSAGE_DETAILED: Autoscaling was automatically enabled for job 2021-08-02_06_22_51-2891285387010563265.
INFO:apache_beam.runners.dataflow.dataflow_runner:2021-08-02T13:22:55.639Z: JOB_MESSAGE_ERROR: Staged package apache_beam-2.25.0-cp37-cp37m-manylinux1_x86_64.whl at location 'gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default/beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/apache_beam-2.25.0-cp37-cp37m-manylinux1_x86_64.whl' is inaccessible.
INFO:apache_beam.runners.dataflow.dataflow_runner:2021-08-02T13:22:55.711Z: JOB_MESSAGE_ERROR: Staged package dataflow_python_sdk.tar at location 'gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default/beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/dataflow_python_sdk.tar' is inaccessible.
INFO:apache_beam.runners.dataflow.dataflow_runner:2021-08-02T13:22:55.768Z: JOB_MESSAGE_ERROR: Staged package extra_packages.txt at location 'gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default/beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/extra_packages.txt' is inaccessible.
INFO:apache_beam.runners.dataflow.dataflow_runner:2021-08-02T13:22:55.829Z: JOB_MESSAGE_ERROR: Staged package tfx_ephemeral-0.25.0.tar.gz at location 'gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default/beam/tmp/beamapp-root-0802132249-290815.1627910569.291189/tfx_ephemeral-0.25.0.tar.gz' is inaccessible.
INFO:apache_beam.runners.dataflow.dataflow_runner:2021-08-02T13:22:55.862Z: JOB_MESSAGE_ERROR: Workflow failed. Causes: One or more access checks for temp location or staged files failed. Please refer to other error messages for details. For more information on security and permissions, please see https://cloud.google.com/dataflow/security-and-permissions
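A side note on the log above: the "Workflow failed" message says the access checks on the staged files failed, so the first thing to verify is the Dataflow service account's permissions on the lab bucket. Separately, the staging paths contain a doubled slash (`...-default//beam/tmp`), which usually comes from a GCS root configured with a trailing slash. A minimal sketch of normalizing the root to rule that out (the bucket name is taken from the log; the helper name is hypothetical, not part of the lab code):

```python
def staging_path(gcs_root: str, suffix: str = "beam/tmp") -> str:
    """Join a GCS root and a suffix without producing a doubled slash."""
    return gcs_root.rstrip("/") + "/" + suffix

# With a trailing slash on the root, naive concatenation would yield
# ".../-default//beam/tmp" as seen in the log; this helper does not.
print(staging_path("gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default/"))
# → gs://qwiklabs-gcp-00-f66e604c32e7-kubeflowpipelines-default/beam/tmp
```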
Artem P. · Reviewed over 4 years ago
RuntimeError: Only one file per dir is supported: schema.
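The "Only one file per dir is supported: schema" error above typically means the schema directory ended up containing more than one file, for example leftovers from an earlier run. A minimal sketch of the check TFX is effectively making, useful for spotting the extra file before re-running (the helper name is hypothetical):

```python
import os

def single_file_in(dir_path: str) -> str:
    """Return the sole regular file in dir_path, raising if there is not
    exactly one — mirroring TFX's expectation for a schema directory."""
    entries = [e for e in os.listdir(dir_path)
               if os.path.isfile(os.path.join(dir_path, e))]
    if len(entries) != 1:
        raise RuntimeError(
            f"Only one file per dir is supported: found {len(entries)} in {dir_path}")
    return os.path.join(dir_path, entries[0])
```

Running it on the pipeline's schema directory either returns the schema file path or names the directory that needs cleaning up.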
Muneer A. · Reviewed over 4 years ago
I had a library problem: "Could not load dynamic library 'libcudart.so.10.1'"
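As the fuller log in an earlier review notes ("Ignore above cudart dlerror if you do not have a GPU set up"), this message is a warning, not an error: TensorFlow probes for the CUDA runtime and falls back to CPU when it is absent, which is expected on a CPU-only lab machine. A minimal sketch of the same probe, assuming nothing beyond the standard library:

```python
import ctypes

def cuda_runtime_available(libname: str = "libcudart.so.10.1") -> bool:
    """Return True if the given CUDA runtime library can be loaded,
    the same dlopen-style check that produces the warning above."""
    try:
        ctypes.CDLL(libname)
        return True
    except OSError:
        return False

# On a CPU-only VM this is False and the TensorFlow warning is harmless.
print(cuda_runtime_available())
```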
Sophie M. · Reviewed over 4 years ago
Doesn't work because of a mistake in the code: 2020-10-13 06:52:58.573 ERROR grpc._server - Exception calling application: (401) Reason: Unauthorized
Artem P. · Reviewed over 4 years ago
LTI S. · Reviewed over 4 years ago
We do not ensure the published reviews originate from consumers who have purchased or used the products. Reviews are not verified by Google.