Reviews: TFX Standard Components Walkthrough
4,445 reviews
Could not get python imports to work
Abdul Z. · Review submitted more than 2 years ago
It still doesn't work! Now JupyterLab won't open.
Paul M. · Review submitted more than 2 years ago
The lab is no longer compatible with Python 3... the lab cannot be completed.
Tamás P. · Review submitted more than 2 years ago
Dariusz Ł. · Review submitted more than 2 years ago
The code does not run and is not compatible with the latest versions.
suvojyoti c. · Review submitted more than 2 years ago
Tanai A. · Review submitted more than 2 years ago
Error on `pip install tensorflow_model_analysis`:

```
gcc: build/src.linux-x86_64-3.10/numpy/core/src/multiarray/scalartypes.c
numpy/core/src/multiarray/scalartypes.c.src: In function ‘float_arrtype_hash’:
numpy/core/src/multiarray/scalartypes.c.src:2967:27: error: incompatible type for argument 1 of ‘_Py_HashDouble’
 2967 |     return _Py_HashDouble((double) PyArrayScalar_VAL(obj, @name@));
In file included from /opt/conda/include/python3.10/Python.h:77,
                 from numpy/core/src/multiarray/scalartypes.c.src:3:
/opt/conda/include/python3.10/pyhash.h:10:38: note: expected ‘PyObject *’ {aka ‘struct _object *’} but argument is of type ‘double’
   10 | PyAPI_FUNC(Py_hash_t) _Py_HashDouble(PyObject *, double);
numpy/core/src/multiarray/scalartypes.c.src:2967:12: error: too few arguments to function ‘_Py_HashDouble’
[the same pair of errors repeats for ‘cfloat_arrtype_hash’, ‘longdouble_arrtype_hash’,
 ‘clongdouble_arrtype_hash’, and ‘half_arrtype_hash’, along with
 -Wreturn-type warnings for the affected hash functions]
error: Command "gcc -pthread -B /opt/conda/compiler_compat ... -c build/src.linux-x86_64-3.10/numpy/core/src/multiarray/scalartypes.c -o build/temp.linux-x86_64-3.10/build/src.linux-x86_64-3.10/numpy/core/src/multiarray/scalartypes.o ..." failed with exit status 1
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for numpy
Failed to build numpy
ERROR: Could not build wheels for numpy, which is required to install pyproject.toml-based projects
note: This error originates from a subprocess, and is likely not a problem with pip.
```
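The build failure in the review above is consistent with pip compiling an old numpy from source under Python 3.10, whose C API changed the signature of `_Py_HashDouble`. A minimal sketch of a version guard, assuming numpy 1.21.3 is the first release shipping Python 3.10 (cp310) wheels — the helper and the cutoff are illustrative, not part of the lab:

```python
# Python 3.10 changed _Py_HashDouble to take a PyObject*, so numpy
# releases that predate cp310 wheels fail to compile from source with
# exactly the errors quoted in the review.
# The (1, 21, 3) cutoff is an assumption, not lab-provided code.
def numpy_needs_pin(py_version, numpy_version, first_cp310_wheel=(1, 21, 3)):
    """Return True if this (python, numpy) pair would force a source build."""
    return py_version >= (3, 10) and numpy_version < first_cp310_wheel

# e.g. a Python 3.10 image where a dependency pins an old numpy:
print(numpy_needs_pin((3, 10), (1, 19, 5)))  # → True
```

When this returns True, upgrading numpy (or pip, so it resolves a newer wheel) before installing `tensorflow_model_analysis` would avoid the source build entirely.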
ChengCheng T. · Review submitted more than 2 years ago
The lab does not work with the current notebook settings.
Guendalina C. · Review submitted more than 2 years ago
Tamás P. · Review submitted more than 2 years ago
Faulty instructions: there is no AI Platform, and there is no "create instance" on the Vertex AI Pipelines page. The TensorFlow Data Validation library is not installed, and permission to install it is denied.
Arslan A. · Review submitted more than 2 years ago
Arvindh R. · Review submitted more than 2 years ago
```
---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
Cell In[1], line 7
      4 import time
      6 import tensorflow as tf
----> 7 import tensorflow_data_validation as tfdv
```
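The traceback above is a plain missing-module failure. One defensive pattern for a notebook setup cell — a sketch, with the helper name and hint text as assumptions — is to wrap the import and surface an actionable message instead of a bare `ModuleNotFoundError`:

```python
import importlib

def import_or_hint(module_name, pip_name):
    """Import module_name, or return an install hint if it is missing."""
    try:
        return importlib.import_module(module_name)
    except ModuleNotFoundError:
        return (f"missing: run `pip install --user {pip_name}` "
                "and restart the kernel")

# In the lab notebook this would be:
# tfdv = import_or_hint("tensorflow_data_validation",
#                       "tensorflow-data-validation")
print(import_or_hint("no_such_module_abc123", "some-package"))
```

A hint like this would have told reviewers immediately whether the failure was a missing package or a broken environment.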
Skandarajan R. · Review submitted more than 2 years ago
Carl M. · Review submitted more than 2 years ago
Please update the lab for the latest version of TensorFlow.
Arvindh R. · Review submitted more than 2 years ago
Federico F. · Review submitted more than 2 years ago
Carl M. · Review submitted more than 2 years ago
Mahdieh K. · Review submitted more than 2 years ago
It still doesn't work! After 45 minutes I received this error: "Error encountered while creating a cluster. Please try again or select an existing cluster. Failure Reason: Error while creating the new cluster - Error: Cluster not in running state"
Paul M. · Review submitted more than 2 years ago
Eduardo J. · Review submitted more than 2 years ago
I could not create any pipeline; it gave me an error. I spent 2 hours waiting for the cluster to be created, with no success.
Niloofar M. · Review submitted more than 2 years ago
Marco M. · Review submitted more than 2 years ago
Aleksei K. · Review submitted more than 2 years ago
The Kubeflow Pipelines deployment doesn't work. Creating "cluster-1" in zone "us-central1-a" doesn't finish.
Ivan R. · Review submitted more than 2 years ago
Ivan R. · Review submitted more than 2 years ago
Anas K. · Review submitted more than 2 years ago
We do not guarantee that published reviews were written by customers who purchased or used the products. Reviews are not verified by Google.