Veilu Muthu
Member since 2024
Gold League
29779 points
Build, configure, and run your first AI agent with Google's Agent Development Kit (ADK), turning your understanding of agents into a working application. In this hands-on course, you will set up a complete ADK development environment, create agents in two ways, with Python code and with YAML configuration, and run them through multiple interfaces. You will also learn the core parameters that define agent behavior, applying what you learned in Course 1 to real code.
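As a sketch of the YAML path this course describes, a minimal ADK agent configuration might look like the following. The filename, field names, and model id are assumptions based on the ADK agent-config format, not taken from the course itself; the Python path builds an equivalent agent object with matching parameters.

```yaml
# root_agent.yaml (assumed filename) - minimal sketch of an ADK agent
# config; treat the exact field names and model id as assumptions.
name: hello_agent
model: gemini-2.0-flash
description: A friendly agent that answers basic questions.
instruction: |
  Greet the user, then answer their question concisely.
```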
This is an introductory microlearning course that explains what generative AI is, how it is used, and how it differs from traditional machine learning methods. It also covers Google tools that can help you develop your own generative AI applications.
This is the first of five courses in the Google Cloud Data Analytics Certificate. In this course, you will get to know the field of cloud data analytics and the roles and responsibilities of a cloud data analyst in data acquisition, storage, processing, and visualization. You will explore the architecture of Google Cloud-based tools such as BigQuery and Cloud Storage, and learn how to use them to structure data effectively and to present and report on data.
This course helps learners create a study plan for the Professional Data Engineer (PDE) certification exam. Learners explore the breadth and scope of the domains covered in the exam, assess their exam readiness, and create an individual study plan.
Complete the introductory Derive Insights from BigQuery Data skill badge course to demonstrate skills in the following: writing SQL queries, querying public tables, loading sample data into BigQuery, troubleshooting common syntax errors with the query validator in BigQuery, and creating reports in Looker Studio by connecting to BigQuery data.
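As a self-contained sketch of the kind of SQL query this badge covers, the snippet below runs an aggregate query against an in-memory SQLite database. BigQuery uses GoogleSQL and public datasets instead, and the table and column names here are illustrative assumptions.

```python
import sqlite3

# Stand-in for a BigQuery dataset: a tiny table of trips, queried with
# GROUP BY / AVG, the same query shape the badge practices in BigQuery.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (city TEXT, duration_min REAL)")
conn.executemany(
    "INSERT INTO trips VALUES (?, ?)",
    [("London", 12.5), ("London", 20.0), ("Paris", 8.0)],
)

# Average trip duration per city, longest first.
rows = conn.execute(
    """
    SELECT city, AVG(duration_min) AS avg_min
    FROM trips
    GROUP BY city
    ORDER BY avg_min DESC
    """
).fetchall()
print(rows)  # [('London', 16.25), ('Paris', 8.0)]
```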
Earn a skill badge by completing the streaming data analytics with BigQuery skill badge course. In this course, you will use Pub/Sub, Dataflow, and BigQuery together to perform streaming data analytics.
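As a pure-Python sketch of what a Pub/Sub-to-Dataflow-to-BigQuery pipeline computes, the function below groups events into fixed time windows and counts per key, the aggregation Dataflow applies over an unbounded stream. The event data and the 60-second window size are illustrative assumptions.

```python
from collections import defaultdict

def count_per_window(events, window_secs=60):
    """Group (timestamp, key) events into fixed windows and count per key.

    A local stand-in for Dataflow's fixed windowing: each event lands in
    the window whose start is its timestamp rounded down to window_secs.
    """
    counts = defaultdict(int)
    for ts, key in events:
        window_start = ts - (ts % window_secs)
        counts[(window_start, key)] += 1
    return dict(counts)

# Four events across two 60-second windows.
events = [(3, "bike"), (42, "bike"), (61, "scooter"), (75, "bike")]
print(count_per_window(events))
# {(0, 'bike'): 2, (60, 'scooter'): 1, (60, 'bike'): 1}
```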
Complete the intermediate Engineer Data for Predictive Modeling with BigQuery ML skill badge course to demonstrate skills in the following: building data transformation pipelines to BigQuery using Dataprep by Trifacta; building extract, transform, and load (ETL) workflows using Cloud Storage, Dataflow, and BigQuery; and building machine learning models using BigQuery ML.
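In BigQuery ML, a model is trained with a `CREATE MODEL` SQL statement. The sketch below only builds the statement string, since running it needs the BigQuery client and a real dataset; the model, table, and column names are illustrative assumptions.

```python
def build_create_model_sql(model, source_table, label_col):
    """Return a BigQuery ML training statement for a linear regression."""
    return f"""
    CREATE OR REPLACE MODEL `{model}`
    OPTIONS (model_type = 'linear_reg', input_label_cols = ['{label_col}']) AS
    SELECT * FROM `{source_table}`
    """

# Hypothetical dataset/table names for illustration only.
sql = build_create_model_sql("taxi.fare_model", "taxi.training_data", "fare_amount")
print(sql)
```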
Earn a skill badge by completing the Share Data Using Google Data Cloud course. You will get hands-on experience with Google Cloud data-sharing partners that hold proprietary datasets customers can use for their own analytics use cases. Customers subscribe to these datasets, query them on their own platform, augment them with their own datasets, and use their own visualization tools for customer-facing dashboards.
Complete the introductory Prepare Data for ML APIs on Google Cloud skill badge course to demonstrate skills in the following: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs, including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API.
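As a sketch of calling one of these ML APIs, the snippet below builds the JSON request body for the Cloud Natural Language API's sentiment-analysis method. The request is only constructed, not sent, so the example stays self-contained; in the labs you would POST it to the API (or use the google-cloud-language client library).

```python
import json

def sentiment_request(text):
    """Build the request body for documents.analyzeSentiment."""
    return {
        "document": {"type": "PLAIN_TEXT", "content": text},
        "encodingType": "UTF8",
    }

body = json.dumps(sentiment_request("The lab was clear and fun."))
print(body)
```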
This course consists of a series of labs designed to give learners hands-on experience performing a variety of tasks related to the setup and maintenance of their Google VPC networks.
Complete the intermediate Implement Cloud Security Fundamentals on Google Cloud skill badge course to demonstrate skills in the following: creating and assigning roles with Identity and Access Management (IAM); creating and managing service accounts; implementing private connectivity across Virtual Private Cloud (VPC) networks; restricting application access with Identity-Aware Proxy; managing keys and encrypting data with Cloud Key Management Service (KMS); and creating private Kubernetes clusters.
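As a sketch of the role-assignment skill, the snippet below adds a member to a role in an IAM policy document, the pattern behind `gcloud projects add-iam-policy-binding`. The policy is a local dict here, and the service-account email and roles are illustrative assumptions.

```python
import json

def add_binding(policy, role, member):
    """Add member to role in an IAM policy, creating the binding if absent."""
    for binding in policy["bindings"]:
        if binding["role"] == role:
            if member not in binding["members"]:
                binding["members"].append(member)
            return policy
    policy["bindings"].append({"role": role, "members": [member]})
    return policy

# A minimal policy with one existing binding (names are hypothetical).
policy = {"bindings": [{"role": "roles/viewer", "members": ["user:a@example.com"]}]}
add_binding(policy, "roles/storage.objectViewer",
            "serviceAccount:reader@my-project.iam.gserviceaccount.com")
print(json.dumps(policy, indent=2))
```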
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how Identity and Access Management (IAM) tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Complete the intermediate Build a Data Warehouse with BigQuery skill badge course to demonstrate skills in the following: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery.
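As a self-contained sketch of two of these skills, joining tables into a new table and appending rows with a union, the snippet below uses an in-memory SQLite database. SQLite stands in for BigQuery here (BigQuery's GoogleSQL adds date partitioning, arrays, and structs, which SQLite lacks), and all names are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (id INTEGER, customer_id INTEGER);
    CREATE TABLE customers (id INTEGER, name TEXT);
    INSERT INTO orders VALUES (1, 10), (2, 11);
    INSERT INTO customers VALUES (10, 'Ada'), (11, 'Grace');

    -- Join the two tables into a new table.
    CREATE TABLE order_names AS
    SELECT o.id, c.name
    FROM orders o JOIN customers c ON o.customer_id = c.id;
""")

# Append an extra row with UNION ALL.
rows = conn.execute("""
    SELECT id, name FROM order_names
    UNION ALL
    SELECT 3 AS id, 'Linus' AS name
    ORDER BY id
""").fetchall()
print(rows)  # [(1, 'Ada'), (2, 'Grace'), (3, 'Linus')]
```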
In this course you will get hands-on experience working through real-world challenges faced when building streaming data pipelines. The primary focus is managing continuous, unbounded data with Google Cloud products.
Race to the finish line with the Arcade June Speedrun and pick up valuable skills along with an exclusive Google Cloud Credential. Get hands-on experience with APIs, learn how to build a serverless app, and more! No prior experience needed.
In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice using Dataflow for Apache Beam and Serverless for Apache Spark (Dataproc Serverless) for implementation, and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, the course covers AutoML. For more tailored machine learning capabilities, it introduces Notebooks and BigQuery machine learning (BigQuery ML). Finally, the course covers how to productionize machine learning solutions by using Vertex AI.