Ricardo Quibao
Member since 2022
Silver League
13,600 points
Complete the introductory skill badge Create and Manage AlloyDB Instances to demonstrate your skills in the following: performing core AlloyDB operations and tasks, migrating from PostgreSQL to AlloyDB, administering AlloyDB databases, and accelerating analytical queries with the AlloyDB columnar engine.
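Because AlloyDB is PostgreSQL-compatible, standard PostgreSQL tooling works against it. A minimal connection sketch with psycopg2, assuming a reachable private IP and a hypothetical orders table (in practice you would typically connect through the AlloyDB Auth Proxy):

```python
import psycopg2

# Hypothetical connection details: an AlloyDB instance's private IP inside the VPC.
conn = psycopg2.connect(
    host="10.0.0.5", dbname="postgres", user="postgres", password="REDACTED"
)
cur = conn.cursor()
cur.execute("SELECT count(*) FROM orders")  # "orders" is a hypothetical table
print(cur.fetchone())
cur.close()
conn.close()
```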
Complete the introductory skill badge Create and Manage Bigtable Instances to demonstrate skills in the following: creating instances, designing schemas, querying data, and performing administrative tasks in Bigtable, including monitoring performance and configuring node autoscaling and replication.
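For illustration, a minimal point write and read with the Bigtable Python client (google-cloud-bigtable); the project, instance, table, and column family names are hypothetical, and the table and family are assumed to already exist:

```python
from google.cloud import bigtable

client = bigtable.Client(project="my-project", admin=True)  # hypothetical project
table = client.instance("my-instance").table("user-events")

# Write one cell. Row keys are bytes; keys like user#date help avoid hotspots.
row = table.direct_row(b"user123#20240101")
row.set_cell("stats", b"clicks", b"42")  # assumes a "stats" column family exists
row.commit()

# Point read by row key; cells are keyed by family (str) then qualifier (bytes).
result = table.read_row(b"user123#20240101")
print(result.cells["stats"][b"clicks"][0].value)
```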
Complete the introductory skill badge Create and Manage Cloud SQL for PostgreSQL Instances to demonstrate your skills in the following: migrating, configuring, and managing Cloud SQL for PostgreSQL instances and databases.
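A minimal connection sketch using the Cloud SQL Python Connector with the pg8000 driver; the instance connection name and credentials are hypothetical:

```python
from google.cloud.sql.connector import Connector

connector = Connector()
# Instance connection name has the form project:region:instance (hypothetical here).
conn = connector.connect(
    "my-project:us-central1:my-postgres",
    "pg8000",
    user="postgres",
    password="REDACTED",
    db="postgres",
)
cur = conn.cursor()
cur.execute("SELECT version()")
print(cur.fetchone())
conn.close()
connector.close()
```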
This course is intended to give architects, engineers, and developers the skills required to help enterprise customers architect, plan, execute, and test database migration projects. Through a combination of presentations, demos, and hands-on labs, participants move databases to Google Cloud while taking advantage of various services. This course covers how to move on-premises enterprise databases such as SQL Server to Google Cloud (Compute Engine and Cloud SQL), and Oracle to Google Cloud's Bare Metal Solution.
Complete the introductory skill badge Migrate MySQL Data to Cloud SQL Using Database Migration Service to demonstrate your skills in the following: migrating MySQL data to Cloud SQL using the different job types and connectivity options available in Database Migration Service, and migrating MySQL user data while running Database Migration Service jobs.
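Migration jobs are usually driven from the console or gcloud, but they can also be inspected programmatically. A sketch using the Database Migration Service Python client (google-cloud-dms), with a hypothetical project and region:

```python
from google.cloud import clouddms

client = clouddms.DataMigrationServiceClient()
# Hypothetical parent resource: projects/PROJECT/locations/REGION.
parent = "projects/my-project/locations/us-central1"
for job in client.list_migration_jobs(parent=parent):
    print(job.name, job.state)
```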
Complete the introductory skill badge Create and Manage Cloud Spanner Instances to demonstrate your skills in the following: creating and interacting with Cloud Spanner instances and databases, loading Cloud Spanner databases using various methods, backing up Cloud Spanner databases, defining schemas and understanding query plans, and deploying a modern web application that connects to a Cloud Spanner instance.
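A minimal read sketch with the Cloud Spanner Python client; the project, instance, database, and Singers table are hypothetical:

```python
from google.cloud import spanner

client = spanner.Client(project="my-project")  # hypothetical project
database = client.instance("my-instance").database("my-database")

# Read-only snapshot; execute_sql streams the result set.
with database.snapshot() as snapshot:
    results = snapshot.execute_sql(
        "SELECT SingerId, FirstName FROM Singers LIMIT 10"  # hypothetical table
    )
    for row in results:
        print(row)
```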
Google Cloud Fundamentals: Core Infrastructure introduces the important concepts and terminology you will encounter when working with Google Cloud. Through videos and hands-on labs, the course presents and compares many of Google Cloud's compute and storage services, along with important resource and policy management tools.
This course helps learners create a study plan for the PDE (Professional Data Engineer) certification exam. Learners explore the breadth and scope of the exam domains, assess their exam readiness, and create an individual study plan.
Complete the intermediate skill badge Build a Data Warehouse with BigQuery to demonstrate skills in the following: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery.
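For instance, unnesting an ARRAY of STRUCTs while querying a date-partitioned table, run through the BigQuery Python client; the dataset, table, and columns are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

query = """
    SELECT o.order_id, item.sku, item.qty
    FROM `my_dataset.orders` AS o,           -- hypothetical date-partitioned table
         UNNEST(o.line_items) AS item        -- ARRAY<STRUCT<sku STRING, qty INT64>>
    WHERE o.order_date >= '2024-01-01'
"""
for row in client.query(query).result():
    print(row.order_id, row.sku, row.qty)
```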
This 1-week, accelerated on-demand course builds upon Google Cloud Platform Big Data and Machine Learning Fundamentals. Through a combination of video lectures, demonstrations, and hands-on labs, you'll learn to build streaming data pipelines using Google Cloud Pub/Sub and Dataflow to enable real-time decision making. You will also learn how to build dashboards to render tailored output for various stakeholder audiences.
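A streaming-pipeline sketch in the Apache Beam Python SDK matching that pattern: read JSON events from Pub/Sub, window into fixed 60-second windows, and count events per window. The topic is hypothetical, and running on Dataflow also requires runner, project, and region pipeline options:

```python
import json
import apache_beam as beam
from apache_beam import window
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)
with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromPubSub(topic="projects/my-project/topics/events")
        | "Parse" >> beam.Map(lambda b: json.loads(b.decode("utf-8")))
        | "Window" >> beam.WindowInto(window.FixedWindows(60))
        | "Count" >> beam.CombineGlobally(
            beam.combiners.CountCombineFn()).without_defaults()
        | "Print" >> beam.Map(print)
    )
```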
This course empowers you to develop scalable, performant LookML (Looker Modeling Language) models that provide your business users with the standardized, ready-to-use data that they need to answer their questions. Upon completing this course, you will be able to start building and maintaining LookML models to curate and manage data in your organization’s Looker instance.
In this course, you learn how to do the kind of data exploration and analysis in Looker that would formerly be done primarily by SQL developers or analysts. Upon completion of this course, you will be able to leverage Looker's modern analytics platform to find and explore relevant content in your organization’s Looker instance, ask questions of your data, create new metrics as needed, and build and share visualizations and dashboards to facilitate data-driven decision making.
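The same questions can also be asked programmatically. A small sketch with the Looker Python SDK (looker-sdk), assuming API credentials are configured in a looker.ini file and that the Look ID exists:

```python
import looker_sdk

# Reads API credentials from looker.ini or environment variables.
sdk = looker_sdk.init40()

# Run a saved Look and fetch its results as CSV; "42" is a hypothetical Look ID.
print(sdk.run_look(look_id="42", result_format="csv"))
```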
In this course, you gain hands-on experience applying advanced LookML concepts in Looker. You learn how to use Liquid to customize and create dynamic dimensions and measures, create dynamic SQL derived tables and custom native derived tables, and use Extends to modularize your LookML code.
In this quest, you will get hands-on experience with LookML in Looker. You will learn how to write LookML code to create new dimensions and measures, create derived tables and join them to Explores, filter Explores, and define caching policies in LookML.
Complete the introductory skill badge Prepare Data for Looker Dashboards and Reports to demonstrate your skills in the following: filtering, sorting, and pivoting data, merging results from different Looker Explores, and using functions and operators to build Looker dashboards and reports for data analysis and visualization.
In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
In this second installment of the Dataflow course series, we are going to dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Toward the end of the course, we introduce SQL and DataFrames to represent your business logic in Beam, and how to iteratively develop pipelines using Beam notebooks.
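A windowing sketch for those concepts in the Beam Python SDK: fixed 60-second windows that fire early every 10 seconds of processing time, fire again at the watermark, and accept data up to 5 minutes late. The keys and values are placeholders; with a bounded Create source the early/late firings are moot, but the shape is the same for streaming input:

```python
import apache_beam as beam
from apache_beam import window
from apache_beam.transforms.trigger import (
    AccumulationMode, AfterProcessingTime, AfterWatermark)

with beam.Pipeline() as p:
    (
        p
        | beam.Create([("user1", 1), ("user1", 1), ("user2", 1)])
        | beam.WindowInto(
            window.FixedWindows(60),
            trigger=AfterWatermark(early=AfterProcessingTime(10)),
            accumulation_mode=AccumulationMode.ACCUMULATING,
            allowed_lateness=300,  # seconds of allowed lateness
        )
        | beam.CombinePerKey(sum)
        | beam.Map(print)
    )
```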
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity and access management (IAM) tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, this course covers AutoML. For more tailored machine learning capabilities, this course introduces Notebooks and BigQuery machine learning (BigQuery ML). The course also covers how to productionize machine learning solutions by using Vertex AI.
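As a sketch of the low-customization path, training an AutoML tabular classifier with the Vertex AI Python SDK; the project, bucket, dataset, and column names are hypothetical:

```python
from google.cloud import aiplatform

aiplatform.init(project="my-project", location="us-central1")  # hypothetical

dataset = aiplatform.TabularDataset.create(
    display_name="churn",
    gcs_source="gs://my-bucket/churn.csv",  # hypothetical training data
)
job = aiplatform.AutoMLTabularTrainingJob(
    display_name="churn-model",
    optimization_prediction_type="classification",
)
# Trains and returns a Vertex AI Model; "churned" is the hypothetical label column.
model = job.run(dataset=dataset, target_column="churned")
```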
In this course you will get hands-on in order to work through real-world challenges faced when building streaming data pipelines. The primary focus is on managing continuous, unbounded data with Google Cloud products.
Complete the intermediate skill badge Engineer Data for Predictive Modeling with BigQuery ML to demonstrate your skills in the following: building data transformation pipelines to BigQuery using Dataprep by Trifacta; building extract, transform, and load (ETL) workflows using Cloud Storage, Dataflow, and BigQuery; and building machine learning models using BigQuery ML.
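A minimal BigQuery ML sketch of the modeling step, run through the BigQuery Python client; the dataset, tables, and label column are hypothetical:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project

# Train a logistic regression model directly in BigQuery with BigQuery ML.
client.query("""
    CREATE OR REPLACE MODEL `my_dataset.purchase_model`   -- hypothetical dataset
    OPTIONS (model_type = 'logistic_reg', input_label_cols = ['purchased']) AS
    SELECT * FROM `my_dataset.training_data`
""").result()

# Score new rows with ML.PREDICT.
rows = client.query("""
    SELECT * FROM ML.PREDICT(MODEL `my_dataset.purchase_model`,
                             (SELECT * FROM `my_dataset.new_visitors`))
""").result()
for row in rows:
    print(dict(row))
```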
Complete the introductory skill badge Prepare Data for ML APIs on Google Cloud to demonstrate skills in the following: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs, including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API.
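For example, a sentiment-analysis call to the Cloud Natural Language API with its Python client; the sample sentence is just illustrative:

```python
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()
document = language_v1.Document(
    content="The lab instructions were clear and easy to follow.",
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
# Sentiment score ranges from -1.0 (negative) to 1.0 (positive).
sentiment = client.analyze_sentiment(request={"document": document}).document_sentiment
print(sentiment.score, sentiment.magnitude)
```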
Complete the introductory skill badge Implement Load Balancing on Compute Engine to demonstrate your skills in the following: creating and deploying virtual machines in Compute Engine, and configuring network and application load balancers.
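A minimal sketch of the VM-creation half of that badge, using the Compute Engine Python client; the project, zone, machine type, and image are hypothetical placeholders, and the load balancer setup (which involves several more resources) is omitted:

```python
from google.cloud import compute_v1

def create_vm(project: str, zone: str, name: str) -> None:
    # Minimal VM definition; machine_type and source_image use partial resource URLs.
    instance = compute_v1.Instance(
        name=name,
        machine_type=f"zones/{zone}/machineTypes/e2-small",
        disks=[
            compute_v1.AttachedDisk(
                boot=True,
                auto_delete=True,
                initialize_params=compute_v1.AttachedDiskInitializeParams(
                    source_image="projects/debian-cloud/global/images/family/debian-12"
                ),
            )
        ],
        network_interfaces=[
            compute_v1.NetworkInterface(network="global/networks/default")
        ],
    )
    operation = compute_v1.InstancesClient().insert(
        project=project, zone=zone, instance_resource=instance
    )
    operation.result()  # Block until the create operation completes.

create_vm("my-project", "us-central1-a", "web-server-1")  # hypothetical values
```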
In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice using Dataflow for Apache Beam and Serverless for Apache Spark (Dataproc Serverless) for implementation, and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
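A batch-pipeline sketch in the Apache Beam Python SDK matching that shape: extract CSV lines from Cloud Storage, transform them, and load them into BigQuery. The bucket, table, and schema are hypothetical, and Dataflow runner options are omitted:

```python
import apache_beam as beam

with beam.Pipeline() as p:  # add Dataflow runner options to run at scale
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/sales/*.csv")  # hypothetical
        | "Parse" >> beam.Map(lambda line: dict(zip(("sku", "qty"), line.split(","))))
        | "Load" >> beam.io.WriteToBigQuery(
            "my-project:my_dataset.sales",  # hypothetical table
            schema="sku:STRING,qty:STRING",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```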
While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.