Bryan Au
Member since: 2024
Diamond League
33975 points
In the last installment of the Dataflow course series, we will introduce the components of the Dataflow operational model. We will examine tools and techniques for troubleshooting and optimizing pipeline performance. We will then review testing, deployment, and reliability best practices for Dataflow pipelines. We will conclude with a review of Templates, which make it easy to scale Dataflow pipelines to organizations with hundreds of users. These lessons will help ensure that your data platform is stable and resilient to unanticipated circumstances.
In this second installment of the Dataflow course series, we are going to be diving deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Toward the end of the course, we introduce SQL and DataFrames for representing your business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
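As a taste of the windowing and triggering material this course covers, here is a minimal sketch using the Beam Python SDK. The 60-second window, 10-second early firing, and accumulation mode are illustrative choices, not values prescribed by the course.

```python
# Minimal sketch: fixed windows with an early-firing trigger in the
# Beam Python SDK. The window size and trigger delay are illustrative.
import apache_beam as beam
from apache_beam.transforms import trigger, window

with beam.Pipeline() as pipeline:
    (
        pipeline
        | "Create" >> beam.Create([("user1", 1), ("user2", 1), ("user1", 1)])
        # Group elements into fixed 60-second windows; fire an early result
        # after 10 seconds of processing time, then a final result once the
        # watermark passes the end of the window.
        | "Window" >> beam.WindowInto(
            window.FixedWindows(60),
            trigger=trigger.AfterWatermark(
                early=trigger.AfterProcessingTime(10)),
            accumulation_mode=trigger.AccumulationMode.ACCUMULATING)
        | "CountPerKey" >> beam.CombinePerKey(sum)
        | "Print" >> beam.Map(print)
    )
```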
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity and access management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, this course covers AutoML. For more tailored machine learning capabilities, this course introduces Notebooks and BigQuery machine learning (BigQuery ML). Also, this course covers how to productionize machine learning solutions by using Vertex AI.
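As one hedged illustration of the BigQuery ML approach mentioned above, the sketch below trains and evaluates a logistic regression model with standard SQL submitted through the google-cloud-bigquery Python client. The project, dataset, table, and column names are hypothetical placeholders.

```python
# Hypothetical sketch: creating a BigQuery ML model from Python.
# The project, dataset, table, and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # hypothetical project ID

create_model_sql = """
CREATE OR REPLACE MODEL `my-project.my_dataset.churn_model`
OPTIONS (model_type = 'logistic_reg',
         input_label_cols = ['churned']) AS
SELECT tenure_months, monthly_charges, churned
FROM `my-project.my_dataset.customers`
"""

# CREATE MODEL runs as a regular query job; wait for it to finish.
client.query(create_model_sql).result()

# Evaluate the trained model with ML.EVALUATE.
rows = client.query(
    "SELECT * FROM ML.EVALUATE(MODEL `my-project.my_dataset.churn_model`)"
).result()
for row in rows:
    print(dict(row))
```

Submitting CREATE MODEL as an ordinary query job is what keeps this workflow serverless: training runs inside BigQuery itself, with no infrastructure to manage.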
In this course, you will get hands-on experience working through real-world challenges faced when building streaming data pipelines. The primary focus is on managing continuous, unbounded data with Google Cloud products.
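To make "continuous, unbounded data" concrete, here is a minimal sketch of a streaming Beam pipeline reading from Pub/Sub. The subscription path is a placeholder, and streaming mode must be enabled for unbounded sources.

```python
# Minimal sketch of an unbounded (streaming) pipeline. The Pub/Sub
# subscription path is a placeholder.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # unbounded sources need streaming mode

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        # ReadFromPubSub yields an unbounded PCollection of message bytes.
        | "Read" >> beam.io.ReadFromPubSub(
            subscription="projects/my-project/subscriptions/my-sub")
        | "Decode" >> beam.Map(lambda msg: msg.decode("utf-8"))
        | "Print" >> beam.Map(print)
    )
```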
In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice using Dataflow for Apache Beam and Serverless for Apache Spark (Dataproc Serverless) for implementation, and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.
This course helps learners create a study plan for the PDE (Professional Data Engineer) certification exam. Learners explore the breadth and scope of the domains covered in the exam, assess their exam readiness, and create an individual study plan.
In this course, you will get hands-on experience applying advanced LookML concepts in Looker. You will learn how to use Liquid to customize and create dynamic dimensions and measures, create dynamic SQL derived tables and custom native derived tables, and use Extends to modularize your LookML code.
In this quest, you will get hands-on experience with LookML in Looker. You will learn how to write LookML code to create new dimensions and measures, create derived tables and join them to Explores, filter Explores, and define caching policies in LookML.
Complete the introductory skill badge course Build LookML Objects in Looker to demonstrate your skills in the following: building new dimensions, measures, views, and derived tables; setting filters and types for measures based on requirements; updating dimensions and measures; building and refining Explores; joining views to existing Explores; and deciding which LookML objects to create based on business requirements.
Complete the introductory skill badge course Prepare Data for Looker Dashboards and Reports to demonstrate your skills in the following: filtering, sorting, and pivoting data; merging results from different Looker Explores; and using functions and operators to build Looker dashboards and reports for data analysis and visualization.
This course empowers you to develop scalable, performant LookML (Looker Modeling Language) models that provide your business users with the standardized, ready-to-use data that they need to answer their questions. Upon completing this course, you will be able to start building and maintaining LookML models to curate and manage data in your organization’s Looker instance.
In this course, you learn how to do the kind of data exploration and analysis in Looker that would formerly be done primarily by SQL developers or analysts. Upon completion of this course, you will be able to leverage Looker's modern analytics platform to find and explore relevant content in your organization’s Looker instance, ask questions of your data, create new metrics as needed, and build and share visualizations and dashboards to facilitate data-driven decision making.
In this introductory course, you will learn about the data analytics workflow on Google Cloud and the tools available to explore, analyze, and visualize data and share findings with stakeholders. Through a combination of case studies, hands-on labs, lectures, and quizzes/demos, this course shows how to turn raw datasets into clean data, and then into useful visualizations and dashboards. Whether you already work with data and want to learn how to succeed with Google Cloud, or you are looking to grow your career, this course can help you take the first step. Almost anyone who performs or consumes data analysis in their work can benefit from this course.