Modupe Ajala
Member since 2023
Silver League
37866 points
This structured course is for developers interested in building intelligent agents using the Agent Development Kit (ADK). It combines hands-on experience, core concepts, and practical application to provide a comprehensive guide to using ADK. You can also join our community of Google Cloud experts and peers to ask questions, collaborate on answers, and connect with the Googlers making the products you use every day.
In this course, you will learn about data engineering on Google Cloud, the roles and responsibilities of data engineers, and how these map to the offerings provided by Google Cloud. You will also learn ways to address data engineering challenges.
In this second installment of the Dataflow course series, we dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to do stateful transformations using the State and Timer APIs. We move on to review best practices that help maximize pipeline performance. Toward the end of the course, we introduce SQL and DataFrames for representing your business logic in Beam, and show how to iteratively develop pipelines using Beam notebooks.
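Conceptually, the fixed (tumbling) windowing covered in this course assigns each element to a window based on its event timestamp. A minimal plain-Python sketch of that assignment follows; it is an illustration of the idea, not the Beam SDK itself, and the 60-second window size is an arbitrary example.

```python
# Illustrative sketch of how fixed (tumbling) windows group events by
# event time. In the real Beam SDK this is expressed with
# beam.WindowInto(beam.window.FixedWindows(60)).
from collections import defaultdict


def window_start(timestamp, size=60):
    """Start of the fixed window (in seconds) containing `timestamp`."""
    return timestamp - (timestamp % size)


def assign_to_windows(events, size=60):
    """Group (timestamp, value) events into fixed windows of `size` seconds."""
    windows = defaultdict(list)
    for ts, value in events:
        windows[window_start(ts, size)].append(value)
    return dict(windows)


events = [(5, "a"), (59, "b"), (60, "c"), (125, "d")]
print(assign_to_windows(events))
# {0: ['a', 'b'], 60: ['c'], 120: ['d']}
```

Watermarks and triggers then decide *when* each of these windows is considered complete enough to emit results, which is what distinguishes streaming from batch execution.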
While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.
This course introduces participants to MLOps tools and best practices for deploying, evaluating, monitoring and operating production ML systems on Google Cloud. MLOps is a discipline focused on the deployment, testing, monitoring, and automation of ML systems in production. Machine Learning Engineering professionals use tools for continuous improvement and evaluation of deployed models. They work with (or can be) Data Scientists, who develop models, to enable velocity and rigor in deploying the best performing models.
This course is designed to equip you with the knowledge and tools needed to navigate the unique challenges MLOps teams face when deploying and managing generative AI models, and to explore how Vertex AI empowers AI teams to streamline their MLOps processes and succeed in generative AI projects.
This course explores Retrieval-Augmented Generation (RAG) solutions in BigQuery for mitigating AI hallucinations. It introduces the RAG workflow, which includes generating embeddings, searching the vector space, and producing improved answers. The course explains the conceptual reasoning behind these steps and their practical implementation with BigQuery. By the end of the course, participants will be able to build a RAG pipeline using BigQuery and generative AI models such as Gemini, together with embedding models, to address AI hallucination in their own use cases.
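The vector-search step at the heart of that workflow can be sketched in plain Python: embed the question, rank stored document chunks by cosine similarity to the query embedding, and feed the top matches to the generation model. In BigQuery this step is handled by built-in functions such as ML.GENERATE_EMBEDDING and VECTOR_SEARCH; the tiny 3-dimensional embeddings below are made up purely for illustration.

```python
# Minimal sketch of the retrieval step in a RAG pipeline: rank stored
# document chunks by cosine similarity to the query embedding.
# The 3-dimensional vectors are toy values; real embedding models
# produce hundreds of dimensions.
import math


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


def top_k(query_embedding, corpus, k=2):
    """Return the k chunk texts most similar to the query embedding."""
    ranked = sorted(
        corpus,
        key=lambda item: cosine_similarity(query_embedding, item[1]),
        reverse=True,
    )
    return [text for text, _ in ranked[:k]]


corpus = [
    ("BigQuery supports vector search.", [0.9, 0.1, 0.0]),
    ("Gemini is a generative AI model.", [0.1, 0.9, 0.0]),
    ("Unrelated shipping policy text.", [0.0, 0.0, 1.0]),
]
query = [0.8, 0.2, 0.0]  # pretend embedding of a question about vector search
print(top_k(query, corpus, k=1))
# → ['BigQuery supports vector search.']
```

Grounding the model's answer in the retrieved chunks, rather than in its parametric memory alone, is what mitigates hallucination.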
This course shows how to use AI/ML models for generative AI tasks in BigQuery. Through a practical use case involving customer relationship management (CRM), you will learn the workflow for solving business problems with Gemini models. To make it easy to follow, the course also provides step-by-step guidance through the coding solutions using SQL queries and Python notebooks.
This course explores Gemini in BigQuery, a suite of AI-powered features that assist the data-to-AI workflow. These features cover data exploration and preparation, code generation and troubleshooting, and workflow discovery and visualization. Through conceptual explanations, a practical use case, and hands-on labs, the course helps data practitioners boost their productivity and accelerate their development pipelines.
Complete the intermediate Build a Data Warehouse with BigQuery skill badge to demonstrate skills in the following: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery.
Google Cloud Fundamentals: Core Infrastructure introduces important concepts and terminology for working with Google Cloud. Through videos and hands-on labs, this course presents and compares many of Google Cloud's compute and storage services, along with important resource and policy management tools.
This Data Analytics course consists of a series of advanced-level labs designed to validate your proficiency in using Google Cloud services. Each lab presents a set of required tasks that you must complete with minimal assistance. The labs in this course have replaced the previous L300 Data Analytics Challenge Lab. If you have already completed the Challenge Lab as part of your L300 accreditation requirement, it will be carried over and count toward your L300 status. You must score 80% or higher on each lab to complete this course and fulfill your CEPF L300 Data Analytics requirement. For technical issues with a Challenge Lab, please raise a Buganizer ticket using this CEPF Buganizer template: go/cepfl300labsupport
In this quest, you will get hands-on experience with LookML in Looker. You will learn how to write LookML code to create new dimensions and measures, create derived tables and join them to Explores, filter Explores, and define caching policies in LookML.
This course introduces diffusion models, a family of machine learning models that have recently shown great promise in the image generation space. Diffusion models draw inspiration from physics, specifically thermodynamics. In recent years, diffusion models have become popular in both research and industry. They underpin many of the state-of-the-art image generation tools and models on Google Cloud. This course introduces you to the theory behind diffusion models and how to train and deploy them on Vertex AI.
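The forward (noising) process that this theory builds on has a closed form: given a clean sample x0, the noised sample at step t is sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps, where eps is standard Gaussian noise. A toy sketch of that step, with made-up schedule values rather than a real trained noise schedule:

```python
# Toy sketch of the closed-form forward diffusion step used in
# DDPM-style models: x_t = sqrt(ab) * x0 + sqrt(1 - ab) * noise,
# where ab is the cumulative product alpha_bar_t of the noise schedule.
# The alpha_bar values below are illustrative, not from a real schedule.
import math
import random


def noise_sample(x0, alpha_bar, rng=random):
    """Apply the forward diffusion step to a list of scalar values."""
    signal_scale = math.sqrt(alpha_bar)
    noise_scale = math.sqrt(1.0 - alpha_bar)
    return [signal_scale * x + noise_scale * rng.gauss(0.0, 1.0) for x in x0]


x0 = [1.0, -0.5, 0.25]
early = noise_sample(x0, alpha_bar=0.99)  # early step: mostly signal
late = noise_sample(x0, alpha_bar=0.01)   # late step: mostly noise
```

Training then amounts to teaching a network to predict the added noise, so that sampling can run this process in reverse from pure noise back to an image.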
In this course, you learn how to do the kind of data exploration and analysis in Looker that would formerly be done primarily by SQL developers or analysts. Upon completion of this course, you will be able to leverage Looker's modern analytics platform to find and explore relevant content in your organization’s Looker instance, ask questions of your data, create new metrics as needed, and build and share visualizations and dashboards to facilitate data-driven decision making.
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, the course covers AutoML. For more tailored machine learning capabilities, it introduces Notebooks and BigQuery machine learning (BigQuery ML). It also covers how to productionize machine learning solutions by using Vertex AI.
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.
This course, Building Resilient Streaming Analytics Systems on Google Cloud - Locales, is intended for non-English learners. If you want to take this course in English, please enroll in Building Resilient Streaming Analytics Systems on Google Cloud. Processing streaming data is becoming increasingly popular as streaming enables businesses to get real-time metrics on business operations. This course covers how to build streaming data pipelines on Google Cloud. Pub/Sub is described for handling incoming streaming data. The course also covers how to apply aggregations and transformations to streaming data using Dataflow, and how to store processed records to BigQuery or Cloud Bigtable for analysis. Learners will get hands-on experience building streaming data pipeline components on Google Cloud using QwikLabs.
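The aggregation stage described above — counting streaming events per key within a time window before writing results out — can be sketched in plain Python. The Pub/Sub ingestion and BigQuery write stages are assumed, not shown, and the record shape and 60-second window are illustrative.

```python
# Sketch of the windowed aggregation a Dataflow pipeline applies to
# streaming records before writing results to BigQuery. Each record is
# (event_time_seconds, key); output is a count per key per window.
from collections import Counter


def windowed_counts(records, window_size=60):
    """Count records per (window_start, key) pair."""
    counts = Counter()
    for ts, key in records:
        counts[(ts - ts % window_size, key)] += 1
    return dict(counts)


records = [(10, "checkout"), (20, "checkout"), (30, "search"), (70, "checkout")]
print(windowed_counts(records))
# {(0, 'checkout'): 2, (0, 'search'): 1, (60, 'checkout'): 1}
```

In the real pipeline, each (window, key) count becomes a row appended to a BigQuery or Bigtable table as the window closes, giving the near-real-time metrics the course is about.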
In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice using Dataflow for Apache Beam and Serverless for Apache Spark (Dataproc Serverless) for implementation, and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.