Markos Muche
Member since 2022
Gold League
73,775 points
This course introduces the Transformer architecture and the Bidirectional Encoder Representations from Transformers (BERT) model. You learn about the main components of the Transformer architecture, such as the self-attention mechanism, and how it is used to build the BERT model. You also learn about the different tasks that BERT can be used for, such as text classification, question answering, and natural language inference. This course takes approximately 45 minutes to complete.
This course gives you a synopsis of the encoder-decoder architecture, a powerful and prevalent machine learning architecture for sequence-to-sequence tasks such as machine translation, text summarization, and question answering. You learn about the main components of the encoder-decoder architecture and how to train and serve these models. In the corresponding lab walkthrough, you learn how to code a simple implementation of the encoder-decoder architecture from scratch in TensorFlow to generate poetry.
This course introduces the attention mechanism, a powerful technique that allows neural networks to focus on specific parts of an input sequence. You learn how attention works and how it can be used to improve the performance of a variety of machine learning tasks, including machine translation, text summarization, and question answering.
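To make the idea concrete, here is a minimal sketch of scaled dot-product attention in plain Python. This is an illustration of the general technique, not code from the course labs: the output is a weighted average of the value vectors, with weights derived from query-key similarity.

```python
import math

def softmax(scores):
    # Numerically stable softmax over a list of scores.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for a single query vector.

    Scores each key against the query, normalizes the scores with
    softmax, and returns the weighted average of the value vectors.
    """
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]

# The query aligns with the first key, so the output is pulled
# toward the first value vector.
out = attention([1.0, 0.0],
                keys=[[1.0, 0.0], [0.0, 1.0]],
                values=[[10.0, 0.0], [0.0, 10.0]])
```

Because the weights sum to 1, "attending" to the input is just a soft, differentiable selection over the value vectors.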
This course introduces diffusion models, a family of machine learning models that have recently shown promise in the image generation space. Diffusion models draw their concepts from physics, particularly thermodynamics. Within the last few years, they have become a hot topic in both research and industry. Diffusion models underpin many state-of-the-art image generation models and tools on Google Cloud. This course introduces the theory behind diffusion models and explains how to train and deploy them on Vertex AI.
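For intuition, the forward (noising) process at the heart of diffusion models can be sketched in a few lines of plain Python. This is a simplified illustration assuming a fixed noise level per step, not the course's training code: each step mixes the signal with Gaussian noise in a variance-preserving way, so repeating it gradually destroys the original signal.

```python
import math
import random

def forward_step(x, beta, rng):
    """One simplified forward diffusion step.

    Scales the signal down by sqrt(1 - beta) and adds Gaussian
    noise scaled by sqrt(beta), so the overall variance is
    preserved while the original signal slowly washes out.
    """
    keep = math.sqrt(1.0 - beta)
    noise_scale = math.sqrt(beta)
    return [keep * v + noise_scale * rng.gauss(0.0, 1.0) for v in x]

rng = random.Random(0)  # seeded for reproducibility
x = [1.0, -1.0, 0.5]    # a toy "image" of three values
# After many steps, x is approximately pure unit-variance noise;
# the generative model is trained to reverse this process.
for _ in range(1000):
    x = forward_step(x, beta=0.02, rng=rng)
```

A trained diffusion model learns the reverse of this process, denoising step by step to generate new samples.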
This introductory course presents the data analysis workflow on Google Cloud and the tools used to explore, analyze, and visualize data. You also learn how to share your findings with stakeholders. The course includes case studies, hands-on labs, lectures, quizzes, and demos that show how to turn raw datasets into clean data and then into effective charts and dashboards. Whether you are a data practitioner who wants to succeed with Google Cloud or someone looking to advance your career, this course helps you take the first step. Most learners who perform or use data analysis in their work can benefit from this course.
Complete the introductory Build a Data Mesh with Dataplex skill badge course to demonstrate skills in the following: building a data mesh with Dataplex to facilitate data security, governance, and discovery on Google Cloud. You practice and test your skills in tagging assets in Dataplex, assigning IAM roles, and assessing data quality.
In this second installment of the Dataflow course series, we dive deeper into developing pipelines using the Beam SDK. We start with a review of Apache Beam concepts. Next, we discuss processing streaming data using windows, watermarks, and triggers. We then cover options for sources and sinks in your pipelines, schemas to express your structured data, and how to perform stateful transformations using the State and Timer APIs. We move on to reviewing best practices that help maximize your pipeline performance. Toward the end of the course, we introduce SQL and DataFrames to represent your business logic in Beam, and how to iteratively develop pipelines using Beam notebooks.
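As a rough illustration of the windowing concept (this mirrors the idea behind fixed windows in Beam, not the Beam SDK API itself), streaming events can be bucketed by truncating their timestamps to window boundaries:

```python
def fixed_window(timestamp, size):
    """Assign an event timestamp (in seconds) to a fixed window
    of the given size, returning the window's [start, end) bounds."""
    start = timestamp - (timestamp % size)
    return (start, start + size)

def group_into_windows(events, size):
    """Group (timestamp, value) events by their fixed window."""
    windows = {}
    for ts, value in events:
        windows.setdefault(fixed_window(ts, size), []).append(value)
    return windows

events = [(0, "a"), (30, "b"), (65, "c"), (119, "d")]
grouped = group_into_windows(events, size=60)
# Two 60-second windows: [0, 60) holds a and b; [60, 120) holds c and d.
```

In a real streaming pipeline, watermarks estimate how far event time has progressed, and triggers decide when each window's accumulated results are emitted.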
This course helps learners create a study plan for the PDE (Professional Data Engineer) certification exam. Learners explore the breadth and scope of the exam domains, assess their exam readiness, and create an individual study plan.
Enterprise data sharing made easy with Dataplex and Analytics Hub
Learn how to share data securely in your lakehouse, with minimized data duplication and stronger data governance, through Dataplex and Analytics Hub: enterprise data management made easy.
Creating Data Pipelines with Data Fusion
In this session, we will explore using Data Fusion to create code-free, point-and-click pipelines that can ETL high volumes of data, with support for popular data sources including file systems and object stores, relational and NoSQL databases, and SaaS systems.
The third course in this series is Achieving Advanced Insights with BigQuery. Here we will build on your growing knowledge of SQL as we dive into advanced functions and how to break apart a complex query into manageable steps. We will cover the internal architecture of BigQuery (column-based sharded storage) and advanced SQL topics such as nested and repeated fields through the use of arrays and structs. Lastly, we will dive into optimizing your queries for performance and securing your data through authorized views. After completing this course, enroll in the Applying Machine Learning to Your Data with Google Cloud course.
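To illustrate what nested and repeated fields mean, here is a hedged plain-Python analogy to flattening an ARRAY of STRUCTs, the operation BigQuery's UNNEST performs in SQL. The `orders`/`items` schema is hypothetical and the helper below is Python, not BigQuery syntax: each array element becomes its own flat row joined with the parent row's scalar fields.

```python
def unnest(rows, array_field):
    """Flatten a nested repeated field: every element of the
    array becomes one output row, combined with the parent row's
    remaining (scalar) fields."""
    flat = []
    for row in rows:
        for item in row[array_field]:
            out = {k: v for k, v in row.items() if k != array_field}
            out.update(item)  # each item plays the role of a STRUCT
            flat.append(out)
    return flat

# Hypothetical nested data: each order holds a repeated "items" field.
orders = [
    {"order_id": 1,
     "items": [{"sku": "A", "qty": 2}, {"sku": "B", "qty": 1}]},
    {"order_id": 2,
     "items": [{"sku": "C", "qty": 5}]},
]
flat = unnest(orders, "items")
# Three flat rows, one per array element.
```

Storing the items inline with each order avoids a join at query time, which is exactly why BigQuery encourages nested and repeated fields for one-to-many relationships.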
This course covers BigQuery fundamentals for professionals who are familiar with SQL-based cloud data warehouses such as Redshift and want to begin working in BigQuery. Through interactive lecture content and hands-on labs, you learn how to provision resources, create and share data assets, ingest data, and optimize query performance in BigQuery. Drawing upon your knowledge of Redshift, you also learn about similarities and differences between Redshift and BigQuery to help you get started with data warehouses in BigQuery. After this course, you can continue your BigQuery journey by completing the skill badge quest titled Build and Optimize Data Warehouses with BigQuery.
This course is part 1 of a 3-course series on Serverless Data Processing with Dataflow. In this first course, we start with a refresher of what Apache Beam is and its relationship with Dataflow. Next, we talk about the Apache Beam vision and the benefits of the Beam Portability framework. The Beam Portability framework achieves the vision that a developer can use their favorite programming language with their preferred execution backend. We then show you how Dataflow allows you to separate compute and storage while saving money, and how identity, access, and management tools interact with your Dataflow pipelines. Lastly, we look at how to implement the right security model for your use case on Dataflow.
Complete the intermediate Engineer Data for Predictive Modeling with BigQuery ML skill badge course to demonstrate skills in the following: building data transformation pipelines to BigQuery using Dataprep by Trifacta; building extract, transform, and load (ETL) workloads using Cloud Storage, Dataflow, and BigQuery; and building machine learning models using BigQuery ML.
Complete the intermediate Build a Data Warehouse with BigQuery skill badge course to demonstrate skills in the following: joining data to create new tables, troubleshooting joins, appending data with unions, creating date-partitioned tables, and working with JSON, arrays, and structs in BigQuery.
Complete the introductory Prepare Data for ML APIs on Google Cloud skill badge course to demonstrate skills in the following: cleaning data with Dataprep by Trifacta, running data pipelines in Dataflow, creating clusters and running Apache Spark jobs in Dataproc, and calling ML APIs including the Cloud Natural Language API, Google Cloud Speech-to-Text API, and Video Intelligence API.
Incorporating machine learning into data pipelines increases the ability to extract insights from data. This course covers ways machine learning can be included in data pipelines on Google Cloud. For little to no customization, the course covers AutoML. For more tailored machine learning capabilities, it introduces Notebooks and BigQuery machine learning (BigQuery ML). Finally, it covers how to productionize machine learning solutions by using Vertex AI.
In this course, you will get hands-on experience working through real-world challenges faced when building streaming data pipelines. The primary focus is on managing continuous, unbounded data with Google Cloud products.
In this intermediate course, you will learn to design, build, and optimize robust batch data pipelines on Google Cloud. Moving beyond fundamental data handling, you will explore large-scale data transformations and efficient workflow orchestration, essential for timely business intelligence and critical reporting. Get hands-on practice using Dataflow for Apache Beam and Serverless for Apache Spark (Dataproc Serverless) for implementation, and tackle crucial considerations for data quality, monitoring, and alerting to ensure pipeline reliability and operational excellence. A basic knowledge of data warehousing, ETL/ELT, SQL, Python, and Google Cloud concepts is recommended.
Earn a skill badge by completing the Set Up a Google Cloud Network course, where you learn how to perform basic networking tasks on Google Cloud Platform, including creating custom networks, adding subnetwork firewall rules, and creating VMs and testing the connectivity latency between them.
While the traditional approaches of using data lakes and data warehouses can be effective, they have shortcomings, particularly in large enterprise environments. This course introduces the concept of a data lakehouse and the Google Cloud products used to create one. A lakehouse architecture uses open-standard data sources and combines the best features of data lakes and data warehouses, which addresses many of their shortcomings.
This course introduces the Google Cloud big data and machine learning products and services that support the data-to-AI lifecycle. It explores the processes, challenges, and benefits of building a big data pipeline and machine learning models with Vertex AI on Google Cloud.
This course helps learners create a study plan for the PCA (Professional Cloud Architect) certification exam. Learners explore the breadth and scope of the exam domains, assess their exam readiness, and create an individual study plan.
Want to scale your data analysis efforts without managing database hardware? Learn the best practices for querying and getting insights from your data warehouse with this interactive series of BigQuery labs. BigQuery is Google's fully managed, NoOps, low cost analytics database. With BigQuery you can query terabytes and terabytes of data without having any infrastructure to manage or needing a database administrator. BigQuery uses SQL and can take advantage of the pay-as-you-go model. BigQuery allows you to focus on analyzing data to find meaningful insights.
Welcome to Getting Started with Google Kubernetes Engine. Kubernetes is a software layer that sits between your applications and your hardware infrastructure. If this technology interests you, this course is for you. Google Kubernetes Engine brings you Kubernetes as a managed service on Google Cloud. The goal of this course is to introduce the fundamentals of Google Kubernetes Engine (often shortened to GKE) and how to containerize applications so they run in Google Cloud. The course begins with a quick introduction to Google Cloud, followed by an overview of containers, Kubernetes, Kubernetes architecture, and Kubernetes operations.
This course teaches participants how to build highly reliable and efficient solutions on Google Cloud using proven design patterns. It is a continuation of the Architecting with Google Compute Engine or Architecting with Google Kubernetes Engine course and assumes hands-on experience with the technologies covered in either of those courses. Through a combination of presentations, design activities, and hands-on labs, participants learn to define and balance business and technical requirements to design Google Cloud deployments that are highly reliable, highly available, secure, and cost-effective.
This on-demand, accelerated course introduces participants to the comprehensive and flexible infrastructure and platform services provided by Google Cloud, with a focus on Compute Engine. Through a combination of video lectures, demos, and hands-on labs, participants explore and deploy solution elements, including infrastructure components such as networks, systems, and application services. The course also covers deploying practical solutions, including customer-supplied encryption keys, security and access management, quotas and billing, and resource monitoring.
This on-demand, accelerated course introduces participants to the comprehensive and flexible infrastructure and platform services provided by Google Cloud, with a particular focus on Compute Engine. Through a combination of video lectures, demos, and hands-on labs, participants explore and deploy solution elements, including infrastructure components such as networks, virtual machines, and application services. You learn how to use Google Cloud through the console and Cloud Shell. You also learn about the role of a cloud architect, approaches to infrastructure design, and virtual networking configuration with Virtual Private Cloud (VPC), projects, networks, subnetworks, IP addresses, routes, and firewall rules.
Google Cloud Fundamentals: Core Infrastructure introduces important concepts and terminology for working with Google Cloud. Through videos and hands-on labs, this course presents and compares many of Google Cloud's computing and storage services, along with important resource and policy management tools.
This course offers hands-on practice with migrating MySQL data to Cloud SQL using Database Migration Service. You start with an introductory lab that briefly reviews how to get started with Cloud SQL for MySQL, including how to connect to Cloud SQL instances using the Cloud Console. Then, you continue with two labs focused on migrating MySQL databases to Cloud SQL using different job types and connectivity options available in Database Migration Service. The course ends with a lab on migrating MySQL user data when running Database Migration Service jobs.
Complete the intermediate Build Infrastructure with Terraform on Google Cloud skill badge course to demonstrate skills in the following: applying infrastructure as code (IaC) principles with Terraform, provisioning and managing Google Cloud resources using Terraform configurations, managing state effectively (local and remote), and modularizing Terraform code for reusability and maintainability.
Earn a skill badge by completing the Create a Google Cloud Network course, which explains various methods for deploying and monitoring applications, including reviewing IAM roles and adding and removing project access, creating Virtual Private Cloud networks, deploying and monitoring Compute Engine VMs, writing SQL queries, and deploying applications in multiple ways using Kubernetes.
This course introduces the AI and machine learning (ML) offerings on Google Cloud, with a focus on developing generative and predictive AI projects. It also explores the technologies, products, and tools available across the data-to-AI lifecycle, with interactive exercises that help data scientists, AI developers, and ML engineers enhance their skills.
Earn a skill badge by completing the Set Up an App Dev Environment on Google Cloud course, where you learn how to build and connect storage-centric cloud infrastructure using the basic capabilities of the following technologies: Cloud Storage, Identity and Access Management, Cloud Functions, and Pub/Sub.
Complete the introductory Implement Load Balancing on Compute Engine skill badge course to demonstrate skills in the following: creating and deploying virtual machines on Compute Engine, and configuring network and application load balancers.
In this introductory course, you get hands-on practice with Google Cloud's fundamental tools and services. The course includes optional videos that provide additional background on the concepts covered in the labs to help you review. Google Cloud Essentials is the recommended first course for Google Cloud learners; even if you have no prior cloud experience, you gain practical experience that you can apply to your first Google Cloud project. From writing Cloud Shell commands and deploying your first virtual machine to running applications on Kubernetes Engine or with load balancing, Google Cloud Essentials is a prime introduction to the platform's basic capabilities.