LaPointe Rick
Member since: 2025
Diamond League
7534 points
"Gen AI: Unlock Foundational Concepts" is the second course in the Gen AI Leader learning path. In this course, you will master the fundamentals of generative AI by exploring the differences between AI, ML, and generative AI, and by understanding how different data types enable generative AI to address business challenges. You will also gain insights into Google Cloud's strategies for addressing the limitations of foundation models, and into the key challenges of developing and deploying AI responsibly and securely.
"Gen AI: Beyond the Chatbot" is the first course in the Gen AI Leader learning path and has no prerequisites. It aims to broaden your foundational understanding of chatbots and give you a grasp of the true potential of generative AI for your organization. The course introduces concepts that are key to harnessing the power of generative AI, such as foundation models and prompt engineering, and also examines the important considerations for crafting a strong generative AI strategy for an organization.
In this Google DeepMind course you will discover the mechanisms of the transformer architecture. You will investigate how transformer language models process prompts to make context-sensitive next-token predictions. Through practical activities you will explore the attention mechanism, visualize attention weights, and encounter advanced concepts like masked attention and multi-head attention. You will also learn additional techniques needed to build neural networks that work well as language models. Finally, through activities on values, stakeholder mapping, and community engagement, you will practice concrete tools for ensuring AI projects are developed with communities, not just for them.
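As a rough illustration of the attention weights this course has you visualize, single-head scaled dot-product attention can be sketched in a few lines (a minimal sketch with random toy data, not course material):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal single-head attention: weight each value by query-key similarity."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # similarity of every query to every key
    # Softmax over keys turns scores into a probability distribution per query
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Three toy "tokens" with 4-dimensional representations (illustrative data)
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(3, 4))
output, weights = scaled_dot_product_attention(Q, K, V)
# Each row of `weights` sums to 1: it says how much each token attends to the others
```

Masked and multi-head attention, which the course also covers, build directly on this single-head form.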
In this course you will learn techniques for fine-tuning Gemini. Model tuning is an effective way to customize large models like Gemini for your specific tasks, and a key step in improving a model's quality and efficiency. The course gives an overview of model tuning, describes the tuning options available for Gemini, and helps you determine when to use each option and how to perform tuning.
In this Google DeepMind course you will focus on the training process for machine learning models. You will learn how to spot and mitigate issues when training a model, such as overfitting and underfitting. In practical coding labs, you will implement and evaluate the multilayer perceptron for simple classification tasks. This will provide insights into the mechanics of training a neural network model and the backpropagation algorithm. Research case studies will demonstrate how neural networks power real-world models. Additionally, you will consider the broader social impacts of innovation by looking beyond immediate benefits to anticipate potential risks, safety concerns, and further-reaching societal consequences.
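The kind of model the coding labs implement can be sketched as a tiny multilayer perceptron trained with hand-written backpropagation (an illustrative toy on the XOR task, with assumed hyperparameters; not the labs' actual code):

```python
import numpy as np

# XOR: a classic task a linear model cannot solve but a small MLP can
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer, 8 units (assumed)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

for _ in range(5000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of binary cross-entropy w.r.t. each parameter
    dp = p - y
    dW2 = h.T @ dp; db2 = dp.sum(0)
    dh = (dp @ W2.T) * (1 - h**2)   # tanh derivative
    dW1 = X.T @ dh; db1 = dh.sum(0)
    # Gradient-descent update (learning rate 0.1, assumed)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.1 * grad

preds = (sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2) > 0.5).astype(int)
```

Monitoring training and held-out loss on a model like this is where issues such as overfitting and underfitting, which the course covers, become visible.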
In this Google DeepMind course you will learn how to prepare text data for language models to process. You will investigate the tools and techniques used to prepare, structure, and represent text data for language models, with a focus on tokenization and embeddings. You will be encouraged to think critically about the decisions behind data preparation, and about what biases within the data may be introduced into models. You will analyze trade-offs, learn to work with vectors and matrices, and see how meaning is represented in language models. Finally, you will practice designing a dataset ethically using the Data Cards process, ensuring transparency, accountability, and respect for community values in AI development.
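A toy illustration of the tokenization-and-embeddings pipeline (a whitespace tokenizer and a random embedding table; production models use learned subword tokenizers, and the corpus and names here are assumptions):

```python
import numpy as np

# Build a vocabulary from a toy corpus; dict.fromkeys keeps first-seen order
corpus = "the cat sat on the mat"
vocab = {tok: i for i, tok in enumerate(dict.fromkeys(corpus.split()))}

def tokenize(text):
    """Map each whitespace-separated token to its integer id."""
    return [vocab[tok] for tok in text.split()]

# One 8-dimensional vector per vocabulary entry; in a real model these
# embeddings are learned, here they are random for illustration
rng = np.random.default_rng(0)
embedding_table = rng.normal(size=(len(vocab), 8))

ids = tokenize("the cat sat")        # token ids
vectors = embedding_table[ids]       # shape (3, 8): one embedding per token
```

Decisions at exactly this stage, such as what the tokenizer splits on and what the corpus contains, are where the biases the course asks you to examine can enter a model.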
In this Google DeepMind course, you will learn the fundamentals of language models and gain a high-level understanding of the machine learning development pipeline. You will consider the strengths and limitations of traditional n-gram models and advanced transformer models. Practical coding labs will enable you to develop insights into how machine learning models work and how they can be used to generate text and identify patterns in language. Through real-world case studies, you will build an understanding of how research engineers operate. Drawing on these insights you will identify problems that you wish to tackle in your own community and consider how to leverage the power of machine learning responsibly to address these problems within a global and local context.
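A minimal sketch of the n-gram idea the course contrasts with transformers: a bigram model that predicts the next word purely from counts of observed pairs (toy corpus and names are illustrative, not course material):

```python
from collections import Counter, defaultdict

# Toy corpus; a bigram (n=2) model only ever looks one word back
corpus = "the cat sat on the mat and the cat slept".split()

# Count which word follows which
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently observed after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

# After "the" the corpus contains: cat (2x), mat (1x)
```

The fixed, short context window is the core limitation of n-gram models; transformer models address it with attention over the whole prompt.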
Build AI agents that can leverage enterprise databases using the MCP Toolbox for Databases. You will define secure database interaction tools and implement intelligent querying capabilities that leverage vector embeddings and structured queries.