Multimodality with Gemini

Lab · 1 hour · 5 credits · Intermediate
Note: This lab may incorporate AI tools to support your learning.

GSP1210


Overview

This lab introduces you to Gemini, a family of multimodal generative AI models developed by Google. You use the Gemini API to explore how Gemini Flash can understand and generate responses based on text, images, and video.

Gemini's multimodal capabilities enable it to:

  • Analyze images: Detect objects, understand user interfaces, interpret diagrams, and compare visual similarities and differences.
  • Process videos: Generate descriptions, extract tags and highlights, and answer questions about video content.

You experiment with these features through hands-on tasks using the Gemini API in Vertex AI.

Prerequisites

Before starting this lab, you should be familiar with:

  • Basic Python programming.
  • General API concepts.
  • Running Python code in a Jupyter notebook on Vertex AI Workbench.

Objectives

In this lab, you:

  • Interact with the Gemini API in Vertex AI.
  • Use the Gemini Flash model to analyze images and videos.
  • Provide Gemini with text, image, and video prompts to generate informative responses.
  • Explore practical applications of Gemini's multimodal capabilities.

Setup and requirements

Before you click the Start Lab button

Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.

This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.

To complete this lab, you need:

  • Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito (recommended) or private browser window to run this lab. This prevents conflicts between your personal account and the student account, which may cause extra charges to be incurred on your personal account.
  • Time to complete the lab—remember, once you start, you cannot pause a lab.
Note: Use only the student account for this lab. If you use a different Google Cloud account, you may incur charges to that account.

How to start your lab and sign in to the Google Cloud console

  1. Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method. On the left is the Lab Details pane with the following:

    • The Open Google Cloud console button
    • Time remaining
    • The temporary credentials that you must use for this lab
    • Other information, if needed, to step through this lab
  2. Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).

    The lab spins up resources, and then opens another tab that shows the Sign in page.

    Tip: Arrange the tabs in separate windows, side-by-side.

    Note: If you see the Choose an account dialog, click Use Another Account.
  3. If necessary, copy the Username below and paste it into the Sign in dialog.

    {{{user_0.username | "Username"}}}

    You can also find the Username in the Lab Details pane.

  4. Click Next.

  5. Copy the Password below and paste it into the Welcome dialog.

    {{{user_0.password | "Password"}}}

    You can also find the Password in the Lab Details pane.

  6. Click Next.

    Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials. Note: Using your own Google Cloud account for this lab may incur extra charges.
  7. Click through the subsequent pages:

    • Accept the terms and conditions.
    • Do not add recovery options or two-factor authentication (because this is a temporary account).
    • Do not sign up for free trials.

After a few moments, the Google Cloud console opens in this tab.

Note: To access Google Cloud products and services, click the Navigation menu or type the service or product name in the Search field.

Task 1. Open the notebook in Vertex AI Workbench

  1. In the Google Cloud console, on the Navigation menu (Navigation menu icon), click Vertex AI > Workbench.

  2. Find the instance and click the Open JupyterLab button.

The JupyterLab interface for your Workbench instance opens in a new browser tab.

Task 2. Set up the notebook

  1. Open the file.

  2. In the Select Kernel dialog, choose Python 3 from the list of available kernels.

  3. Run through the Getting Started and the Import libraries sections of the notebook. (A sketch of what these cells typically do appears after this list.)

    • For Project ID, use , and for Location, use .
Note: You can skip any notebook cells that are noted Colab only. If you experience a 429 response from any of the notebook cell executions, wait 1 minute before running the cell again to proceed.
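
The notebook defines the exact setup cells, but as a rough sketch they typically boil down to initializing the SDK and creating a model object. The snippet below assumes the vertexai Python SDK; the project ID, location, and model name are placeholders, so use the values the lab and notebook give you.

    # Minimal setup sketch (assumes the vertexai SDK; the notebook may differ).
    import vertexai
    from vertexai.generative_models import GenerativeModel, Part

    PROJECT_ID = "your-project-id"   # placeholder: use the Project ID from the lab
    LOCATION = "us-central1"         # placeholder: use the Location from the lab

    vertexai.init(project=PROJECT_ID, location=LOCATION)

    # Model name is an assumption; use the Gemini Flash model named in the notebook.
    model = GenerativeModel("gemini-1.5-flash")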

Task 3. Use the Gemini Flash model

Gemini Flash is a multimodal model that supports multimodal prompts. You can include text, image(s), and video in your prompt requests and get text or code responses.
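
As a minimal illustration (assuming the setup sketched in Task 2), a request to the model is a single generate_content call; the prompt below is just an example.

    # Continues the Task 2 setup sketch (`model` is already defined).
    response = model.generate_content("What kinds of inputs can a multimodal model reason about?")
    print(response.text)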

In this task, run through the specified notebook cells to see how to use the Gemini Flash model. Return here to check your progress as you complete the objectives.

Image understanding across multiple images

One of Gemini's capabilities is reasoning across multiple images. In this example, you use Gemini to calculate the total cost of groceries using an image of fruits and a price list.
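
Conceptually, the notebook cells for this section amount to passing both images plus a question in one request. The sketch below assumes the Task 2 setup; the Cloud Storage URIs are placeholders, not the lab's actual files.

    # `model` and `Part` come from the Task 2 setup sketch; URIs are placeholders.
    fruits = Part.from_uri("gs://your-bucket/fruit-basket.jpg", mime_type="image/jpeg")
    prices = Part.from_uri("gs://your-bucket/price-list.jpg", mime_type="image/jpeg")

    prompt = ("Using the price list in the second image, "
              "what is the total cost of the fruit shown in the first image?")
    response = model.generate_content([fruits, prices, prompt])
    print(response.text)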

Run through the Image understanding across multiple images section of the notebook.

Click Check my progress to verify the objective. Image understanding across multiple images

Generating a video description

Gemini can generate a description of a video, extract tags throughout it, and retrieve extra information beyond the video contents. In this example, you use Gemini to describe videos and pull out tags and additional details.
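
A rough sketch of such a request, again assuming the Task 2 setup and with a placeholder video URI, looks like this:

    # Placeholder URI; `model` and `Part` come from the Task 2 setup sketch.
    video = Part.from_uri("gs://your-bucket/sample-video.mp4", mime_type="video/mp4")

    prompt = ("Describe what happens in this video, list 3-5 tags for it, "
              "and note anything you can tell about where it was filmed.")
    response = model.generate_content([video, prompt])
    print(response.text)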

Run through the Generating a video description section of the notebook.

Click Check my progress to verify the objective. Generating a video description

Audio understanding

Gemini can directly process audio for long-context understanding. In this example, you use Gemini to summarize and answer questions about the contents of an audio file.
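
As a sketch (placeholder URI, Task 2 setup assumed), an audio file is passed the same way as an image or video, just with an audio MIME type:

    # Placeholder URI; `model` and `Part` come from the Task 2 setup sketch.
    audio = Part.from_uri("gs://your-bucket/podcast-episode.mp3", mime_type="audio/mpeg")

    prompt = "Summarize this audio in a few bullet points and list the main topics discussed."
    response = model.generate_content([audio, prompt])
    print(response.text)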

Run through the Audio understanding section of the notebook.

Click Check my progress to verify the objective. Audio understanding

Reason across a codebase

Gemini's long context window also lets it reason across an entire codebase. In this example, you provide Gemini with the code for a project and ask it questions about that code.
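
One simple way to do this, sketched below under the same assumptions as before, is to concatenate the source files into a single text prompt; the directory name is a placeholder.

    # `model` comes from the Task 2 setup sketch; the directory name is a placeholder.
    from pathlib import Path

    code_blob = ""
    for path in sorted(Path("my_project").rglob("*.py")):
        code_blob += f"\n--- {path} ---\n{path.read_text()}"

    prompt = ("Given the codebase above, summarize what the application does "
              "and point out any obvious bugs or missing error handling.")
    response = model.generate_content([code_blob, prompt])
    print(response.text)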

Run through the Reason across a codebase section of the notebook.

Click Check my progress to verify the objective. Reason across a codebase

Video and audio understanding

In this example, you try out Gemini's native multimodal and long-context capabilities on video interleaved with audio inputs.
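
Because a video part carries both its frames and its audio track, a single part can be questioned about what is shown and what is said. A sketch with a placeholder URI:

    # Placeholder URI; `model` and `Part` come from the Task 2 setup sketch.
    clip = Part.from_uri("gs://your-bucket/interview.mp4", mime_type="video/mp4")

    prompt = ("Give a short description of what is shown on screen, "
              "then separately summarize what is said in the audio.")
    response = model.generate_content([clip, prompt])
    print(response.text)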

Run through the Video and audio understanding section of the notebook.

Click Check my progress to verify the objective. Video and audio understanding

All modalities (images, video, audio, text) at once

Gemini is natively multimodal and supports interleaving of data from different modalities. In this example, you try a mix of audio, visual, text, and code inputs in the same input sequence.
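
A sketch of an interleaved request (placeholder URIs, Task 2 setup assumed):

    # Placeholder URIs; `model` and `Part` come from the Task 2 setup sketch.
    contents = [
        "Here is a diagram of the system:",
        Part.from_uri("gs://your-bucket/architecture.png", mime_type="image/png"),
        "Here is a recorded demo of it:",
        Part.from_uri("gs://your-bucket/demo.mp4", mime_type="video/mp4"),
        "And a voice note with open questions:",
        Part.from_uri("gs://your-bucket/questions.mp3", mime_type="audio/mpeg"),
        "Using all of the above, answer the questions asked in the voice note.",
    ]
    response = model.generate_content(contents)
    print(response.text)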

Run through the All modalities (images, video, audio, text) at once section of the notebook.

Click Check my progress to verify the objective. All modalities (images, video, audio, text) at once

Generating recommendations based on provided images

Gemini is capable of image comparison and providing recommendations. This is particularly useful for retail companies that want to provide users with product recommendations based on their current setup.
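
For example, a request might pass a photo of the current setup plus photos of candidate products and ask for a recommendation; the sketch below uses placeholder URIs and assumes the Task 2 setup.

    # Placeholder URIs; `model` and `Part` come from the Task 2 setup sketch.
    room = Part.from_uri("gs://your-bucket/living-room.jpg", mime_type="image/jpeg")
    option_a = Part.from_uri("gs://your-bucket/chair-a.jpg", mime_type="image/jpeg")
    option_b = Part.from_uri("gs://your-bucket/chair-b.jpg", mime_type="image/jpeg")

    prompt = "Which of the two chairs would fit the style of this room better, and why?"
    response = model.generate_content([room, option_a, option_b, prompt])
    print(response.text)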

Run through the Generating recommendations based on provided images section of the notebook.

Click Check my progress to verify the objective. Generating recommendations based on provided images

Understand entity relationships in technical diagrams

Gemini has multimodal capabilities that enable it to understand diagrams and take actionable steps, such as optimization or code generation. In this example, you see how Gemini can decipher an entity relationship (ER) diagram, understand the relationships between tables, identify requirements for optimization in a specific environment like BigQuery, and even generate corresponding code.
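
Sketched under the same assumptions (placeholder URI for the diagram image), such a request looks like this:

    # Placeholder URI; `model` and `Part` come from the Task 2 setup sketch.
    diagram = Part.from_uri("gs://your-bucket/er-diagram.png", mime_type="image/png")

    prompt = ("Document the entities and relationships in this ER diagram, "
              "then generate BigQuery CREATE TABLE statements that implement it.")
    response = model.generate_content([diagram, prompt])
    print(response.text)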

Run through the Understand entity relationships in technical diagrams section of the notebook.

Click Check my progress to verify the objective. Understand entity relationships in technical diagrams

Compare images for similarities and differences

Gemini can compare images and identify similarities or differences between objects. In this example, you use Gemini to compare two images of the same location and identify the differences between them.
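
A sketch of such a comparison, with placeholder URIs for the two images and the Task 2 setup assumed:

    # Placeholder URIs; `model` and `Part` come from the Task 2 setup sketch.
    before = Part.from_uri("gs://your-bucket/street-2019.jpg", mime_type="image/jpeg")
    after = Part.from_uri("gs://your-bucket/street-2024.jpg", mime_type="image/jpeg")

    prompt = "These two photos show the same location. What are the key differences between them?"
    response = model.generate_content([before, after, prompt])
    print(response.text)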

Run through the Compare images for similarities and differences section of the notebook.

Click Check my progress to verify the objective. Compare images for similarities and differences

Congratulations!

You have now completed the lab! In this lab, you learned how to use the Gemini API in Vertex AI to generate responses from text, image, video, and audio prompts.

Next steps / learn more

Check out the following resources to learn more about Gemini:

Google Cloud training and certification

...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.

Manual Last Updated May 19, 2025

Lab Last Tested May 19, 2025

Copyright 2025 Google LLC. All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.

Before you begin

  1. Labs create a Google Cloud project and resources for a fixed period of time.
  2. Labs have a time limit and no pause feature. If you end the lab, you must start over from the beginning.
  3. In the top left of your screen, click Start lab to begin.

Use private browsing

  1. Copy the Username and Password provided for the lab
  2. Click Open console in private mode

Sign in to the console

  1. Sign in using the credentials assigned to you for the lab. Using other credentials may cause errors or incur charges.
  2. Accept the terms and skip the data recovery resource page.
  3. Don't click End lab unless you have finished the lab or want to restart it, because doing so clears your work and removes the project.
