Lab setup instructions and requirements
Protect your account and progress. Be sure to run this lab in an incognito browser window, using the lab credentials.

Serverless Data Analysis with Dataflow: A Simple Dataflow Pipeline (Python)

Lab: 1 hour 30 minutes · 5 credits · Advanced

Overview

In this lab, you will open a Dataflow project, use pipeline filtering, and execute the pipeline locally and on the cloud.

  • Open Dataflow project
  • Pipeline filtering
  • Execute the pipeline locally and on the cloud

Objective

In this lab, you learn how to write a simple Dataflow pipeline and run it both locally and on the cloud.

  • Set up a Python Dataflow project using Apache Beam
  • Write a simple pipeline in Python
  • Execute the pipeline on the local machine
  • Execute the pipeline on the cloud

Setup

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Sign in to Google Skills using an incognito window.

  2. Note the lab's access time (for example, 1:15:00), and make sure you can finish within that time. There is no pause feature. You can restart if needed, but you have to start at the beginning.

  3. When ready, click Start lab.

  4. Note your lab credentials (Username and Password). You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste credentials for this lab into the prompts. If you use other credentials, you'll receive errors or incur charges.

  7. Accept the terms and skip the recovery resource page.

Check project permissions

Before you begin your work on Google Cloud, you need to ensure that your project has the correct permissions within Identity and Access Management (IAM).

  1. In the Google Cloud console, on the Navigation menu (Navigation menu icon), select IAM & Admin > IAM.

  2. Confirm that the default compute Service Account {project-number}-compute@developer.gserviceaccount.com is present and has the editor role assigned. The account prefix is the project number, which you can find on Navigation menu > Cloud Overview > Dashboard.

(Screenshot: the Compute Engine default service account with the Editor role highlighted on the Permissions page)

Note: If the account is not present in IAM or does not have the editor role, follow the steps below to assign the required role.
  1. In the Google Cloud console, on the Navigation menu, click Cloud Overview > Dashboard.
  2. Copy the project number (e.g. 729328892908).
  3. On the Navigation menu, select IAM & Admin > IAM.
  4. At the top of the roles table, below View by Principals, click Grant Access.
  5. For New principals, type:
{project-number}-compute@developer.gserviceaccount.com
  6. Replace {project-number} with your project number.
  7. For Role, select Project (or Basic) > Editor.
  8. Click Save.

Task 1. Ensure that the Dataflow API is successfully enabled

To ensure access to the necessary API, restart the connection to the Dataflow API.

  1. In the Cloud Console, enter Dataflow API in the top search bar.

  2. Click on the result for Dataflow API.

  3. Click Manage.

  4. Click Disable API.

  5. If asked to confirm, click Disable.

  6. Click Enable.

Task 2. Preparation

Open the SSH terminal and connect to the training VM

You will be running all code from a curated training VM.

  1. In the Console, expand the Navigation menu (Navigation menu icon) and select Compute Engine > VM instances.

  2. Locate the line with the instance called training-vm.

  3. On the far right, under Connect, click on SSH to open a terminal window.

  4. In this lab, you will enter CLI commands on the training-vm.

Download code repository

  • Next you will download a code repository for use in this lab. In the training-vm SSH terminal enter the following:
git clone https://github.com/GoogleCloudPlatform/training-data-analyst

Note: If you receive the error -bash: git: command not found, run sudo apt-get update and then sudo apt-get -y install git.

Create a Cloud Storage bucket

Follow these instructions to create a bucket.

  1. In the Console, on the Navigation menu, click Cloud overview.

  2. Select and copy the Project ID.

For simplicity you will use the Google Skills Project ID, which is already globally unique, as the bucket name.

  3. In the Console, on the Navigation menu, click Cloud Storage > Buckets.
  4. Click Create.
  5. Specify the following, and leave the remaining settings as their defaults:

     Property         Value (type value or select option as specified)
     Name             <your unique bucket name (Project ID)>
     Location type    Multi-Region
     Location         <Your location>

  6. Click Create.

  7. Record the name of your bucket. You will need it in subsequent tasks.

  8. In the training-vm SSH terminal, enter the following to create an environment variable named "BUCKET" and verify that it exists with the echo command:

BUCKET="<your unique bucket name (Project ID)>"
echo $BUCKET

You can use $BUCKET in terminal commands, and if you need to enter the bucket name in a text field in the Console, you can quickly retrieve it with echo $BUCKET.

Task 3. Pipeline filtering

The goal of this lab is to become familiar with the structure of a Dataflow project and learn how to execute a Dataflow pipeline.

  1. Return to the training-vm SSH terminal and navigate to the directory ~/training-data-analyst/courses/data_analysis/lab2/python, which contains the file grep.py.

  2. View the file with Nano. Do not make any changes to the code.

cd ~/training-data-analyst/courses/data_analysis/lab2/python
nano grep.py

  3. Press Ctrl+X to exit Nano.

Can you answer these questions about the file grep.py?

  • What files are being read?
  • What is the search term?
  • Where does the output go?

There are three transforms in the pipeline:

  • What does the first transform do?
  • What does the second transform do?
  • Where does its input come from?
  • What does it do with this input?
  • What does it write to its output?
  • Where does the output go to?
  • What does the third transform do?
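If you want to check your answers, the sketch below shows the general shape a grep-style Beam pipeline such as grep.py typically takes: one transform reads the input files, one keeps only the lines that match the search term, and one writes the sharded output. The paths, search term, and transform labels here are illustrative assumptions; the authoritative version is the grep.py you just viewed in Nano.

# Minimal sketch of a grep-style Apache Beam pipeline (illustrative only;
# see grep.py in the repository for the actual code).
import apache_beam as beam

def my_grep(line, term):
    # Emit the line only if it starts with the search term.
    if line.startswith(term):
        yield line

if __name__ == '__main__':
    p = beam.Pipeline()  # no runner specified, so this runs locally

    # Assumed values: the lab's Java sources as input, 'import' as the
    # search term, and a /tmp prefix for the sharded output files.
    input_pattern = '../javahelp/src/main/java/com/google/cloud/training/dataanalyst/javahelp/*.java'
    output_prefix = '/tmp/output'
    search_term = 'import'

    (p
     | 'GetJava' >> beam.io.ReadFromText(input_pattern)                 # first transform: read the input files
     | 'Grep' >> beam.FlatMap(lambda line: my_grep(line, search_term))  # second transform: keep matching lines
     | 'Write' >> beam.io.WriteToText(output_prefix)                    # third transform: write sharded output files
    )

    p.run().wait_until_finish()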

Task 4. Execute the pipeline locally

  1. Install pip and the latest Apache Beam Python SDK from PyPI:
sudo apt-get -y install python3-pip
pip install apache-beam[gcp]
  2. In the training-vm SSH terminal, execute grep.py locally:
python3 grep.py

The output file will be output.txt. If the output is large enough, it will be sharded into separate parts with names like: output-00000-of-00001.

  3. Locate the correct file by examining the file's timestamp:
ls -al /tmp
  4. Examine the output file(s).

  5. You can replace "-*" below with the appropriate suffix:

cat /tmp/output-*

Does the output seem logical?

Task 5. Execute the pipeline on the cloud

  1. Copy some Java files to the cloud. In the training-vm SSH terminal, enter the following command:
gsutil cp ../javahelp/src/main/java/com/google/cloud/training/dataanalyst/javahelp/*.java gs://$BUCKET/javahelp

Click Check my progress to verify the objective. Copy Java files to the Cloud

  2. Using Nano, edit the Dataflow pipeline in grepc.py:
nano grepc.py
  3. Replace PROJECT and BUCKET with your Project ID and bucket name.

Example strings before:

PROJECT='cloud-training-demos'
BUCKET='cloud-training-demos'

Example strings after edit (use your values):

PROJECT='qwiklabs-gcp-your-value'
BUCKET='qwiklabs-gcp-your-value'
  4. Save the file and close Nano by pressing Ctrl+X, then Y, then Enter.

  5. Submit the Dataflow job to the cloud:

python3 grepc.py

Note: You may ignore the message WARNING:root:Make sure that locally built Python SDK docker image has Python 3.7 interpreter. Your Dataflow job will start successfully.

Because this is such a small job, running on the cloud will take significantly longer than running it locally (on the order of 7-10 minutes).
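For reference, the main difference between grep.py and grepc.py is the set of pipeline options that send the job to the Dataflow service instead of running it locally. The sketch below illustrates how such options are typically passed to a Beam pipeline; the job name, region, staging paths, and transform details are assumptions, so treat this as an illustration rather than a copy of grepc.py.

# Illustrative sketch of submitting a Beam pipeline to the Dataflow service.
# Job name, region, and paths are assumed values, not the exact grepc.py code.
import apache_beam as beam

PROJECT = 'qwiklabs-gcp-your-value'   # replace with your Project ID
BUCKET = 'qwiklabs-gcp-your-value'    # replace with your bucket name

def my_grep(line, term):
    if line.startswith(term):
        yield line

if __name__ == '__main__':
    argv = [
        '--project={0}'.format(PROJECT),
        '--job_name=examplejob',                                # assumed job name
        '--save_main_session',                                  # ship the main session to the workers
        '--staging_location=gs://{0}/staging/'.format(BUCKET),
        '--temp_location=gs://{0}/staging/'.format(BUCKET),
        '--region=us-central1',                                 # assumed region
        '--runner=DataflowRunner',                              # run on the Dataflow service
    ]

    p = beam.Pipeline(argv=argv)

    # Read from and write to the Cloud Storage bucket instead of local disk.
    input_pattern = 'gs://{0}/javahelp/*.java'.format(BUCKET)
    output_prefix = 'gs://{0}/javahelp/output'.format(BUCKET)
    search_term = 'import'

    (p
     | 'GetJava' >> beam.io.ReadFromText(input_pattern)
     | 'Grep' >> beam.FlatMap(lambda line: my_grep(line, search_term))
     | 'Write' >> beam.io.WriteToText(output_prefix)
    )

    p.run()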

  6. Return to the browser tab for Console.

  7. On the Navigation menu, click Dataflow and click on your job to monitor progress.

Example:

(Screenshot: the Dataflow job summary showing the status Succeeded)

  8. Wait for the job status to change to Succeeded.

Click Check my progress to verify the objective. Submit the Dataflow job to the Cloud

  9. Examine the output in the Cloud Storage bucket.

  10. On the Navigation menu, click Cloud Storage > Buckets and click on your bucket.

  11. Click the javahelp directory.

This job will generate the file output.txt. If the file is large enough it will be sharded into multiple parts with names like: output-0000x-of-000y. You can identify the most recent file by name or by the Last modified field.

  12. Click on the file to view it.

Alternatively, you can download the file via the training-vm SSH terminal and view it:

gsutil cp gs://$BUCKET/javahelp/output* .
cat output*

End your lab

When you have completed your lab, click End Lab. Google Skills removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2026 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
