GSP1084

Overview
In Google Cloud, you can use Database Migration Service to migrate PostgreSQL databases to AlloyDB for PostgreSQL. To do this, AlloyDB requires the use of private services access. In this lab environment, you implement this access as a VPC peering connection between your VPC network and the underlying Google Cloud VPC network where your AlloyDB resources reside. Then, you migrate a stand-alone PostgreSQL database (running on a virtual machine) to AlloyDB for PostgreSQL using a continuous Database Migration Service job with VPC peering for connectivity.
To migrate a database using Database Migration Service, certain steps must be performed to prepare the source database. These preparatory tasks, most importantly setting up the pglogical package, have already been completed for you on the source environment.
After you create and run the migration job, you confirm that an initial copy of your database has been successfully migrated to your AlloyDB for PostgreSQL instance. You also explore how continuous migration jobs apply data updates from your source database to your AlloyDB for PostgreSQL instance.
What you'll do
In this lab, you learn how to configure a continuous Database Migration Service job to migrate databases from a PostgreSQL instance to AlloyDB for PostgreSQL. This involves:
- Verify data in the source instance for migration.
- Create a connection profile for a PostgreSQL source instance (e.g., stand-alone PostgreSQL).
- Create and start a continuous migration job.
- Confirm the data load in the AlloyDB for PostgreSQL instance.
- Propagate a live update to the AlloyDB instance.
Setup and requirements
Before you click the Start Lab button
Read these instructions. Labs are timed and you cannot pause them. The timer, which starts when you click Start Lab, shows how long Google Cloud resources are made available to you.
This hands-on lab lets you do the lab activities in a real cloud environment, not in a simulation or demo environment. It does so by giving you new, temporary credentials you use to sign in and access Google Cloud for the duration of the lab.
To complete this lab, you need:
- Access to a standard internet browser (Chrome browser recommended).
Note: Use an Incognito (recommended) or private browser window to run this lab. This prevents conflicts between your personal account and the student account, which could cause extra charges to be incurred on your personal account.
- Time to complete the lab—remember, once you start, you cannot pause a lab.
Note: Use only the student account for this lab. If you use a different Google Cloud account, you may incur charges to that account.
How to start your lab and sign in to the Google Cloud console
- Click the Start Lab button. If you need to pay for the lab, a dialog opens for you to select your payment method.
On the left is the Lab Details pane with the following:
- The Open Google Cloud console button
- Time remaining
- The temporary credentials that you must use for this lab
- Other information, if needed, to step through this lab
- Click Open Google Cloud console (or right-click and select Open Link in Incognito Window if you are running the Chrome browser).
The lab spins up resources, and then opens another tab that shows the Sign in page.
Tip: Arrange the tabs in separate windows, side-by-side.
Note: If you see the Choose an account dialog, click Use Another Account.
- If necessary, copy the Username below and paste it into the Sign in dialog.
{{{user_0.username | "Username"}}}
You can also find the Username in the Lab Details pane.
- Click Next.
- Copy the Password below and paste it into the Welcome dialog.
{{{user_0.password | "Password"}}}
You can also find the Password in the Lab Details pane.
- Click Next.
Important: You must use the credentials the lab provides you. Do not use your Google Cloud account credentials.
Note: Using your own Google Cloud account for this lab may incur extra charges.
- Click through the subsequent pages:
- Accept the terms and conditions.
- Do not add recovery options or two-factor authentication (because this is a temporary account).
- Do not sign up for free trials.
After a few moments, the Google Cloud console opens in this tab.
Note: To access Google Cloud products and services, click the Navigation menu or type the service or product name in the Search field.
Activate Cloud Shell
Cloud Shell is a virtual machine that is loaded with development tools. It offers a persistent 5GB home directory and runs on Google Cloud. Cloud Shell provides command-line access to your Google Cloud resources.
- Click Activate Cloud Shell at the top of the Google Cloud console.
- Click through the following windows:
- Continue through the Cloud Shell information window.
- Authorize Cloud Shell to use your credentials to make Google Cloud API calls.
When you are connected, you are already authenticated, and the project is set to your PROJECT_ID. The output contains a line that declares the PROJECT_ID for this session:
Your Cloud Platform project in this session is set to {{{project_0.project_id | "PROJECT_ID"}}}
gcloud is the command-line tool for Google Cloud. It comes pre-installed on Cloud Shell and supports tab-completion.
- (Optional) You can list the active account name with this command:
gcloud auth list
- Click Authorize.
Output:
ACTIVE: *
ACCOUNT: {{{user_0.username | "ACCOUNT"}}}
To set the active account, run:
$ gcloud config set account `ACCOUNT`
- (Optional) You can list the project ID with this command:
gcloud config list project
Output:
[core]
project = {{{project_0.project_id | "PROJECT_ID"}}}
Note: For full documentation of gcloud in Google Cloud, refer to the gcloud CLI overview guide.
Task 1. Verify data in the source instance for migration
In this task, you connect to the postgres database on the pg14-source VM instance and verify its data.
Verify data in source instance
- On the Navigation menu, under Compute Engine, click VM instances.
- For the instance named pg14-source, in the Connect column, click SSH to open a terminal window.
- Use the following command to launch the PostgreSQL (psql) client:
sudo -u postgres psql
The psql terminal prompt opens. It looks similar to what's shown below:
psql (14.5 (Debian 14.5-1.pgdg110+1))
Type "help" for help.
- Run the following psql meta-command to list the HR-related tables in the postgres database:
\dt
- Run the following queries to determine the row counts for each table:
select count (*) as countries_row_count from countries;
select count (*) as departments_row_count from departments;
select count (*) as employees_row_count from employees;
select count (*) as jobs_row_count from jobs;
select count (*) as locations_row_count from locations;
select count (*) as regions_row_count from regions;
The source table row counts are as follows:
| Name | Rows |
| --- | --- |
| countries | 25 |
| departments | 27 |
| employees | 107 |
| jobs | 19 |
| locations | 23 |
| regions | 4 |
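The six per-table queries above can also be combined into a single result set. The following sketch uses the same table names; UNION ALL concatenates the per-table counts, and the ORDER BY keeps the output deterministic:

```sql
-- Gather all six row counts in one result set.
select 'countries' as table_name, count(*) as row_count from countries
union all select 'departments', count(*) from departments
union all select 'employees',   count(*) from employees
union all select 'jobs',        count(*) from jobs
union all select 'locations',   count(*) from locations
union all select 'regions',     count(*) from regions
order by table_name;
```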
- Type \q to exit the psql client.
- Type exit to close the terminal window.
Task 2. Create a Database Migration Service connection profile for a stand-alone PostgreSQL database
In this task, you create a connection profile for the PostgreSQL source instance.
Get the connectivity and deployment information for the PostgreSQL source instance
You need the internal IP address of the source database instance to migrate the database to AlloyDB.
- Still on the VM instances page, locate the pg14-source instance.
- Record the Internal IP (e.g., 10.128.15.208).
Create a new connection profile for the PostgreSQL source instance
A connection profile stores information about the source database instance (e.g., stand-alone PostgreSQL). Database Migration Service uses the connection profile to migrate data from the source database to the destination database. After you create a connection profile, you can reuse it across migration jobs.
In this step, you create a new connection profile for the PostgreSQL source instance.
- In the Google Cloud console, on the Navigation menu, click View all products.
- In the Databases category, click Database Migration.
- In the left pane, click Connection profiles, and then click Create profile.
- Set the following fields as shown below. Leave all other fields as the defaults.
| Field | Value |
| --- | --- |
| Source engine | PostgreSQL |
| Destination engine | Cloud SQL for PostgreSQL |
| Choose the profile type to create | Source |
| Connection profile name | pg14-source |
| Region | |
| PostgreSQL to PostgreSQL | Click Define |
| PostgreSQL to PostgreSQL | Enter the internal IP for the PostgreSQL source instance that you previously recorded (e.g., 10.128.15.208) |
| Port | 5432 |
| Username | postgres |
| Password | Change3Me |
- Click Save.
- Click Create.
A new connection profile named pg14-source appears in the Connection profiles list.
Click Check my progress to verify the objective.
Create a connection profile for the PostgreSQL source instance
Task 3. Create and start a continuous migration job
When you create a new migration job, you first define the source database instance using a previously created connection profile. You then define the destination database instance and configure connectivity between the source and destination instances.
In this task, you use the migration job interface to select an existing AlloyDB for PostgreSQL cluster as the destination for the continuous migration job from the PostgreSQL source instance.
Create a new continuous migration job
In this step you create a new continuous migration job.
- Still on the Database migration page, click Migration jobs in the left pane.
- Click Create migration job.
Get started
- Set the following fields. Leave all other settings as the defaults.

| Field | Value |
| --- | --- |
| Migration job name | postgres-to-alloydb |
| Source database engine | PostgreSQL |
| Destination database engine | AlloyDB for PostgreSQL |
| Destination region | |
- Click Save & continue.
Define a source
- For Select source connection profile, select the pg14-source connection profile that you created earlier.
- Click Save & continue.
Define a destination
- For Type of destination cluster, select Existing cluster.
- For Cluster ID, select alloydb-target-cluster.
- Click Select & continue.
- When prompted, type alloydb-target-cluster to confirm, then click Confirm & continue.
Define connectivity method
- For Connectivity method, select VPC peering.
- Click Configure & continue.
Configure migration databases
- Set Databases to migrate to All databases.
- Click Save & continue.
Test and create migration job
- The Database Migration Service wizard is now on the Test and create your migration job step.
- Click Test job.
- After a successful test, click Create & start job.
Note: You must click Create & start job for your job to start. The other link only creates and saves the job details.
- If prompted to confirm, click Create & start.
The postgres-to-alloydb details page opens.
Review the status of the continuous migration job
- On the postgres-to-alloydb details page, review the migration job Status.
- If you have not started the job, the status shows as Not started. You can choose to start or delete the job.
- After the job starts, the status shows as Starting and then transitions to Running to indicate that the initial database dump is in progress.
- Once the initial load is complete and Database Migration Service is ready for continuous operations, the status remains Running and the phase changes to CDC.
- When the job status is Running and the phase is CDC, proceed to the next task.
Task 4. Confirm data load in the AlloyDB for PostgreSQL instance
Check the AlloyDB for PostgreSQL instance
- In the Google Cloud console, on the Navigation menu, click View all products. In the Databases category, click AlloyDB for PostgreSQL, and then click Clusters to examine the cluster list.
The cluster is named alloydb-target-cluster and the instance is named alloydb-target-instance.
- Click alloydb-target-cluster, and then in the left pane, click Connectivity.
- Record the Private IP address, for example, 10.24.0.2.
Notice that if you click Copy to clipboard to copy the Private IP address, the port number is included, for example, 10.24.0.2:5432. Record only the IP address, for example, 10.24.0.2 to use in a later step.
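If you did use Copy to clipboard and picked up the port as well, a small shell sketch shows how to strip it. The address below is the example value from this lab; yours may differ:

```shell
# Example clipboard value that includes the port (assumed format: IP:PORT).
COPIED="10.24.0.2:5432"

# Remove the first ":" and everything after it, keeping only the IP address.
ALLOYDB="${COPIED%%:*}"

echo "$ALLOYDB"   # prints 10.24.0.2
```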
- On the Navigation menu, under Compute Engine, click VM instances.
- For the alloydb-client instance, click SSH to open a terminal window.
- Set the following environment variable, replacing [ALLOYDB_ADDRESS] with the Private IP address of the AlloyDB instance (for example, 10.24.0.2):
export ALLOYDB=[ALLOYDB_ADDRESS]
- Run the following command to store the Private IP address of the AlloyDB instance on the AlloyDB client VM so that it persists throughout the lab:
echo $ALLOYDB > alloydbip.txt
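If you open a new SSH session later, the exported variable is gone but the file persists. A minimal sketch for restoring it, assuming alloydbip.txt is the file created above (the address is illustrative):

```shell
# Save the address once (example value; use the address you recorded).
echo "10.24.0.2" > alloydbip.txt

# In a later session, reload the variable from the saved file.
export ALLOYDB=$(cat alloydbip.txt)

echo "$ALLOYDB"   # prints 10.24.0.2
```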
- Connect with the psql client and run the following command to verify that the six source tables are now in the AlloyDB instance. When prompted, provide the postgres user's password (Change3Me), which was specified when the cluster was created:
psql -h $ALLOYDB -U postgres
\dt
List of relations
Schema | Name | Type | Owner
--------+-------------+-------+---------------------
public | countries | table | alloydbexternalsync
public | departments | table | alloydbexternalsync
public | employees | table | alloydbexternalsync
public | jobs | table | alloydbexternalsync
public | locations | table | alloydbexternalsync
public | regions | table | alloydbexternalsync
(6 rows)
- Run the following queries to determine the row counts for the migrated tables. The values should match the query outputs from the source instance:
select count (*) as countries_row_count from countries;
select count (*) as departments_row_count from departments;
select count (*) as employees_row_count from employees;
select count (*) as jobs_row_count from jobs;
select count (*) as locations_row_count from locations;
select count (*) as regions_row_count from regions;
The target table row counts are as follows:
| Name | Rows |
| --- | --- |
| countries | 25 |
| departments | 27 |
| employees | 107 |
| jobs | 19 |
| locations | 23 |
| regions | 4 |
- Run the following query to verify the data in the regions table:
select region_id, region_name from regions;
region_id | region_name
-----------+------------------------
1 | Europe
2 | Americas
3 | Asia
4 | Middle East and Africa
(4 rows)
Leave this terminal window open to use in the next section.
Task 5. Propagate a live update to the AlloyDB instance
Because the Database Migration Service job is set in a continuous update configuration, any updates you make on the source instance are applied to the AlloyDB target.
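Inserts are not the only change type that replicates: while the job is in its CDC phase, updates and deletes on the source propagate the same way. The statements below are a hypothetical illustration only, not part of this lab's steps (running them would change the data that the later checks expect):

```sql
-- Hypothetical examples (do not run as part of this lab):
-- an UPDATE and a DELETE on the source would also replicate to AlloyDB.
update regions set region_name = 'Middle East & Africa' where region_id = 4;
delete from regions where region_id = 5;
```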
- Return to the Google Cloud console.
- For the pg14-source instance, click SSH to open a terminal window.
- Use the following command to launch the PostgreSQL (psql) client:
sudo -u postgres psql
- At the psql terminal prompt, input and run the following SQL command to add one row of data to the regions table:
insert into regions values (5, 'Oceania');
- Confirm that the row was inserted locally:
select region_id, region_name from regions;
region_id | region_name
-----------+------------------------
1 | Europe
2 | Americas
3 | Asia
4 | Middle East and Africa
5 | Oceania
(5 rows)
Review data in the AlloyDB for PostgreSQL instance
- Return to the terminal for the alloydb-client instance; the psql client should still be open. Run the following query to verify that the Oceania row was added to the target AlloyDB instance:
select region_id, region_name from regions;
region_id | region_name
-----------+------------------------
1 | Europe
2 | Americas
3 | Asia
4 | Middle East and Africa
5 | Oceania
(5 rows)
- Click Check my progress to verify the objective.
Test the continuous migration of data
Congratulations!
You have now successfully migrated a stand-alone PostgreSQL database (running on a virtual machine) to AlloyDB for PostgreSQL using a continuous Database Migration Service job.
Next steps / Learn more
Learn more about data migration, Cloud SQL databases, and Database Migration Service in the Google Cloud documentation.
Google Cloud training and certification
...helps you make the most of Google Cloud technologies. Our classes include technical skills and best practices to help you get up to speed quickly and continue your learning journey. We offer fundamental to advanced level training, with on-demand, live, and virtual options to suit your busy schedule. Certifications help you validate and prove your skill and expertise in Google Cloud technologies.
Manual Last Updated December 15, 2025
Lab Last Tested November 24, 2025
Copyright 2025 Google LLC. All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.