1. Introduction
A Private Service Connect interface is a resource that lets a producer Virtual Private Cloud (VPC) network initiate connections to various destinations in a consumer network. Producer and consumer networks can be in different projects and organizations.
A connection between a network attachment and a Private Service Connect interface is similar to the connection between a Private Service Connect endpoint and a service attachment, but it has two key differences:
- A network attachment lets a producer network initiate connections to a consumer network (managed service egress), while an endpoint lets a consumer network initiate connections to a producer network (managed service ingress).
- A Private Service Connect interface connection is transitive. This means that a producer network can communicate with other networks that are connected to the consumer network.
What you'll build
Vertex AI Pipelines, deployed in a Google-managed tenant project, uses the PSC Network Attachment to create a multi-NIC instance that bridges the producer and consumer networks. Because one interface of that instance is attached to the consumer network through the network attachment, Vertex AI Pipelines can reach routes available from the consumer network.
In this tutorial, you're going to build a comprehensive Private Service Connect (PSC) Interface architecture for Vertex AI Pipelines that uses Cloud Firewall rules to allow or deny connectivity from the producer to the consumer's test instances, as illustrated in Figure 1.
Figure 1
You'll create a single psc-network-attachment in the consumer VPC, enabling the following use cases:
- Create an ingress firewall rule in the consumer-vpc allowing traffic from the Vertex AI Pipelines subnet (192.168.10.0/28) to test-svc-1. Confirm a successful PING from the pipeline job to test-svc-1 by using tcpdump.
- Create an ingress firewall rule in the consumer-vpc denying traffic from the Vertex AI Pipelines subnet (192.168.10.0/28) to test-svc-2. Confirm the PING failure in the firewall logs by using Logs Explorer.
What you'll learn
- How to create a network attachment
- How Vertex AI Pipelines can use a network attachment to create a PSC Interface
- How to establish communication from the producer to the consumer
- How to allow access from Vertex AI Pipelines to the consumer VM, test-svc-1
- How to deny access from Vertex AI Pipelines to the consumer VM, test-svc-2, using Cloud Firewall
What you'll need
- Google Cloud Project
- IAM Permissions
- Compute Instance Admin (roles/compute.instanceAdmin)
- Compute Network Admin (roles/compute.networkAdmin)
- Compute Security Admin (roles/compute.securityAdmin)
- IAP-secured Tunnel User (roles/iap.tunnelResourceAccessor)
- Logging Admin (roles/logging.admin)
- Notebooks Admin (roles/notebooks.admin)
- Project IAM Admin (roles/resourcemanager.projectIamAdmin)
- Quota Admin (roles/servicemanagement.quotaAdmin)
- Service Account Admin (roles/iam.serviceAccountAdmin)
- Service Account User (roles/iam.serviceAccountUser)
- Vertex AI Admin (roles/aiplatform.admin)
2. Before you begin
This tutorial uses shell variables (for example, $projectid) to simplify gcloud configuration in Cloud Shell.
Inside Cloud Shell, perform the following:
gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectid=YOUR-PROJECT-NAME
echo $projectid
Update the project to support the tutorial
Inside Cloud Shell, perform the following:
gcloud services enable notebooks.googleapis.com
gcloud services enable aiplatform.googleapis.com
gcloud services enable compute.googleapis.com
gcloud services enable cloudresourcemanager.googleapis.com
3. Consumer Setup
Create the consumer VPC
Inside Cloud Shell, perform the following:
gcloud compute networks create consumer-vpc --project=$projectid --subnet-mode=custom
Create the consumer subnets
Inside Cloud Shell, perform the following:
gcloud compute networks subnets create test-subnet-1 --project=$projectid --range=192.168.20.0/28 --network=consumer-vpc --region=us-central1
Inside Cloud Shell, perform the following:
gcloud compute networks subnets create test-subnet-2 --project=$projectid --range=192.168.30.0/28 --network=consumer-vpc --region=us-central1
Inside Cloud Shell, perform the following:
gcloud compute networks subnets create workbench-subnet --project=$projectid --range=192.168.40.0/28 --network=consumer-vpc --region=us-central1 --enable-private-ip-google-access
Cloud Router and NAT configuration
Cloud Network Address Translation (NAT) is used in the tutorial for notebook software package downloads since the notebook instance does not have an external IP address. Cloud NAT offers egress NAT capabilities, which means that internet hosts are not allowed to initiate communication with a user-managed notebook, making it more secure.
Inside Cloud Shell, create the regional cloud router.
gcloud compute routers create cloud-router-us-central1 --network consumer-vpc --region us-central1
Inside Cloud Shell, create the regional cloud nat gateway.
gcloud compute routers nats create cloud-nat-us-central1 --router=cloud-router-us-central1 --auto-allocate-nat-external-ips --nat-all-subnet-ip-ranges --region us-central1
Create the Private Service Connect Network Attachment subnet
Inside Cloud Shell, create the Network Attachment subnet used by Vertex AI Pipelines.
gcloud compute networks subnets create intf-subnet --project=$projectid --range=192.168.10.0/28 --network=consumer-vpc --region=us-central1
4. Enable Identity-Aware Proxy (IAP)
To allow IAP to connect to your VM instances, create a firewall rule that:
- Applies to all VM instances that you want to be accessible by using IAP.
- Allows ingress traffic from the IP range 35.235.240.0/20. This range contains all IP addresses that IAP uses for TCP forwarding.
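As a quick illustrative check (not part of the tutorial's gcloud commands), you can verify whether an address falls inside IAP's TCP-forwarding range with Python's standard ipaddress module:

```python
import ipaddress

# IAP's published source range for TCP forwarding
IAP_RANGE = ipaddress.ip_network("35.235.240.0/20")

def is_iap_source(addr: str) -> bool:
    """Return True if addr falls inside IAP's TCP forwarding range."""
    return ipaddress.ip_address(addr) in IAP_RANGE

print(is_iap_source("35.235.240.10"))  # True
print(is_iap_source("35.236.0.1"))     # False
```

The /20 prefix covers 35.235.240.0 through 35.235.255.255, which is why the firewall rule's source range matches every IAP forwarding address.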
Inside Cloud Shell, create the IAP firewall rule.
gcloud compute firewall-rules create ssh-iap-consumer \
--network consumer-vpc \
--allow tcp:22 \
--source-ranges=35.235.240.0/20
5. Create consumer VM instances
Inside Cloud Shell, create the consumer vm instance, test-svc-1.
gcloud compute instances create test-svc-1 \
--project=$projectid \
--machine-type=e2-micro \
--image-family debian-11 \
--no-address \
--image-project debian-cloud \
--zone us-central1-a \
--subnet=test-subnet-1 \
--shielded-secure-boot
Inside Cloud Shell, create the consumer vm instance, test-svc-2.
gcloud compute instances create test-svc-2 \
--project=$projectid \
--machine-type=e2-micro \
--image-family debian-11 \
--no-address \
--image-project debian-cloud \
--zone us-central1-a \
--subnet=test-subnet-2 \
--shielded-secure-boot
Obtain and store the IP Addresses of the instances:
Inside Cloud Shell, perform a describe against the test VM instances.
gcloud compute instances describe test-svc-1 --zone=us-central1-a | grep networkIP:
gcloud compute instances describe test-svc-2 --zone=us-central1-a | grep networkIP:
Example:
user@cloudshell(psc-vertex)$ gcloud compute instances describe test-svc-1 --zone=us-central1-a | grep networkIP:
gcloud compute instances describe test-svc-2 --zone=us-central1-a | grep networkIP:
networkIP: 192.168.20.2
networkIP: 192.168.30.2
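If you prefer to capture these addresses programmatically, one option is to parse the networkIP line from the describe output. The helper below is an illustrative sketch that assumes output shaped like the example above:

```python
# Extract the networkIP field from `gcloud compute instances describe` output.
# The sample string mirrors the example output shown above; in a live session
# you would capture the output with subprocess or a gcloud --format flag.
sample_output = "networkIP: 192.168.20.2"

def parse_network_ip(describe_output: str) -> str:
    """Return the value of the first networkIP: line in the output."""
    for line in describe_output.splitlines():
        if line.strip().startswith("networkIP:"):
            return line.split(":", 1)[1].strip()
    raise ValueError("networkIP not found")

print(parse_network_ip(sample_output))  # 192.168.20.2
```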
6. Private Service Connect network attachment
Network attachments are regional resources that represent the consumer side of a Private Service Connect interface. You associate a single subnet with a network attachment, and the producer (Vertex AI Pipelines) assigns IPs to the Private Service Connect interface.
Create the network attachment
Inside Cloud Shell, create the network attachment.
gcloud compute network-attachments create psc-network-attachment \
--region=us-central1 \
--connection-preference=ACCEPT_MANUAL \
--subnets=intf-subnet
List the network attachments
Inside Cloud Shell, list the network attachment.
gcloud compute network-attachments list
Describe the network attachments
Inside Cloud Shell, describe the network attachment.
gcloud compute network-attachments describe psc-network-attachment --region=us-central1
Make note of the psc-network-attachment URI; the producer will use it when creating the Private Service Connect interface.
In the example below, the network attachment URI is:
projects/psc-vertex/regions/us-central1/networkAttachments/psc-network-attachment
user@cloudshell$ gcloud compute network-attachments describe psc-network-attachment --region=us-central1
connectionPreference: ACCEPT_MANUAL
creationTimestamp: '2025-01-21T12:25:25.385-08:00'
fingerprint: m9bHc9qnosY=
id: '56224423547354202'
kind: compute#networkAttachment
name: psc-network-attachment
network: https://www.googleapis.com/compute/v1/projects/psc-vertex/global/networks/consumer-vpc
region: https://www.googleapis.com/compute/v1/projects/psc-vertex/regions/us-central1
selfLink: https://www.googleapis.com/compute/v1/projects/psc-vertex/regions/us-central1/networkAttachments/psc-network-attachment
subnetworks:
- https://www.googleapis.com/compute/v1/projects/psc-vertex/regions/us-central1/subnetworks/intf-subnet
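The URI you noted above follows a predictable relative-resource format. As an illustration, a small hypothetical helper can assemble it from its parts:

```python
def network_attachment_uri(project: str, region: str, name: str) -> str:
    """Build the relative resource URI for a PSC network attachment.

    Hypothetical helper for illustration; the tutorial simply copies the
    URI from the `gcloud compute network-attachments describe` output.
    """
    return f"projects/{project}/regions/{region}/networkAttachments/{name}"

uri = network_attachment_uri("psc-vertex", "us-central1", "psc-network-attachment")
print(uri)
# projects/psc-vertex/regions/us-central1/networkAttachments/psc-network-attachment
```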
7. Vertex AI Workbench Setup
The following section guides you through creating a Jupyter Notebook. This notebook will be used to deploy a pipeline job that sends a PING from Vertex AI Pipelines to the test instances. The data path between Vertex AI Pipelines and the consumer network containing the instances uses a Private Service Connect network interface.
Create a user managed service account
In the following section, you will create a service account that will be associated with the Vertex AI Workbench instance used in the tutorial.
In this tutorial, the service account will be granted the following roles: Storage Admin, Vertex AI User, and Artifact Registry Admin.
Inside Cloud Shell, create the service account.
gcloud iam service-accounts create notebook-sa \
--display-name="notebook-sa"
Inside Cloud Shell, update the service account with the role Storage Admin.
gcloud projects add-iam-policy-binding $projectid --member="serviceAccount:notebook-sa@$projectid.iam.gserviceaccount.com" --role="roles/storage.admin"
Inside Cloud Shell, update the service account with the role Vertex AI User.
gcloud projects add-iam-policy-binding $projectid --member="serviceAccount:notebook-sa@$projectid.iam.gserviceaccount.com" --role="roles/aiplatform.user"
Inside Cloud Shell, update the service account with the role Artifact Registry Admin.
gcloud projects add-iam-policy-binding $projectid --member="serviceAccount:notebook-sa@$projectid.iam.gserviceaccount.com" --role="roles/artifactregistry.admin"
Inside Cloud Shell, allow the notebook service account to use the Compute Engine default service account to instantiate the Pipeline Job.
gcloud iam service-accounts add-iam-policy-binding \
$(gcloud projects describe $(gcloud config get-value project) --format='value(projectNumber)')-compute@developer.gserviceaccount.com \
--member="serviceAccount:notebook-sa@$projectid.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser"
Create a Vertex AI Workbench instance
In the following section, create a Vertex AI Workbench instance that incorporates the previously created service account, notebook-sa.
Inside Cloud Shell create the private-client instance.
gcloud workbench instances create workbench-tutorial --vm-image-project=deeplearning-platform-release --vm-image-family=common-cpu-notebooks --machine-type=n1-standard-4 --location=us-central1-a --subnet-region=us-central1 --subnet=workbench-subnet --disable-public-ip --shielded-secure-boot=true --service-account-email=notebook-sa@$projectid.iam.gserviceaccount.com
8. Vertex AI Pipelines to test-svc-1 connectivity
Open a new Cloud Shell tab and update your project settings.
Inside Cloud Shell, perform the following:
gcloud config list project
gcloud config set project [YOUR-PROJECT-NAME]
projectid=YOUR-PROJECT-NAME
echo $projectid
To allow connectivity from Vertex AI Pipelines to test-svc-1, create an ingress firewall rule that specifies the PSC Network Attachment subnet (192.168.10.0/28) as the source and the test-svc-1 IP address as the destination.
Inside Cloud Shell, update the destination-range to match your test-svc-1 IP address.
gcloud compute --project=$projectid firewall-rules create allow-icmp-vertex-pipelines-to-test-svc1-vm --direction=INGRESS --priority=1000 --network=consumer-vpc --action=ALLOW --source-ranges=192.168.10.0/28 --destination-ranges=<your-test-svc-1-vm-ip> --rules=icmp
Example:
gcloud compute --project=$projectid firewall-rules create allow-icmp-vertex-pipelines-to-test-svc1-vm --direction=INGRESS --priority=1000 --network=consumer-vpc --action=ALLOW --source-ranges=192.168.10.0/28 --destination-ranges=192.168.20.2 --rules=icmp
Log into the test-svc-1 instance using IAP in Cloud Shell.
gcloud compute ssh test-svc-1 --project=$projectid --zone=us-central1-a --tunnel-through-iap
In the OS, execute tcpdump to capture any ICMP traffic. This OS session will be used to validate communication between Vertex AI Pipelines and the VM.
sudo tcpdump -i any icmp -nn
9. Vertex AI Service Agent Update
Vertex AI Pipelines acts on your behalf to perform operations such as obtaining an IP Address from the PSC Network Attachment subnet used to create the PSC Interface. To do so, Vertex AI Pipelines uses a service agent (listed below) that requires Network Admin permission.
service-$projectnumber@gcp-sa-aiplatform.iam.gserviceaccount.com
Inside Cloud Shell, obtain your project number.
gcloud projects describe $projectid | grep projectNumber
Example:
gcloud projects describe $projectid | grep projectNumber:
projectNumber: '795057945528'
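The service agent email follows the pattern shown above. The hypothetical helper below simply assembles it from a project number, for illustration:

```python
def vertex_service_agent(project_number: str) -> str:
    """Return the Vertex AI service agent email for a project number.

    Illustrative helper; the tutorial substitutes the project number
    directly into the gcloud command below.
    """
    return f"service-{project_number}@gcp-sa-aiplatform.iam.gserviceaccount.com"

print(vertex_service_agent("795057945528"))
# service-795057945528@gcp-sa-aiplatform.iam.gserviceaccount.com
```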
Inside Cloud Shell, update the service agent account with the role compute.networkAdmin.
gcloud projects add-iam-policy-binding $projectid --member="serviceAccount:service-<your-projectnumber>@gcp-sa-aiplatform.iam.gserviceaccount.com" --role="roles/compute.networkAdmin"
Example:
gcloud projects add-iam-policy-binding $projectid --member="serviceAccount:service-795057945528@gcp-sa-aiplatform.iam.gserviceaccount.com" --role="roles/compute.networkAdmin"
10. Default Service Account Update
Enable the Compute Engine API and grant your default service account access to Vertex AI. Note that it might take some time for the access change to propagate.
Inside Cloud Shell, update the default service account with the role aiplatform.user
gcloud projects add-iam-policy-binding $projectid \
--member="serviceAccount:<your-projectnumber>-compute@developer.gserviceaccount.com" \
--role="roles/aiplatform.user"
Example:
gcloud projects add-iam-policy-binding $projectid \
--member="serviceAccount:795057945528-compute@developer.gserviceaccount.com" \
--role="roles/aiplatform.user"
11. Deploy Vertex AI Pipelines Job
In the following section, you will create a notebook to perform a successful PING to the consumer test-svc-1 instance.
Run the training job in the Vertex AI Workbench instance.
- In the Google Cloud console, go to the instances tab on the Vertex AI Workbench page.
- Next to your Vertex AI Workbench instance's name (workbench-tutorial), click Open JupyterLab. Your Vertex AI Workbench instance opens in JupyterLab.
- Select File > New > Notebook
- Select Kernel > Python 3
- In a new notebook cell, run the following command to ensure that you have the latest version of pip:
! pip3 install --upgrade --quiet google-cloud-aiplatform \
kfp \
google-cloud-pipeline-components
- Set your project variables in the new notebook cell
PROJECT_ID = "<your-projectid>"
REGION = "<your-region>"
NETWORK_ATTACHMENT_NAME = "psc-network-attachment"
Example:
PROJECT_ID = "psc-vertex"
REGION = "us-central1"
NETWORK_ATTACHMENT_NAME = "psc-network-attachment"
- Define a globally unique bucketname as a variable in a new notebook cell
BUCKET_URI = f"gs://<your-bucket-name>"
Example:
BUCKET_URI = f"gs://psc-vertex-bucket"
- In a new notebook cell, create the bucket
! gsutil mb -l {REGION} -p {PROJECT_ID} {BUCKET_URI}
In the following section, you will determine the default Compute Engine service account that runs the pipeline job and grant it the required permissions.
shell_output = ! gcloud projects describe $PROJECT_ID
PROJECT_NUMBER = shell_output[-1].split(":")[1].strip().replace("'", "")
SERVICE_ACCOUNT = f"{PROJECT_NUMBER}-compute@developer.gserviceaccount.com"
print(f"Project Number: {PROJECT_NUMBER}")
print(f"Service Account: {SERVICE_ACCOUNT}")
To confirm successful execution, the project number and service account are printed.
- In a new notebook cell, grant your service account permission to read and write pipeline artifacts in the bucket created in the previous step.
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectCreator {BUCKET_URI}
! gsutil iam ch serviceAccount:{SERVICE_ACCOUNT}:roles/storage.objectViewer {BUCKET_URI}
- In a new notebook cell, define the pipeline parameters. Note that NETWORK_ATTACHMENT_NAME must match the name of the PSC Network Attachment created earlier.
PIPELINE_ROOT = f"{BUCKET_URI}/pipeline_root/psc_test"
NETWORK_ATTACHMENT_URI = f"projects/{PROJECT_NUMBER}/regions/{REGION}/networkAttachments/{NETWORK_ATTACHMENT_NAME}"
- In a new notebook cell, initialize Vertex AI SDK
from kfp import dsl
from google.cloud import aiplatform, aiplatform_v1beta1
import time
from google.cloud.aiplatform_v1.types import pipeline_state
import yaml
from datetime import datetime
import logging
aiplatform.init(project=PROJECT_ID, location=REGION, staging_bucket=BUCKET_URI)
# Configure logging
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(levelname)s - %(message)s')
- In a new notebook cell, define the test component
@dsl.container_component
def ping_check(network_address: str):
"""Pings a network address
Args:
network_address: The IP address to ping
"""
return dsl.ContainerSpec(
image="ubuntu:22.04",
command=["sh", "-c"],
args=[
f"""
# Use sed for regex replacement, cleaner than bash parameter expansion for this
cleaned_address=$(echo "{network_address}" | sed 's/[^0-9.]//g')
apt-get update && apt-get install inetutils-traceroute inetutils-ping -y
echo "Will ping $cleaned_address"
if ! ping -c 3 $cleaned_address; then
echo "Ping failed"
traceroute -w 1 -m 7 $cleaned_address
exit 1
fi
"""
],
)
- In a new notebook cell, define the pipeline
@dsl.pipeline(name="check-connectivity")
def pipeline(ip_address: str):
"""Pings an IP address. Facilitated by a Private Service Connect Interface
Args:
ip_address: The IP address to ping
"""
ping_check(network_address=ip_address).set_caching_options(False)
return
- In a new notebook cell, define a utility function that waits for the pipeline to finish
def wait_for_pipeline(
project_id: str,
region: str,
pipeline_job_resource_name: str,
timeout: int = 20 * 60, # Default timeout of 20 minutes (in seconds)
) -> bool:
"""
Waits for a Vertex AI pipeline to finish, with a timeout.
Args:
project_id (str): The Google Cloud project ID.
region (str): The region where the pipeline is running.
pipeline_job_resource_name (str): The resource name of the pipeline job.
timeout (int): The maximum time to wait for the pipeline to finish, in seconds.
Defaults to 20 minutes (1200 seconds).
Returns:
bool: True if the pipeline succeeded, False otherwise.
Raises:
TimeoutError: If the pipeline does not finish within the specified timeout.
"""
# Initialize the AIPlatform client
aiplatform.init(project=project_id, location=region)
# Get the pipeline job
pipeline_job = aiplatform.PipelineJob.get(resource_name=pipeline_job_resource_name)
logging.info(
f"Vertex AI Console Link: https://console.cloud.google.com/vertex-ai/pipelines/locations/{region}/runs/{pipeline_job.resource_name.split('/')[-1]}?project={project_id}"
)
start_time = time.time()
while True:
status = pipeline_job.state
logging.info(f"Pipeline Job status: {status.name}")
if status in [
pipeline_state.PipelineState.PIPELINE_STATE_SUCCEEDED,
pipeline_state.PipelineState.PIPELINE_STATE_FAILED,
pipeline_state.PipelineState.PIPELINE_STATE_CANCELLED,
]:
break # Exit the loop if the job is finished
if time.time() - start_time > timeout:
logging.error(f"Pipeline timed out after {timeout} seconds.")
raise TimeoutError(f"Pipeline timed out after {timeout} seconds.")
# Wait for a short time before checking again
time.sleep(10) # Adjust the wait time as needed
# Do something based on the final status
if status == pipeline_state.PipelineState.PIPELINE_STATE_SUCCEEDED:
logging.info("Pipeline succeeded")
return True
elif status == pipeline_state.PipelineState.PIPELINE_STATE_CANCELLED:
logging.error("Pipeline cancelled")
raise Exception("Pipeline cancelled")
elif status == pipeline_state.PipelineState.PIPELINE_STATE_FAILED:
logging.error("Pipeline failed")
raise Exception("Pipeline failed")
- In a new notebook cell, define a utility function that runs the pipeline
def run_job_with_psc_interface_config(
project_id: str,
region: str,
pipeline_root: str,
network_attachment_name: str,
ip_address: str,
local_pipeline_file: str = "pipeline.yaml",
):
"""
Compiles, submits, and monitors a Vertex AI pipeline.
"""
parameter_values = {"ip_address": ip_address}
pipeline_root = f"{pipeline_root}/{datetime.now().strftime('%Y%m%d%H%M%S')}"
logging.info("Compiling pipeline")
try:
with open(local_pipeline_file, "r") as stream:
pipeline_spec = yaml.safe_load(stream)
logging.info(f"Pipeline Spec: {pipeline_spec}")
except yaml.YAMLError as exc:
logging.error(f"Error loading pipeline yaml file: {exc}")
raise
logging.info(f"Will use pipeline root: {pipeline_root}")
# Initialize the Vertex SDK using PROJECT_ID and LOCATION
aiplatform.init(project=project_id, location=region)
# Create the API endpoint
client_options = {"api_endpoint": f"{region}-aiplatform.googleapis.com"}
# Initialize the PipelineServiceClient
client = aiplatform_v1beta1.PipelineServiceClient(client_options=client_options)
# Construct the request
request = aiplatform_v1beta1.CreatePipelineJobRequest(
parent=f"projects/{project_id}/locations/{region}",
pipeline_job=aiplatform_v1beta1.PipelineJob(
display_name="pipeline-with-psc-interface-config",
pipeline_spec=pipeline_spec,
runtime_config=aiplatform_v1beta1.PipelineJob.RuntimeConfig(
gcs_output_directory=pipeline_root, parameter_values=parameter_values
),
psc_interface_config=aiplatform_v1beta1.PscInterfaceConfig(
network_attachment=network_attachment_name
),
),
)
# Make the API call
response = client.create_pipeline_job(request=request)
# Print the response
logging.info(f"Pipeline job created: {response.name}")
return response.name
- In a new notebook cell, compile the pipeline
from kfp import compiler
compiler.Compiler().compile(pipeline_func=pipeline, package_path='pipeline.yaml')
- In a new notebook cell, update the TARGET_IP_ADDRESS to reflect the IP Address obtained in the earlier step for test-svc-1 and observe the pipeline job status
TARGET_IP_ADDRESS = "<your-test-svc-1-ip>"
try:
job_name = run_job_with_psc_interface_config(
project_id=PROJECT_ID,
region=REGION,
pipeline_root=PIPELINE_ROOT,
network_attachment_name=NETWORK_ATTACHMENT_URI,
ip_address=TARGET_IP_ADDRESS,
)
wait_for_pipeline(
project_id=PROJECT_ID,
region=REGION,
pipeline_job_resource_name=job_name,
)
except Exception as e:
logging.error(f"An error occurred: {e}")
Example:
TARGET_IP_ADDRESS = "192.168.20.2"
try:
job_name = run_job_with_psc_interface_config(
project_id=PROJECT_ID,
region=REGION,
pipeline_root=PIPELINE_ROOT,
network_attachment_name=NETWORK_ATTACHMENT_URI,
ip_address=TARGET_IP_ADDRESS,
)
wait_for_pipeline(
project_id=PROJECT_ID,
region=REGION,
pipeline_job_resource_name=job_name,
)
except Exception as e:
logging.error(f"An error occurred: {e}")
Once the cell is executed, the pipeline will take roughly 8 minutes to complete.
12. Validate connectivity to test-svc-1
In the cell used to execute the pipeline job, observe the pipeline job status transition from PIPELINE_STATE_PENDING to PIPELINE_STATE_RUNNING and ultimately PIPELINE_STATE_SUCCEEDED, indicating a successful ping from Vertex AI Pipelines and a response from test-svc-1.
To validate the ICMP traffic between Vertex AI Pipelines and test-svc-1, view the tcpdump session started earlier in the test-svc-1 OS, which shows the bidirectional traffic.
In the tcpdump example, Vertex AI Pipelines sourced the IP address 192.168.10.3 from the 192.168.10.0/28 subnet, and 192.168.20.2 is the IP address of test-svc-1. Note that in your environment the IP addresses may differ.
user@test-svc-1:~$ sudo tcpdump -i any icmp -nn
tcpdump: data link type LINUX_SLL2
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on any, link-type LINUX_SLL2 (Linux cooked v2), snapshot length 262144 bytes
18:57:54.737490 ens4 In IP 192.168.10.3 > 192.168.20.2: ICMP echo request, id 257, seq 0, length 64
18:57:54.737523 ens4 Out IP 192.168.20.2 > 192.168.10.3: ICMP echo reply, id 257, seq 0, length 64
13. Vertex AI Pipelines to test-svc-2 connectivity
In the following section, you will create an ingress firewall rule to deny traffic from the Vertex AI Pipelines subnet (192.168.10.0/28) to test-svc-2, update the notebook to reflect the test-svc-2 IP address, and then execute the pipeline job.
In the notebook cell, the pipeline job status will indicate Error - Pipeline Failed. In addition, the firewall logs will provide insight into the failed connection.
Create a deny ingress firewall rule
To deny connectivity from Vertex AI Pipelines to test-svc-2, create an ingress firewall rule that specifies the PSC Network Attachment subnet (192.168.10.0/28) as the source and the test-svc-2 IP address as the destination.
Inside Cloud Shell, update the destination-range to match your test-svc-2 IP address.
gcloud compute --project=$projectid firewall-rules create deny-icmp-vertex-pipelines-to-test-svc2-vm --direction=INGRESS --priority=1000 --network=consumer-vpc --action=DENY --source-ranges=192.168.10.0/28 --destination-ranges=<your-test-svc-2-vm-ip> --rules=icmp --enable-logging
Example:
gcloud compute --project=$projectid firewall-rules create deny-icmp-vertex-pipelines-to-test-svc2-vm --direction=INGRESS --priority=1000 --network=consumer-vpc --action=DENY --source-ranges=192.168.10.0/28 --destination-ranges=192.168.30.2 --rules=icmp --enable-logging
Execute pipeline job from Notebook Cell
In a new notebook cell, update the TARGET_IP_ADDRESS to reflect the IP Address obtained in the earlier step for test-svc-2 and observe the Pipelines Job Status.
TARGET_IP_ADDRESS = "<your-test-svc-2-ip>"
try:
job_name = run_job_with_psc_interface_config(
project_id=PROJECT_ID,
region=REGION,
pipeline_root=PIPELINE_ROOT,
network_attachment_name=NETWORK_ATTACHMENT_URI,
ip_address=TARGET_IP_ADDRESS,
)
wait_for_pipeline(
project_id=PROJECT_ID,
region=REGION,
pipeline_job_resource_name=job_name,
)
except Exception as e:
logging.error(f"An error occurred: {e}")
Example:
TARGET_IP_ADDRESS = "192.168.30.2"
try:
job_name = run_job_with_psc_interface_config(
project_id=PROJECT_ID,
region=REGION,
pipeline_root=PIPELINE_ROOT,
network_attachment_name=NETWORK_ATTACHMENT_URI,
ip_address=TARGET_IP_ADDRESS,
)
wait_for_pipeline(
project_id=PROJECT_ID,
region=REGION,
pipeline_job_resource_name=job_name,
)
except Exception as e:
logging.error(f"An error occurred: {e}")
Once executed, the pipeline job will take ~8 minutes to complete.
14. Validate failed connectivity to test-svc-2
In the cell used to execute the pipeline job, note the status transition from PIPELINE_STATE_PENDING to PIPELINE_STATE_FAILED, indicating an unsuccessful ping from Vertex AI Pipelines to test-svc-2.
In Logs Explorer, you can view the firewall log entries matching the ingress deny rule, consisting of the Vertex AI Pipelines subnet (192.168.10.0/28) and the test-svc-2 IP address.
Select Show query, insert the filter below, set the time range to Last 15 minutes, and then select Run query.
jsonPayload.rule_details.reference:("network:consumer-vpc/firewall:deny-icmp-vertex-pipelines-to-test-svc2-vm")
Select a log entry, then expand the nested fields to reveal the Vertex AI Pipelines and test-svc-2 IP addresses, validating the denied ingress firewall rule.
15. Clean up
From Cloud Shell, delete tutorial components.
gcloud compute instances delete test-svc-1 test-svc-2 --zone=us-central1-a --quiet
gcloud workbench instances delete workbench-tutorial --location=us-central1-a --quiet
gcloud compute firewall-rules delete deny-icmp-vertex-pipelines-to-test-svc2-vm allow-icmp-vertex-pipelines-to-test-svc1-vm ssh-iap-consumer --quiet
gcloud compute routers nats delete cloud-nat-us-central1 --router=cloud-router-us-central1 --region us-central1 --quiet
gcloud compute routers delete cloud-router-us-central1 --region=us-central1 --quiet
16. Congratulations
Congratulations! You've configured and validated a Private Service Connect interface and producer-to-consumer connectivity by implementing ingress allow and deny firewall rules.
You created the consumer infrastructure and added a network attachment that allowed the Vertex AI Pipelines service to create a PSC interface, bridging consumer and producer communication. You learned how to create firewall rules in the consumer VPC network that allowed and denied connectivity to the instances in the consumer network.
Cosmopup thinks tutorials are awesome!!