Build a multi-cluster GKE Inference Gateway with TPUs, Cloud Storage FUSE, and managed DRANET

1. Overview

This lab introduces you to AI infrastructure you can use to run AI workloads. You will use the following:

Google Kubernetes Engine (GKE) - the underlying container orchestration platform.

GKE managed DRANET - Dynamic Resource Allocation networking that assigns the high-speed interconnect fabric directly to your TPU pods.

GKE Inference Gateway - a managed gateway object from Google Cloud, purpose-built for inference. In this lab you will use the multi-cluster capability.

Tensor Processing Units (TPUs) - Google's custom-built accelerator chips.

Cloud Storage FUSE - a storage interface that lets Pods mount Cloud Storage buckets directly, enabling instant loading of large model weights.

For the setup, you will deploy a custom VPC, a Cloud Storage bucket, and two clusters in different regions. Each cluster will have a TPU node pool that uses managed DRANET for networking. After adding the clusters to a fleet, you will cache the Gemma model weights in the bucket and deploy a vLLM workload that mounts those weights instantly through Cloud Storage FUSE. Finally, the GKE Inference Gateway is configured to route traffic, letting you run a live cross-region failover test.

The configuration uses a combination of Terraform, gcloud, and kubectl.

In this lab, you will learn how to:

  • Set up the VPC, networking, and storage
  • Set up GKE clusters in Standard mode
  • Create TPU node pools that use managed DRANET
  • Register the clusters to a fleet
  • Cache the model weights
  • Set up the multi-cluster GKE Inference Gateway and test failover

In this lab, you will build the following architecture.

Figure 1: architecture diagram.

2. Google Cloud services setup

Self-paced environment setup

  1. Sign in to the Google Cloud console and create a new project or reuse an existing one. If you don't already have a Gmail or Google Workspace account, you must create one.


  • The Project name is the display name for this project's participants. It is a character string not used by Google APIs, and you can update it at any time.
  • The Project ID is unique across all Google Cloud projects and is immutable (it cannot be changed once set). The Cloud console auto-generates a unique string; usually you don't need to care what it is. In most codelabs you will need to reference the Project ID (typically identified as PROJECT_ID). If you don't like the generated ID, you can generate another random one, or you can try your own and see whether it is available. It can't be changed after this step and remains for the duration of the project.
  • For your information, there is a third value, the Project Number, which some APIs use. You can look up all three values from the command line, as shown below, and learn more about them in the documentation.
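A quick way to see these values from the command line (an optional check; it assumes the gcloud CLI is already authenticated against your project):

gcloud config get-value project
gcloud projects describe $(gcloud config get-value project) --format="value(name,projectId,projectNumber)"

The first command prints the active project ID; the second prints the project name, ID, and number.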
  2. Next, you need to enable billing in the Cloud console in order to use Cloud resources/APIs. Running through this codelab shouldn't cost much, if anything at all. To shut down resources and avoid incurring billing beyond this tutorial, you can delete the resources you created or delete the project. New Google Cloud users are eligible for the $300 USD free trial program.

Start Cloud Shell

While Google Cloud can be operated remotely from your laptop, in this codelab you will use Google Cloud Shell, a command-line environment running in the cloud.

In the Google Cloud console, click the Cloud Shell icon in the toolbar at the top right:

Activate Cloud Shell

Provisioning and connecting to the environment should only take a few moments. When it is finished, you should see something like this:

Screenshot of the Google Cloud Shell terminal showing the environment is connected

This virtual machine is loaded with all the development tools you need. It offers a persistent 5 GB home directory and runs in Google Cloud, greatly enhancing network performance and authentication. All of your work in this codelab can be done within a browser; you don't need to install anything.
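Once Cloud Shell is connected, you can optionally confirm that you are authenticated and that the expected project is active (standard gcloud checks, nothing lab-specific):

gcloud auth list
gcloud config list project

If the project shown is not the one you created above, set it with gcloud config set project followed by your project ID.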

3. Environment setup with Terraform

To complete this lab, you need access to TPUs. The exact version used is TPU v6e.

  • You should enable TPU quota as described in the TPU planning documentation to get access.
  • We use a small deployment that requires 4 TPU v6e chips (ct6e-standard-4t). The chips will be 2x2 slices located in two different regions (see the optional check after this list).
  • Hugging Face token: an access token is required to download the Gemma model weights.
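As an optional sanity check before picking regions, you can list the zones that currently offer the ct6e-standard-4t machine type (availability and quota are separate things, so treat this only as a hint):

gcloud compute machine-types list --filter="name=ct6e-standard-4t" --format="value(zone)" | sort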

We will create a custom VPC with firewall rules, storage, and subnets. Open the Cloud console and select the project you will use.

  1. Open Cloud Shell (top right of the console), make sure you see the correct project ID in Cloud Shell, and accept any prompts asking for access.
  2. Create a folder named gke-tf and change into it
mkdir -p gke-tf && cd gke-tf
PROJECT_ID=$(gcloud config get-value project)
  3. Now add some configuration files. The commands below create the terraform.tfvars, variables.tf, providers.tf, network.tf, and fuse.tf files.
cat <<EOF > terraform.tfvars
project_id = "${PROJECT_ID}"
EOF

cat <<EOF > variables.tf
variable "project_id" { type = string }
variable "network_prefix" { default = "tpu-gke-dranet" }
variable "regions" { default = ["europe-west4", "us-east5"] }
variable "region_to_tpu_zone" {
  default = {
    "europe-west4" = "europe-west4-a"
    "us-east5"     = "us-east5-b"
  }
}
EOF

cat <<EOF > providers.tf
terraform {
  required_version = ">= 1.5.7"
  required_providers {
    google-beta = { source = "hashicorp/google-beta", version = "~> 7.0" }
    time = { source = "hashicorp/time", version = "~> 0.11.0" }
  }
}
provider "google-beta" { project = var.project_id }

resource "google_project_service" "base_apis" {
  for_each = toset([
    "compute.googleapis.com",
    "container.googleapis.com",
    "cloudresourcemanager.googleapis.com",
    "storage.googleapis.com"
  ])
  project            = var.project_id
  service            = each.value
  disable_on_destroy = false
}
EOF

cat <<EOF > network.tf
resource "google_compute_network" "vpc" {
  name                    = "\${var.network_prefix}-vpc"
  auto_create_subnetworks = false
  mtu                     = 8896 
  depends_on              = [google_project_service.base_apis]
}
resource "google_compute_subnetwork" "subnets" {
  for_each      = toset(var.regions)
  name          = "\${var.network_prefix}-node-subnet" 
  region        = each.value
  network       = google_compute_network.vpc.id
  ip_cidr_range = each.value == "europe-west4" ? "10.0.1.0/24" : "10.0.2.0/24"
}
resource "google_compute_subnetwork" "proxy_subnets" {
  for_each      = toset(var.regions)
  name          = "\${var.network_prefix}-proxy-subnet-\${each.value}"
  region        = each.value
  network       = google_compute_network.vpc.id
  ip_cidr_range = each.value == "europe-west4" ? "10.1.1.0/24" : "10.1.2.0/24"
  purpose       = "GLOBAL_MANAGED_PROXY"
  role          = "ACTIVE"
}
resource "google_compute_address" "gateway_ips" {
  for_each     = toset(var.regions)
  name         = "gemma-gateway-ip-\${each.value}"
  region       = each.value
  subnetwork   = google_compute_subnetwork.subnets[each.value].id
  address_type = "INTERNAL"
}
resource "google_compute_firewall" "allow_internal" {
  name    = "\${var.network_prefix}-allow-internal"
  network = google_compute_network.vpc.name
  allow { protocol = "all" }
  source_ranges = ["10.0.0.0/8", "10.1.0.0/16"]
}
resource "google_compute_firewall" "allow_health_checks" {
  name    = "\${var.network_prefix}-allow-hc"
  network = google_compute_network.vpc.name
  allow { 
    protocol = "tcp"
    ports    = ["8000"] 
  }
  source_ranges = ["130.211.0.0/22", "35.191.0.0/16"]
}
EOF

cat <<EOF > fuse.tf
resource "google_storage_bucket" "model_bucket" {
  name          = "\${var.project_id}-gemma-weights"
  location      = "US" 
  force_destroy = true 
  uniform_bucket_level_access = true
  depends_on    = [google_project_service.base_apis]
}

resource "google_service_account" "gcs_fuse_sa" {
  account_id   = "gcs-fuse-sa"
  display_name = "Service Account for GCS FUSE"
}

resource "google_storage_bucket_iam_member" "gcs_fuse_sa_admin" {
  bucket = google_storage_bucket.model_bucket.name
  role   = "roles/storage.objectAdmin"
  member = "serviceAccount:\${google_service_account.gcs_fuse_sa.email}"
}

resource "google_project_iam_binding" "workload_identity_binding" {
  project = var.project_id
  role    = "roles/iam.workloadIdentityUser"
  members = ["serviceAccount:\${var.project_id}.svc.id.goog[default/gemma-ksa]"]
}
EOF

The variables.tf file adds the project name, region, and zone information. Note: update the "regions" variable default = ["europe-west4", "us-east5"] to regions where you have TPU quota. For details, see the "Verify TPU availability in GKE" documentation.
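If your TPU quota is in other regions, one possible approach (a sketch only; the REGION and ZONE values below are placeholders, not tested recommendations) is to override the defaults in terraform.tfvars rather than editing variables.tf:

cat <<EOF >> terraform.tfvars
regions = ["REGION_A", "REGION_B"]
region_to_tpu_zone = {
  "REGION_A" = "REGION_A_ZONE"
  "REGION_B" = "REGION_B_ZONE"
}
EOF

Keep in mind that other files in this lab hard-code europe-west4 and us-east5 (the subnet CIDR selection in network.tf, the gateway addresses in config-cluster.yaml, and the kubectl contexts), so those references would need matching updates.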

network.tf adds a new VPC to your project, with subnets in two different regions, proxy-only subnets, and firewall rules.

providers.tf adds the providers that Terraform requires.

fuse.tf adds the Cloud Storage bucket used to cache your model weights and provisions an IAM service account with objectAdmin permissions. It binds this account to GKE Workload Identity.

  4. Make sure you are in the gke-tf directory, then run the following commands.
    terraform init - initializes the working directory; this step downloads the providers required by the given configuration. terraform plan - generates an execution plan showing the actions Terraform will take to deploy your infrastructure. terraform apply -auto-approve - applies the changes and approves them automatically.
terraform init 
terraform plan 
  5. Now run the deployment (this can take 3-5 minutes)
terraform apply -auto-approve
  6. In the same gke-tf folder, create the following gke.tf file.
cat <<EOF > gke.tf
resource "google_container_cluster" "clusters" {
  provider = google-beta
  for_each = toset(var.regions)
  name     = "gke-\${each.value}"
  location = var.region_to_tpu_zone[each.value]
  deletion_protection = false
  network             = google_compute_network.vpc.id
  subnetwork          = google_compute_subnetwork.subnets[each.value].id
  release_channel { channel = "RAPID" }
  datapath_provider = "ADVANCED_DATAPATH"
  networking_mode   = "VPC_NATIVE"
  
  gateway_api_config { channel = "CHANNEL_STANDARD" }
  
  ip_allocation_policy {
    cluster_ipv4_cidr_block  = ""
    services_ipv4_cidr_block = ""
  }
  
  workload_identity_config { workload_pool = "\${var.project_id}.svc.id.goog" }
  
  addons_config { 
    gcs_fuse_csi_driver_config { enabled = true } 
  }
  
  initial_node_count = 1
  node_config {
    machine_type = "e2-standard-16"
    oauth_scopes = ["https://www.googleapis.com/auth/cloud-platform"]
    workload_metadata_config { mode = "GKE_METADATA" }
  }
}

resource "google_container_node_pool" "tpu_pools" {
  provider = google-beta
  for_each = toset(var.regions)
  name     = "tpu-v6e-pool"
  location = var.region_to_tpu_zone[each.value]
  cluster  = google_container_cluster.clusters[each.value].name
  node_count = 1
  
  network_config { accelerator_network_profile = "auto" }
  
  node_config {
    machine_type = "ct6e-standard-4t"
    oauth_scopes = ["https://www.googleapis.com/auth/cloud-platform"]    
    labels = { "cloud.google.com/gke-networking-dra-driver" = "true" }
    workload_metadata_config { mode = "GKE_METADATA" }
  }
  
  lifecycle { ignore_changes = [node_config[0].labels] }
}
EOF

gke.tf adds two clusters in different regions and creates two TPU node pools that run TPU v6e with 4 chips each, with managed DRANET assigned to the node pools.

  7. Now run the deployment (this can take 10-15 minutes)
terraform apply -auto-approve
  8. Verify
echo -e "\n=== Verifying GKE Clusters ==="
gcloud container clusters list --filter="name:gke-europe-west4 OR name:gke-us-east5" --project=$PROJECT_ID

echo -e "\n=== Verifying VPC Network ==="
gcloud compute networks list --filter="name:tpu-gke-dranet-vpc" --project=$PROJECT_ID

echo -e "\n=== Verifying Reserved Static IPs for Gateway ==="
gcloud compute addresses list --filter="name~gemma-gateway-ip" --project=$PROJECT_ID

echo -e "\n=== Verifying GCS Bucket ==="
gcloud storage ls | grep "${PROJECT_ID}-gemma-weights"

echo -e "\n=== Verifying GCS FUSE Service Account ==="
gcloud iam service-accounts list --filter="email:gcs-fuse-sa@${PROJECT_ID}.iam.gserviceaccount.com" --project=$PROJECT_ID

4. Fleet registration

We need to register the clusters to a fleet.

  1. Make sure you are in the gke-tf directory, then run the following command.
cat <<EOF > fleet.tf
data "google_project" "project" {
  project_id = var.project_id
}

resource "google_project_service" "fleet_apis" {
  for_each = toset([
    "gkehub.googleapis.com",
    "multiclusterservicediscovery.googleapis.com",
    "multiclusteringress.googleapis.com",
    "trafficdirector.googleapis.com"
  ])
  project            = var.project_id
  service            = each.value
  disable_on_destroy = false
}

resource "google_project_service_identity" "mci_sa" {
  provider = google-beta
  project  = var.project_id
  service  = "multiclusteringress.googleapis.com"
  depends_on = [google_project_service.fleet_apis]
}

resource "time_sleep" "wait_for_apis" {
  create_duration = "60s"
  depends_on      = [google_project_service.fleet_apis]
}

resource "google_project_iam_member" "mci_sa_admin" {
  project    = var.project_id
  role       = "roles/container.admin"
  member     = "serviceAccount:\${google_project_service_identity.mci_sa.email}"
  depends_on = [google_project_service_identity.mci_sa, time_sleep.wait_for_apis]
}

resource "google_gke_hub_membership" "memberships" {
  provider      = google-beta
  for_each      = toset(var.regions)
  project       = var.project_id
  membership_id = "gke-\${each.value}"
  endpoint {
    gke_cluster { resource_link = "//container.googleapis.com/\${google_container_cluster.clusters[each.value].id}" }
  }
  depends_on = [time_sleep.wait_for_apis, google_container_cluster.clusters]
}

resource "google_gke_hub_feature" "mcs" {
  provider   = google-beta
  name       = "multiclusterservicediscovery"
  location   = "global"
  project    = var.project_id
  depends_on = [time_sleep.wait_for_apis]
}

resource "google_gke_hub_feature" "ingress" {
  provider   = google-beta
  name       = "multiclusteringress"
  location   = "global"
  project    = var.project_id
  depends_on = [google_gke_hub_membership.memberships, google_project_iam_member.mci_sa_admin]
  spec {
    multiclusteringress { config_membership = "projects/\${var.project_id}/locations/global/memberships/gke-us-east5" }
  }
}
EOF

The fleet.tf file registers both clusters to a global GKE fleet and enables multi-cluster Service discovery and multi-cluster Ingress. It designates the US cluster as the central config cluster, allowing the Gateway API to watch and route traffic.

  2. Run the following in the gke-tf folder (this should take 3-5 minutes)
terraform plan 
terraform apply -auto-approve
  3. Verify the fleet registration
gcloud container fleet memberships list --project=$PROJECT_ID
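Optionally, you can also inspect the multi-cluster Ingress feature itself and confirm which membership was designated as the config cluster (it should match the gke-us-east5 membership set in fleet.tf):

gcloud container fleet ingress describe --project=$PROJECT_ID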

5. Cache the model weights to FUSE

We will run a temporary Kubernetes Job in the US cluster that uses a Python script to securely download the Gemma model directly into the FUSE-mounted Cloud Storage bucket.

  1. Create the following variables
export CTX_EU="gke_${PROJECT_ID}_europe-west4-a_gke-europe-west4"
export CTX_US="gke_${PROJECT_ID}_us-east5-b_gke-us-east5"
  2. This uses the google/gemma-3-27b-it model, so you need to create a HF token. Replace YOUR_ACTUAL_HUGGING_FACE_TOKEN below with your actual token.
export HF_TOKEN="YOUR_ACTUAL_HUGGING_FACE_TOKEN"
  3. Make sure you are in the gke-tf directory, then run the following commands.
gcloud container clusters get-credentials gke-us-east5 --zone us-east5-b --project=$PROJECT_ID

cat <<EOF > ksa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gemma-ksa
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: "gcs-fuse-sa@${PROJECT_ID}.iam.gserviceaccount.com"
EOF

kubectl apply -f ksa.yaml --context=$CTX_US
kubectl delete secret hf-secret --context=$CTX_US --ignore-not-found
kubectl create secret generic hf-secret --from-literal=hf_token=${HF_TOKEN} --context=$CTX_US

cat <<EOF > download-job.yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: model-downloader
  namespace: default
spec:
  backoffLimit: 1
  template:
    metadata:
      annotations:
        gke-gcsfuse/volumes: "true"
    spec:
      serviceAccountName: gemma-ksa
      restartPolicy: Never
      containers:
      - name: downloader
        image: python:3.11-slim
        env:
        - name: HF_TOKEN
          valueFrom:
            secretKeyRef:
              name: hf-secret
              key: hf_token
        command:
        - bash
        - -c
        - |
          pip install -U huggingface_hub
          echo "Downloading Gemma 3 directly to GCS bucket..."
          python3 -c "from huggingface_hub import snapshot_download; import os; snapshot_download(repo_id='google/gemma-3-27b-it', local_dir='/data/gemma-weights', token=os.environ['HF_TOKEN'])"
          echo "Download complete! Safe to proceed."
        volumeMounts:
        - name: gcs-fuse-volume
          mountPath: /data/gemma-weights
      volumes:
      - name: gcs-fuse-volume
        csi:
          driver: gcsfuse.csi.storage.gke.io
          volumeAttributes:
            bucketName: "${PROJECT_ID}-gemma-weights"
EOF

kubectl apply -f download-job.yaml --context=$CTX_US
  4. Wait for the download to complete before continuing (this should take 5-10 minutes, depending on the model size)
kubectl logs -f job/model-downloader --context=$CTX_US

(Press Ctrl+C to exit the logs once "Download complete!" appears)
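Optionally, you can spot-check that the weights actually landed in the bucket (the Job writes to the bucket root, since the FUSE mount point itself is the download target):

gcloud storage ls gs://${PROJECT_ID}-gemma-weights/ | head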

6. Deploy the vLLM and Gemma workload

  1. Make sure you are in the gke-tf directory, then run the following command.
cat <<EOF > workload.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: gemma-ksa
  namespace: default
  annotations:
    iam.gke.io/gcp-service-account: "gcs-fuse-sa@${PROJECT_ID}.iam.gserviceaccount.com"
---
apiVersion: resource.k8s.io/v1
kind: ResourceClaimTemplate
metadata:
  name: all-netdev
  namespace: default
spec:
  spec:
    devices:
      requests:
      - name: req-netdev
        exactly:
          deviceClassName: netdev.google.com
          allocationMode: All
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: vllm-gemma
  namespace: default
  labels:
    app: gemma-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gemma-server
  template:
    metadata:
      labels:
        app: gemma-server
      annotations:
        gke-gcsfuse/volumes: "true"
    spec:
      serviceAccountName: gemma-ksa
      nodeSelector:
        cloud.google.com/gke-tpu-accelerator: tpu-v6e-slice
        cloud.google.com/gke-tpu-topology: 2x2
      resourceClaims:
      - name: netdev
        resourceClaimTemplateName: all-netdev
      containers:
      - name: vllm-tpu
        image: vllm/vllm-tpu:latest
        command:
        - bash
        - -c
        - |
          export PYTHONUNBUFFERED=1
          echo "Booting vLLM instantly from local GCS FUSE mount..."
          
          python3 -m vllm.entrypoints.openai.api_server \
            --model /data/gemma-weights \
            --tensor-parallel-size 4 \
            --port 8000
        ports:
        - containerPort: 8000
        resources:
          requests:
            google.com/tpu: 4
          limits:
            google.com/tpu: 4
          claims:
          - name: netdev
        volumeMounts:
        - name: dshm
          mountPath: /dev/shm
        - name: gcs-fuse-volume
          mountPath: /data/gemma-weights
          readOnly: true
      volumes:
      - name: dshm
        emptyDir:
          medium: Memory
      - name: gcs-fuse-volume
        csi:
          driver: gcsfuse.csi.storage.gke.io
          readOnly: true
          volumeAttributes:
            bucketName: "${PROJECT_ID}-gemma-weights"
            mountOptions: "implicit-dirs"
            fileCacheCapacity: "100Gi"
            fileCacheForRangeRead: "true"
---
apiVersion: v1
kind: Service
metadata:
  name: vllm-gemma-service
  namespace: default
spec:
  selector:
    app: gemma-server
  ports:
  - protocol: TCP
    port: 8000
    targetPort: 8000
  type: ClusterIP
---
apiVersion: monitoring.googleapis.com/v1
kind: PodMonitoring
metadata:
  name: vllm-gemma-monitoring
  namespace: default
spec:
  selector:
    matchLabels:
      app: gemma-server
  endpoints:
  - port: 8000
    interval: 15s
    path: /metrics
EOF
  2. Now run the following script (it takes 5-10 minutes to complete because it deploys to both regions)
for CTX in $CTX_EU $CTX_US; do
  ZONE=$(echo $CTX | cut -d_ -f3)
  CLUSTER=$(echo $CTX | cut -d_ -f4)
  gcloud container clusters get-credentials $CLUSTER --zone $ZONE --project=$PROJECT_ID
  
  kubectl delete secret hf-secret --ignore-not-found --context=$CTX
  kubectl create secret generic hf-secret --from-literal=hf_token=${HF_TOKEN} --context=$CTX
  kubectl apply -f workload.yaml --context=$CTX
done
  3. Confirm the deployments
for CTX in $CTX_EU $CTX_US; do kubectl rollout status deployment/vllm-gemma --timeout=15m --context=$CTX; done
  4. Once complete, you can run the following command to verify that managed DRANET networking has been assigned to the pods.
for CTX in $CTX_EU $CTX_US; do
  echo "Checking DRA network interfaces on $CTX..."
  kubectl --context=$CTX exec deployment/vllm-gemma -c vllm-tpu -- ls /sys/class/net
  echo "----------------------------------------"
done

You will see eth0 for standard pod networking, plus secondary interfaces such as eth1, eth2, and so on, which represent the dedicated TPU fabric.
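You can also look at the DRA objects directly; a minimal check (assuming the all-netdev ResourceClaimTemplate from workload.yaml) is to list the ResourceClaims that were generated for each vLLM pod:

for CTX in $CTX_EU $CTX_US; do
  echo "ResourceClaims on $CTX:"
  kubectl get resourceclaims --context=$CTX
done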

7. Inference API and gateway configuration

Now you will create the InferenceObjective (gemma-objective), the AutoscalingMetric (tpu-cache), and the InferencePool (gemma-pool). The inference pool is created with a Helm chart. Install it and verify the creation.

  1. Make sure you are in the gke-tf directory, then run the following. This deploys the objects and runs the verification.
cat <<EOF > inference-objective.yaml
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceObjective
metadata:
  name: gemma-objective
  namespace: default
spec:
  priority: 10
  poolRef:
    name: gemma-pool
    group: "inference.networking.k8s.io" 
EOF

cat <<EOF > metrics.yaml
apiVersion: autoscaling.gke.io/v1beta1
kind: AutoscalingMetric
metadata:
  name: tpu-cache
  namespace: default
spec:
  selector:
    matchLabels:
      app: gemma-server
  endpoints:
  - port: 8000
    path: /metrics
    metrics:
    - name: vllm:kv_cache_usage_perc
      exportName: tpu-cache
EOF

for CTX in $CTX_EU $CTX_US; do
  kubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api-inference-extension/v1.1.0/config/crd/bases/inference.networking.x-k8s.io_inferenceobjectives.yaml --context=$CTX
  kubectl apply -f inference-objective.yaml --context=$CTX
  kubectl apply -f metrics.yaml --context=$CTX
done

helm install gemma-pool --kube-context $CTX_EU \
  --set inferencePool.modelServers.matchLabels.app=gemma-server \
  --set provider.name=gke \
  --version v1.1.0 \
  oci://registry.k8s.io/gateway-api-inference-extension/charts/inferencepool

helm install gemma-pool --kube-context $CTX_US \
  --set inferencePool.modelServers.matchLabels.app=gemma-server \
  --set provider.name=gke \
  --set inferenceExtension.monitoring.gke.enabled=true \
  --version v1.1.0 \
  oci://registry.k8s.io/gateway-api-inference-extension/charts/inferencepool

for CTX in $CTX_EU $CTX_US; do
  kubectl annotate inferencepool gemma-pool networking.gke.io/export="True" --context=$CTX
done

for CTX in $CTX_EU $CTX_US; do
  echo "Verifying Inference API resources on $CTX..."
  kubectl get inferencepools --context=$CTX
  kubectl get autoscalingmetrics tpu-cache --context=$CTX
done

8. Gateway configuration

Now you will create the cross-region gateway configuration: a Gateway (cross-region-gateway), an HTTPRoute (gemma-route), a HealthCheckPolicy (gemma-health-check), and a GCPBackendPolicy (gemma-backend-policy). Apply the manifests and verify the creation. (The gateway takes 8-10 minutes to become active.)

cat <<EOF > config-cluster.yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: cross-region-gateway
  namespace: default
spec:
  gatewayClassName: gke-l7-cross-regional-internal-managed-mc
  addresses:
  - type: networking.gke.io/named-address-with-region
    value: "regions/europe-west4/addresses/gemma-gateway-ip-europe-west4"
  - type: networking.gke.io/named-address-with-region
    value: "regions/us-east5/addresses/gemma-gateway-ip-us-east5"
  listeners:
  - name: http
    protocol: HTTP
    port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: gemma-route
  namespace: default
spec:
  parentRefs:
  - name: cross-region-gateway
    kind: Gateway
  rules:
  - backendRefs:
    - group: networking.gke.io
      kind: GCPInferencePoolImport
      name: gemma-pool
      port: 8000
---
apiVersion: networking.gke.io/v1
kind: HealthCheckPolicy
metadata:
  name: gemma-health-check
  namespace: default
spec:
  targetRef:
    group: networking.gke.io
    kind: GCPInferencePoolImport
    name: gemma-pool
  default:
    config:
      type: HTTP
      httpHealthCheck:
        requestPath: /health
        port: 8000
---
apiVersion: networking.gke.io/v1
kind: GCPBackendPolicy
metadata:
  name: gemma-backend-policy
  namespace: default
spec:
  targetRef:
    group: networking.gke.io
    kind: GCPInferencePoolImport
    name: gemma-pool
  default:
    timeoutSec: 100
    balancingMode: CUSTOM_METRICS
    trafficDuration: LONG
    customMetrics:
      - name: gke.named_metrics.tpu-cache
        dryRun: false
        maxUtilizationPercent: 60
EOF

echo -e "\n=== Creating Cross-Regional Gateway Resources ==="
kubectl apply -f config-cluster.yaml --context=$CTX_US

echo -e "\n=== Provisioning Global Load Balancer (This takes 5-10 minutes) ==="
echo "Working on the Gateway... waiting for Google Cloud to assign IPs and program routes..."

# The script will hold here until the gateway is officially ready
kubectl wait --for=condition=programmed gateway/cross-region-gateway --timeout=10m --context=$CTX_US

echo -e "\n=== SUCCESS: Gateway is fully provisioned and ready! ==="

Inference pool (Helm): groups the model servers in both regions into a single logical backend.

Gateway and HTTPRoute: create the actual global internal load balancer and define the rules that route incoming AI prompts to the model.

Health check and backend policy: ensure that requests are only sent to healthy pods, and enable smart metric-based traffic distribution (preventing TPU overload).

Verification: the script pauses to make sure Google Cloud has fully provisioned the internal IP addresses before you continue.
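While waiting, a couple of optional checks (read-only, safe to run repeatedly) show the addresses assigned to the gateway and the status of the route:

kubectl get gateway cross-region-gateway --context=$CTX_US -o wide
kubectl describe httproute gemma-route --context=$CTX_US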

9. Failover test

Now for the most exciting part of the lab: testing the high availability of the architecture.

Here is exactly what this automated test will do:

  • Baseline: our simulated user sends an inference prompt ("What is the capital of France?"). Because the user is in the primary region, the gateway routes the request to those local TPUs for the lowest possible latency.
  • Disaster: we simulate a catastrophic data center outage by terminating all TPU Pods in the primary region (replicas=0).
  • Detection: we wait 45 seconds. During this time the gateway's health checks fail, it realizes the primary backend is completely offline, and it dynamically updates its global routing tables.
  • Failover: our user sends a second prompt ("What is the capital of Germany?"). The user has no idea there was an outage. The gateway intercepts the request and instantly reroutes it, globally, to the healthy secondary TPUs.
  • Recovery: we restore the primary TPUs, bringing your global architecture back to full health.
  1. Open Cloud Shell and run the following:
cat << 'EOF' > failover-test.sh
#!/bin/bash
# Multi-Cluster Inference Failover Test

export PROJECT_ID=$(gcloud config get-value project)
export CTX_EU="gke_${PROJECT_ID}_europe-west4-a_gke-europe-west4"
export CTX_US="gke_${PROJECT_ID}_us-east5-b_gke-us-east5"

echo -e "\n=== PHASE 1: VERIFYING CURRENT STATE (BOTH CLUSTERS UP) ==="
echo "Checking US Cluster (Primary):"
kubectl get pods -l app=gemma-server --context=$CTX_US
echo "Checking EU Cluster (Secondary):"
kubectl get pods -l app=gemma-server --context=$CTX_EU

echo -e "\nDeploying Test Client in US..."
export GATEWAY_IP_US=$(gcloud compute addresses describe gemma-gateway-ip-us-east5 --region=us-east5 --project=$PROJECT_ID --format="value(address)")

kubectl run curl-test --image=curlimages/curl --restart=Never --context=$CTX_US -- sleep 3600
kubectl wait --for=condition=ready pod/curl-test --context=$CTX_US --timeout=60s

echo -e "\n=== PHASE 2: BASELINE TEST (US Client -> US TPUs) ==="
echo "Prompting the AI: 'What is the capital of France?'"
echo "Expect to see the full JSON response including token usage..."
kubectl exec curl-test --context=$CTX_US -- curl -s -X POST http://$GATEWAY_IP_US/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "/data/gemma-weights", 
    "messages": [{"role": "user", "content": "What is the capital of France?"}], 
    "max_tokens": 100
  }' | jq .

echo -e "\n=== PHASE 3: SIMULATING REGIONAL OUTAGE (Scaling US to 0) ==="
kubectl scale deployment vllm-gemma --replicas=0 --context=$CTX_US
echo "Waiting 20 seconds for pods to begin terminating..."
sleep 20

echo -e "\n=== PHASE 4: CONFIRMING STATE (PODS TERMINATING) ==="
echo "Checking US Cluster (Should be terminating):"
kubectl get pods -l app=gemma-server --context=$CTX_US
echo "Checking EU Cluster (Should still be running):"
kubectl get pods -l app=gemma-server --context=$CTX_EU

echo -e "\nWaiting 45 seconds for Gateway health checks to update global routing tables..."
sleep 45

echo -e "\n=== PHASE 5: CONFIRMING COMPLETE DOWN AND EURO UP ==="
echo "Checking US Cluster (Should be completely empty now):"
kubectl get pods -l app=gemma-server --context=$CTX_US
echo "Checking EU Cluster (Should still be running):"
kubectl get pods -l app=gemma-server --context=$CTX_EU

echo -e "\n=== PHASE 6: FAILOVER TEST (US Client -> EU TPUs) ==="
echo "Prompting the AI: 'What is the capital of Germany?'"
echo "Request is actively being rerouted to Europe. Expecting full JSON response..."
kubectl exec curl-test --context=$CTX_US -- curl -s -X POST http://$GATEWAY_IP_US/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "/data/gemma-weights", 
    "messages": [{"role": "user", "content": "What is the capital of Germany?"}], 
    "max_tokens": 100
  }' | jq .

echo -e "\n=== PHASE 7: RESTORING INFRASTRUCTURE (Scaling US to 1) ==="
kubectl scale deployment vllm-gemma --replicas=1 --context=$CTX_US
echo "Waiting for US pods to boot and mount FUSE..."
kubectl rollout status deployment/vllm-gemma --timeout=15m --context=$CTX_US

echo -e "\n=== PHASE 8: CONFIRMING BOTH SYSTEMS ARE BACK UP ==="
echo "Checking US Cluster (Restored):"
kubectl get pods -l app=gemma-server --context=$CTX_US
echo "Checking EU Cluster (Still Healthy):"
kubectl get pods -l app=gemma-server --context=$CTX_EU

echo -e "\n=== PHASE 9: CLEANUP ==="
kubectl delete pod curl-test --context=$CTX_US
echo "Failover lab complete."
EOF

chmod +x failover-test.sh
./failover-test.sh
  2. Once the test is complete, you can clean up.

10. Cleanup

  1. Clean up the workloads
#!/bin/bash
echo "=== PART 1: Kubernetes & Workload Cleanup ==="
export PROJECT_ID=$(gcloud config get-value project)
export CTX_EU="gke_${PROJECT_ID}_europe-west4-a_gke-europe-west4"
export CTX_US="gke_${PROJECT_ID}_us-east5-b_gke-us-east5"

echo "Deleting Gateway resources..."
for CTX in $CTX_EU $CTX_US; do
  kubectl delete gateways,httproutes,healthcheckpolicies,gcpbackendpolicies --all --context=$CTX --ignore-not-found
done

echo "Waiting 60 seconds for the external Load Balancer to detach..."
sleep 60

echo "Cleaning up workloads and custom resources..."
for CTX in $CTX_EU $CTX_US; do
  helm uninstall gemma-pool --kube-context=$CTX || true
  kubectl delete job model-downloader --context=$CTX --ignore-not-found
  kubectl delete all -l app=gemma-server --context=$CTX --ignore-not-found
  kubectl delete inferenceobjectives,autoscalingmetrics --all --context=$CTX --ignore-not-found
  kubectl delete serviceaccount gemma-ksa --context=$CTX --ignore-not-found
  kubectl delete -f https://raw.githubusercontent.com/kubernetes-sigs/gateway-api-inference-extension/v1.1.0/config/crd/bases/inference.networking.x-k8s.io_inferenceobjectives.yaml --context=$CTX --ignore-not-found
done

echo -e "\n=== Part 1 Complete! Safe to proceed to Terraform Teardown. ==="
  2. Clean up the infrastructure. Make sure you are in the gke-tf folder.
cat << 'EOF' > cleanup-tf.sh
#!/bin/bash
echo "=== PART 2: Infrastructure & Terraform Teardown ==="
export PROJECT_ID=$(gcloud config get-value project)
export LAB_NETWORK="tpu-gke-dranet-vpc"

echo "Destroying GKE Fleet Features to prevent firewall resurrection..."
terraform destroy -target=google_gke_hub_feature.mcs -target=google_gke_hub_feature.ingress -auto-approve

echo "Waiting 30 seconds for the self-healing controllers to spin down..."
sleep 30

echo "Hunting down orphaned auto-generated firewall rules strictly on the lab network..."
GHOST_RULES=$(gcloud compute firewall-rules list --filter="network~${LAB_NETWORK} AND (name~mcsd OR name~k8s-fw-l7)" --format="value(name)" --project=$PROJECT_ID)

if [ ! -z "$GHOST_RULES" ]; then
  for rule in $GHOST_RULES; do
    echo "Deleting ghost rule: $rule"
    gcloud compute firewall-rules delete $rule --project=$PROJECT_ID --quiet
  done
else
  echo "No ghost rules found on ${LAB_NETWORK}."
fi

echo "=== Controllers and Firewalls dead. Destroying remaining Base Infrastructure. ==="

MAX_RETRIES=3
RETRY_COUNT=0
SUCCESS=false

while [ $RETRY_COUNT -lt $MAX_RETRIES ]; do
  # Run the destroy command. If it succeeds (exit code 0), break the loop.
  if terraform destroy -auto-approve; then
    SUCCESS=true
    break
  else
    RETRY_COUNT=$((RETRY_COUNT+1))
    echo -e "\n[WARNING] Terraform destroy encountered an error (likely a GCP resource lock)."
    
    if [ $RETRY_COUNT -lt $MAX_RETRIES ]; then
      echo "Waiting 30 seconds before retry $RETRY_COUNT of $MAX_RETRIES..."
      sleep 30
    fi
  fi
done

if [ "$SUCCESS" = true ]; then
  echo -e "\n=== Lab Cleanup Successfully Completed! ==="
else
  echo -e "\n[ERROR] Lab Cleanup failed after $MAX_RETRIES attempts."
  echo "Some resources may still be locked. Run 'terraform destroy -auto-approve' manually later to finish."
  exit 1
fi
EOF

chmod +x cleanup-tf.sh
./cleanup-tf.sh

If you run into any issues deleting specific resources, rerun the terraform destroy command via the ./cleanup-tf.sh script.

11. Congratulations

Congratulations! You have successfully deployed a highly available, cross-region AI inference architecture with a multi-cluster GKE Inference Gateway, using GKE, managed DRANET, and TPU v6e accelerators.

By combining Cloud Storage FUSE for instant model loading with the Inference Gateway API for latency-aware multi-cluster routing, you built a resilient backend that can survive a complete regional data center outage without dropping internal user traffic.

Next steps / learn more

You can learn more about GKE networking.

Take the next lab

Continue with the Google Cloud challenge, and check out these other Google Cloud labs: