This page covers Vertex AI-specific details. For general GCP setup, see the GCP Integration page.
Installation
- `google-cloud-aiplatform>=1.34.0` - Vertex AI SDK
- `kfp>=2.6.0` - Kubeflow Pipelines SDK (used by Vertex)
- `google-cloud-pipeline-components>=2.19.0` - Pre-built components
- `kubernetes` - Kubernetes Python client
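For example, with pip, pinning the minimum versions listed above:

```shell
pip install "google-cloud-aiplatform>=1.34.0" "kfp>=2.6.0" \
    "google-cloud-pipeline-components>=2.19.0" kubernetes
```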
Components
Vertex AI Orchestrator
Execute complete pipelines as Vertex AI Pipelines
Vertex AI Step Operator
Run individual steps as Vertex AI custom jobs
Vertex Experiment Tracker
Track experiments in Vertex AI Experiments
Vertex AI Orchestrator
Runs your complete pipeline as a Vertex AI Pipeline using Kubeflow Pipelines v2.

Configuration
- `project` - GCP project ID
- `location` - GCP region (e.g., `us-central1`, `europe-west1`)
- `pipeline_root` - GCS URI for pipeline artifacts
- `workload_service_account` - Service account for execution
- `network` - VPC network for private connectivity
- `encryption_spec_key_name` - Cloud KMS encryption key
- `private_service_connect` - Private Service Connect endpoint
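A registration sketch assuming the ZenML CLI (the settings classes used on this page are ZenML's); the project, bucket, and service-account values are placeholders:

```shell
zenml orchestrator register vertex_orchestrator \
    --flavor=vertex \
    --project=my-gcp-project \
    --location=us-central1 \
    --pipeline_root=gs://my-bucket/vertex-pipelines \
    --workload_service_account=vertex-sa@my-gcp-project.iam.gserviceaccount.com
```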
Step Settings
Customize steps with `VertexOrchestratorSettings` and `KubernetesPodSettings`:
| Setting | Type | Description |
|---|---|---|
| `pod_settings` | `KubernetesPodSettings` | Kubernetes Pod configuration |
| `labels` | `dict` | GCP labels for the pipeline job |
| `synchronous` | `bool` | Wait for pipeline completion |
| `node_selector_constraint` | `tuple` | (Deprecated) Use `pod_settings.node_selectors` |
| `custom_job_parameters` | `VertexCustomJobParameters` | Advanced custom job settings |
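A sketch of attaching these settings to a pipeline, assuming ZenML's settings API; the import paths and the `"orchestrator.vertex"` settings key follow recent ZenML releases and may differ in yours:

```python
from zenml import pipeline
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
    VertexOrchestratorSettings,
)
from zenml.integrations.kubernetes.pod_settings import KubernetesPodSettings

vertex_settings = VertexOrchestratorSettings(
    labels={"team": "ml-platform", "env": "dev"},  # GCP labels on the pipeline job
    synchronous=True,  # wait for the pipeline run to finish
    pod_settings=KubernetesPodSettings(
        # Placeholder node selector; use whatever labels your cluster exposes.
        node_selectors={"cloud.google.com/gke-nodepool": "training-pool"},
    ),
)

@pipeline(settings={"orchestrator.vertex": vertex_settings})
def training_pipeline():
    ...
```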
Machine Types
Vertex AI uses GCP machine types:

Standard:
- `n1-standard-4` - 4 vCPU, 15 GB RAM
- `n1-standard-8` - 8 vCPU, 30 GB RAM
- `n1-standard-16` - 16 vCPU, 60 GB RAM

High memory:
- `n1-highmem-4` - 4 vCPU, 26 GB RAM
- `n1-highmem-8` - 8 vCPU, 52 GB RAM
- `n1-highmem-16` - 16 vCPU, 104 GB RAM

High CPU:
- `n1-highcpu-8` - 8 vCPU, 7.2 GB RAM
- `n1-highcpu-16` - 16 vCPU, 14.4 GB RAM
GPU Accelerators
Available GPUs:

- `NVIDIA_TESLA_K80` - Legacy, low cost
- `NVIDIA_TESLA_P4` - Inference optimized
- `NVIDIA_TESLA_T4` - Good price/performance
- `NVIDIA_TESLA_V100` - High performance training
- `NVIDIA_TESLA_P100` - High performance
- `NVIDIA_TESLA_A100` - Latest, 40GB or 80GB
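To request one of these accelerators (and a machine type) for a step, they can be passed through `custom_job_parameters` on the orchestrator settings. A hedged sketch; the `VertexCustomJobParameters` import path and field names here assume a recent ZenML release and may vary:

```python
from zenml import step
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
    VertexOrchestratorSettings,
)
from zenml.integrations.gcp.vertex_custom_job_parameters import (
    VertexCustomJobParameters,
)

gpu_settings = VertexOrchestratorSettings(
    custom_job_parameters=VertexCustomJobParameters(
        machine_type="n1-standard-8",
        accelerator_type="NVIDIA_TESLA_T4",  # good price/performance
        accelerator_count=1,
    ),
)

@step(settings={"orchestrator.vertex": gpu_settings})
def train_model() -> None:
    ...
```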
Custom Job Parameters
Advanced configuration for Vertex AI custom jobs, such as machine type and accelerators, is supplied through `VertexCustomJobParameters` (see the step settings table above).

Vertex AI Step Operator

Runs individual steps as Vertex AI custom jobs.

Configuration
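A registration sketch with the ZenML CLI; the values are placeholders and the exact flags depend on your ZenML version:

```shell
zenml step-operator register vertex_step_operator \
    --flavor=vertex \
    --project=my-gcp-project \
    --region=us-central1

# Add the step operator to the active stack
zenml stack update -s vertex_step_operator
```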
Usage
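Once the step operator is in your active stack, individual steps opt into it by name (the name is whatever you registered the component under):

```python
from zenml import step

# Runs this step as a Vertex AI custom job instead of on the orchestrator.
@step(step_operator="vertex_step_operator")
def train_model() -> None:
    ...
```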
Vertex AI Experiments
Track experiments with Vertex AI Experiments.

Configuration
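A registration sketch with the ZenML CLI (placeholder values):

```shell
zenml experiment-tracker register vertex_tracker \
    --flavor=vertex \
    --project=my-gcp-project \
    --location=us-central1

# Add the experiment tracker to the active stack
zenml stack update -e vertex_tracker
```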
Usage
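A usage sketch, assuming the tracker registered above is in the active stack. The tracker is expected to start a Vertex AI Experiments run for the step, so metrics and parameters can then be logged with the `google-cloud-aiplatform` SDK:

```python
from google.cloud import aiplatform
from zenml import step

@step(experiment_tracker="vertex_tracker")
def train_model() -> None:
    # Logging calls attach to the experiment run opened for this step.
    aiplatform.log_params({"learning_rate": 0.01, "epochs": 10})
    aiplatform.log_metrics({"accuracy": 0.95})
```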
Viewing Experiments
View experiments in the Vertex AI Console:

- Go to Vertex AI > Experiments
- Select your experiment
- Compare runs and metrics
- Visualize training curves
Service Account Setup
Create a service account with required permissions:

- `roles/aiplatform.user` - Create and manage Vertex AI resources
- `roles/storage.objectAdmin` - Read/write GCS artifacts
- `roles/artifactregistry.reader` - Pull container images
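The setup with `gcloud` (the account and project names are illustrative):

```shell
# Create the service account
gcloud iam service-accounts create vertex-pipelines \
    --project=my-gcp-project \
    --display-name="Vertex AI pipelines"

# Grant the three roles listed above
for role in roles/aiplatform.user roles/storage.objectAdmin roles/artifactregistry.reader; do
  gcloud projects add-iam-policy-binding my-gcp-project \
      --member="serviceAccount:vertex-pipelines@my-gcp-project.iam.gserviceaccount.com" \
      --role="${role}"
done
```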
Complete Example
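A minimal end-to-end sketch, assuming a ZenML stack with the Vertex AI orchestrator registered and active; the step bodies are placeholders:

```python
from zenml import pipeline, step
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
    VertexOrchestratorSettings,
)

@step
def load_data() -> dict:
    return {"n_samples": 100}

@step
def train_model(data: dict) -> float:
    # Placeholder training logic.
    return 0.95

@pipeline(settings={"orchestrator.vertex": VertexOrchestratorSettings(synchronous=True)})
def training_pipeline():
    data = load_data()
    train_model(data)

if __name__ == "__main__":
    # Builds the container image, compiles the pipeline, and submits it to Vertex AI.
    training_pipeline()
```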
Best Practices
Use Workload Identity
When running from GKE, use Workload Identity instead of key files:
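The binding looks like this with `gcloud` and `kubectl` (namespace and account names are placeholders):

```shell
# Allow the Kubernetes service account to impersonate the GCP service account
gcloud iam service-accounts add-iam-policy-binding \
    vertex-pipelines@my-gcp-project.iam.gserviceaccount.com \
    --role="roles/iam.workloadIdentityUser" \
    --member="serviceAccount:my-gcp-project.svc.id.goog[my-namespace/my-ksa]"

# Annotate the Kubernetes service account with the GCP identity
kubectl annotate serviceaccount my-ksa \
    --namespace=my-namespace \
    iam.gke.io/gcp-service-account=vertex-pipelines@my-gcp-project.iam.gserviceaccount.com
```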
Enable Private GKE and VPC
Use private networking for security:
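For example, by pointing the orchestrator at a VPC network. A sketch assuming ZenML's `zenml orchestrator update` command; the network resource path (which uses the project number) is a placeholder:

```shell
zenml orchestrator update vertex_orchestrator \
    --network=projects/123456789/global/networks/my-vpc
```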
Use Customer-Managed Encryption
Encrypt data at rest with CMEK:
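For example, by setting the orchestrator's `encryption_spec_key_name` to a Cloud KMS key. A sketch assuming ZenML's `zenml orchestrator update` command; the key resource name is a placeholder:

```shell
zenml orchestrator update vertex_orchestrator \
    --encryption_spec_key_name=projects/my-gcp-project/locations/us-central1/keyRings/my-keyring/cryptoKeys/my-key
```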
Label Resources for Cost Tracking
Use labels for billing analysis:
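For example, via the orchestrator settings (label keys and values here are illustrative, and the import path assumes a recent ZenML release):

```python
from zenml import pipeline
from zenml.integrations.gcp.flavors.vertex_orchestrator_flavor import (
    VertexOrchestratorSettings,
)

# These labels land on the Vertex AI pipeline job and surface in
# GCP billing exports for cost attribution.
cost_settings = VertexOrchestratorSettings(
    labels={"team": "recsys", "cost-center": "ml-research", "env": "prod"},
)

@pipeline(settings={"orchestrator.vertex": cost_settings})
def labeled_pipeline():
    ...
```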
Monitoring
View Pipeline Runs:

- Go to Vertex AI Console > Pipelines
- Select your pipeline
- View execution DAG and logs
- Click steps to see details
Next Steps
GCP Integration
General GCP integration guide
Kubeflow Integration
Compare with Kubeflow Pipelines
Experiment Tracking
Learn about experiment tracking
Vertex AI Docs
Official Vertex AI documentation
