Maximizing Throughput and Minimizing Costs with Cloud Run’s Direct VPC Egress
The GCP team recently announced the preview release of Direct VPC egress for Cloud Run, an alternative to the Serverless VPC Access connector. Direct VPC egress provides higher throughput and lower latency because it uses a new, direct network path with fewer hops. It also brings pay-per-use pricing: you pay network charges only, instead of running always-on connector instances, which reduces network costs.
For production workloads, a Serverless VPC Access connector is still the recommended option until the Direct VPC egress feature reaches GA.
In this blog post, we will explore how to enable the feature in a Cloud Run service and test the internal connectivity.
Limitations and known issues
Direct VPC egress currently has the following limitations; refer to the official documentation for the latest information.
- You also need to ensure the subnet has enough free IP addresses. It is recommended to have four times (4x) as many unused IP addresses as the number of instances you plan to run. For example, if your service's max-instances=10, keep at least 40 IP addresses in the subnet available for Cloud Run.
- If the subnet of the VPC network runs out of IP addresses, Cloud Run logs an error to Cloud Logging. When this occurs, Cloud Run cannot start any more service instances or job tasks until more IP addresses become available.
- To delete a subnet, you must first delete all resources that use it. If Cloud Run is using a subnet, you must disconnect Cloud Run from the VPC network or move it to a different subnet before deleting the subnet. After you delete or move your Cloud Run resources, wait 1–2 hours for Cloud Run to release the IPs before you delete the subnet.
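The subnet-sizing rule above can be sketched as a quick shell calculation (MAX_INSTANCES is a placeholder for your planned instance count):

```shell
MAX_INSTANCES=10                     # planned max-instances for the service
REQUIRED_IPS=$((MAX_INSTANCES * 4))  # 4x headroom recommended by the docs
echo "Reserve at least $REQUIRED_IPS free IP addresses in the subnet"
```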
Setting up internal service
To test VPC connectivity from Cloud Run, we deploy a simple nginx service on a Compute Engine instance with only an internal IP.
Step 1: Set up the necessary environment variables and ensure the gcloud version is 444.0.0 or above.
export PROJECT_ID="your-project-id" #ex: chimbuc-playground
export REGION="your-region" #ex: us-central1
export ZONE="your-zone" #ex: us-central1-a
Step 2: Deploy the nginx service in the default VPC network.
gcloud compute instances create internal-nginx-service \
--project=$PROJECT_ID \
--zone=$ZONE \
--machine-type=e2-medium \
--network-interface=stack-type=IPV4_ONLY,subnet=default,no-address \
--metadata=startup-script='#!/bin/sh
sudo apt update
sudo apt install nginx -y' \
--tags=http-server \
--create-disk=auto-delete=yes,boot=yes,device-name=internal-nginx-service,image=projects/debian-cloud/global/images/debian-11-bullseye-v20230814,mode=rw,size=10,type=projects/$PROJECT_ID/zones/$ZONE/diskTypes/pd-balanced
Step 3: SSH into the instance and test the nginx service.
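Since the instance has no external IP, one way to reach it is SSH over IAP and then check nginx locally (this assumes a firewall rule allows IAP TCP forwarding to the instance):

```shell
# SSH over IAP because the VM has no external IP address
gcloud compute ssh internal-nginx-service \
  --project=$PROJECT_ID \
  --zone=$ZONE \
  --tunnel-through-iap

# From inside the VM, confirm nginx serves the default welcome page
curl -s http://localhost | head -5
```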
Setting up Cloud Run service
Step 4: Clone the repo https://github.com/ChimbuChinnadurai/direct-vpc-egress-for-cloud-run for a sample simple application that makes calls to the internal service deployed in the VM.
Step 5: Build the container image and push it to the GCP artifact registry. Refer to https://cloud.google.com/artifact-registry/docs/docker/pushing-and-pulling for artifact registry guidelines.
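A typical build-and-push flow with Cloud Build might look like the sketch below; the repository name cloud-run-demo and image name egress-test are placeholders, and the Artifact Registry repository must already exist:

```shell
# Placeholder image path; adjust the repository and tag to your setup
export IMAGE=$REGION-docker.pkg.dev/$PROJECT_ID/cloud-run-demo/egress-test:latest

# Build the image from the cloned repo and push it to Artifact Registry
gcloud builds submit --tag $IMAGE .
```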
Step 6: Deploy the Cloud Run service without any network connectivity to the VPC.
#Update the $IMAGE and $COMPUTE_ENGINE_INTERNAL_IP_ADDRESS
gcloud run deploy cloud-run-egress-test \
--image=$IMAGE \
--update-env-vars INTERNAL_ENDPOINT=$COMPUTE_ENGINE_INTERNAL_IP_ADDRESS \
--allow-unauthenticated \
--region=$REGION \
--project=$PROJECT_ID
Step 7: Perform a connectivity test; the request to the internal endpoint fails because there is no network connectivity between Cloud Run and the VPC.
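One way to run the connectivity test is to call the service URL and observe the response from the sample app (the URL lookup assumes the deploy succeeded):

```shell
# Look up the deployed service URL
URL=$(gcloud run services describe cloud-run-egress-test \
  --region=$REGION --project=$PROJECT_ID --format='value(status.url)')

# The sample app forwards the request to the internal endpoint;
# without VPC connectivity, this call is expected to fail or time out
curl -s $URL
```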
Step 8: Deploy the Cloud Run service with the new Direct VPC egress feature and route all internal IP requests to the VPC.
#Update the $IMAGE and $COMPUTE_ENGINE_INTERNAL_IP_ADDRESS
gcloud beta run deploy cloud-run-egress-test \
--image=$IMAGE \
--update-env-vars INTERNAL_ENDPOINT=$COMPUTE_ENGINE_INTERNAL_IP_ADDRESS \
--allow-unauthenticated \
--network=default \
--subnet=default \
--vpc-egress=private-ranges-only \
--region=$REGION \
--project=$PROJECT_ID
Step 9: Perform the connectivity test against the internal endpoint again; this time you should receive a successful response.
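Re-running the same request should now return the nginx welcome page served from the internal VM:

```shell
# With Direct VPC egress enabled, the request now reaches the internal VM
curl -s $(gcloud run services describe cloud-run-egress-test \
  --region=$REGION --project=$PROJECT_ID --format='value(status.url)')
```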
Conclusion
The new Direct VPC egress feature simplifies the way Cloud Run accesses services running inside a VPC, and some of the current limitations should be removed before the feature reaches GA. I hope this blog post has been helpful; reach out to me if you have any questions.