Cert-manager for GKE Multi Cluster Ingress

Chimbu Chinnadurai
8 min read · Jun 23, 2023


Multi Cluster Ingress is a Google-hosted controller for Google Kubernetes Engine (GKE) clusters that lets you deploy shared load-balancing resources across clusters and regions.

Cert-manager is an open-source project that provides automated certificate management for Kubernetes. It can issue certificates from various sources, including Let's Encrypt, HashiCorp Vault, Venafi, and private PKI. It will ensure certificates are valid and up-to-date and attempt to renew certificates at a configured time before expiry.

This blog post will show you how to set up Multi cluster ingress in GKE and integrate cert-manager to automate the certificate management process.

Prerequisites

  • gcloud CLI
  • kubectl
  • Helm3

Set up Multi Cluster Ingress

In this section, we will create two GKE Autopilot clusters in different regions and enable Multi Cluster Ingress to route traffic across them.

Set up the environment variables.

export PROJECT_ID="chimbuc-playground"
export GKE_CLUSTER1_NAME="gke-us"
export GKE_CLUSTER2_NAME="gke-eu"
export GKE_CLUSTER1_REGION="us-central1"
export GKE_CLUSTER2_REGION="europe-west1"

Enable the required APIs.

Ensure the Anthos API (anthos.googleapis.com) is disabled so that Multi Cluster Ingress resources are billed using standalone pricing. Do not disable the Anthos API if other Anthos components are actively used in your project.

gcloud services enable \
container.googleapis.com \
multiclusteringress.googleapis.com \
gkehub.googleapis.com \
multiclusterservicediscovery.googleapis.com \
--project=$PROJECT_ID

Deploy the GKE clusters.

#First cluster 
gcloud container clusters create-auto $GKE_CLUSTER1_NAME \
--region=$GKE_CLUSTER1_REGION \
--release-channel=stable \
--project=$PROJECT_ID

#Second cluster
gcloud container clusters create-auto $GKE_CLUSTER2_NAME \
--region=$GKE_CLUSTER2_REGION \
--release-channel=stable \
--project=$PROJECT_ID

Register clusters to a fleet.

Fleets are a Google Cloud concept for logically organizing clusters and other resources, letting you use and manage multi-cluster capabilities and apply consistent policies across your systems. Refer to How fleets work for more details.

gcloud container fleet memberships register $GKE_CLUSTER1_NAME \
--gke-cluster $GKE_CLUSTER1_REGION/$GKE_CLUSTER1_NAME \
--enable-workload-identity \
--project=$PROJECT_ID

gcloud container fleet memberships register $GKE_CLUSTER2_NAME \
--gke-cluster $GKE_CLUSTER2_REGION/$GKE_CLUSTER2_NAME \
--enable-workload-identity \
--project=$PROJECT_ID

Confirm that clusters are successfully registered to the fleet.

gcloud container fleet memberships list --project=$PROJECT_ID

The Multi Cluster Ingress controller is deployed to one GKE cluster registered to the fleet, called the config cluster. The config cluster is the central control point for Ingress across the member clusters.

Enable Multi Cluster Ingress and select gke-us as the config cluster:

gcloud container fleet ingress enable \
--config-membership=$GKE_CLUSTER1_NAME \
--location=$GKE_CLUSTER1_REGION \
--project=$PROJECT_ID

Verify the feature state and ensure it is Ready to use.
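
You can check this with the fleet ingress describe command. A quick sketch; the output should show the feature state for the config membership:

gcloud container fleet ingress describe --project=$PROJECT_ID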

Deploy the sample application and MCI across clusters

In the previous steps, we created multiple GKE clusters, registered them to a fleet, and enabled the Multi Cluster Ingress feature.

We will deploy a sample application across the clusters and route the traffic between the clusters using Multi cluster ingress. We will use the sample application provided by Google.

Connect to the GKE clusters and deploy the sample application using the below template.
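
To run kubectl against each cluster, fetch credentials for both first (a minimal sketch using gcloud; the variables were set earlier):

#Fetch kubectl credentials for both clusters
gcloud container clusters get-credentials $GKE_CLUSTER1_NAME \
--region=$GKE_CLUSTER1_REGION \
--project=$PROJECT_ID

gcloud container clusters get-credentials $GKE_CLUSTER2_NAME \
--region=$GKE_CLUSTER2_REGION \
--project=$PROJECT_ID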

cat <<EOF | kubectl apply -f -
---
apiVersion: v1
kind: Namespace
metadata:
  name: whereami
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: whereami-deployment
  namespace: whereami
  labels:
    app: whereami
spec:
  selector:
    matchLabels:
      app: whereami
  template:
    metadata:
      labels:
        app: whereami
    spec:
      containers:
      - name: frontend
        image: us-docker.pkg.dev/google-samples/containers/gke/whereami:v1.2.20
        ports:
        - containerPort: 8080
EOF

Verify the pod status in both clusters.
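
A quick check, run against each cluster context in turn (names as deployed above):

kubectl get pods -n whereami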

Now we need to deploy the MultiClusterIngress and MultiClusterService resources in the config cluster. These are the multi-cluster equivalents of Ingress and Service resources.

Deploy the below MultiClusterService resource to the config cluster.

cat <<EOF | kubectl apply -f -
---
apiVersion: networking.gke.io/v1
kind: MultiClusterService
metadata:
  name: whereami-mcs
  namespace: whereami
spec:
  template:
    spec:
      selector:
        app: whereami
      ports:
      - name: web
        protocol: TCP
        port: 8080
        targetPort: 8080
EOF

Verify the MultiClusterService resource status in the config cluster.

The MultiClusterService creates a headless Service in every cluster that matches selector labels. You can see the headless service in both clusters since the workload is deployed to both clusters.
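
One way to confirm this from the command line (a sketch; run the first command in the config cluster and the second in each member cluster):

#MultiClusterService status in the config cluster
kubectl get multiclusterservice -n whereami whereami-mcs

#Derived headless Service in each member cluster
kubectl get svc -n whereami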

Create a static IP for the MCI load balancer, which is required for HTTPS support.

gcloud compute addresses create mci-demo --global --project=$PROJECT_ID
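
You can look up the reserved address afterwards to plug it into the MultiClusterIngress annotation below (a small sketch):

gcloud compute addresses describe mci-demo --global \
--format="value(address)" \
--project=$PROJECT_ID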

Now deploy the MultiClusterIngress resource in the config cluster.

cat <<EOF | kubectl apply -f -
---
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: mci-demo
  namespace: whereami
  annotations:
    networking.gke.io/static-ip: 34.36.254.70 #reserved static IP from the above step
spec:
  template:
    spec:
      backend:
        serviceName: whereami-mcs
        servicePort: 8080
EOF

The controller will take a few minutes to create the required load balancer resources. Verify the MCI status and ensure the load balancer components are successfully deployed.

kubectl describe mci -n whereami mci-demo

Test the application via the MCI load balancer; you should see requests served from both clusters. MCI routes each request to a cluster based on the client's location.
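
A quick test from the command line (using the static IP reserved earlier; the whereami app reports details such as the serving zone in its JSON response):

curl http://34.36.254.70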

Set up cert-manager and integrate it with MCI

The cert-manager integration with MCI works only with the DNS01 challenge; HTTP01 is not supported because the controller runs in a Google-managed environment. Refer to the cert-manager documentation for the Cloud DNS integration.

cert-manager will be installed in the config cluster, where we configured the MCS, MCI, and other multi-cluster resources.

For this blog, I am using the public zone chimbuc.dns.doit-playground.com in GCP Cloud DNS and have created a record set for the domain mci-demo.chimbuc.dns.doit-playground.com.
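
If you still need to create the record, something like the following works (a sketch; mci-demo-zone is a placeholder for your Cloud DNS managed zone name, and the IP is the static address reserved earlier):

gcloud dns record-sets create mci-demo.chimbuc.dns.doit-playground.com. \
--zone=mci-demo-zone \
--type=A \
--ttl=300 \
--rrdatas=34.36.254.70 \
--project=$PROJECT_ID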

Set up workload identity for cert-manager.

#Create a GCP service account
gcloud iam service-accounts create dns01-solver \
--display-name "dns01-solver" \
--project=$PROJECT_ID

#DNS Admin is used for demo purposes
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member serviceAccount:dns01-solver@$PROJECT_ID.iam.gserviceaccount.com \
--role roles/dns.admin \
--project=$PROJECT_ID

#Link the cert-manager Kubernetes service account to the GCP service account
gcloud iam service-accounts add-iam-policy-binding \
--role roles/iam.workloadIdentityUser \
--member "serviceAccount:$PROJECT_ID.svc.id.goog[cert-manager/cert-manager]" \
dns01-solver@$PROJECT_ID.iam.gserviceaccount.com \
--project=$PROJECT_ID

Deploy cert-manager using Helm.

#Add the helm repository and update the local helm repository cache
helm repo add jetstack https://charts.jetstack.io
helm repo update

#Install cert-manager
helm upgrade --install cert-manager jetstack/cert-manager \
--namespace cert-manager \
--set global.leaderElection.namespace=cert-manager \
--set installCRDs=true \
--create-namespace

Verify the cert-manager deployment status and ensure all the pods are running without errors.

kubectl get pods -n cert-manager

After deploying cert-manager, add the Workload Identity annotation to the cert-manager service account.

kubectl annotate serviceaccount --namespace=cert-manager cert-manager \
"iam.gke.io/gcp-service-account=dns01-solver@$PROJECT_ID.iam.gserviceaccount.com"

#Restart the cert-manager pod
kubectl rollout restart -n cert-manager deployment cert-manager

Deploy the below template to create a ClusterIssuer that uses Cloud DNS and Let's Encrypt (update the email).

cat <<EOF | kubectl apply -f -
---
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-clusterissuer
spec:
  acme:
    email: <<EMAIL>>
    privateKeySecretRef:
      name: letsencrypt
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - dns01:
        cloudDNS:
          project: $PROJECT_ID
EOF

Deploy the below template to create a Certificate for the required DNS name (update the DNS name value).

cat <<EOF | kubectl apply -f -
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: mcs-demo
  namespace: whereami
spec:
  dnsNames:
  - <<DNS Name >>
  issuerRef:
    group: cert-manager.io
    kind: ClusterIssuer
    name: letsencrypt-clusterissuer
  secretName: mcs-demo
EOF

Cert-manager may take a few minutes to issue the certificate. Monitor the certificate status and the cert-manager logs for any errors. If all goes well, you will see that the certificate is successfully issued by Let's Encrypt and the Kubernetes secret is updated.
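
A few commands that help while waiting (a sketch; names match the resources created above):

#Certificate status
kubectl get certificate -n whereami mcs-demo
kubectl describe certificate -n whereami mcs-demo

#cert-manager controller logs
kubectl logs -n cert-manager deployment/cert-manager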

Deploy the below template to use the issued TLS certificate in Multi cluster ingress.

cat <<EOF | kubectl apply -f -
---
apiVersion: networking.gke.io/v1
kind: MultiClusterIngress
metadata:
  name: mci-demo
  namespace: whereami
  annotations:
    networking.gke.io/static-ip: 34.36.254.70 #reserved static IP from the above step
spec:
  template:
    spec:
      backend:
        serviceName: whereami-mcs
        servicePort: 8080
      tls:
      - secretName: mcs-demo #secret created by cert-manager
EOF

The certificate is successfully linked to the MCI load balancer.

Test the HTTPS endpoint, and you should see the Let's Encrypt certificate.
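
For example (replace the host with the DNS name you configured; -v prints the server certificate details during the TLS handshake):

curl -v https://mci-demo.chimbuc.dns.doit-playground.com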

Conclusion

By following the steps outlined in this blog, you can use cert-manager to issue certificates for GKE Multi Cluster Ingress, just as you would for standard Kubernetes Ingress, and strengthen the security of your GCP environment.

Resources

The official GCP documentation referenced in this blog post.
