Canary deployment using Ingress NGINX Controller
In the world of software development and continuous delivery, ensuring a smooth rollout of new features or updates is crucial. Canary deployment is a technique that allows you to gradually release new versions to a subset of users or servers before making them available to everyone.
In this blog post, we will explore how to implement canary deployments using the Ingress NGINX Controller in a Kubernetes cluster.
Prerequisites
- A working Kubernetes cluster
- Helm installed
The sample files used for this setup are available in the GitHub repo: https://github.com/ChimbuChinnadurai/canary-deployment-using-ingress-nginx-controller
Setting up the Ingress NGINX Controller
Before deploying canary releases, we need the Ingress NGINX Controller configured in our Kubernetes cluster. For this setup, I use a GKE cluster and Helm to install the controller.
Add the ingress-nginx repo to Helm.
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
Run the below command to install the chart with default values.
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace --wait --debug
The nginx admission webhook is served over port 8443, so ensure any firewall rule between the control plane and the nodes allows requests over port 8443. In GKE, the auto-generated firewall rules only allow communication over ports 443 and 10250, so you need to create a new firewall rule to allow traffic over port 8443.
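If you manage firewall rules with gcloud, a rule along these lines should work; the rule name and the network, source-range, and target-tag placeholders below are only examples and must match your cluster.
# Allow the GKE control plane to reach the nginx admission webhook on the nodes.
gcloud compute firewall-rules create allow-master-to-nginx-webhook \
  --network <cluster-network> \
  --source-ranges <control-plane-cidr> \
  --target-tags <node-target-tag> \
  --allow tcp:8443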
Verify the nginx deployment and ensure an external load balancer is created.
$ kubectl get all -n ingress-nginx
NAME READY STATUS RESTARTS AGE
pod/ingress-nginx-controller-5964797467-mp292 1/1 Running 0 79s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ingress-nginx-controller LoadBalancer 10.94.57.225 35.193.3.55 80:30974/TCP,443:30124/TCP 4m53s
service/ingress-nginx-controller-admission ClusterIP 10.94.63.163 <none> 443/TCP 4m53s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ingress-nginx-controller 1/1 1 1 4m53s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ingress-nginx-controller-5964797467 1 1 1 4m53s
$
Test the nginx external endpoint.
$ curl -I 35.193.3.55
HTTP/1.1 404 Not Found
Date: Fri, 02 Jun 2023 08:09:54 GMT
Content-Type: text/html
Content-Length: 146
Connection: keep-alive
$
Deploy the sample application.
Use the below manifest to deploy the first version of the sample application.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: canary-test-v1
  name: canary-test-v1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: canary-test-v1
  template:
    metadata:
      labels:
        app: canary-test-v1
    spec:
      containers:
        - image: simbu1290/nginx-canary-test:v1
          name: nginx-canary-test
          ports:
            - name: http
              containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: canary-test-v1
  name: canary-test-v1
spec:
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: canary-test-v1
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-test-v1
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  ingressClassName: nginx
  rules:
    - host: canary-app.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: canary-test-v1
                port:
                  number: 5000
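Apply the manifest with kubectl; the file name canary-app-v1.yaml is only an example.
kubectl apply -f canary-app-v1.yaml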
Verify the deployment status
$ kubectl get pods,svc,ingress
NAME READY STATUS RESTARTS AGE
pod/canary-test-v1-8bc67b6-4nfbm 1/1 Running 0 1m
pod/canary-test-v1-8bc67b6-ccv76 1/1 Running 0 1m
pod/canary-test-v1-8bc67b6-vqg9f 1/1 Running 0 1m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/canary-test-v1 ClusterIP 10.94.56.97 <none> 5000/TCP 1m
service/kubernetes ClusterIP 10.94.48.1 <none> 443/TCP 2d3h
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress.networking.k8s.io/canary-test-v1 nginx canary-app.com 35.193.3.55 80 1m
$
Verify Traffic Is Flowing
Next, test the endpoint with sample requests to verify that traffic is flowing as expected. Run the following command, updating the nginx load balancer address for your environment.
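A minimal sketch of such a test, assuming the sample application returns a version-identifying response body:
# Replace 35.193.3.55 with your nginx load balancer address.
for i in $(seq 1 10); do curl -s -H "Host: canary-app.com" http://35.193.3.55/; echo; done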
Deploy the sample application v2
Use the below manifest to deploy the second version of the application.
The annotation nginx.ingress.kubernetes.io/canary: "true" is required on the ingress resource; otherwise, you will receive the below error:
Error from server (BadRequest): error when creating "canary-app-v2.yaml": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request: host "canary-app.com" and path "/" is already defined in ingress default/canary-test-v1
You can also add the annotation nginx.ingress.kubernetes.io/canary-weight: <canary-percentage> later, after the new deployment's pods are verified.
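For example, with kubectl, assuming the canary ingress below is already applied in the default namespace:
kubectl annotate ingress canary-test-v2 nginx.ingress.kubernetes.io/canary-weight="30" --overwrite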
You can implement canary traffic splitting based on weight, a request header, or a cookie value. Refer to the ingress-nginx documentation for all the available options.
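For illustration, a header-based split could use annotations like the following on the canary ingress instead of the weight-based approach used below; the header name X-Canary is only an example.
nginx.ingress.kubernetes.io/canary: "true"
# Requests with "X-Canary: always" always hit the canary; "never" skips it.
nginx.ingress.kubernetes.io/canary-by-header: "X-Canary"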
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: canary-test-v2
  name: canary-test-v2
spec:
  replicas: 3
  selector:
    matchLabels:
      app: canary-test-v2
  template:
    metadata:
      labels:
        app: canary-test-v2
    spec:
      containers:
        - image: simbu1290/nginx-canary-test:v2
          name: nginx-canary-test
          ports:
            - name: http
              containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: canary-test-v2
  name: canary-test-v2
spec:
  ports:
    - port: 5000
      protocol: TCP
      targetPort: 5000
  selector:
    app: canary-test-v2
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-test-v2
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
    # Mark this ingress as a canary and send 30% of traffic for canary-app.com to it.
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "30"
spec:
  ingressClassName: nginx
  rules:
    - host: canary-app.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: canary-test-v2
                port:
                  number: 5000
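Apply the manifest with kubectl, using the file name referenced in the error message above.
kubectl apply -f canary-app-v2.yaml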
Verify Traffic distribution between the versions
Test the endpoint with sample requests and verify that traffic is split between both versions according to the canary weight.
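For example, the following loop sends 100 requests and counts the responses per version, assuming v1 and v2 return different single-line bodies; with canary-weight: "30" you should see roughly a 70/30 split.
for i in $(seq 1 100); do curl -s -H "Host: canary-app.com" http://35.193.3.55/; echo; done | sort | uniq -c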
Based on the test results above, it is evident that nginx is distributing traffic between the two versions according to the canary-weight value. To fully transition to the new version of the application, you can point the canary-test-v1 ingress at the latest version and then remove the canary-test-v2 ingress.
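One way to do this, sketched with the ingress and service names used in this post, is to patch the primary ingress to use the v2 service and then delete the canary ingress:
# Point the primary ingress at the canary-test-v2 service.
kubectl patch ingress canary-test-v1 --type=json \
  -p='[{"op":"replace","path":"/spec/rules/0/http/paths/0/backend/service/name","value":"canary-test-v2"}]'
# Remove the canary ingress so all traffic flows through the primary rule.
kubectl delete ingress canary-test-v2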
Conclusion
In general, utilizing canary deployments allows organizations to introduce changes in a controlled and gradual manner while minimizing risk, gathering feedback, and maintaining the overall stability and quality of their software systems. The Ingress NGINX Controller also streamlines the canary deployment process without requiring a service mesh in the Kubernetes cluster.