
K8s: Restart Pods on ConfigMap Changes When Using Helm Charts

·1 min·
Kubernetes Helm Gist
Massimiliano Donini

A small annoyance of using ConfigMaps in Kubernetes together with Helm charts is that a change to a ConfigMap does not trigger a pod restart: the running pods keep using the old configuration until they are recreated. This is a well-known issue, and there are even dedicated tools that aim to automate the restart.

Another, perhaps less well-known, solution is to include a checksum annotation in the deployment's pod template, as suggested in the Helm documentation under "Automatically Roll Deployments" in the chart development tips. Because the annotation value is a hash of the rendered ConfigMap template, any change to the ConfigMap changes the pod template, which causes Kubernetes to roll out new pods.

This is the example code:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      annotations:
        checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.25
        envFrom:
        - configMapRef:
            name: nginx-configmap
            optional: false
        volumeMounts:
        # without a volumeMount the volume below is declared but never used;
        # the mount path shown here is illustrative
        - name: nginx-config
          mountPath: /etc/nginx/conf.d
          readOnly: true
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-configmap
          optional: false
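
For completeness, the checksum annotation above hashes the rendered templates/configmap.yaml from the same chart. A minimal sketch of what that template could contain (the key and value here are illustrative, not from the original post):

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configmap
data:
  # any change to this data alters the sha256sum in the deployment's
  # pod template annotation, which triggers a rolling restart
  LOG_LEVEL: info

With this in place, running helm upgrade after editing the ConfigMap re-renders the template, the sha256sum in the annotation changes, and Kubernetes rolls the Deployment's pods automatically.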

I hope you find this useful!