Kubernetes is having a transformative impact on modern infrastructure, much like what Linux did years ago when it revolutionized enterprise computing. Back then, Linux empowered developers to break free from the constraints of proprietary systems like Microsoft Windows and macOS, enabling customized and flexible operating systems. Today, Kubernetes is doing something similar for container orchestration, making it easier to manage containers at scale while enhancing application performance through optimal configuration—something I discussed in detail in a previous article.
But once you’ve defined the right Kubernetes configurations, what’s the most efficient way to deploy them without falling into the trap of repeating code? This is where the DRY (Don’t Repeat Yourself) principle comes into play—a foundational concept in Infrastructure as Code (IaC). Tools like Terragrunt implement DRY effectively using features like reusable modules, shared settings, and helper functions. So, can we apply the same DRY philosophy to Kubernetes deployments? Let’s explore how DRY principles can optimize Kubernetes application management.
DRY in Kubernetes Deployment
In most cases, Kubernetes deployments involve a lot of repetition: sets of YAML files that describe very similar workloads. For example, say three services need to be deployed: a user service, a catalog service, and a payment service. A DevOps engineer typically decides whether to use Helm, Kustomize, or good old YAML with kubectl. Whichever option is chosen, the manifests for these three services end up being replicated. Whether a single repository or multiple repositories are used, a near-identical copy of the group of files needed for the deployment is created for each service. For a basic deployment, these files are a Service, a Deployment, an Ingress, and a Secret/ExternalSecret, which is a total of four files: service.yaml, deployment.yaml, ingress.yaml, and externalsecrets.yaml. These files will be created in three places, making twelve files in total. The question is: what is wrong with this approach? Can it be better? Can we ensure we do not repeat ourselves?
The Problem
The challenge with this approach, which is typically the default, is that you are essentially repeating yourself. When you examine each of the four files mentioned above, the YAML scripts inside them are almost identical, with only minor differences. This raises the question: why do we need twelve files to deploy three services when four files could be used for all three deployments?
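To picture the duplication, a repository laid out this way might look something like the following (a hypothetical structure; folder and service names will vary):

deployments/
├── user-service/
│   ├── service.yaml
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── externalsecrets.yaml
├── catalog-service/
│   ├── service.yaml
│   ├── deployment.yaml
│   ├── ingress.yaml
│   └── externalsecrets.yaml
└── payment-service/
    ├── service.yaml
    ├── deployment.yaml
    ├── ingress.yaml
    └── externalsecrets.yaml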
Configurations across the repositories require periodic manual maintenance to keep them aligned. Any change made to the deployment.yaml file for one service must be manually repeated for the other services to maintain consistency. This process quickly becomes cumbersome and error-prone, especially when managing updates for 200 services.
Another challenge with this approach is that adding a new Kubernetes manifest becomes cumbersome as the number of services grows. Say you need to add a new feature like a PodDisruptionBudget to the three services mentioned previously. That means creating a pdb.yaml file three times for the three services, in different folders or repositories (depending on your structure).
What is a Better Option
A smarter and more efficient way to avoid repetitive Kubernetes configurations is by using a templating engine, and the most popular one for Kubernetes manifests is Helm. According to its official site, Helm is “the best way to find, share, and use software built for Kubernetes.” Think of Helm as the package manager for Kubernetes: it helps bundle your YAML files into reusable packages and manages the entire lifecycle of application deployments—from installation to upgrades, and even rollbacks.
What sets Helm apart is its templating capability. With Helm templates, you don’t have to write the same YAML files over and over again. Instead, you create a reusable boilerplate chart and deploy multiple services from it, changing only the necessary parameters. Typically, the most frequently updated value is the image tag in the deployment.yaml
file, which needs to reflect the latest version of your Docker image.
But Helm is capable of much more than just swapping image tags. It comes with a rich set of built-in functions for string manipulation, math operations, logic and flow control, time and date formatting, and even API version checks for Kubernetes compatibility. These features make Helm a powerful tool for building flexible, DRY-compliant Kubernetes deployments.
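As a quick illustration, the snippet below sketches a few of these built-in functions inside a template. It reuses the chart's service_name and replicas values, but the snippet itself is only a demonstration, not part of the Mono Chart:

# String manipulation: lowercase the service name and cap it at 63 characters
name: {{ .Values.service_name | lower | trunc 63 }}
# Defaults and math: double the baseline replica count for a burst scenario
replicas: {{ mul (.Values.replicas | default 1) 2 }}
# Flow control and API version checks: only render when autoscaling/v2 exists in the cluster
{{- if .Capabilities.APIVersions.Has "autoscaling/v2" }}
# ...HorizontalPodAutoscaler manifest goes here...
{{- end }}
# Date formatting: stamp the release time into an annotation
deployed-at: {{ now | date "2006-01-02T15:04:05Z07:00" | quote }}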
In the next section, I’ll walk you through a simple but effective example of how to use these features to build what I like to call the “Ultimate Helm Chart”—also known as a Mono Chart—that can manage multiple services with ease.
The Ultimate Helm Chart
The following chart is a sample of what a Mono Chart looks like. It contains the standard features that a production-grade Kubernetes deployment should have at a bare minimum, and the parameters that commonly change across application services are parameterized for each of the Kubernetes objects used: Deployment, Ingress, PodDisruptionBudget, HorizontalPodAutoscaler, Service, and ExternalSecret. A few of the objects and the changes made to them are described below; the full chart is published here: https://github.com/MyCloudSeries/mono-chart.
Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: "{{ .Values.service_name }}-{{ .Values.environment }}"
spec:
  selector:
    matchLabels:
      app: "{{ .Values.service_name }}-{{ .Values.environment }}"
  replicas: {{ .Values.replicas | default 1 }}
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 0
  template:
    metadata:
      labels:
        app: "{{ .Values.service_name }}-{{ .Values.environment }}"
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: "{{ .Values.service_name }}-{{ .Values.environment }}"
              topologyKey: "kubernetes.io/hostname"
      terminationGracePeriodSeconds: 70
      containers:
        - name: "{{ .Values.service_name }}-{{ .Values.environment }}"
          image: "{{ .Values.imageuri }}/{{ .Values.service_name }}-{{ .Values.environment }}:{{ .Values.image_tag }}"
          securityContext:
            allowPrivilegeEscalation: false
          resources:
            requests:
              memory: {{ .Values.memory_limit | default "200Mi" }}
              cpu: {{ .Values.cpu_limit | default "50m" }}
            limits:
              memory: {{ .Values.memory_limit | default "200Mi" }}
          ports:
            - name: app-port
              containerPort: {{ .Values.container_port | default 80 }}
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 50"]
          livenessProbe:
            httpGet:
              port: app-port
              path: {{ .Values.health_check_path }}
            periodSeconds: 60
            initialDelaySeconds: 10
          readinessProbe:
            httpGet:
              port: app-port
              path: {{ .Values.health_check_path }}
            periodSeconds: 60
            initialDelaySeconds: 10
          envFrom:
            - secretRef:
                name: "{{ $.Values.service_name }}-secret-{{ $.Values.environment }}"
        {{- if .Values.cron }}
        - name: "{{ .Values.service_name }}-{{ .Values.environment }}-cron"
          image: "{{ .Values.imageuri }}/{{ .Values.service_name }}-{{ .Values.environment }}:{{ .Values.image_tag }}"
command: ["node", "app/cron.py"]
          securityContext:
            allowPrivilegeEscalation: false
          resources:
            requests:
              memory: {{ .Values.cron_memory_limit | default "200Mi" }}
              cpu: {{ .Values.cron_cpu_limit | default "50m" }}
            limits:
              memory: {{ .Values.cron_memory_limit | default "200Mi" }}
          lifecycle:
            preStop:
              exec:
                command: ["/bin/sh", "-c", "sleep 50"]
          envFrom:
            - secretRef:
                name: "{{ $.Values.service_name }}-secret-{{ $.Values.environment }}"
        {{- end }}
Ingress
Please note that the Ingress used here is an ALB Ingress, which requires the AWS Load Balancer Controller to be deployed in your cluster before this Ingress configuration can work.
{{ if .Values.enable_ingress }}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "{{ .Values.service_name }}-ing-{{ .Values.environment }}"
  annotations:
    alb.ingress.kubernetes.io/group.name: "{{ .Values.environment }}-ing"
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/actions.ssl-redirect: '{"Type": "redirect", "RedirectConfig": { "Protocol": "HTTPS", "Port": "443", "StatusCode": "HTTP_301"}}'
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/healthcheck-path: /
spec:
  ingressClassName: alb
  rules:
    - host: {{ .Values.app_url }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: "{{ .Values.service_name }}-svc-{{ .Values.environment }}"
                port:
                  number: 80
{{ end }}
ExternalSecret
The External Secrets Operator (ESO) must be deployed in your cluster, and a ClusterSecretStore must be configured, before the ExternalSecret below can function correctly.
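For reference, a minimal ClusterSecretStore matching the name referenced below might look like the following. This is only a sketch, assuming AWS Secrets Manager as the backend and a hypothetical IRSA-enabled service account; it is a prerequisite you deploy separately, not part of the Mono Chart:

apiVersion: external-secrets.io/v1beta1
kind: ClusterSecretStore
metadata:
  name: cluster-secret-store-engine
spec:
  provider:
    aws:
      service: SecretsManager        # pull secrets from AWS Secrets Manager
      region: us-east-1              # assumed region; adjust to your environment
      auth:
        jwt:
          serviceAccountRef:
            name: external-secrets-sa       # hypothetical service account with IAM access
            namespace: external-secrets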
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: "{{ .Values.service_name }}-secret-{{ .Values.environment }}"
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: cluster-secret-store-engine
    kind: ClusterSecretStore
  target:
    name: "{{ .Values.service_name }}-secret-{{ .Values.environment }}"
  dataFrom:
    - extract:
        key: "{{ $.Values.environment }}/{{ $.Values.service_name }}/creds"
In the sample templates above, most of the static values have been parameterized, making the Helm chart flexible enough to handle different scenarios. One thing to note is the customization added to the Deployment: it supports a cron/sidecar option that deploys multiple containers in a single pod, enabled by setting cron: true. This is a simple customization; the Mono Chart can be extended for many other scenarios and use cases. The repository for the complete Helm chart is here: https://github.com/MyCloudSeries/mono-chart. Next, let us look at using the Mono Chart.
Using the Mono Chart
The Mono Chart is used like a normal Helm chart; the difference is that everything is compacted into a single chart, just like a monorepo. To use the chart for a deployment, Helm offers two options: a values.yaml file, or passing parameters through the Helm CLI. The following is an example of the Mono Chart values.yaml file.
service_name: myapp
environment: dev
image_tag: latest
replicas: 1
memory_limit: 400Mi
cpu_limit: 20m
cron_memory_limit: 400Mi
cron_cpu_limit: 20m
enable_ingress: true
cron: false
app_url: demo.example.com
health_check_path: /health
imageuri: nimboya/myapp
container_port: 80
The other option is to use the Helm CLI to pass parameters. The following is a sample command:
helm install userservice MyCloudSeries/monochart \
  --set service_name=userservice \
  --set environment=dev \
  --set image_tag=latest \
  --set replicas=1 \
  --set memory_limit=400Mi \
  --set cpu_limit=20m \
  --set cron_memory_limit=400Mi \
  --set cron_cpu_limit=20m \
  --set enable_ingress=true \
  --set cron=true \
  --set app_url=demo.example.com \
  --set health_check_path=/health \
  --set imageuri=nimboya/app \
  --set container_port=80
The command above not only reduces the redundancy of manifests across multiple services, it also avoids maintaining separate manifests for multiple environments (dev, staging, and prod), since the environment you need to deploy to can be set either in values.yaml or via the Helm CLI.
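For example, promoting the same chart to another environment only requires different values. A hypothetical values-staging.yaml (the file name and values are illustrative, not part of the published chart) might contain only the overrides needed for that environment:

# values-staging.yaml - overrides applied on top of the chart defaults
service_name: myapp
environment: staging
image_tag: v1.4.2          # example tag; use your released image version
replicas: 2
enable_ingress: true
cron: false
app_url: staging.example.com
health_check_path: /health
imageuri: nimboya/myapp

It can then be deployed with helm upgrade --install myapp MyCloudSeries/monochart -f values-staging.yaml.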
One last deployment technique is with ArgoCD. The following is an ArgoCD application deploying the Mono Chart.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: userservice
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/MyCloudSeries/mono-chart
    targetRevision: HEAD
    path: .   # path to the chart within the Git repository; assumes the chart sits at the repo root
    helm:
      parameters:
        - name: service_name
          value: userservice
        - name: environment
          value: dev
        - name: image_tag
          value: latest
        - name: replicas
          value: "1"
        - name: memory_limit
          value: 400Mi
        - name: cpu_limit
          value: 20m
        - name: cron_memory_limit
          value: 400Mi
        - name: cron_cpu_limit
          value: 20m
        - name: enable_ingress
          value: "true"
        - name: cron
          value: "true"
        - name: app_url
          value: demo.example.com
        - name: health_check_path
          value: /health
        - name: imageuri
          value: nimboya/app
        - name: container_port
          value: "80"
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
Alternatively, the ArgoCD Helm config can be pointed to a values.yaml
file.
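A minimal sketch of that alternative, assuming a per-environment values file such as values-staging.yaml is committed alongside the chart, would replace the parameters block with:

    helm:
      valueFiles:
        - values-staging.yaml   # resolved relative to the chart path in the repository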
Finally, let us outline the advantages of a Mono Chart.
Benefits of using a Mono Chart
- Manages all deployments from a single source of truth.
- Establishes a golden path for all Kubernetes deployments.
- Accelerates the adoption of platform engineering.
- Centralizes Kubernetes application security so it can be managed efficiently.
- Eliminates the YAML script of death.
Conclusion
Deployments should aim to reduce repetition by encouraging code reuse, simplifying updates, and speeding up application delivery. This streamlines the software development lifecycle, increases developer productivity, and helps deliver high-quality software faster and more efficiently.