There are several ways to deploy workloads in Kubernetes. When you are just starting out, the first tool you meet is kubectl, the default tool for administering a Kubernetes cluster, and one of those administrative tasks is deploying applications on the cluster. Every workload is made up of different components: the Deployment, the Service, Secrets, ConfigMaps, and many more. All of these come together to make up a workload in Kubernetes, but when plain kubectl is used to deploy them, it does not effectively manage the life cycle of the application. The closest alternative to kubectl is Kustomize, which takes kubectl a step further: it lets you use overlays to manage the deployment of manifest files for different environments. While it does that well and has a few more features than kubectl, it still does not effectively manage the life cycle of a workload. The manifest files used with Kustomize and kubectl are largely the same, with only slight differences between them.
If you have been using kubectl or Kustomize for your deployment and you are trying to move to Helm, this tutorial will teach you how in a step-by-step process. If you are hearing about Helm for the first time, you can read our introductory article to Helm here.
Migrating between kubectl and Kustomize does not require much, since neither tool implements a templating engine or has a structured way to manage an application's life cycle. But why would you want to migrate to Helm in the first place?
Why Migrate to Helm?
There are several reasons to migrate your deployment workloads from kubectl and Kustomize to Helm:
1. Package Management
Helm is a full-fledged package manager (like apt or yum for Kubernetes), allowing you to package, share, and version application configurations as charts. Kustomize and kubectl lack this built-in distribution mechanism.
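As a quick sketch of what this looks like in practice (the chart name and registry URL here are placeholders, and pushing to an OCI registry requires Helm 3.8+):

```shell
# Package the chart directory into a versioned .tgz archive
helm package ./myapp

# Push it to an OCI registry (registry URL is a placeholder)
helm push myapp-0.1.0.tgz oci://registry.example.com/charts

# Anyone with access can then install that exact version
helm install myapp oci://registry.example.com/charts/myapp --version 0.1.0
```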
2. Templating and Reusability
Helm uses Go templates, which allow for dynamic value substitution, loops, conditionals, and reusable templates. Kustomize offers overlays but lacks the same flexibility and logic-driven rendering.
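As a small illustration, a Helm template can branch and loop where Kustomize can only patch. A sketch (the `debug` and `extraEnv` values keys are hypothetical):

```yaml
# templates/configmap.yaml -- rendered with `helm template`
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ .Release.Name }}-config
data:
  {{- if .Values.debug }}
  LOG_LEVEL: debug
  {{- end }}
  {{- range $key, $value := .Values.extraEnv }}
  {{ $key }}: {{ $value | quote }}
  {{- end }}
```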
3. Version Control of Deployments
Helm tracks deployment history and lets you roll back to a previous version with a single command (helm rollback). This kind of version tracking is not natively available with kubectl/Kustomize.
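In practice, inspecting and reverting a release looks like this (a sketch against a hypothetical release named myapp):

```shell
helm history myapp      # list revisions with status and chart version
helm rollback myapp 2   # revert to revision 2
helm rollback myapp     # or revert to the immediately previous revision
```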
4. Secrets and Values Management
Helm provides a central values.yaml for managing environment-specific settings and supports value injection and layering. Managing secrets and overrides in Kustomize can get cumbersome, especially across environments.
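A minimal sketch of environment layering (the file names are illustrative; later `-f` files override earlier ones, and `--set` overrides both):

```shell
helm upgrade --install myapp ./myapp \
  -f values.yaml \
  -f values-prod.yaml \
  --set image.tag=1.2.3
```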
5. Chart Ecosystem
With Helm, you gain access to a vast ecosystem of pre-built charts via ArtifactHub and other repositories, saving time when deploying popular tools like Prometheus, NGINX, or ArgoCD.
6. Dependency Management
Helm supports chart dependencies declared in Chart.yaml (requirements.yaml in Helm v2), enabling you to pull in and manage subcharts easily, a feature completely missing in kubectl and only manually achievable in Kustomize.
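In Helm 3 the dependency block lives in Chart.yaml (a sketch; the version range and repository URL are illustrative):

```yaml
# Chart.yaml
apiVersion: v2
name: myapp
version: 0.1.0
dependencies:
  - name: postgresql
    version: "12.x.x"
    repository: https://charts.bitnami.com/bitnami
    condition: postgresql.enabled
```

Running `helm dependency update` then pulls the subchart into the chart's charts/ directory.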
7. CLI and Lifecycle Management
Helm offers a rich CLI to install, upgrade, lint, diff, template, rollback, and uninstall applications. With kubectl/Kustomize, you often need to script and manually manage the deployment lifecycle.
8. Support for Advanced Deployment Patterns
Helm supports hooks, post-install scripts, and pre-upgrade checks, allowing you to integrate lifecycle workflows such as running migrations or sanity checks before and after a deployment.
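For example, a pre-upgrade migration Job can be declared with a single hook annotation (a sketch; the image and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: {{ .Release.Name }}-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myapp-migrations:latest  # placeholder image
          command: ["./migrate", "up"]
```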
9. Better CI/CD Integration
Helm integrates easily into CI/CD pipelines, especially with tools like ArgoCD, FluxCD, GitLab CI, and GitHub Actions. It supports dry runs, diffs, and better automation hooks compared to Kustomize workflows.
10. Community and Enterprise Support
Helm is a CNCF graduate and is widely adopted across enterprises. It benefits from robust community support, extensive documentation, and commercial tool integrations (e.g., Rancher, OpenShift, Azure Arc).
Now that you have 10 solid reasons to migrate to Helm, how do you migrate?
Migrating to Helm
Before we go into the practical steps, it is important to understand how Helm is able to manage workloads effectively from the inside. This understanding is essential for the migration and for fixing bugs when a deployment with Helm fails after the migration.
There are two major features to understand in Helm:
1. Helm uses labels and annotations to group the objects that make up a package
2. Helm uses Secrets to track the versions of a release
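You can observe both mechanisms directly with kubectl (a sketch against a hypothetical release named myapp in the default namespace):

```shell
# Release state: one Secret per revision, named sh.helm.release.v1.<name>.v<n>
kubectl get secrets -n default -l owner=helm

# Grouping: the label Helm stamps on every object it manages
kubectl get deployment myapp \
  -o jsonpath='{.metadata.labels.app\.kubernetes\.io/managed-by}'
```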
The screenshot above shows the release-tracking Secrets stored in the cluster, one for each new revision of the Helm deployment. With these, Helm can roll back to a previous release.
Armed with these two basic concepts of how Helm manages the life cycle of a workload, let’s see how we can migrate our current workload without downtime.
The Practical
Let’s say we have a simple app with two components, a Deployment and a Service.
deployment.yaml
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  selector:
    matchLabels:
      app: nginxdemo
  replicas: 3
  template:
    metadata:
      labels:
        app: nginxdemo
    spec:
      containers:
        - name: nginxdemo
          image: nginxdemos/hello
          ports:
            - containerPort: 80
```
service.yaml
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxdemo-svc
spec:
  selector:
    app: nginxdemo
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer
```
Both YAML manifests are missing the labels and annotations that Helm needs. To add them, we first create a Helm chart and move the components of these manifests into it.
Use the following command to create a Helm Chart
```shell
helm create myapp
```
You can learn more about creating a Helm Chart here.
When the chart is created, navigate to the templates folder and delete the components that are not needed (usually ingress.yaml, hpa.yaml, serviceaccount.yaml), leaving only deployment.yaml, service.yaml, and _helpers.tpl (the named templates in _helpers.tpl are referenced by the files you keep).
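After trimming, the chart should look roughly like this:

```
myapp/
├── Chart.yaml
├── values.yaml
└── templates/
    ├── _helpers.tpl
    ├── deployment.yaml
    └── service.yaml
```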
The next step is to move the contents of the original manifests into the chart's deployment.yaml and service.yaml files respectively. While making this change, take note of the one label and two annotations that Helm needs in order to take ownership of the workload once it is deployed to the Kubernetes cluster.
The next step is to label and annotate existing components of the workload in the Kubernetes cluster, to prepare it for the new management via a Helm chart.
```shell
kubectl label deployment myapp app.kubernetes.io/managed-by=Helm --overwrite
kubectl label service nginxdemo-svc app.kubernetes.io/managed-by=Helm --overwrite

kubectl annotate deployment myapp meta.helm.sh/release-name=myapp --overwrite
kubectl annotate service nginxdemo-svc meta.helm.sh/release-name=myapp --overwrite

kubectl annotate deployment myapp meta.helm.sh/release-namespace=default --overwrite
kubectl annotate service nginxdemo-svc meta.helm.sh/release-namespace=default --overwrite
```
The commands above add one label and two annotations to each resource. The label marks the workload as managed by Helm, the first annotation tells Helm the name of the release the resource belongs to, and the second annotation tells Helm which namespace that release lives in.
The next step is to change the resource names in the Helm chart templates to match the names of the existing resources running in the cluster. This should be done in both the deployment.yaml and service.yaml files. The screenshot below shows the folder structure.
This should be changed:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "demo.fullname" . }}
  labels:
    {{- include "demo.labels" . | nindent 4 }}
```
To this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
  labels:
    {{- include "demo.labels" . | nindent 4 }}
```
This same operation should be repeated for the service.
IMPORTANT: Take note of the selector configuration too, and make the necessary changes so that the Service still routes to the Deployment. Also remember that a Deployment's spec.selector is immutable, so the chart's deployment template must keep the existing matchLabels (app: nginxdemo in this example).
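For this example, that means the service template should keep selecting the pod labels the live Deployment already uses, rather than the chart's generated selector labels. A sketch of templates/service.yaml under that assumption:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxdemo-svc
spec:
  type: LoadBalancer
  selector:
    app: nginxdemo   # must match the existing pod template labels
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
```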
Next, deploy the new chart we created previously, using the values file of the Helm chart (called values.yaml by default). This is done with the helm upgrade --install command (ensure you are in the root directory of the Helm chart, or point the command at the chart folder from outside it):
```shell
helm upgrade --install myapp ./ -n default \
  --set image.repository=nginxdemos/hello \
  --set image.tag=latest \
  --set replicaCount=3
```
This would make Helm start managing your workload from this point on.
Errors? Yes
Note that you may run into issues during this migration, for example when you have multiple components and not all of them have been labeled and annotated correctly. The most common error when you run the Helm deploy command looks like this:
```
Error: Unable to continue with install: Deployment "myapp" in namespace "default" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "myapp"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "default"
```
The error above simply means that a Deployment called myapp is running in the default namespace but is not currently managed by Helm, because the label and annotations that hand ownership over to Helm have not been added to it.
Large Scale Migration
What if you have hundreds of workloads to migrate, making it impractical to label them by hand? You can use a bash script. Here is a sample that can be tweaked for your unique use case.
```shell
#!/bin/bash

# Get the namespace and name of every Deployment in all namespaces
deployments=$(kubectl get deployment --all-namespaces \
  -o jsonpath="{range .items[*]}{.metadata.namespace} {.metadata.name}{'\n'}{end}")

# Loop through each Deployment
while IFS= read -r line; do
  namespace=$(echo "$line" | awk '{print $1}')
  deploy_name=$(echo "$line" | awk '{print $2}')

  echo "Processing deployment: $deploy_name in namespace: $namespace"

  kubectl annotate deployment "$deploy_name" \
    -n "$namespace" \
    meta.helm.sh/release-name="$deploy_name" --overwrite

  kubectl annotate deployment "$deploy_name" \
    -n "$namespace" \
    meta.helm.sh/release-namespace="$namespace" --overwrite

  kubectl label deployment "$deploy_name" \
    -n "$namespace" \
    app.kubernetes.io/managed-by=Helm --overwrite
done <<< "$deployments"

echo "Annotation and labeling complete."
```
The sample code above is for deployments alone. It can be tweaked for any other Kubernetes resource.
Conclusion
Helm is a great tool that manages the life cycle of your Kubernetes applications more effectively than kubectl or Kustomize. This article explained how Helm does that with its unique features, why you should use it instead of kubectl or Kustomize, and how to migrate your existing kubectl or Kustomize workloads to Helm.