The latest Kubernetes release, v1.34, is packed with new features and enhancements across the project. According to the release team, this version's theme, "Of Wind & Will," was inspired by "the wind around us, and the will within us," hence the mascot.
The following is a list of feature updates and beta releases:
1. Dynamic Resource Allocation (DRA) Graduates to General Availability (GA)
This is the most significant feature of this release. Dynamic Resource Allocation (DRA) offers a powerful, flexible, and vendor-agnostic framework for requesting and sharing resources, such as GPUs, FPGAs, and other specialized hardware.
Why it’s important: Before DRA, device access was managed through the more rigid Device Plugin framework. DRA introduces ResourceClaims, allowing pods to request specific resource characteristics. This enables more complex scenarios, such as sharing devices, delayed allocation, and improved integration with custom schedulers. Its graduation to GA means it’s now considered stable and production-ready.
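As a minimal sketch of how this looks in practice, a ResourceClaim requests a device from a DeviceClass, and a pod then references that claim. The class name gpu.example.com is a hypothetical, vendor-provided DeviceClass; field names below follow the resource.k8s.io/v1 API as described for this release, so verify against the official API reference:

```yaml
apiVersion: resource.k8s.io/v1
kind: ResourceClaim
metadata:
  name: single-gpu
spec:
  devices:
    requests:
    - name: gpu
      exactly:
        deviceClassName: gpu.example.com   # hypothetical vendor DeviceClass
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  resourceClaims:
  - name: gpu
    resourceClaimName: single-gpu   # bind the claim above to this pod
  containers:
  - name: app
    image: nginx:1.27               # illustrative image
    resources:
      claims:
      - name: gpu                   # container consumes the pod's claim
```

Unlike the Device Plugin model, the claim is a first-class API object, so it can be allocated lazily and, depending on the driver, shared between pods.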
2. Mutating Admission Policy Reaches Beta
Following the success of Validating Admission Policies, Mutating Admission Policy is introduced as a beta feature. It allows cluster administrators to define policies that can modify incoming objects using Common Expression Language (CEL) expressions, directly within Kubernetes, without needing to run a separate admission webhook server.
Why it’s important: This significantly simplifies the process of enforcing standards in a cluster. You can now write a native Kubernetes policy to automatically add a specific label to all new pods or set a default security context, reducing operational overhead and improving security posture.
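As a sketch of the shape of such a policy (assuming the admissionregistration.k8s.io/v1beta1 API for this beta; check the official docs for exact field names), the following adds an environment label to every newly created pod using a CEL apply configuration:

```yaml
apiVersion: admissionregistration.k8s.io/v1beta1
kind: MutatingAdmissionPolicy
metadata:
  name: add-environment-label
spec:
  matchConstraints:
    resourceRules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["pods"]
  failurePolicy: Fail
  reinvocationPolicy: Never
  mutations:
  - patchType: ApplyConfiguration
    applyConfiguration:
      # CEL expression producing a server-side-apply style patch
      expression: >
        Object{
          metadata: Object.metadata{
            labels: {"environment": "production"}
          }
        }
```

Note that, as with validating policies, the policy only takes effect once it is bound to a scope via a corresponding MutatingAdmissionPolicyBinding object.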
3. Urgent Breaking Changes in Monitoring Metrics
The release notes highlight an urgent upgrade note regarding significant changes to Prometheus metrics labels. Several key metrics across the API server and etcd have had their labels changed for better consistency. For example, the resource_prefix label has been replaced with more specific group and resource labels in several apiserver_cache_* metrics.
Why it’s important: This is a breaking change that will impact virtually anyone monitoring a Kubernetes cluster. Dashboards and alerting rules that rely on the old labels will need to be updated to use the new format to continue working correctly after the upgrade.
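Illustratively, a PromQL query that filtered on the old label would need to be rewritten along these lines (the metric name here is one example from the apiserver_cache_* family; verify the exact metric and label names against the v1.34 release notes before updating your dashboards):

```promql
# Before: filtering on the old resource_prefix label
sum(rate(apiserver_cache_list_total{resource_prefix="pods"}[5m]))

# After: filtering on the new group and resource labels
sum(rate(apiserver_cache_list_total{group="", resource="pods"}[5m]))
```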
4. Job Pod Replacement Policy Graduates to GA
The Job Pod Replacement Policy feature is now stable. It allows users to control when the Job controller creates replacement pods for pods that are terminating but not yet fully gone (e.g., stuck on a finalizer or in a long shutdown period). The podReplacementPolicy field can be set to TerminatingOrFailed (the default, which replaces pods as soon as they start terminating) or Failed (which waits until a pod reaches its terminal Failed phase before creating a replacement).
Why it’s important: For batch workloads, this provides crucial control. It prevents the Job controller from prematurely creating a replacement pod, ensuring that at most one pod is running per index, which is vital for jobs that are not idempotent.
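A minimal Job manifest using the stricter policy might look like this (the image and command are purely illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: non-idempotent-job
spec:
  completions: 1
  parallelism: 1
  podReplacementPolicy: Failed   # replace a pod only once it is fully Failed
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36      # illustrative
        command: ["sh", "-c", "echo processing; sleep 30"]
```

With podReplacementPolicy: Failed, a pod that is slowly terminating will never overlap with its replacement, which is exactly the guarantee non-idempotent batch work needs.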
5. Pod-Level Resources Graduates to Beta
The Pod-Level Resources feature is now beta and enabled by default. This allows you to specify CPU and memory resource requests and limits for an entire pod in the pod.spec.resources field, in addition to the traditional container-level definitions.
Why it’s important: This simplifies resource management for pods with multiple containers, especially those with sidecars. Instead of manually tuning resources for each container, you can define a total resource envelope for the pod, which also makes Horizontal Pod Autoscalers (HPAs) easier to configure.
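A sketch of a pod using a pod-level resource envelope alongside per-container definitions (images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-level-resources
spec:
  resources:               # pod-level envelope shared by all containers
    requests:
      cpu: "1"
      memory: 512Mi
    limits:
      cpu: "2"
      memory: 1Gi
  containers:
  - name: app
    image: nginx:1.27
  - name: log-sidecar      # no per-container tuning needed
    image: busybox:1.36
    command: ["sh", "-c", "tail -f /dev/null"]
```

The containers draw from the shared pod-level budget, so adding or removing a sidecar no longer forces you to re-tune every container's individual requests.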
6. Stateful Pod Reliability Fix for Volume Mount Failures
A major operational pain point has been addressed for stateful workloads. Kubelet can now detect when a pod’s volume mount fails because of an attachment limit on a node (e.g., “too many EBS volumes”). Instead of the pod getting stuck in the ContainerCreating state indefinitely, the Kubelet will now mark the pod as Failed.
Why it’s important: This allows the pod’s controller (like a StatefulSet) to react to the failure and reschedule the pod on a different, viable node. This dramatically improves the resilience and self-healing capabilities of stateful applications.
7. Core Windows Networking Features Go GA
Two key networking features for Windows nodes, WinDSR (Direct Server Return) and WinOverlay, have graduated to General Availability. This brings Windows networking capabilities much closer to parity with Linux.
Why it’s important: WinOverlay provides overlay networking support for Windows nodes, while WinDSR improves network performance for services. Their stabilization makes running production-grade, networked applications on Windows nodes in a Kubernetes cluster a fully supported and more robust experience.
8. VolumeAttributesClass Graduates to GA
This storage feature is now stable, allowing cluster administrators to define different classes of storage with specific parameters that can be dynamically provisioned.
Why it’s important: A StorageClass defines the provisioner and basic parameters, but a VolumeAttributesClass lets you specify more detailed attributes like IOPS or throughput tiers for a volume. This allows users to request PVCs with specific performance characteristics in a standardized way.
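As a sketch (the driver name and parameter keys are driver-specific examples, here assuming the AWS EBS CSI driver; your CSI driver must support volume modification):

```yaml
apiVersion: storage.k8s.io/v1
kind: VolumeAttributesClass
metadata:
  name: fast-io
driverName: ebs.csi.aws.com    # CSI driver that implements ModifyVolume
parameters:                    # keys are defined by the driver, not Kubernetes
  iops: "16000"
  throughput: "600"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 100Gi
  volumeAttributesClassName: fast-io   # request the performance tier above
```

Because the class is referenced by name on the PVC, users can also switch an existing volume to a different tier by updating volumeAttributesClassName, without re-provisioning.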
9. New kyaml Output Format for kubectl
A new output format, kyaml, has been added to kubectl. It’s a stricter, more standardized subset of YAML that is designed to be both human-readable and easy for machines to parse reliably.
Why it’s important: The standard -o yaml output can have formatting inconsistencies that make it difficult to use in scripts. The -o json format is machine-friendly but hard for humans to read. kyaml provides a perfect middle ground, improving the experience for anyone automating tasks with kubectl.
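Illustratively, KYAML output always double-quotes strings and uses explicit braces and brackets, so it parses unambiguously while remaining valid YAML (a sketch of the format for a hypothetical ConfigMap; in v1.34 the flag may still need to be explicitly enabled, so check the kubectl documentation):

```yaml
# $ kubectl get configmap example -o kyaml
{
  apiVersion: "v1",
  kind: "ConfigMap",
  metadata: {
    name: "example",
    namespace: "default",
  },
  data: {
    key: "value",
  },
}
```

Because every string is quoted and every collection is explicitly delimited, KYAML avoids classic YAML pitfalls such as the "Norway problem" (no being parsed as a boolean) and whitespace-sensitive nesting errors in templated manifests.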
10. Scheduler’s Handling of nominatedNodeName Changes
The responsibility for managing the .status.nominatedNodeName field on a Pod has shifted. Previously, the scheduler would clear this field. Now, the scheduler no longer clears it, and external components (like Cluster Autoscaler or Karpenter) are expected to manage it, with the API server clearing the field only after the pod is successfully bound to a node.
Why it’s important: This is a subtle but architecturally significant change that clarifies the division of responsibilities between the core scheduler and external scheduling components. It creates a more predictable and robust ecosystem for cluster autoscaling tools.
See the official Kubernetes v1.34 release announcement for the full list of features and changes.