Kubernetes 1.35 Released, codenamed Timbernetes

Kubernetes v1.35 represents a significant advancement in workload stability, security, and operational efficiency. This release delivers 15 enhancements that graduate to General Availability, expands Kubernetes’ native security model, and introduces meaningful improvements to scheduling, autoscaling, and observability—while also signaling important ecosystem transitions that administrators must prepare for.

Whether you operate large multi-tenant clusters, run AI/ML workloads, or manage stateful production systems, Kubernetes v1.35 provides powerful new capabilities to run workloads more safely, efficiently, and predictably.

Let’s explore the most impactful updates.

🌟 Major Highlights

Stable: In-Place Update of Pod Resources

One of the most anticipated features is now Generally Available.

You can now adjust CPU and memory requests and limits on running Pods without restarting them. Previously, resource changes required Pod recreation—disruptive for stateful, batch, or long-running workloads.

Why this matters

  • Enables true vertical autoscaling
  • Eliminates downtime for stateful and batch jobs
  • Simplifies resource tuning and development workflows
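
For example, a Pod can declare per-resource resize behavior with resizePolicy, and new values are then applied through the Pod's resize subresource (recent kubectl versions expose this as kubectl patch --subresource resize). The manifest below is a minimal sketch; the image and names are placeholders.

  apiVersion: v1
  kind: Pod
  metadata:
    name: resize-demo                      # illustrative name
  spec:
    containers:
    - name: app
      image: registry.example.com/app:1.0  # placeholder image
      resources:
        requests:
          cpu: "500m"
          memory: "256Mi"
        limits:
          cpu: "1"
          memory: "512Mi"
      resizePolicy:
      - resourceName: cpu
        restartPolicy: NotRequired         # CPU changes apply without a restart
      - resourceName: memory
        restartPolicy: RestartContainer    # memory changes restart only this container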

Beta: Native Pod Certificates for Workload Identity

Kubernetes v1.35 introduces native, automated Pod-level certificates—dramatically simplifying workload identity and zero-trust architectures.

The kubelet now:

  • Generates private keys
  • Requests certificates via PodCertificateRequest
  • Writes credential bundles directly into Pod filesystems
  • Handles rotation automatically

Why this matters

  • Eliminates dependency on external systems like cert-manager or SPIFFE
  • Enables pure mTLS without bearer tokens
  • Enforces strict node isolation at admission time
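
Workloads consume these credentials through a projected volume with a podCertificate source. The manifest below is an illustrative sketch of this still-maturing API: the signer name is cluster-specific and the exact field set may differ in your version, so verify it against the API reference before relying on it.

  apiVersion: v1
  kind: Pod
  metadata:
    name: mtls-client                            # illustrative name
  spec:
    serviceAccountName: default
    containers:
    - name: app
      image: registry.example.com/app:1.0        # placeholder image
      volumeMounts:
      - name: workload-identity
        mountPath: /var/run/identity
        readOnly: true
    volumes:
    - name: workload-identity
      projected:
        sources:
        - podCertificate:
            signerName: example.com/workload-ca  # assumed signer, cluster-specific
            keyType: ED25519
            credentialBundlePath: credbundle.pem # key + certificate chain in one file
            maxExpirationSeconds: 3600           # request short-lived certificates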

Alpha: Nodes Declare Supported Features Before Scheduling

Version skew between the control plane and nodes can cause Pods to land on nodes that do not support the features those Pods rely on.

With this new alpha capability, nodes can now publish supported Kubernetes features via:

.status.declaredFeatures

Schedulers, admission controllers, and extensions can then:

  • Prevent scheduling onto incompatible nodes
  • Enforce API-level feature compatibility
  • Reduce runtime failures caused by skew

🚀 Features Graduating to Stable

PreferSameNode Traffic Distribution

Service traffic routing gains clearer semantics:

  • PreferSameNode: prefers endpoints on the same node as the client, falling back to other endpoints only when none are local
  • PreferSameZone (renamed from PreferClose): zone-level affinity
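
A Service opts in through the trafficDistribution field; the example below is a minimal sketch with placeholder names.

  apiVersion: v1
  kind: Service
  metadata:
    name: cache                           # illustrative name
  spec:
    selector:
      app: cache
    ports:
    - port: 6379
    trafficDistribution: PreferSameNode   # or PreferSameZone for zone-level affinity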

Job API managedBy Field

Jobs can now cleanly delegate lifecycle management to external controllers.

This is especially important for MultiKueue and multi-cluster job dispatching, where:

  • Built-in Job controllers must not interfere
  • External systems synchronize execution and status
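
Delegation is expressed on the Job itself: managedBy must be set at creation time and names the controller that owns the Job's lifecycle. The sketch below uses MultiKueue's controller name; everything else is a placeholder.

  apiVersion: batch/v1
  kind: Job
  metadata:
    name: training-job                             # illustrative name
  spec:
    managedBy: kueue.x-k8s.io/multikueue           # built-in Job controller leaves this Job alone
    template:
      spec:
        restartPolicy: Never
        containers:
        - name: worker
          image: registry.example.com/trainer:1.0  # placeholder image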

Reliable Pod Updates with .metadata.generation

Pods now behave like other Kubernetes APIs:

  • Every spec change increments .metadata.generation
  • Kubelet reports .status.observedGeneration
  • Conditions track per-generation processing

Why this matters

  • Reliable detection of when Pod changes are applied
  • Essential for in-place vertical scaling and automation
  • Eliminates ambiguity in Pod state tracking
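
In practice, controllers and scripts can wait until the two counters match before treating a change (such as an in-place resize) as applied. The excerpt below is illustrative.

  # Excerpt of a Pod object after the kubelet has processed the latest spec change
  metadata:
    generation: 3            # incremented on every spec change
  status:
    observedGeneration: 3    # reported by the kubelet once that generation is handled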

Configurable NUMA Node Limits

Kubernetes can now fully utilize modern high-end servers with more than 8 NUMA nodes.

Administrators can configure:

max-allowable-numa-nodes
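
The limit is a Topology Manager policy option in the kubelet configuration; a minimal sketch, with 12 as an arbitrary example value:

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  topologyManagerPolicy: single-numa-node
  topologyManagerPolicyOptions:
    max-allowable-numa-nodes: "12"   # raise the historical cap of 8 NUMA nodes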

🔐 Security & Identity Improvements

User Namespaces for Pods (Beta)

Pods can now run with isolated UID/GID mappings:

  • Containers can run as root inside the pod
  • Those UIDs/GIDs are mapped to unprivileged users on the host
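
Opting in is a single field on the Pod spec; sketch below with placeholder names.

  apiVersion: v1
  kind: Pod
  metadata:
    name: userns-demo                        # illustrative name
  spec:
    hostUsers: false                         # give the pod its own user namespace
    containers:
    - name: app
      image: registry.example.com/app:1.0    # placeholder image
      securityContext:
        runAsUser: 0                         # root inside the pod, unprivileged on the host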

Enforced Credential Verification for Cached Images

Kubelet now verifies image pull credentials even for cached images.

This closes a serious multi-tenant security gap where unauthorized pods could reuse private images already present on a node (KEP-2535, SIG Node).
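
The behavior is controlled by a kubelet configuration knob introduced with KEP-2535. The snippet below is a sketch; the available policy values and defaults depend on your kubelet version, so treat them as assumptions.

  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  imagePullCredentialsVerificationPolicy: AlwaysVerify   # re-check pull credentials even for images already on the node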


Secure CSI ServiceAccount Tokens

CSI drivers can now opt into receiving ServiceAccount tokens via secure secret fields rather than volume_context, preventing credential leakage in logs.


⚙️ Scheduling, Autoscaling & Performance

Opportunistic Pod Scheduling Batching

The scheduler can now batch pods that share a Pod Scheduling Signature (i.e., equivalent scheduling requirements), reusing filtering and scoring results across the batch instead of recomputing them for every pod.


Configurable HPA Tolerance (Beta)

Autoscaling sensitivity is now configurable per workload.

Operators can fine-tune tolerance (e.g., 5% instead of 10%) to:

  • Improve responsiveness for critical services
  • Reduce unnecessary scaling oscillations
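
Tolerance is set per scaling direction under the HPA's behavior section. The sketch below assumes the field shape introduced by KEP-4951; the target workload and numbers are placeholders.

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: api-hpa                  # illustrative name
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: api
    minReplicas: 3
    maxReplicas: 30
    metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    behavior:
      scaleUp:
        tolerance: "0.05"          # react to deviations above 5% instead of the global 10% default
      scaleDown:
        tolerance: "0.1"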

Gang Scheduling (Alpha)

Native support for all-or-nothing scheduling of pod groups via:

  • Workload API
  • PodGroup

This is a major milestone for AI/ML training and HPC workloads, preventing partial scheduling deadlocks (KEP-4671, SIG Scheduling).


🧰 Developer & Operator Experience

KYAML (Beta, Enabled by Default)

KYAML provides a safer, less ambiguous YAML subset for Kubernetes:

  • Avoids type coercion bugs
  • Easier to read and reason about
  • Fully compatible with existing YAML tooling
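
kubectl can emit the format (for example via -o kyaml in recent releases), and KYAML documents are accepted anywhere plain YAML is. The snippet below approximates the style (explicit braces, double-quoted strings, trailing commas); treat it as an illustration rather than a canonical rendering.

  {
    apiVersion: "v1",
    kind: "ConfigMap",
    metadata: {
      name: "demo-config",       # illustrative name
    },
    data: {
      logLevel: "debug",         # explicit quoting avoids type-coercion surprises
    },
  }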

Structured /flagz and /statusz Endpoints

Kubernetes components now expose machine-readable JSON via:

  • /flagz
  • /statusz

⚠️ Deprecations & Important Notices

Ingress NGINX Retirement

Ingress NGINX enters best-effort maintenance until March 2026, after which it will be archived.

Recommended migration path: Gateway API
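
For teams planning the move, a simple Ingress rule typically becomes an HTTPRoute attached to a Gateway. The sketch below assumes a Gateway named example-gateway already exists; hostnames and service names are placeholders.

  apiVersion: gateway.networking.k8s.io/v1
  kind: HTTPRoute
  metadata:
    name: web-route                 # illustrative name
  spec:
    parentRefs:
    - name: example-gateway         # assumed pre-existing Gateway
    hostnames:
    - "app.example.com"
    rules:
    - matches:
      - path:
          type: PathPrefix
          value: /
      backendRefs:
      - name: web                   # placeholder backend Service
        port: 8080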


Removal of cgroup v1 Support

Kubernetes v1.35 removes cgroup v1 support entirely.

Action required

  • Upgrade nodes to Linux distributions with cgroup v2
  • kubelet will fail to start on unsupported systems

Final Release Supporting containerd v1.X

v1.35 is the last Kubernetes release supporting containerd 1.x.

Operators must migrate to containerd 2.0+ before upgrading further.


✅ Final Thoughts

Kubernetes v1.35 is a release focused on maturity and trust:

  • Less disruption through in-place updates
  • Stronger security primitives built into the platform
  • Smarter scheduling and autoscaling for modern workloads
  • Clear signals for ecosystem transitions ahead

As always, the Kubernetes community encourages users to test new features, provide feedback, and prepare for upcoming deprecations.

Kubernetes continues to evolve—not just by adding features, but by making the platform safer, clearer, and more reliable at scale. See the official release notes.

