Kubernetes v1.35 Explained: Complete Guide to All New Features, Enhancements & API Changes (2026)

Last Updated: March 2026

I have always wanted to simplify release-notes documentation by creating a separate article that explains every feature in a new Kubernetes release. This post stacks all of the v1.35 features in one place, so you don't have to work through the entire Kubernetes documentation to read about what's new; you get a quick reference to all of it from this single article. So let's start.

Kubernetes v1.35 “Timbernetes — The World Tree Release” dropped on December 17, 2025, and it’s one of the most feature-rich releases in recent memory. With exactly 60 enhancements — 17 Stable (GA), 19 Beta, and 22 Alpha — this release touches every layer of the platform: scheduling, storage, security, networking, node management, and AI/ML workloads.

Why Kubernetes v1.35 Matters

Every four months the Kubernetes community ships a new release. Not every release lands equally. v1.35 is different because it closes the loop on several multi-year efforts: in-place pod resizing (alpha since v1.27) finally reaches GA, gang scheduling gets first-class API support for AI workloads, cgroup v1 is removed entirely, and IPVS mode in kube-proxy is formally deprecated. For platform teams managing mixed fleets — AKS, RKE2 on vSphere, on-premises clusters — this release is less an upgrade and more a modernisation milestone.

Here’s every single enhancement, organised by maturity stage, with what changed, why it matters, and what you should do about it.


Stable (GA) Features — 17 Enhancements

These features are fully production-ready. No feature gates required unless noted.

1. In-Place Pod Resource Updates (KEP-1287)

SIG Node | After years in alpha and beta, you can now change a running pod’s CPU and memory requests/limits without recreating it. The kubelet updates cgroup settings in place; containers keep running. CPU resize always works without restart; memory resize requires containerd 2.0+ with cgroups v2. Use the new /resize subresource and audit RBAC to grant the create verb.
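
A minimal sketch of an in-place resize, assuming kubectl 1.32+ with the `--subresource resize` flag; the pod and container names are illustrative:

```shell
# Resize a running pod's CPU without restarting it. The caller needs
# RBAC on the pods/resize subresource (per the notes above, audit for
# the create verb; kubectl patch also uses patch semantics).
kubectl patch pod web-0 --subresource resize --type merge -p '
spec:
  containers:
  - name: app
    resources:
      requests: {cpu: "750m"}
      limits:   {cpu: "1500m"}
'

# Inspect pod conditions to see whether the resize was applied
# or marked infeasible by the runtime.
kubectl get pod web-0 -o jsonpath='{.status.conditions}'
```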

2. PreferSameNode Traffic Distribution (KEP-3015)

SIG Network | The trafficDistribution field on Services gains a PreferSameNode value, routing traffic to node-local endpoints first. PreferClose is renamed to PreferSameZone for clarity. Reduces cross-node traffic and latency for co-located workloads with zero application changes.
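
A sketch of a Service that prefers node-local endpoints (names and ports are illustrative); traffic falls back to other nodes when no local endpoint is healthy:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: metrics-agent
spec:
  selector:
    app: metrics-agent
  ports:
  - port: 9100
    targetPort: 9100
  trafficDistribution: PreferSameNode   # route to same-node endpoints first
EOF
```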

3. Configurable NUMA Node Limit for Topology Manager (KEP-4622)

SIG Node | Removes the hard-coded 8 NUMA node ceiling via the max-allowable-numa-nodes policy option. Enables Topology Manager on large multi-socket servers used for HPC, AI/ML, and telecom, where NUMA-aware CPU/memory/device placement is critical for performance.

4. CPUManager strict-cpu-reservation Policy Option (KEP-4540)

SIG Node | The strict-cpu-reservation option for the static CPUManager policy is now GA. Previously, BestEffort and Burstable pods could bleed onto reservedSystemCPUs. Now ALL pods are blocked from reserved CPUs, giving system daemons guaranteed isolation regardless of pod QoS class.

5. Kubelet Parallel Image Pull Limit (KEP-3673)

SIG Node | The maxParallelImagePulls kubelet config field is now GA. You can cap simultaneous image pulls to avoid saturating node network/disk during large-scale pod restarts or node boot. Default remains serial when serializeImagePulls: true; set it to false plus a numeric cap for controlled concurrency.

6. Kubelet Image Garbage Collection by Age (KEP-4210)

SIG Node | imageMaximumGCAge is now stable. Set a duration (e.g. 24h) and the kubelet proactively removes images unused longer than that threshold, independent of disk pressure. Combats node image cache bloat in long-running clusters without waiting for a disk-full event.
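
A KubeletConfiguration fragment showing age-based image GC; the drop-in path and durations are illustrative, and the drop-in directory must be enabled via the kubelet's config-dir support:

```shell
cat <<'EOF' > /etc/kubernetes/kubelet.conf.d/90-image-gc.conf
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
imageMaximumGCAge: 24h   # remove images unused for more than 24 hours
imageMinimumGCAge: 2m    # never remove images younger than this
EOF
```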

7. Job managedBy Field (KEP-4368)

SIG Apps | Setting spec.managedBy on a Job tells the built-in Job controller to stand down entirely. External systems like MultiKueue or custom multi-cluster schedulers can then own pod creation and status reconciliation without fighting the native controller. Foundation for federated batch workloads.

8. Pod Generation / observedGeneration Tracking (KEP-5067)

SIG Node | Pods now have metadata.generation that increments on every mutable spec update, and status.observedGeneration that the kubelet updates when it acts on the change. Controllers can reliably detect whether an in-place resize or other update has been applied without polling or timing hacks.

9. Structured Authentication Config (KEP-3331)

SIG Auth | Declare JWT authenticators, audiences, and claim mappings in a single YAML file via --authentication-config. No more stacking API server flags. GitOps-native, auditable, and supports multiple OIDC providers simultaneously. Replaces the old --oidc-* flag family.
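
A minimal structured authentication config as a sketch; issuer URL, audiences, and claim names are placeholders, and the exact `apiVersion` may differ by release:

```shell
cat <<'EOF' > /etc/kubernetes/auth-config.yaml
apiVersion: apiserver.config.k8s.io/v1
kind: AuthenticationConfiguration
jwt:
- issuer:
    url: https://issuer.example.com
    audiences: ["kubernetes"]
  claimMappings:
    username:
      claim: email
      prefix: "oidc:"
    groups:
      claim: groups
      prefix: "oidc:"
EOF
# Then point the API server at the file:
#   kube-apiserver ... --authentication-config=/etc/kubernetes/auth-config.yaml
```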

10. Fine-Grained SupplementalGroups Control (KEP-3619)

SIG Node | Adds a supplementalGroupsPolicy field (Merge or Strict) to pod security context. Strict enforces that only explicitly requested groups are used, preventing container images from injecting extra group IDs from their /etc/group. Important for multi-tenant isolation and compliance.
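
A sketch of Strict group policy in a pod security context (names and IDs illustrative; the container runtime must support the feature):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: strict-groups-demo
spec:
  securityContext:
    runAsUser: 1000
    supplementalGroups: [4000]
    supplementalGroupsPolicy: Strict   # ignore extra groups from the image's /etc/group
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "infinity"]
EOF
```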

11. Kubelet Config Drop-In Directory (KEP-3983)

SIG Node | The kubelet now reads a drop-in config directory (similar to systemd’s .conf.d pattern). Node bootstrapping tools, Cluster API, and RKE2 can inject partial kubelet configs without overwriting the base config file. Cleaner config management and easier operator-specific extensions.

12. Watch-Based Route Reconciliation in Cloud Controller Manager (KEP-2836)

SIG Cloud Provider | The CCM’s route controller switches from periodic polling to watch-based reconciliation. Routes for nodes are created and cleaned up immediately on node events rather than on a fixed interval. Faster route convergence and reduced API server load in large cloud-hosted clusters.

13. Job Success Policy (KEP-3998)

SIG Apps | Define successPolicy on Jobs — declare success as “N out of M pods succeeded” rather than “all pods completed”. Ideal for ML training jobs, scientific computing, and parallel batch workloads where partial completion is a valid outcome and eliminating failed-pod retry loops saves compute cost.
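
A sketch of "8 out of 10 is success"; note that successPolicy requires an Indexed Job, and the workload here is a placeholder:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: training-run
spec:
  completions: 10
  parallelism: 10
  completionMode: Indexed      # successPolicy requires Indexed completion mode
  successPolicy:
    rules:
    - succeededCount: 8        # declare success once any 8 of 10 indexes finish
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox:1.36
        command: ["sh", "-c", "echo shard $JOB_COMPLETION_INDEX; sleep 5"]
EOF
```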

14. Node Log Query (KEP-2258)

SIG Node | The NodeLogQuery feature gate is on by default. Query kubelet, containerd, kernel, and journal logs directly from the API server via kubectl get --raw /api/v1/nodes/{node}/proxy/logs/ — no SSH required. Essential for air-gapped and jump-host-less environments.
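
Example queries against the node log endpoint (node name illustrative); the caller needs RBAC `get` on `nodes/proxy`:

```shell
# Fetch kubelet logs from a node through the API server -- no SSH.
kubectl get --raw "/api/v1/nodes/worker-1/proxy/logs/?query=kubelet"

# On Linux nodes, query another journald unit and restrict by time.
kubectl get --raw \
  "/api/v1/nodes/worker-1/proxy/logs/?query=containerd&sinceTime=2026-03-01T00:00:00Z"
```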

15. Swap Support for Linux (KEP-2400)

SIG Node | Linux node swap support graduates to stable. Kubelet can now manage swap space for Burstable QoS pods on Linux nodes with cgroups v2, enabling more graceful memory pressure handling for batch and dev/test workloads rather than immediate OOM kills.

16. Recursive Read-Only Volume Mounts (KEP-3857)

SIG Node / SIG Storage | Setting recursiveReadOnly: Enabled on a volume mount propagates read-only semantics to all bind-mounts beneath it. Previously, nested mounts could be writable even with a read-only parent. Closes a meaningful container escape surface for workloads with complex volume hierarchies.

17. Pod Scheduling Readiness / Scheduling Gates (KEP-3521)

SIG Scheduling | schedulingGates allows pods to be held in a “not-yet-ready-to-schedule” state until external conditions are satisfied — quota checks, node warm-up, external provisioning. Eliminates thundering herd on large deployments. Remove gates via field patch when conditions are met.
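
A sketch of a gated pod; the gate name is an illustrative external-controller convention:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: gated-worker
spec:
  schedulingGates:
  - name: example.com/quota-check   # held until this gate is removed
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "infinity"]
EOF

# The pod reports SchedulingGated until a controller clears the gate:
kubectl patch pod gated-worker --type json \
  -p '[{"op": "replace", "path": "/spec/schedulingGates", "value": []}]'
```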


Beta Features — 19 Enhancements

Beta features are enabled by default in v1.35 unless noted otherwise. They are production-testable but not yet fully graduated.

18. Configurable HPA Tolerance (KEP-4951)

SIG Autoscaling | Define autoscaling tolerance per HPA and per direction (scale-up vs scale-down) via the behavior field. Previously a fixed cluster-wide 10% tolerance applied to all HPAs. Now critical services can react to 2% CPU spikes while less sensitive workloads use a 20% buffer, all without touching global controller flags.
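
A sketch of per-direction tolerance; the field shape follows KEP-4951, so verify it against your cluster's autoscaling/v2 schema before relying on it (names and thresholds are illustrative):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: checkout-api
spec:
  scaleTargetRef: {apiVersion: apps/v1, kind: Deployment, name: checkout-api}
  minReplicas: 3
  maxReplicas: 30
  metrics:
  - type: Resource
    resource:
      name: cpu
      target: {type: Utilization, averageUtilization: 70}
  behavior:
    scaleUp:
      tolerance: "0.02"   # react to 2% deviations when scaling up
    scaleDown:
      tolerance: "0.20"   # require 20% slack before scaling down
EOF
```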

19. Mutable CSI Volume Attach Limits (KEP-4876)

SIG Storage | CSI drivers can now dynamically update the allocatable.count in CSINode at runtime. Previously static, this meant the scheduler could over-provision volumes onto nodes that had already exhausted their attachment slots. Now drivers report real-time capacity, and Kubernetes also auto-adjusts on attachment failures.

20. Opportunistic Scheduler Batching (KEP-5598)

SIG Scheduling | The scheduler now computes a “pod scheduling signature” and caches results for identical pods arriving back-to-back. For Jobs with 1,000+ identical workers, this eliminates redundant filter/score computation. Transparent — no config needed. Most impactful for ML training jobs and large parallel batch workloads.

21. maxUnavailable for StatefulSet Rolling Updates (KEP-961)

SIG Apps | StatefulSets can now specify maxUnavailable in their rolling update strategy, allowing parallel pod updates when combined with podManagementPolicy: Parallel. Previously, StatefulSets updated strictly one pod at a time. A large StatefulSet can now roll faster while still respecting your availability budget.
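
A sketch combining `maxUnavailable` with parallel pod management (names and image are illustrative):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cache
spec:
  serviceName: cache
  replicas: 9
  podManagementPolicy: Parallel   # required for parallel rollout
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 3           # update up to 3 pods at a time
  selector:
    matchLabels: {app: cache}
  template:
    metadata:
      labels: {app: cache}
    spec:
      containers:
      - name: cache
        image: redis:7
EOF
```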

22. Deployment Status: terminatingReplicas Field (KEP-3973)

SIG Apps | status.terminatingReplicas is now visible in Deployment status. Previously, pods being torn down were invisible to the status API, making it hard to know if a rollout had truly settled. Now you can gate CI/CD pipelines on terminatingReplicas == 0 for truly clean deployments.

23. Node Topology Labels via Downward API (KEP-4742)

SIG Node | The kubelet now propagates topology.kubernetes.io/zone and topology.kubernetes.io/region labels to pods via the Downward API. Applications can become topology-aware using environment variables without needing API server access or extra RBAC permissions — a major least-privilege improvement for multi-zone deployments.

24. Pod Certificates for Workload mTLS (KEP-4317)

SIG Auth | Pods can now receive short-lived X.509 certificates bound to their service account identity, enabling workload-to-workload mTLS without a service mesh. Certificates are automatically rotated. This feature is Beta and enabled by default — a foundational step for zero-trust Kubernetes networking.

25. Image Pull Credential Verification for Cached Images (KEP-2535)

SIG Node | Kubelet now verifies that a pod has valid pull credentials before allowing it to use an image that’s already cached on the node. Previously, a pod without pull secrets could silently use an image cached by another tenant. Controlled via imagePullCredentialsVerificationPolicy. Critical for multi-tenant clusters.

26. CSI Driver Service Account Token Integration (KEP-3953)

SIG Storage | CSI drivers can now receive pod-bound service account tokens directly through a secrets field, replacing the need for driver-specific credential projection mechanisms. Improves interoperability and consistency for storage drivers that use Kubernetes identity for cloud credential vending.

27. OCI Image Volumes Enabled by Default (KEP-4639)

SIG Node / SIG Storage | Pods can now mount OCI artifacts and container images directly as read-only volumes. Decouple ML models from application code: store your model as an OCI artifact, mount it at runtime, update independently of the app image. Enabled by default, continuing its path toward full graduation.
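
A sketch of mounting an OCI artifact (say, a model) as a read-only volume; the artifact reference is a placeholder:

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: inference
spec:
  containers:
  - name: server
    image: busybox:1.36
    command: ["sh", "-c", "ls /models && sleep infinity"]
    volumeMounts:
    - name: model
      mountPath: /models           # artifact contents appear here, read-only
  volumes:
  - name: model
    image:
      reference: registry.example.com/models/llm:v3   # placeholder reference
      pullPolicy: IfNotPresent
EOF
```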

28. User Namespaces: On-by-Default Beta (KEP-127)

SIG Node | Linux user namespaces let a process run as root inside a container while mapping to an unprivileged UID on the host. Now on-by-default in beta. Significantly reduces the blast radius of container breakouts and has already mitigated several high-severity CVEs. Requires kernel 6.3+ and containerd 2.0+.

29. PSI (Pressure Stall Information) Metrics (KEP-4870)

SIG Node | Kubernetes now exposes Linux PSI metrics (CPU, memory, IO pressure stall) via the metrics API. PSI gives you direct visibility into when workloads are stalling waiting for resources — far more actionable than utilisation metrics alone. Useful for tuning VPA recommendations and diagnosing noisy-neighbour issues.

30. Workload-Aware Scheduling (Workload API) (KEP-4671 / KEP-5616)

SIG Scheduling | A new Workload core type groups pods as a single scheduling entity with PodGroups and minCount semantics. The scheduler can now handle all-or-nothing gang placement natively. Supports Job, StatefulSet, JobSet, LeaderWorkerSet, MPIJob, and TrainJob. Essential for distributed ML training on Kubernetes.

31. Cgroup v1 Removal Path (KEP-5573)

SIG Node | Removal of cgroup v1 support lands as a Beta enhancement, making cgroup v2 mandatory: nodes still running cgroup v1 will fail kubelet startup. Complete removal from the codebase is expected around v1.38. If you’re still on CentOS 7 or older RHEL images, this is your hard deadline to upgrade the node OS.

32. Versioned z-pages Debug APIs (KEP-4827)

SIG Instrumentation | Kubernetes components now expose structured, versioned z-page endpoints (/debug/pprof, /healthz, /readyz) under a stable API contract. Previously these were informal debug pages. Now they’re queryable programmatically, enabling better integration with observability pipelines and operator tooling.

33. kubectl KYAML Output Format (KEP-5236)

SIG CLI | kubectl output defaults to KYAML — a stricter YAML subset that eliminates ambiguous formatting (implicit null values, tab indentation, boolean string edge cases). Reduces “copy from kubectl, paste into CI, breaks” situations. Use KUBECTL_KYAML=false to revert while testing compatibility in your pipelines.

34. kuberc Credential Plugin AllowList (KEP-4676)

SIG Auth / SIG CLI | kuberc (user preferences file, beta since v1.34) now supports an exec plugin allowList that restricts which credential provider binaries kubectl will invoke. Prevents malicious kubeconfigs from executing arbitrary binaries on developer workstations when opening shared or downloaded cluster configs.

35. Pod-Level Resource Requirements (KEP-2837)

SIG Node | Specify resources at the pod level, not just per-container. Kubernetes schedules based on the pod total, allowing containers to burst within shared pod limits. Simplifies resource management for tightly coupled multi-container pods (sidecars, init containers) that share a resource budget.
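
A sketch of a pod-level resource budget shared by two containers; either container may burst within the pod limits (values illustrative):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: shared-budget
spec:
  resources:                     # pod-level totals used for scheduling
    requests: {cpu: "1", memory: 512Mi}
    limits:   {cpu: "2", memory: 1Gi}
  containers:
  - name: app
    image: busybox:1.36
    command: ["sleep", "infinity"]
  - name: sidecar
    image: busybox:1.36
    command: ["sleep", "infinity"]
EOF
```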

36. In-Place Pod Restart (RestartAllContainers Action) (KEP-4438)

SIG Node | A new pod-level restart action, triggered by container exit codes, restarts ALL containers in a pod while preserving IP, sandbox, and volumes. Previously, only individual container restarts were configurable. Useful for ML workers and init-heavy pipelines where a single container failure should reset the whole pod group.


Alpha Features — 22 Enhancements

Alpha features require explicit feature gate opt-in and are not recommended for production. They represent the direction Kubernetes is heading over the next 1–2 release cycles.

37. Native Gang Scheduling with Workload API (KEP-4671)

SIG Scheduling | Introduces scheduling.k8s.io/v1alpha1 Workload objects with PodGroup gang semantics. The scheduler holds pods until a minCount group can be placed simultaneously. If insufficient resources exist, no pods are bound — preventing deadlocked partial placements that waste GPU capacity.

38. Extended Toleration Operators: Lt and Gt (KEP-5471)

SIG Scheduling | Toleration API gains numeric comparison operators (Lt, Gt) alongside existing Equal and Exists. Nodes can expose SLA-level taints as integers; pods can express “only schedule me on nodes with reliability > 900”. Enables threshold-based placement without custom schedulers or node pools.

39. Mutable Container Resources for Suspended Jobs (KEP-5440)

SIG Apps | Suspend a failing Job, update its pod template resource requests, then resume. Previously, OOM-killed jobs required full deletion and recreation, losing history and status. Now you fix resource sizing mid-lifecycle via spec.suspend: true → patch resources → spec.suspend: false.
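
An illustrative suspend-patch-resume flow for fixing an OOM-killed Job in place (Job and container names are placeholders):

```shell
# 1. Suspend the Job so its pod template becomes editable.
kubectl patch job train --type merge -p '{"spec":{"suspend":true}}'

# 2. Raise the pod template's memory limit while suspended.
kubectl patch job train --type json -p '[
  {"op": "replace",
   "path": "/spec/template/spec/containers/0/resources/limits/memory",
   "value": "8Gi"}]'

# 3. Resume; new pods use the corrected sizing.
kubectl patch job train --type merge -p '{"spec":{"suspend":false}}'
```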

40. Node Declared Features (KEP-5328)

SIG Node | Nodes advertise their supported Kubernetes features via a new status.declaredFeatures field. The scheduler can restrict pod placement to nodes that support the features a pod requires. Prevents “feature skew” failures where a pod is scheduled to a node that doesn’t yet support the capabilities it needs.

41. Mutable PersistentVolume Node Affinity (KEP-4400)

SIG Storage | PV node affinity fields become mutable after creation. Previously immutable, this blocked storage migrations and topology adjustments. Now storage operators can update node affinity to reflect actual data placement, enabling more flexible volume lifecycle management in multi-zone or migrated clusters.

42. CBOR Serialisation for Kubernetes API (KEP-4222)

SIG API Machinery | CBOR (Concise Binary Object Representation) joins JSON and Protobuf as a supported API wire format. Smaller payloads than JSON, faster parsing than Protobuf for dynamic schemas. Particularly relevant for CRD-heavy clusters with large object counts where API server bandwidth is a bottleneck.

43. Storage Capacity Scoring for Dynamic Provisioning (KEP-3484)

SIG Storage / SIG Scheduling | The scheduler now scores nodes based on available storage capacity when selecting where to place a pod that requires dynamic volume provisioning. Prevents hot-spots where all new volumes land on the same storage node, improving volume distribution and preventing capacity exhaustion.

44. DRA: Device Taints and Tolerations (KEP-4817)

SIG Node | Individual devices (GPUs, accelerators) can now carry taints, and a new None effect provides a dry-run mode before enforcing NoExecute evictions. DeviceTaintRule objects also report status, making device-level eviction observable and safer to operate in GPU clusters.

45. DRA: Partitionable Devices (KEP-5080)

SIG Node | GPU slices and other partitionable devices can now be defined across multiple ResourceSlice objects, rather than forcing all partitions into a single slice. Aligns with how modern hardware (e.g. NVIDIA MIG, Intel Flex) actually exposes fractional device capabilities to the OS.

46. DRA: Consumable Device Capacity (KEP-4815)

SIG Node | Tracks device resources (e.g. memory bandwidth, limited-use accelerator quota) that deplete gradually rather than being exclusively allocated. Multiple bug fixes and expanded test coverage in v1.35 make this more reliable for experimentation.

47. DRA: Device Binding Conditions (KEP-4816)

SIG Node | Improvements to how device allocations become final during scheduling and admission. Edge cases in binding under failure scenarios and partial success are fixed, making DRA-based device allocation more predictable and resilient in production-like environments.

48. Comparable Resource Version Semantics (KEP-5542)

SIG API Machinery | All in-tree resource versions now follow a strictly comparable numeric format, allowing clients to determine version order without server-side help. Improves informer performance and makes controllers more reliable — a foundational change that unblocks several higher-level improvements across the project.

49. Contextual Logging for kubelet (KEP-3077 extension)

SIG Instrumentation | Structured contextual logging is extended to more kubelet code paths. Log entries carry structured key-value context (pod name, namespace, node name) without manual string formatting. Makes log queries in Elasticsearch/Kibana significantly more powerful for operators running centralised log pipelines.

50. Topology Aware Routing Enhancements (KEP-5720)

SIG Network | Extends the topology hints mechanism to better handle uneven endpoint distribution across zones. When a zone has too few healthy endpoints, hints are adjusted to avoid black-holing traffic. Improves the reliability of zone-local routing for services with varying replica distributions.

51. EndpointSlice Migration: Endpoint Deprecation (KEP-4004)

SIG Network | The Endpoints API is formally deprecated in favour of EndpointSlices. A migration path and deprecation warnings are introduced. If your controllers or service meshes still watch Endpoints objects directly, now is the time to migrate. EndpointSlices have been the recommended API since v1.21.

52. Horizontal Pod Autoscaler Backlog Queue Metrics (KEP-5220)

SIG Autoscaling | New metrics expose the HPA’s internal scaling decision queue — pending decisions, evaluation delays, and metric fetch latencies. Enables operators to detect when autoscaling is lagging behind workload demands and pinpoint whether the bottleneck is metric staleness or controller throughput.

53. Service Account Token Binding to Node (KEP-4193)

SIG Auth | Service account tokens can now optionally be bound to the node a pod runs on, in addition to pod binding. This limits token reuse: a leaked token cannot be replayed from a different node, significantly reducing the impact of token theft in compromised workloads.

54. StatefulSet Volume Claim Auto-Deletion Retention (KEP-1847)

SIG Apps | The persistentVolumeClaimRetentionPolicy for StatefulSets continues hardening. PVCs created for StatefulSet pods can now be configured for automatic deletion when pods are scaled down or when the StatefulSet itself is deleted, reducing manual PVC cleanup toil in batch-oriented stateful workloads.

55. Lease Candidate API for Leader Election (KEP-4355)

SIG API Machinery | Introduces LeaseCandidate objects that allow controller managers to declare intent to become leader before acquiring the Lease. Enables safer, more observable leader election — especially useful during rolling upgrades where you want to prevent split-brain between old and new controller versions.

56. VolumeAttributesClass for Volume Modification (GA path) (KEP-3751)

SIG Storage | Continues advancement of the VolumeAttributesClass API that lets you modify storage performance characteristics (IOPS, throughput tier) of existing volumes without recreation. v1.35 adds further validation and stability fixes as the feature moves toward full graduation.

57. Job Backoff Limit Per Index (KEP-3850)

SIG Apps | Indexed Jobs can now specify a per-index backoff limit. Previously, a single failing index could exhaust the global retry budget, killing all other indexes. Now each index gets its own retry counter, allowing healthy work items to continue even when specific indexes repeatedly fail.
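
A sketch of per-index retry budgets for an Indexed Job (names and counts illustrative):

```shell
cat <<'EOF' | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: sharded-etl
spec:
  completions: 20
  parallelism: 5
  completionMode: Indexed
  backoffLimitPerIndex: 2   # each index gets its own retry budget
  maxFailedIndexes: 4       # fail the whole Job only if more than 4 indexes fail
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: shard
        image: busybox:1.36
        command: ["sh", "-c", "echo processing shard $JOB_COMPLETION_INDEX"]
EOF
```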

58. ResourceSlice Health Reporting for DRA (KEP-4680)

SIG Node | DRA ResourceSlices can now carry health status for individual devices. Pods report DRA resource health in pod status, enabling higher-level controllers (and human operators) to identify which specific device caused a failure without digging through kubelet logs.

59. API Server Tracing Enhancements (KEP-647 extension)

SIG Instrumentation | OpenTelemetry tracing coverage is extended to more API server code paths, including admission webhook calls and etcd operations. Enables end-to-end distributed tracing of API requests through the full control plane stack — invaluable for diagnosing latency in high-QPS production clusters.

60. External IP Address Validation for Services (KEP-5514)

SIG Network | Introduces stricter validation for spec.externalIPs on Services. Previously any IP could be specified, enabling potential IP spoofing. The new validation rejects obviously invalid or dangerous IPs (loopback, link-local, broadcast) at admission time, hardening cluster networking without requiring Network Policies.


Deprecations & Removals

| Change | Status | Action Required |
|---|---|---|
| cgroup v1 support | Removed (Beta gate) | Upgrade node OS to use cgroup v2; upgrade kernel ≥4.15, recommend ≥5.8 |
| kube-proxy IPVS mode | Deprecated | Migrate to iptables or eBPF-based CNI (Cilium, Calico eBPF) |
| containerd v1.x | End-of-life signal | Upgrade to containerd 2.0+ — required for in-place resize, user namespaces |
| Ingress NGINX | Maintenance-only until March 2026, then retired | Migrate to Gateway API; see official retirement post |
| Endpoints API | Formally deprecated | Use EndpointSlices; update controllers, service meshes, tooling |
| flowcontrol.apiserver.k8s.io/v1beta3 | Removed | Migrate APF objects to v1 |
| SecurityContextDeny admission plugin | Removed | Replace with Pod Security Admission (PSA) |
| --lock-object-namespace (kube-controller-manager) | Removed | Use --leader-elect-resource-namespace |

Upgrade Guide

Pre-Upgrade Checklist

  1. Scan for deprecated APIs: pluto detect-helm -o wide
  2. Verify all nodes use cgroup v2: stat -fc %T /sys/fs/cgroup/ (should return cgroup2fs)
  3. Confirm containerd version ≥ 2.0: containerd --version
  4. Back up etcd: etcdctl snapshot save /backup/etcd-$(date +%F).db
  5. Check Ingress NGINX usage and begin Gateway API migration planning
  6. Review if any workloads use IPVS and plan CNI migration
  7. Confirm add-on compatibility: CoreDNS, CSI drivers, CNI plugin, metrics-server

kubeadm Upgrade

# Upgrade kubeadm on control plane
apt-get update && apt-get install -y kubeadm=1.35.0-00

# Plan and apply
kubeadm upgrade plan
kubeadm upgrade apply v1.35.0

# Drain + upgrade each worker
kubectl drain <node-name> --ignore-daemonsets --delete-emptydir-data
apt-get install -y kubelet=1.35.0-00 kubectl=1.35.0-00
systemctl daemon-reload && systemctl restart kubelet
kubectl uncordon <node-name>

RKE2 / Rancher

Use the Rancher UI to stage the upgrade. Always upgrade the control plane pool first, then worker pools with maxUnavailable: 2 or 3 nodes. Watch kubectl get nodes -w throughout. For a 20-node cluster, a 2-at-a-time upgrade takes roughly 30–45 minutes with a 5-minute cordon/drain/upgrade cycle per node pair.

AKS

az aks get-upgrades --resource-group <resource-group> --name <cluster-name> -o table
az aks upgrade --resource-group <resource-group> --name <cluster-name> \
  --kubernetes-version 1.35.0 --control-plane-only
az aks nodepool upgrade --resource-group <resource-group> --cluster-name <cluster-name> \
  --name <nodepool-name> --kubernetes-version 1.35.0

Real-World Tips for Production Clusters

Version Skew Policy

Kubernetes allows kubelets to lag the control plane by up to three minor versions, while components like kube-controller-manager and kubectl should stay within one. With v1.35: upgrade the control plane first; v1.32–v1.34 workers are supported against a v1.35 control plane. In multi-cluster fleets, don’t let any cluster fall more than 2 minor versions behind your newest — compound upgrade debt is painful to unwind.

Key Feature Gates to Enable

  • InPlacePodVerticalScaling=true — on kubelet AND API server
  • UserNamespacesSupport=true — on-by-default beta, but verify containerd 2.0+
  • RecursiveReadOnlyMounts=true — verify runtime supports it first
  • MutableJobPodResourcesForSuspendedJobs=true — for suspended job resource edits
  • NodeLogQuery=true — on by default; grant RBAC get on nodes/proxy

Observability: Watch These Metrics Post-Upgrade

  • apiserver_request_duration_seconds — latency regression signal
  • scheduler_pending_pods — elevated = scheduling gate or DRA misconfiguration
  • kubelet_volume_stats_* — CSI migration can affect label shapes
  • container_pressure_* — new PSI metrics; baseline before tuning VPA
  • kube_deployment_status_replicas_terminating — new field, great for rollout gates

FAQ

Is Kubernetes v1.35 production-ready?

Yes, once v1.35.1 or later is available. Always run the latest patch release for critical bug fixes. The 17 GA features are fully production-ready; Beta features are production-testable; Alpha features require explicit opt-in and should not be used in production.

When was Kubernetes v1.35 released?

December 17, 2025. The next release, v1.36, is expected around April 2026 based on the standard 4-month cadence.

Can I skip from v1.32 to v1.35?

No. Kubernetes only supports one-minor-version upgrades at a time. You must go v1.32 → v1.33 → v1.34 → v1.35. Skipping versions risks etcd schema issues and unsupported API state transitions.

Does in-place pod resize require a restart?

For CPU: no, unless the container’s resizePolicy explicitly requests RestartContainer. For memory: only if your container runtime (containerd 2.0+ with cgroups v2) cannot apply the resize in place. If the runtime can’t do it, Kubernetes marks the resize “infeasible” — the pod is not restarted automatically.

What’s the Kubernetes v1.35 codename?

“Timbernetes — The World Tree Release”. The “Yggdrasil” tree theme reflects how this release strengthens Kubernetes from its roots (node primitives, cgroup v2) to its canopy (AI/ML scheduling, gang semantics, security hardening).

Is Ingress NGINX still usable in v1.35?

Yes, but it’s in best-effort maintenance mode until March 2026, after which there will be no further releases, bug fixes, or security patches. Start your Gateway API migration now. The official retirement guide covers the migration path in detail.

What Kubernetes versions do AKS / EKS / GKE support in 2026?

Cloud providers typically lag GA by 1–3 months. Check each provider’s release calendar: AKS, EKS, GKE.


Conclusion

Kubernetes v1.35 is a platform maturity release. The 17 GA graduations reward teams who’ve stayed current. The alpha work — gang scheduling, DRA device taints, mutable PV affinity, numeric toleration operators — shows clearly where the community is taking the platform over the next two releases: AI/ML workloads, hardware-aware scheduling, and zero-trust security primitives baked into the core.

The hard calls (cgroup v1 removal, Ingress NGINX retirement, IPVS deprecation) are well-telegraphed, but they’re real breaking changes that require node OS upgrades, CNI reviews, and ingress migrations before you can safely apply v1.35 to production.



About the Author

Kedar Salunkhe

DevOps Engineer | Seven years of fixing things that break at 2am
Kubernetes • OpenShift • AWS • Coffee

I’ve spent almost 7 years keeping production systems running, often when everyone else is asleep. These days I’m working with Kubernetes and OpenShift deployments, automating everything that can be automated, and occasionally remembering to document the things I fix. When I’m not troubleshooting clusters, I’m probably trying out new DevOps tools or explaining to someone why we can’t just “restart everything” as a debugging strategy. You can usually find me where the coffee is strong and the error logs are confusing.
