Kubernetes 1.33: What’s new?
Kubernetes 1.33 (codenamed Octarine) hit the stage on April 23, 2025. This release packs 64 enhancements (18 Stable, 20 Beta, 24 Alpha, plus 2 deprecated or withdrawn) that advance security, performance, observability, and the developer experience. In this friendly overview, I will walk you through the highlights: what’s new for security and isolation, how scaling and networking got a boost, and the little quality-of-life tweaks that make cluster operations smoother.
🚀 What's Inside?
🔒 Security & Isolation: User namespaces, service account tokens, procMount, NetworkPolicy logging, ClusterTrustBundle
📈 Scalability & Performance: Native sidecars, in-place pod resizing, service IP expansion, nftables kube-proxy, SMT pinning
✨ Developer Experience & Usability: .kuberc for kubectl, Indexed Job improvements, subresource access
🚧 Deprecations & Changes: Endpoint API, gitRepo volumes, Windows host networking
🎯 Conclusion & Recommendations: Summary of key improvements
Security & Isolation
Kubernetes 1.33 brings several features to harden clusters and isolate workloads better. For example, user namespaces for pods move to on-by-default Beta. A container’s UIDs can now be mapped to a separate range on the host, limiting how much damage a compromised container can do. (Pods still opt in per-Pod via pod.spec.hostUsers: false, but in v1.33 the feature gate is enabled by default, so no cluster-level configuration is needed.) Likewise, fine-grained supplementalGroupsPolicy support graduates to Beta (now enabled by default). This field lets you choose whether a pod’s group IDs are merged with group memberships defined in the container image (Merge) or strictly limited to those declared in the pod spec (Strict), preventing sneaky permission escalation.
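As a sketch of the new field (pod name, image, and IDs here are illustrative), a Strict policy keeps a pod’s groups limited to what the spec declares, ignoring group memberships defined in the image’s /etc/group:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: strict-groups-demo
spec:
  securityContext:
    runAsUser: 1000
    runAsGroup: 3000
    supplementalGroups: [4000]
    supplementalGroupsPolicy: Strict  # only 3000 and 4000 attach; /etc/group entries are ignored
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "id; sleep infinity"]
```

Running `id` inside the container lets you confirm which groups actually got attached.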
Other security wins include service account token improvements (now GA). The new tokens embed a unique ID and the node name, so that a projected token is cryptographically bound to a specific node and lifetime. This makes it much harder for stolen tokens to be reused elsewhere. Also new is ClusterTrustBundle (Beta), a cluster-wide object for storing X.509 root certificates. Administrators can now distribute private CA trust anchors centrally, and workloads can mount a projected ClusterTrustBundle volume to get the latest certs. This replaces the old per-namespace kube-root-ca.crt approach and improves certificate management.
Network Policy Logging (Beta) – You can now log allowed and denied network flows for debugging. Turning on policy logging (alongside audit logs) gives you much better visibility into the traffic hitting your pods.
ProcMount Option (Beta) – The new procMount field (in a Pod’s securityContext) lets you relax or tighten access to /proc inside containers. This is especially useful when running nested containers or unprivileged pods with user namespaces, further hardening process isolation.
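A minimal sketch (pod name and image are illustrative): Unmasked is only permitted for pods that also opt out of the host user namespace, which is why the two features pair naturally:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: procmount-demo
spec:
  hostUsers: false              # Unmasked requires a user-namespaced pod
  containers:
  - name: nested
    image: busybox
    command: ["sleep", "infinity"]
    securityContext:
      procMount: Unmasked       # default is "Default", which masks sensitive /proc paths
```

Leaving procMount at its default keeps the usual masked /proc behavior, so only pods that genuinely need nested-container access should set Unmasked.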
User namespaces on by default
A major security milestone in v1.33 is that Linux user namespaces for Pods are now an on-by-default beta feature. Pods still must opt in per Pod, but the feature no longer needs a feature gate to be enabled by hand. In practice, this means you can map a container’s root user to an unprivileged user on the host. For example, enabling user namespaces is as simple as adding spec.hostUsers: false to your Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false
  containers:
  - name: shell
    image: debian
    command: ["sleep", "infinity"]
With this setting, processes inside the pod will run with a different UID/GID on the node, greatly reducing the impact if a container somehow escapes. In other words, even if a container runs as UID 0, it maps to an unprivileged user on the host.
Certificate and token trust (Beta) – v1.33 also adds new cluster-wide trust and token controls. The ClusterTrustBundle API (beta) is a new cluster-scoped resource for X.509 trust anchors. In other words, you can publish a set of CA certificates as a ClusterTrustBundle, and in-cluster certificate signers (or workloads) will automatically see those trust anchors. This mechanism simplifies managing root certs for image registries, API servers, or any components that need a shared trust store.
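A sketch of the two halves (bundle name and PEM content are placeholders): an administrator publishes the trust anchors, and a workload mounts them through a projected volume:

```yaml
# Cluster-scoped trust anchors published by an administrator
apiVersion: certificates.k8s.io/v1beta1
kind: ClusterTrustBundle
metadata:
  name: example-ca
spec:
  trustBundle: |
    -----BEGIN CERTIFICATE-----
    <PEM-encoded CA certificate(s) go here>
    -----END CERTIFICATE-----
---
# A pod consuming the bundle via a projected volume
apiVersion: v1
kind: Pod
metadata:
  name: trust-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
    volumeMounts:
    - name: trust
      mountPath: /etc/ssl/custom
      readOnly: true
  volumes:
  - name: trust
    projected:
      sources:
      - clusterTrustBundle:
          name: example-ca
          path: ca-bundle.pem   # the kubelet keeps this file up to date
```

Because the kubelet refreshes the projected file when the bundle changes, workloads pick up rotated CAs without a restart.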
These upgrades make it safer to run multi-tenant or high-security workloads on Kubernetes.
Scalability & Performance
v1.33 introduces many engineering improvements for big clusters and heavy workloads:
Native Sidecar Support (GA)—Sidecar containers are now a first-class feature. Kubernetes treats sidecars as special init-containers with restartPolicy: Always, so they automatically start before the main app and end after it. This change simplifies patterns like logging or proxy sidecars. For example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: main
        image: myapp:latest
        command: ["sh", "-c", "echo Running main app"]
        volumeMounts:
        - name: data
          mountPath: /var/app
      initContainers:
      - name: log-collector
        image: alpine:latest
        restartPolicy: Always   # this is what makes it a sidecar rather than a blocking init container
        command: ["sh", "-c", "tail -F /var/app/log.txt"]
        volumeMounts:
        - name: data
          mountPath: /var/app
      volumes:
      - name: data
        emptyDir: {}
Here, the log-collector sidecar init-container will stay running alongside main, making it easy to ship logs without extra hacks.
In-place Pod Resizing (Beta)—You can now adjust a running Pod’s CPU or memory requests and limits without killing it. In earlier Kubernetes versions, changing a container’s resources required deleting and recreating the Pod. Now the InPlacePodVerticalScaling feature (beta and enabled by default in v1.33) lets the kubelet increase or decrease resources on the fly. This means stateful workloads can scale up or down with zero downtime. For example, you can give a database Pod extra memory during a heavy load spike, then shrink it back later.
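A sketch of the related resizePolicy field (pod name and image are illustrative), which controls whether a resize restarts the container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: resizable-db
spec:
  containers:
  - name: db
    image: postgres:16
    resizePolicy:
    - resourceName: cpu
      restartPolicy: NotRequired      # CPU changes apply without a restart
    - resourceName: memory
      restartPolicy: RestartContainer # memory changes restart this container
    resources:
      requests:
        cpu: 500m
        memory: 512Mi
```

The resize itself is then requested through the Pod’s resize subresource (with a recent kubectl, something along the lines of `kubectl patch pod resizable-db --subresource resize --patch ...` with the new resource values).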
Service IP Expansion (GA)—The old single CIDR for ClusterIP services is gone. A new ServiceCIDR API lets you create extra IP ranges on demand, and the IPAddress object tracks allocations. In practice, this means when you run out of service IPs, you just create another ServiceCIDR resource with an unused block. Kubernetes will start using the new range automatically, so you don’t have to reconfigure existing services.
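Adding a range is a one-object operation; a sketch (the CIDR block here is illustrative):

```yaml
apiVersion: networking.k8s.io/v1
kind: ServiceCIDR
metadata:
  name: extra-service-cidr
spec:
  cidrs:
  - 10.100.0.0/16   # any unused block that does not overlap existing ranges
```

Once created, new Services can be allocated ClusterIPs from this range without touching existing ones.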
Improved Networking—Dual-stack IPv4/IPv6 support gains more flexibility, and a new nftables kube-proxy backend (GA) significantly boosts performance for large Service tables. (Kubernetes still defaults to the old iptables for compatibility, but you can switch to nft to handle more rules faster.) Also, topology-aware routing becomes generally available: Add trafficDistribution: PreferClose to a service spec, and kube-proxy will favor endpoints in the same zone or region, reducing cross-zone latency.
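Opting a Service into topology-aware routing is a single field; a sketch (service name and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: backend
spec:
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 8080
  trafficDistribution: PreferClose  # favor endpoints in the client's zone when available
```

If no same-zone endpoints are healthy, traffic falls back to endpoints elsewhere, so this is a preference rather than a hard constraint.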
CPU Manager Policy Options (GA/Beta) – The CPU Manager’s static policy gets smarter with two policy options. Full-physical-CPUs-only (GA) enforces SMT alignment: high-performance workloads often want exclusive CPU cores, but on SMT-enabled hardware you must be careful not to inadvertently share a core’s sibling thread. With this option, the kubelet rejects pods whose CPU request isn’t SMT-aligned and, when a pod asks for whole cores, allocates entire core pairs (physical core plus sibling) rather than splitting them. This ensures true isolation for “pinned” workloads and avoids subtle performance or contention issues. Distribute CPUs across NUMA nodes (Beta) spreads CPU allocations across NUMA nodes rather than packing them onto one socket, which can improve performance on large, multi-socket machines.
Smarter Scheduling—Under the hood, the scheduler is sharper too. Asynchronous preemption (now Beta) lets the scheduler evict low-priority pods in parallel so it can keep scheduling without getting stuck. When the active queue is empty, it will also pull from the backoff queue immediately rather than idling. These tweaks mean heavy workloads and constant churn are handled more efficiently.
Developer Experience & Usability
Beyond core scaling, Kubernetes 1.33 adds many small user-friendly touches:
kubectl .kuberc (Alpha)—You can now keep kubectl aliases and preferences in a separate ~/.kube/kuberc file. For example, you can add shortcuts or set default flags (like always using server-side apply). To opt in, set the KUBECTL_KUBERC=true environment variable. This cleanly separates user preferences from the kubeconfig file that holds cluster credentials.
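A minimal ~/.kube/kuberc might look roughly like this. The schema is alpha and may change between releases, so treat the field names below as a sketch and check the kubectl documentation for your version:

```yaml
apiVersion: kubectl.config.k8s.io/v1alpha1
kind: Preference
defaults:
- command: apply
  options:
  - name: server-side
    default: "true"     # always use server-side apply
aliases:
- name: getn
  command: get
  options:
  - name: output
    default: json       # "kubectl getn pods" behaves like "kubectl get pods -o json"
```

Since the file never holds credentials, it can be shared across machines or checked into dotfiles without exposing cluster access.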
kubectl --subresource (GA) – The --subresource flag is now generally available for commands like kubectl get/patch/apply. This makes it easy to fetch or modify a resource’s subparts (such as status or scale) directly. For instance, you can now do:
kubectl get pods mypod --subresource=status
kubectl patch deployment myapp --subresource=scale --patch='{"spec":{"replicas":5}}'
This approach helps in scripts and diagnostics by treating subresources as first-class targets.
Indexed Job Improvements (GA)—Batch jobs with indexing got smarter. Now each index in an Indexed Job has its own backoffLimit and the Job spec can define custom success criteria (e.g., only 80% of tasks need to succeed). This is great for massive parallel jobs where some tasks can be allowed to fail without failing the entire job.
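Put together, a sketch of an Indexed Job that tolerates partial failure (names and counts are illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: parallel-tasks
spec:
  completions: 10
  parallelism: 5
  completionMode: Indexed
  backoffLimitPerIndex: 2   # each index retries independently, up to twice
  maxFailedIndexes: 2       # give up only if more than 2 indexes fail for good
  successPolicy:
    rules:
    - succeededCount: 8     # the Job succeeds once 8 of 10 indexes finish
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: worker
        image: busybox
        command: ["sh", "-c", "echo processing task $JOB_COMPLETION_INDEX"]
```

Here a couple of stubborn indexes can fail without sinking the whole Job, which previously required external retry logic.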
HPA Configurable Tolerance (Alpha)—The Horizontal Pod Autoscaler can now ignore small metric swings via a new tolerance setting (when you enable HPAConfigurableTolerance). This is especially useful to prevent “bouncing” when your target metric fluctuates around the threshold. Here is an example:
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  # ...metrics and min/max replicas...
  behavior:
    scaleDown:
      tolerance: 0.05  # ignore <5% drops
    scaleUp:
      tolerance: 0.00  # react to any rise
Here, small dips under 5% of the target won’t trigger a scale-down, reducing needless churn.
Container Stop Signals (Alpha)—Until now, the signal sent to a container on Pod termination was fixed by the image (often SIGTERM). Kubernetes 1.33 lets you override this per-container in the Pod spec using lifecycle.stopSignal. This saves you from building a new image just to change the shutdown behavior. For example:
spec:
  os:
    name: linux
  containers:
  - name: my-app
    image: my-app:latest
    lifecycle:
      stopSignal: SIGINT
The spec.os.name field must be set, and if you omit the signal, Kubernetes falls back to the image’s default.
Other improvements – There are various usability tweaks, like graduated CSI migrations (more in-tree volume plugins can now switch to CSI drivers transparently) and enhanced resource validation tooling under the hood. JobSet (for managing sets of batch jobs) and graceful node shutdown improvements also reached beta/alpha. But the big theme is that v1.33 is all about making administrators’ and developers’ lives easier and safer.
Deprecations & What to Watch For
As with every release, some old cruft is being shed. In v1.33:
- The classic Endpoints API is officially deprecated. You should migrate any code that directly reads Endpoints to use EndpointSlices instead (which scale much better).
- The gitRepo volume type has been removed, as it was infamously insecure. Instead, you can fetch code with an init-container or CSI volume.
- Windows Pods can no longer use hostNetwork: true; this feature has been withdrawn.
- The mostly unused status.nodeInfo.kubeProxyVersion field is gone.
If your scripts or tooling rely on any of these features, update them before upgrading to 1.33. A handy side note: the Endpoints deprecation warnings are only for those calling the API directly. Services themselves work the same way under the hood.
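For workloads that used gitRepo, cloning in an init container into an emptyDir is a common replacement; a sketch (repository URL and images are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: clone-demo
spec:
  initContainers:
  - name: git-clone
    image: alpine/git          # any image with git installed works
    args: ["clone", "--depth=1", "https://example.com/repo.git", "/workdir"]
    volumeMounts:
    - name: workdir
      mountPath: /workdir
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /workdir && sleep infinity"]
    volumeMounts:
    - name: workdir
      mountPath: /workdir
  volumes:
  - name: workdir
    emptyDir: {}
```

Unlike gitRepo, this pattern lets you control the clone flags, credentials handling, and the image that performs the fetch.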
Conclusion
We get major leaps like native sidecars and in-place resizing, along with critical security hardening (user namespaces, token binding, network logging) and much better cluster scaling (multiple Service CIDRs, nftables proxy, topology routing).
If you manage clusters, take time to read the official v1.33 release notes for the full rundown and plan for the few deprecations mentioned above. Happy upgrading!
Got questions?
5 FAQs About Kubernetes 1.33
1. What security features does Kubernetes 1.33 introduce?
Kubernetes 1.33 introduces user namespaces, service account token improvements, network policy logging, and procMount to enhance container isolation and security.
2. How does Kubernetes 1.33 improve scalability?
Kubernetes 1.33 introduces native sidecar support, in-place pod resizing, and Service IP expansion to improve scalability in large clusters.
3. What developer experience improvements does the release bring?
The new .kuberc file allows managing kubectl shortcuts and preferences, while subresource access and HPA tolerance enhancements streamline development and troubleshooting.
4. What has been deprecated in v1.33?
The Endpoints API, gitRepo volume type, and Windows host networking have been deprecated or removed in v1.33. EndpointSlices and alternative methods should be used instead.
5. How does Kubernetes 1.33 improve workload isolation?
Kubernetes 1.33 adds features like ClusterTrustBundle for centralized certificate management, user namespaces enabled by default, and enhanced service account token security to better isolate and secure workloads.
Uğurcan Çaykara
Platform Engineer @kloia