AWS EKS Multi-Tenancy with vCluster
Kubernetes is the de facto standard for container orchestration, allowing teams to deploy, scale, and manage containerized applications. However, implementing multi-tenancy in Kubernetes is not always straightforward. Many organizations need to run multiple environments such as development, staging, and production on the same infrastructure while keeping each tenant isolated in terms of resources, security, and access.
The challenge becomes even greater as the number of environments grows. Creating a separate Kubernetes cluster for every team or project provides strong isolation but also increases infrastructure costs, operational complexity, and maintenance work. On the other hand, relying only on namespaces to achieve multi-tenancy can save money but often fails to provide complete isolation, custom configurations, or flexibility in Kubernetes versions.
What is vCluster?
vCluster is an open-source tool that helps you create virtual Kubernetes clusters, each with its own control plane components and cluster state, in an automated way.
Think of it like creating a virtual machine in your cloud account. You have full access and control over that virtual machine, but it is actually just a slice of the underlying physical hardware in a data center. Similarly, a virtual cluster is a slice of a Kubernetes cluster: you have full access to it and complete ownership, but ultimately it is still part of a larger Kubernetes cluster.
How vCluster Works
From the physical host cluster perspective, vCluster operates as an application deployed within a namespace. The application architecture consists of two primary components:
1. Virtual Control Plane
This includes the same core components you’d expect in any Kubernetes cluster:
- Kubernetes API Server: Handles API requests within the virtual cluster.
- Controller Manager: Ensures the actual state of resources matches the desired state.
- Datastore: Stores the state of the resources in the virtual cluster.
- Scheduler (Optional): Can be used as an alternative to the default host scheduler.
2. Syncer
The Syncer is responsible for syncing resources from the virtual cluster to the host cluster namespace. This is necessary because the vCluster itself has no network or nodes of its own on which workloads can be scheduled. By default, only essential resources (Pods, ConfigMaps, Secrets, Services) are synchronized to the host.
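Which resources the Syncer handles is configurable in vcluster.yaml. As a minimal sketch based on the current sync configuration schema (check the docs for your vCluster version), enabling the sync of Ingress resources to the host looks like this:
# vcluster.yaml - enable syncing of an additional resource type (sketch)
sync:
  toHost:
    ingresses:
      enabled: true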
The architecture diagram in the official documentation provides an overview of these components; refer to the official docs for additional details.
Why Use vCluster for EKS Multi-Tenancy?
vCluster combines the two approaches and gives you the best of both worlds with the help of virtual clusters. While tenants see and use their vClusters as if they were completely independent environments, administrators manage them simply as namespaces in the underlying cluster. This balance provides strong isolation like separate clusters but without the added cost and overhead.
Virtual clusters are lightweight and cost-efficient. Each cluster can be configured with its own policies, limits, and resource quotas so tenants only consume what they require. Tenants can also be granted full admin rights, giving them the flexibility to manage their environments independently.
Use Cases for vCluster
vCluster’s unique architecture and feature set make it suitable for a wide range of use cases, from development and testing to education and multi-tenancy.
1. Development and Testing: Developers can spin up isolated clusters to test features or applications without impacting production, speeding up iteration and experimentation.
2. Multi-Tenancy: Each team or customer gets their own virtual cluster with strong isolation and security, maximizing infrastructure efficiency while preserving tenant autonomy.
3. CI/CD Pipelines: vCluster enables ephemeral clusters for builds and tests that mimic production, then removes them after use to save resources.
4. Platform Engineering / IDPs: vCluster lets platform teams offer self-service Kubernetes clusters to developers, balancing control with autonomy and reducing operational overhead.
Integration with Other Kubernetes Tools
vCluster can be integrated with other Kubernetes tools to enhance its capabilities:
1. Cert Manager: You can integrate cert-manager with your virtual cluster to manage TLS certificates across host and virtual environments.
2. External Secrets Operator: Sync external secret stores and CRDs between host and virtual clusters.
3. Istio: Enables you to use one Istio installation from the host cluster instead of installing Istio in each virtual cluster.
4. Kubevirt: Run VM workloads in vCluster using host KubeVirt CRDs and tooling.
5. Metrics Server: Access pod and node metrics from the host’s metrics-server.
Community and Ecosystem
vCluster is an open-source project maintained by Loft Labs with contributions and support from an active community. The free edition provides core functionality, while vCluster.Pro adds enterprise features like SSO, enhanced quotas, and multi-cluster management. This ensures flexibility for individual users and reliability for organizations needing commercial support and security.
Prerequisites
Before we dive into the installation process, ensure that you have the following prerequisites in place:
1. kubectl installed on your local machine.
2. An AWS account.
3. The AWS CLI installed and set up. Make sure you are logged in as an administrator.
4. eksctl installed and set up.
5. The vCluster CLI installed.
6. Helm 3.x installed (optional but recommended).
The following section describes how to create an EKS cluster, but you can skip it if you already have one. Ensure you have the Amazon EBS CSI driver installed in your cluster. This is needed for vCluster to work correctly.
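If your existing cluster does not have the driver yet, one option is to install it as an EKS managed add-on with eksctl. A minimal sketch, assuming the cluster name and region used later in this tutorial (the add-on may also need an IAM role with the EBS CSI policy attached):
# Install the EBS CSI driver as a managed add-on (cluster name and region are examples)
eksctl create addon --name aws-ebs-csi-driver --cluster eks-vcluster-example --region eu-west-1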
Setting Up AWS EKS
Creating an EKS Cluster
To create an EKS cluster using eksctl, you can use the code available in the following repository: eks-vcluster-example
eksctl create cluster -f eksctl.yaml
Adjust the name and region values in eksctl.yaml to suit your needs.
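For reference, a minimal eksctl.yaml could look like the sketch below; the actual file in the repository may differ (the node group name, instance type, and size here are illustrative):
# eksctl.yaml - minimal example cluster config (illustrative values)
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: eks-vcluster-example
  region: eu-west-1
iam:
  withOIDC: true
addons:
  - name: aws-ebs-csi-driver
    wellKnownPolicies:
      ebsCSIController: true
managedNodeGroups:
  - name: ng-1
    instanceType: t3.large
    desiredCapacity: 2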
Wait a few minutes while the cluster spins up. Once the cluster is created, verify its status using:
eksctl get cluster --region eu-west-1
Create the gp3 StorageClass using the following command:
kubectl create -f gp3.yaml
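If you are not using the repository files, a typical gp3.yaml looks like the following; it marks gp3 as the default class, since gp2 loses that role in the next step:
# gp3.yaml - gp3 StorageClass marked as the default (typical example)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true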
Remove the default status from the gp2 StorageClass:
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
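Verify the result:
# gp3 should now show (default); gp2 should not
kubectl get storageclass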
Setting Up vCluster
With the EKS cluster up and running, the next step is to set up the vCluster.
Installation
Follow the official vCluster installation guide or use these quick methods:
# macOS with Homebrew
brew install loft-sh/tap/vcluster
# Linux with curl
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster
# Windows with PowerShell
powershell -c "iwr https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-windows-amd64.exe -outfile vcluster.exe"
Creating Your First vCluster
Let’s set up the virtual cluster for the development environment:
# Create a virtual cluster named "dev-env"
vcluster create dev-env --namespace development --connect=false
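If you want to customize the virtual cluster (see the Advanced vCluster Configuration section below), you can pass a vcluster.yaml at creation time:
# Create a virtual cluster from a configuration file
vcluster create dev-env --namespace development --connect=false -f vcluster.yaml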
Connecting to a Virtual Cluster
You can connect to a virtual cluster using the vcluster connect command:
# Connect to vCluster
vcluster connect dev-env --namespace development
# Verify Context
kubectl config current-context
# Get Namespaces
kubectl get ns
You should see output similar to:
$ kubectl config current-context
vcluster_dev-env_development_aws-go-sdk-1755613766994999000@eks-vcluster-example.eu-west-1.eksctl.io
$ kubectl get ns
NAME              STATUS   AGE
default           Active   4m25s
kube-node-lease   Active   4m25s
kube-public       Active   4m25s
kube-system       Active   4m25s
Deploying an Application
Next, let’s deploy a sample application into the virtual cluster:
# Create a deployment
kubectl create deployment nginx --image=nginx:latest --replicas=2
# Expose the deployment
kubectl expose deployment nginx --port=80 --target-port=80
# Check the status
kubectl get pods,services
NAME                         READY   STATUS    RESTARTS   AGE
pod/nginx-54c98b4f84-krzbl   1/1     Running   0          74s
pod/nginx-54c98b4f84-xfdk6   1/1     Running   0          74s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/kubernetes   ClusterIP   10.100.49.154   <none>        443/TCP   5m59s
service/nginx        ClusterIP   10.100.241.45   <none>        80/TCP    7s
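To quickly verify that the application responds, you can port-forward the service from within the virtual cluster context and send a request:
# Forward the nginx service to localhost and test it
kubectl port-forward service/nginx 8080:80
# In another terminal:
curl http://localhost:8080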
Host Cluster Perspective
You’ll notice that, from the host cluster perspective, the pods created inside the virtual cluster have longer names: a suffix of the form -x-<namespace>-x-<vcluster-name> is appended automatically to avoid naming conflicts with other resources.
# Disconnect from virtual cluster
vcluster disconnect
# Check host cluster namespaces
kubectl get namespaces
# View synced resources in host namespace
kubectl get pods,services -n development
NAME                                                   READY   STATUS    RESTARTS   AGE
pod/coredns-744d9bd8bf-kwnc5-x-kube-system-x-dev-env   1/1     Running   0          7m17s
pod/dev-env-0                                          1/1     Running   0          8m17s
pod/nginx-54c98b4f84-krzbl-x-default-x-dev-env         1/1     Running   0          2m33s
pod/nginx-54c98b4f84-xfdk6-x-default-x-dev-env         1/1     Running   0          2m33s

NAME                                                                TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)                  AGE
service/dev-env                                                     ClusterIP   10.100.49.154   <none>        443/TCP,10250/TCP        8m19s
service/dev-env-headless                                            ClusterIP   None            <none>        443/TCP                  8m19s
service/dev-env-node-ip-172-31-17-215-eu-west-1-compute-internal    ClusterIP   10.100.103.69   <none>        10250/TCP                7m18s
service/kube-dns-x-kube-system-x-dev-env                            ClusterIP   10.100.119.60   <none>        53/UDP,53/TCP,9153/TCP   7m18s
service/nginx-x-default-x-dev-env                                   ClusterIP   10.100.241.45   <none>        80/TCP                   86s
Deleting a Virtual Cluster
You can delete a virtual cluster using the vcluster delete command:
vcluster delete dev-env -n development
Advanced vCluster Configuration
Custom Kubernetes Distribution
vCluster supports multiple Kubernetes distributions, allowing you to run virtual clusters consistently across different environments.
> DEPRECATION NOTICE: Support for K0s and K3s is deprecated in vCluster v0.25, and K0s support is removed in v0.26.
# vcluster.yaml - Using standard Kubernetes
controlPlane:
  distro:
    k8s:
      image:
        registry: ghcr.io # Default: ghcr.io (can be overridden globally with controlPlane.advanced.defaultImageRegistry)
        repository: loft-sh/kubernetes # Default: loft-sh/kubernetes
        tag: v1.32.1 # Default: v1.32.1 (or matches host Kubernetes version)
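To apply this (or any other vcluster.yaml snippet in this section) to an existing virtual cluster, re-run create with the upgrade flag:
# Apply vcluster.yaml changes to an existing virtual cluster
vcluster create dev-env --namespace development --upgrade -f vcluster.yaml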
High Availability Setup
> This feature is only available in the Enterprise edition.
# vcluster.yaml - HA configuration
controlPlane:
  statefulSet:
    highAvailability:
      replicas: 3
  backingStore:
    etcd:
      embedded:
        enabled: true
Resource Quotas and Limits
vCluster creates the ResourceQuota in the same namespace where the virtual cluster is running. This quota applies to all resources that are synced back to the host cluster.
# vcluster.yaml - Resource Quota config
policies:
  resourceQuota:
    enabled: true
    quota:
      cpu: "10"
      memory: 20Gi
      pods: "5"
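Alongside the quota, vCluster can also create a LimitRange that applies default requests and limits to containers that do not declare their own. A minimal sketch with illustrative values:
# vcluster.yaml - LimitRange config (values are illustrative)
policies:
  limitRange:
    enabled: true
    default:          # applied as limits when a container sets none
      cpu: "500m"
      memory: 512Mi
    defaultRequest:   # applied as requests when a container sets none
      cpu: "100m"
      memory: 128Mi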
RBAC Configuration
You can use extraRules to add RBAC permissions on top of the defaults, or overwriteRules to replace the default rules entirely. With overwriteRules you must define every rule you need yourself, while the Role, RoleBinding, and ServiceAccount remain managed by the Helm chart.
# vcluster.yaml - Add Custom RBAC Rules
rbac:
  role:
    enabled: true
    extraRules:
      - apiGroups: [""]
        resources: ["pods/status", "pods/ephemeralcontainers"]
        verbs: ["patch", "update"]
# vcluster.yaml - Overwrite Rules
## This example disables cluster-wide rules and specifies namespace-level permissions
rbac:
  role:
    enabled: true
    overwriteRules:
      - apiGroups: [""]
        resources: ["pods/status", "pods/ephemeralcontainers"]
        verbs: ["patch", "update"]
  clusterRole:
    enabled: false
Network Policy
Set policies.networkPolicy.enabled to create NetworkPolicies that isolate the virtual cluster:
# vcluster.yaml - Network Policy config
policies:
  networkPolicy:
    enabled: true
This creates NetworkPolicies in the host namespace that:
- Allow traffic between pods within the virtual cluster
- Block traffic from other namespaces
- Permit DNS and API server communication
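You can confirm this from the host cluster:
# List the NetworkPolicies vCluster created in the host namespace
kubectl get networkpolicy -n development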
Pod Security Standard
Pod security standards prevent Pods from starting if they request permissions beyond what's allowed.
# vcluster.yaml - Pod Security Standard
policies:
  podSecurityStandard: <policy_profile>
Replace <policy_profile> with privileged, baseline, or restricted.
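For example, to enforce the baseline profile:
# vcluster.yaml - enforce the baseline profile
policies:
  podSecurityStandard: baseline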
Monitoring and Observability
Metrics Collection
Enable the ServiceMonitor to expose vCluster metrics over HTTPS so that Prometheus (and, through it, Grafana) can scrape them. Note that this requires the Prometheus Operator, which provides the ServiceMonitor CRD, to be installed on the host cluster.
# vcluster.yaml - Expose Metrics
controlPlane:
  serviceMonitor:
    enabled: true
    labels: {}
    annotations: {}
Logging
Set the log encoding to json to enable JSON-formatted logs. To use console format, either remove the logging field or explicitly set it to console.
# vcluster.yaml - Console format
logging:
  encoding: console

# vcluster.yaml - JSON format
logging:
  encoding: json
Conclusion
As Kubernetes adoption grows, so does the need to run multiple teams or projects on the same cluster. The traditional approaches, separate clusters or plain namespaces, each have drawbacks. vCluster is changing the way we think about Kubernetes multi-tenancy: it offers a balance of isolation, cost efficiency, and operational simplicity, allowing teams to run multiple workloads securely and efficiently without the overhead of managing separate clusters. If you’re looking to:
✅ Quickly spin up clusters for CI/CD or dev environments
✅ Optimize costs without sacrificing security
✅ Maintain cluster-admin privileges within safe boundaries
✅ Simplify operations instead of managing multiple clusters
✅ Enable platform engineering for your internal teams
Getting started with vCluster is simple, even on managed services like EKS, as this tutorial has shown.
References:
- Quick Start Guide, vCluster docs
- What is vCluster Platform?, vCluster docs
- AWS EKS Multi-tenancy with vCluster
Frequently Asked Questions

Q: Why choose vCluster over other virtual cluster solutions?
A: vCluster is the most popular solution today. It has a large community, solid documentation, and is already proven in many enterprise companies. It offers a good balance of features, speed, and ease of use.

Q: Is vCluster ready for production use?
A: Yes. Companies such as CoreWeave and Trade Connectors use vCluster in production. Follow Kubernetes security best practices, and consider vCluster Platform if you need enterprise-level support.

Q: How much performance overhead does vCluster add?
A: Very little. The syncer is lightweight, so most applications run with almost no performance difference compared to a regular cluster.

Q: How do I migrate existing workloads into vClusters?
A: Migration is straightforward. Create a vCluster for each tenant, deploy the apps with the same manifests, and then gradually move traffic. It can even be done with zero downtime.

Q: Are there Kubernetes features that vCluster does not support?
A: vCluster supports most Kubernetes features. Some advanced networking and admission controllers may need extra configuration.

Q: What if I need to manage many vClusters at scale?
A: Use vCluster Platform (the Pro edition). It provides a central dashboard, enterprise security (SSO, OIDC, LDAP), and extra tools for managing many vClusters at scale.

Q: Can I enforce policies inside virtual clusters?
A: Yes. You can use policy engines like Kyverno or OPA Gatekeeper inside your vClusters, the same way you do in regular Kubernetes clusters.

Q: Does vCluster work with GitOps and cluster management tools?
A: Yes. vCluster works well with ArgoCD for GitOps. You can also manage vClusters using platforms like Rancher, giving teams a familiar interface.
Raja Haikal
Cloud Engineer @kloia