Summary

This page explains how to provision and access a Managed Kubernetes environment (called Virtual Kubernetes in the console) so you can run Kubernetes workloads in an isolated, single-tenant cluster without provisioning dedicated infrastructure. Use this guide if you need:
  • A full Kubernetes API endpoint for standard tools like kubectl and helm 
  • An isolated cluster boundary for teams, environments, or projects (separate control plane/API server per virtual cluster)

Availability

Virtual Kubernetes is currently available only in the reserved cloud service environment.

Requirements

  • Permissions and sufficient quota to create a Virtual Kubernetes cluster
  • SSH access to the login node, with an SSH public key registered in the Console
  • kubectl CLI (preinstalled in the access environment)

Step-by-step

  1. Open the Virtual Kubernetes service
    • In the UI Console, go to Services → Virtual Kubernetes
[Screenshot: Virtual Kubernetes Service]
  2. Start cluster creation
  • Enter a Cluster Name
  • Select the Project the cluster should belong to
  • Choose the Region where the cluster will be deployed
  3. Configure workload pools (virtual node pools)
Workload pools determine how compute resources are allocated within the cluster. For each workload pool, set:
  • Workload Pool Name
    • Must use lowercase alphanumeric characters and dashes (example: workload-pool-1).
  • Node Type
    • Hardware configuration (examples include g.4.standard.80s or 1 × NVIDIA B200)
  • Node Count
    • Number of nodes in the pool (adjust using the plus/minus controls)
[Screenshot: Virtual Kubernetes Flow]
  4. Review and create the cluster
  • Review the cluster summary, then click Create Cluster
  • After a few minutes, the cluster status should change to Provisioned
  5. Get kubeconfig and verify access
After the cluster is provisioned, the Access your cluster section becomes available in the cluster overview.
  • Use Download Kubeconfig to download kubeconfig.yaml, or Copy to clipboard to paste the config into a file
  • Set your KUBECONFIG environment variable and verify:
export KUBECONFIG=~/Downloads/user-guide-test-vkubeconfig.yaml
kubectl get nodes
  • You should see the nodes created by your configured workload pool; the output also includes the Kubernetes version running on those nodes
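The export/verify step above can be wrapped in a small guard so kubectl only switches configuration when the downloaded file actually exists. This is a minimal sketch; the kubeconfig path below is an example, not a fixed name:

```shell
# Sketch: point kubectl at the downloaded kubeconfig, guarding against a bad path.
# The path below is an example; substitute the file you downloaded or pasted.
KUBECONFIG_FILE="$HOME/Downloads/kubeconfig.yaml"

if [ -f "$KUBECONFIG_FILE" ]; then
  export KUBECONFIG="$KUBECONFIG_FILE"
  kubectl config current-context   # confirms which cluster kubectl will talk to
  kubectl get nodes                # lists the workload pool nodes and their Kubernetes version
else
  echo "kubeconfig not found at $KUBECONFIG_FILE" >&2
fi
```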
  6. Create a namespace for your workloads (optional)
kubectl create namespace dev-team
kubectl get namespaces
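To confirm the new namespace is actually schedulable, you can apply a throwaway Pod manifest into it. This is a sketch of a cluster-dependent config fragment; the pod name and image are illustrative placeholders:

```shell
# Sketch: a minimal Pod applied into the new namespace to confirm scheduling works.
# Pod name and image are illustrative placeholders.
kubectl apply -n dev-team -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: smoke-test
spec:
  containers:
  - name: smoke-test
    image: nginx
EOF
kubectl get pods -n dev-team                 # the pod should reach Running once scheduled
kubectl delete pod smoke-test -n dev-team    # clean up after the check
```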

Tips, Best Practices, and Code Examples

Choose the right node allocation mode for your workloads. Virtual Kubernetes can run in either of two modes:
  • Dedicated mode: host nodes are exclusively assigned to a single Virtual Kubernetes cluster (strong isolation/consistency)
  • Shared mode: multiple virtual clusters share host nodes, with logical isolation via namespaces/quotas/scheduling policies
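In shared mode, the logical isolation described above is typically enforced with Kubernetes-native objects such as ResourceQuota. A minimal sketch of such a quota (a config fragment; the namespace name and limits are illustrative, not prescribed by the service):

```shell
# Sketch: cap CPU/memory requests and limits for one team's namespace
# in a shared cluster. Namespace name and limit values are illustrative.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-team-quota
  namespace: dev-team
spec:
  hard:
    requests.cpu: "8"
    requests.memory: 32Gi
    limits.cpu: "16"
    limits.memory: 64Gi
EOF
kubectl describe resourcequota dev-team-quota -n dev-team   # shows used vs. hard limits
```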
Use resource pools/workload pools to model capacity. A resource pool defines compute resources allocated from the host cluster and enforces boundaries via namespaces and resource quotas. In practice, you control this through workload pools (node type + node count) during provisioning.
Verify Kubernetes version compatibility early. When you run:
kubectl get nodes
the output includes the Kubernetes version running on the nodes; verify it if your workloads depend on specific Kubernetes versions.
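If a workload pins a minimum Kubernetes version, that check can be scripted. A sketch, assuming the node version is read via kubectl's jsonpath output; the version values below are placeholders for illustration:

```shell
# Sketch: fail fast when the node's Kubernetes minor version is below a required minimum.
# In a live cluster, read the version with:
#   node_version=$(kubectl get nodes -o jsonpath='{.items[0].status.nodeInfo.kubeletVersion}')
node_version="v1.29.4"   # placeholder value for illustration
required_minor=27        # placeholder minimum minor version

minor=$(echo "$node_version" | cut -d. -f2)
if [ "$minor" -ge "$required_minor" ]; then
  echo "OK: v1.$minor meets the >= 1.$required_minor requirement"
else
  echo "Unsupported: v1.$minor is older than 1.$required_minor" >&2
  exit 1
fi
```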

Common Issues / Troubleshooting

Access controls aren’t shown on the cluster page
  • Symptom: You don’t see Access your cluster (download/copy kubeconfig options)
  • Likely cause: The cluster is not yet provisioned; the access section appears after provisioning completes
  • Fix: Wait until the cluster status is Provisioned, then refresh the cluster overview page
Workload pool name validation error
  • Symptom: You can’t proceed when entering the workload pool name
  • Likely cause: The name contains characters outside the allowed set of lowercase alphanumerics and dashes
  • Fix: Rename the pool to match the required format (example: workload-pool-1)
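Assuming the rule is exactly "lowercase alphanumerics and dashes" (the console may enforce additional constraints, e.g. on leading/trailing dashes or length), the name can be pre-checked locally before submitting the form:

```shell
# Sketch: pre-check a workload pool name before entering it in the console.
# Assumed pattern: lowercase alphanumerics and dashes, starting and ending with
# an alphanumeric character; the console's exact rule may differ.
valid_pool_name() {
  echo "$1" | grep -Eq '^[a-z0-9]([a-z0-9-]*[a-z0-9])?$'
}

valid_pool_name "workload-pool-1" && echo "ok"        # the documented example passes
valid_pool_name "Workload_Pool" || echo "rejected"    # uppercase and underscore fail
```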