
Summary

This page walks you through creating and using an nscale Slurm cluster from the nscale Console UI: provisioning a cluster, configuring partitions and storage, accessing the login environment, and running jobs with standard Slurm commands. Use this guide if you:
  • Need a managed Slurm environment without installing or operating Slurm yourself
  • Want to provision compute via the Console UI and then submit workloads using familiar Slurm CLI commands (sbatch, srun, squeue, sacct)

Availability

This feature is currently only available for reserved cloud services.

Requirements

  • SSH access to the login environment requires an SSH public key registered in the Console (see the key-generation sketch below if you don't have one yet)
  • Note: Slurm CLI tools (sbatch, srun, squeue, sacct) are preinstalled in the login environment, so you don't need to install them locally; you run them after connecting
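
If you don't yet have a key pair to register, a minimal sketch of generating one locally is shown below; the file name and comment are placeholders, and you paste the contents of the .pub file into the Console:
# Generate an Ed25519 key pair (file name and comment are examples)
ssh-keygen -t ed25519 -f ~/.ssh/nscale_slurm -C "nscale-slurm"
# Print the public key so you can copy it into the Console
cat ~/.ssh/nscale_slurm.pub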

Step-by-Step

  1. Open the Slurm cluster creation flow
    • In the Console left navigation, go to Services → Slurm Cluster → Create New Cluster
  2. Create a cluster
    • Enter a Cluster Name
    • Select the Project the cluster belongs to
    • Choose the Region where the cluster will be deployed
    • Click Next: Partitions
[Image: Slurm Workflow 01]
  3. Configure partitions
    • Set a Partition Name
    • Select a Node Type (instance flavor) from the available options
    • Specify the Node Count (number of nodes in that partition)
    • Add additional partitions if needed
    • Click Next: Storage
  4. Configure storage
    • Choose the Storage Type (currently VAST is supported)
    • Set the Storage Size in GB
    • Optional: set a Mount Path (default is /mnt/storage)
    • Add additional storage resources if required
    • Click Next: SSH Key
[Image: Slurm Workflow 02]
  5. Select an SSH key
    • Choose the SSH public key you uploaded in the Console to enable access to the cluster’s login environment
  6. Configure additional options
    • Optionally enable dedicated infrastructure nodes, which allows creating a cluster with a minimum of 1 node instead of 3
    • Optionally choose whether to have the Nscale HPC stack preinstalled
  7. Review your configuration, create the cluster, and wait for provisioning to complete
[Image: Slurm Workflow 03]
  8. Access the login environment
    • From the cluster details page in the Console, use the provided SSH access details to connect to the login environment (see the connection sketch after these steps)
    • You land in a shell with the Slurm CLI tools available (sbatch, srun, squeue, sacct)
  9. Submit and monitor jobs (from the login environment)
    • Use standard Slurm commands to run and track jobs
    • Jobs appear in the queue, start when resources become available, and can be tracked through completion with sacct
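
A minimal sketch of connecting and running a first check; the user name, host name, and key path are placeholders taken from your cluster details page, and the mount check assumes the default mount path:
# Connect to the login environment (substitute the SSH details from the Console)
ssh -i ~/.ssh/nscale_slurm <user>@<login-host>
# Confirm the scheduler and storage are visible
sinfo
df -h /mnt/storage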

Tips, Best Practices, and Code Examples

  • Use Slurm CLI from the login environment
    • The intended workflow is: provision via Console → connect to login environment → submit jobs via Slurm CLI.
  • Code examples (from the login environment)
Basic connectivity and scheduling sanity checks:
# Run a simple command through Slurm
srun hostname
Monitor the queue:
# View queued/running jobs
squeue
Verify GPU visibility (example output format depends on your cluster):
# Example: show nodes and GPU resources
sinfo -o "%N %G"
View accounting details for a completed job:
# Replace JOBID with your job ID
sacct -j JOBID
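Submit a batch job with sbatch (a minimal sketch; the partition name, resource values, and file name job.sh are placeholders to adjust for the partitions you configured). Example contents of job.sh:
#!/bin/bash
#SBATCH --job-name=hello
#SBATCH --partition=gpu        # replace with a partition name you configured
#SBATCH --nodes=1
#SBATCH --output=%x-%j.out     # stdout goes to <job-name>-<jobid>.out
srun hostname
Submit the script and check its status:
# Queue the job, then list your jobs
sbatch job.sh
squeue -u $USER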