Kultrun is accessed via secure shell (ssh), and the first step is to request an account.
The cluster is hosted at the Department of Astronomy, under the domain kultrun.astro-udec.cl.
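Once an account has been granted, logging in is a standard ssh session. A minimal sketch, where "username" is a placeholder for your assigned account name:

```shell
# Log in to the Kultrun head node (replace "username" with your account name)
ssh username@kultrun.astro-udec.cl
```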
Kultrun is managed by SLURM, a workload manager with three key functions: i) it allocates access to resources (compute nodes) to users; ii) it provides a framework for starting, executing, and monitoring work (normally a parallel job) on the set of allocated nodes; and iii) it manages the queue system.
Jobs can be run in two ways. For testing and small jobs you can run a job interactively. This way you can directly interact with the compute node(s) in real time to make sure your jobs will behave as expected.
The other way, intended for large and long-running jobs, involves preparing a job submission script and submitting it to the queue system.
Kultrun includes three queues, which differ in architecture, number of cores, and processor type. Each queue has been given a name referring to the Mapuche culture. Users should choose a queue based on the resources required and the type of code being run. The features and name of each queue are listed below.
The distributed memory queue: 18 Intel nodes, for a total of 576 cores, intended for standard parallel calculations. Mapu represents the Earth (la tierra) in the Mapuche language.
The shared memory queue: 224 cores (latest-generation Intel processors), intended for very intensive calculations. Ko represents water in the Mapuche language.
The test queue: 2 AMD nodes, for a total of 64 cores, intended for short test runs and serial/interactive jobs. Kutral represents fire (el fuego) in the Mapuche language.
The main SLURM commands (PBS equivalents in parentheses) are:

sbatch (qsub): submit a job script to the queue system
squeue (qstat): show the status of queued and running jobs
scancel (qdel) job_id: cancel a job
sacct -j job_id: show accounting information for a job
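As a sketch of a typical batch workflow using the commands above (the script name job.slurm and the job ID are placeholders):

```shell
sbatch job.slurm        # submit the script; SLURM prints the assigned job ID
squeue -u $USER         # check the status of your jobs in the queue
sacct -j [job_id]       # inspect accounting information for the job
scancel [job_id]        # cancel the job if something went wrong
```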
To start an interactive session, use srun; for example, to request one node with 32 tasks on the kutral queue:

srun -N 1 --ntasks-per-node=32 --partition=kutral_amd --pty bash
A template job submission script (bracketed values are placeholders to fill in):

#!/bin/bash
#SBATCH -J [jobname]                 # job name
#SBATCH -t [HH]:[MM]:[SS]            # wall-time limit
#SBATCH --mem=[size][K|M|G|T]        # memory per node, with unit suffix
#SBATCH -p [partitionNAME]           # partition (queue) to submit to
#SBATCH --mail-type=[flags (NONE, BEGIN, END, FAIL, REQUEUE, ALL)]
#SBATCH --mail-user=[user]           # e-mail address for notifications
#SBATCH -e [name]                    # standard error file
#SBATCH -o [name]                    # standard output file
#SBATCH --ntasks-per-node=[cores]    # tasks (cores) per node
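As an illustration, a filled-in version of the template might look like the following. The job name, time limit, memory, e-mail address, and program name are made-up values; the partition name kutral_amd is taken from the interactive example above:

```shell
#!/bin/bash
#SBATCH -J test_run                  # job name (placeholder)
#SBATCH -t 00:30:00                  # 30-minute wall-time limit
#SBATCH --mem=4G                     # 4 GB of memory per node
#SBATCH -p kutral_amd                # short-test partition
#SBATCH --mail-type=END,FAIL         # e-mail when the job ends or fails
#SBATCH --mail-user=user@example.org # notification address (placeholder)
#SBATCH -e test_run.err              # standard error file
#SBATCH -o test_run.out              # standard output file
#SBATCH --ntasks-per-node=8          # tasks (cores) per node

srun ./my_program                    # launch the program (placeholder) on the allocated cores
```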