Below are some quick, useful tips for submitting jobs and compiling your code.
For any problem, feel free to contact us.
Please use "Kultrun Support" as the subject of the email.
Contact e-mail address:
KULTRUN has been configured to work in a module-based way. This allows users to easily
set up the environment needed to compile and run their codes.
The basic module commands are:
module - shows the list of module commands
module avail - shows a list of "available" modules
module list - shows a list of loaded modules
module load [name] - loads a module
module unload [name] - unloads a module
module help [name] - prints help for a module
module whatis [name] - prints info about the module
module purge - unload all modules
module swap [name1] [name2] - swap two modules
As an example, the output of module avail on KULTRUN looks like this:
-------------------------- /opt/modulos/modulefiles ----------------------------
ansys/20.1 fftw/3.3.8_intel_mpi gsl/2.4_intel mpich/1.5 python/3.5.6
casa/5.3.0-143.el7 fftw/3.3.8_openmpi hdf5/1.10.2 mpich/3.2.1 python/3.5.6_openmpi
fftw/2.1.5 gcc/4.9.4 hdf5/1.10.2_intel openmpi/1.10.7
fftw/2.1.5_intel gcc/5.5.0 intel/2018.3.222 openmpi/2.1.3
fftw/2.1.5_intel_mpi gildas/jun19c intel/impi-2018.3.222 openmpi/3.1.0
fftw/3.3.8 gsl/2.4 mercurial/4.6.1 python/2.7.16_intel
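As an illustration, a typical session loads the required modules before compiling. The sketch below is only an example: the source file name (my_code.c), the mpiicc wrapper and the compiler flags are assumptions, not part of the KULTRUN documentation; adapt them to your own code.

module purge                        # start from a clean environment
module load intel/2018.3.222        # Intel compilers
module load intel/impi-2018.3.222   # Intel MPI
module load gsl/2.4_intel           # GSL built with the Intel toolchain
module list                         # verify what is loaded
mpiicc -O2 my_code.c -o code.exe -lgsl -lgslcblas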
Example batch script for an MPI job on the mapu partition, built with the Intel toolchain:

#!/bin/bash
#SBATCH --job-name=test
#SBATCH --partition=mapu
#SBATCH -N 4                     # number of nodes
#SBATCH --ntasks-per-node=32     # MPI tasks per node (128 tasks in total)

# Load the same modules used to compile the code
module load intel/2018.3.222
module load gsl/2.4_intel
module load hdf5/1.10.2_intel
module load fftw/2.1.5_intel

cd /home/user1/run_dir
srun --mpi=pmi2 ./code.exe input > output
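Assuming the script above is saved as job.sh (the file name is arbitrary), it is submitted and monitored with the standard SLURM commands:

sbatch job.sh          # submit the job; SLURM prints the assigned job ID
squeue -u $USER        # list your pending and running jobs
scancel <jobid>        # cancel a job if needed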
Example batch script for an MPI job on a single node of the kutral_amd partition, using OpenMPI:

#!/bin/bash
#SBATCH --job-name=test_name
#SBATCH --partition=kutral_amd
#SBATCH -N 1                     # number of nodes
#SBATCH --ntasks-per-node=32     # MPI tasks per node

module load openmpi/2.1.3

cd /home/user2/run_dir
mpiexec ./code input > output
Example batch script that runs the code in a node-local scratch directory and copies the results back when the job finishes:

#!/bin/bash
#SBATCH --job-name=test_name
#SBATCH --partition=kutral_amd
#SBATCH -N 1                     # number of nodes
#SBATCH --ntasks-per-node=32     # MPI tasks per node

module load openmpi/2.1.3

cd $SLURM_SUBMIT_DIR

# Create a scratch directory for this job and copy the input files into it
export SCRDIR=/scratch/${SLURM_JOB_ID}
mkdir $SCRDIR
cp -rp * $SCRDIR/
cd $SCRDIR

# Run the code in scratch
mpiexec ./code input > output

# Copy the results back to the submission directory and remove the scratch copy
cp -rp * $SLURM_SUBMIT_DIR
cd $SLURM_SUBMIT_DIR
rm -rf $SCRDIR
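The script above assumes that /scratch exists on the compute node and that the mkdir call succeeds. A small safeguard, shown here only as a sketch and not part of the original example, is to abort the job if the scratch directory cannot be created:

mkdir $SCRDIR || { echo "could not create $SCRDIR" >&2; exit 1; }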
Example batch script for a hybrid MPI+OpenMP job: 8 MPI tasks spread over 2 nodes, with 5 OpenMP threads per task:

#!/bin/bash
#SBATCH --job-name=hybrid
#SBATCH --output=hybrid_job.txt
#SBATCH --ntasks=8               # total number of MPI tasks
#SBATCH --cpus-per-task=5        # OpenMP threads per MPI task
#SBATCH --nodes=2                # number of nodes

# Match the number of OpenMP threads to the cores allocated per task
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK
mpirun ./hello_hybrid.mpi
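With these settings SLURM distributes the 8 tasks over the 2 nodes (4 per node) and reserves 5 cores for each task, which OMP_NUM_THREADS picks up through SLURM_CPUS_PER_TASK. As a quick sketch (not part of the original example), a line like the following placed before the mpirun call prints where each task landed and how many cores it was given:

srun bash -c 'echo "task $SLURM_PROCID on $(hostname) with $SLURM_CPUS_PER_TASK cpus"'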