Epyc Cluster

The Epyc cluster consists of 16 compute nodes based on the AMD EPYC 7662 64-core processor, with 128 cores per node.

In total, the cluster provides 2048 cores (16 nodes × 128 cores).

  • Head node FQDN: epyc.simcenter.utc.edu
  • Other nodes: epyc{00-15}

Login procedure

To log in to the Epyc cluster, run:

$ ssh epyc
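
The short hostname works if epyc resolves on your network or is defined in your SSH configuration. A minimal ~/.ssh/config entry might look like the following (the username is a placeholder; substitute your own account name):

  Host epyc
      HostName epyc.simcenter.utc.edu
      User your_username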

Submitting Slurm jobs

Jobs are launched via a submission script. An example script:

#!/bin/bash
# execute in the general partition
#SBATCH --partition=general
# execute with 40 processes/tasks
#SBATCH --ntasks=40
# execute on 4 nodes
#SBATCH --nodes=4
# execute 4 threads per task
#SBATCH --cpus-per-task=4
# maximum run time is 30 minutes
#SBATCH --time=00:30:00
# job name is my_job
#SBATCH --job-name=my_job
# load the environment
module load openmpi
module load ...
# application execution
mpiexec <application> <command line arguments>
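
The request in the example script can be sanity-checked with a little shell arithmetic: 40 tasks at 4 CPUs each need 160 cores, spread over 4 nodes, i.e. 40 cores per node, which fits comfortably within a 128-core node. A sketch:

```shell
# Sanity-check the example script's resource request (values copied from the script above)
ntasks=40
nodes=4
cpus_per_task=4

total_cores=$((ntasks * cpus_per_task))    # total cores requested
cores_per_node=$((total_cores / nodes))    # cores needed on each node

echo "total=${total_cores} per_node=${cores_per_node}"
```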

For non-MPI applications, the srun process launcher can be used instead:

$ srun <application> <command line arguments>
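
For example, hostname can stand in for a real application to confirm where tasks run (the partition and task count here are chosen purely for illustration):

$ srun --partition=general --ntasks=4 hostname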

To submit the script for execution on the compute nodes use the following command:

$ sbatch job_script.sh
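
Once submitted, the job can be monitored and, if necessary, cancelled with the standard Slurm tools (the job ID below is a placeholder; sbatch prints the real one at submission time):

$ squeue -u $USER
$ scancel <job_id>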

An exhaustive description of the sbatch command can be found in the official Slurm documentation.