Firefly Cluster

The Firefly cluster consists of 4 compute nodes based on the Intel Xeon Gold 6148 CPU.

Each node in the cluster is also equipped with four NVIDIA Tesla V100 GPUs with 32 GB of memory each.

  • Head node FQDN: firefly.simcenter.utc.edu
  • Other nodes: firefly{00-03}

Login procedure

To log into the Firefly cluster, use the following command:

$ ssh firefly
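
The short host name above assumes it resolves from your machine. If it does not, an alias can be added to your SSH configuration using the head node's FQDN; a minimal sketch, with a placeholder username:

# ~/.ssh/config entry so that `ssh firefly` reaches the head node
# (replace your_username with your actual account name)
Host firefly
    HostName firefly.simcenter.utc.edu
    User your_username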

Submitting Slurm jobs

Jobs are launched by submitting a job script to Slurm.

A typical Slurm job script will run on this cluster with one modification: add the following line to your job script right before loading the required modules:

source /opt/ohpc/admin/lmod/8.2.10/init/bash
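
After sourcing this file, the standard Lmod commands are available in the job script; for example, to see which modules can be loaded and which are currently active:

module avail
module list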

An example script is as follows:

#!/bin/bash
 
# execute in the general partition
#SBATCH --partition=general
 
# execute with 40 processes/tasks
#SBATCH --ntasks=40
 
# execute on 4 nodes
#SBATCH --nodes=4
 
# execute 4 threads per task
#SBATCH --cpus-per-task=4
 
# maximum time is 30 minutes
#SBATCH --time=00:30:00
 
# job name is my_job
#SBATCH --job-name=my_job
 
# only use if GPU access is required; 2 GPUs requested
#SBATCH --gres=gpu:2
 
# load environment
source /opt/ohpc/admin/lmod/8.2.10/init/bash
module load openmpi
module load ...
 
# application execution
mpiexec <application> <command line arguments>

For non-MPI applications, the srun process launcher is available:

$ srun <application> <command line arguments>
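
As a concrete sketch, the application name and input file below are placeholders; the partition, task count, and time limit flags mirror the options used in the batch script above:

$ srun --partition=general --ntasks=1 --time=00:10:00 ./my_app input.dat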

To submit the script for execution on the compute nodes, use the following command:

$ sbatch job_script.sh
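
After submission, sbatch prints the assigned job ID. The job can then be monitored with squeue and cancelled with scancel if necessary; by default, output is written to a file named slurm-<jobid>.out in the submission directory:

$ squeue -u $USER
$ scancel <jobid>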

An exhaustive description of the sbatch command can be found in the official Slurm documentation.