Firefly Cluster

The Firefly cluster consists of 4 compute nodes based on the Intel Xeon Gold 6148 CPU.

Each node in the cluster is also equipped with four Nvidia Tesla V100 GPUs with 32 GB of memory each.

  • Head node FQDN:
  • Other nodes: firefly{00-03}

Login procedure

To log into the Firefly cluster, use the following command:

 $ ssh firefly
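The short hostname "firefly" works only if your machine can resolve it; otherwise an entry in ~/.ssh/config can provide the alias. A minimal sketch, with the head node's address left as a placeholder (it is not given above) and a hypothetical username:

```
# ~/.ssh/config -- hypothetical entry; replace both placeholders
Host firefly
    HostName <head-node-address>
    User <your-username>
    ForwardX11 yes    # needed if you use the --x11 srun examples below
```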

Submitting Slurm jobs

Batch jobs are launched with a job submission script.

A typical Slurm job script should run on this cluster with one modification: add the following line to your job script immediately before loading any required modules:

 source /opt/ohpc/admin/lmod/8.2.10/init/bash

An example script is as follows:

#!/bin/bash
# execute in the general partition
#SBATCH --partition=general
# execute with 40 processes/tasks
#SBATCH --ntasks=40
# execute on 4 nodes
#SBATCH --nodes=4
# execute 4 threads per task
#SBATCH --cpus-per-task=4
# maximum time is 30 minutes
#SBATCH --time=00:30:00
# job name is my_job
#SBATCH --job-name=my_job
# only use if GPU access is required; 2 GPUs requested
#SBATCH --gres=gpu:2
# load environment
source /opt/ohpc/admin/lmod/8.2.10/init/bash
module load openmpi
module load ...
# application execution
mpiexec <application> <command line arguments>
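As a sanity check on the request above, the totals can be worked out: 40 tasks at 4 CPUs per task, spread over 4 nodes, comes to 40 CPUs per node. That matches dual-socket Xeon Gold 6148 nodes (2 x 20 cores) without counting hyper-threads; the dual-socket layout is an assumption, not stated above. The arithmetic, as a quick shell check:

```shell
# Sanity-check the resource request from the script above:
# 40 tasks x 4 cpus-per-task, spread over 4 nodes.
ntasks=40
cpus_per_task=4
nodes=4
total=$(( ntasks * cpus_per_task ))
per_node=$(( total / nodes ))
echo "total CPUs: $total, per node: $per_node"
# -> total CPUs: 160, per node: 40
```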

For non-MPI applications, the srun process launcher is available:

 $ srun <application> <command line arguments>

To submit the script for execution on the compute nodes, use the following command:

 $ sbatch <job script>
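A minimal end-to-end sketch of the batch workflow follows; the script body is a stripped-down variant of the example above (the partition name and lmod path are taken from it), and the job simply prints the hostname of the node it ran on:

```shell
# Write a minimal job script, then submit it with sbatch.
cat > my_job.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=general
#SBATCH --ntasks=1
#SBATCH --time=00:05:00
source /opt/ohpc/admin/lmod/8.2.10/init/bash
hostname
EOF
# sbatch my_job.sh       # queues the job; prints the assigned job ID
# squeue -u "$USER"      # check its state while it is queued or running
head -n 1 my_job.sh      # the script must start with a shebang
# -> #!/bin/bash
```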

An exhaustive description of the “sbatch” command can be found in the official documentation.

Interactive Slurm Jobs

Interactive jobs are launched with the “srun” command.

To run an interactive job using GPUs, a user might execute:

 $ srun --x11 --time=1-00:00:00 --nodelist=firefly00 --gres=gpu:4 --ntasks=8 --pty /bin/bash -l

To run an interactive job without GPUs, a user might execute:

 $ srun --x11 --time=1-00:00:00 --nodelist=firefly01 --ntasks=16 --pty /bin/bash -l
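The --time value in both examples uses Slurm's days-hours:minutes:seconds form, so 1-00:00:00 is a 24-hour limit. A quick conversion to confirm:

```shell
# Convert the Slurm time limit "1-00:00:00" (D-HH:MM:SS) to minutes.
timespec="1-00:00:00"
days=${timespec%%-*}                 # "1"
hms=${timespec#*-}                   # "00:00:00"
IFS=: read -r h m s <<< "$hms"       # seconds are ignored below
minutes=$(( days * 24 * 60 + 10#$h * 60 + 10#$m ))
echo "$minutes minutes"
# -> 1440 minutes
```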

An exhaustive description of the “srun” command can be found in the official documentation.