Ansys Fluent, CFX
Versions
Currently installed versions include: 2022r2, 2023r1, 2024r2. In the future, autumn releases will be installed. The installation path for each version is: /opt/software/packages/ansys/<version>
Licensing
A license is required to use Ansys Fluent and CFX. Komondor has access to a central license server with a finite number of licenses.
License types (CFD)
For CFD workloads, two license pools are typically involved:
cfd_base: covers up to 4 CPU cores
anshpc: above 4 cores, one license per additional core
Example: a simulation using 16 CPU cores requires cfd_base + 12x anshpc.
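The required counts for a given core count can be computed directly in the shell; the sketch below simply encodes the cfd_base + anshpc rule above (N is the planned number of cores):
N=16                           # planned number of CPU cores
HPC=$(( N > 4 ? N - 4 : 0 ))   # anshpc licenses beyond the 4 cores covered by cfd_base
echo "--licenses=cfd_base@license:1,anshpc@license:${HPC}"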
Checking license availability (vendor tools)
License availability can be checked using the license manager tools (example):
/opt/software/packages/ansys/2024r2/v242/licensingclient/linx64/lmutil lmstat -a -c 1055@license.komondor.hpc.einfra.hu | grep -e "anshpc" -e "cfd_base"
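To reduce the output to the free counts only, the lmstat lines can be summarized with awk; the sketch below assumes the standard FlexLM line format "Users of <feature>: (Total of X licenses issued; Total of Y licenses in use)":
/opt/software/packages/ansys/2024r2/v242/licensingclient/linx64/lmutil lmstat -a -c 1055@license.komondor.hpc.einfra.hu | awk '/Users of (anshpc|cfd_base):/ { sub(":", "", $3); print $3, "free:", $6 - $11 }'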
Queueing for licenses with Slurm (recommended)
If you start Fluent/CFX without reserving licenses first, the job can fail at startup when the solver cannot check out the required licenses. To avoid wasting compute allocation and to wait for licenses in the queue, request licenses as Slurm resources.
List the configured Slurm license pools:
scontrol show license
Example output (values may change):
LicenseName=anshpc@license
Total=706 Used=0 Free=246 Reserved=0 Remote=yes
LastConsumed=460 LastDeficit=460 LastUpdate=2026-02-18T14:38:56
LicenseName=cfd_base@license
Total=275 Used=0 Free=262 Reserved=0 Remote=yes
LastConsumed=13 LastDeficit=13 LastUpdate=2026-02-18T14:38:56
To request licenses, use --licenses (or -L). Example (reserve 2 HPC licenses):
#SBATCH --licenses=anshpc@license:2
For CFD runs using N CPU cores, request:
cfd_base@license:1 (always), and
anshpc@license:(N-4) if N > 4
Example (16 cores total):
#SBATCH --licenses=cfd_base@license:1,anshpc@license:12
When you request licenses, Slurm will keep the job in PENDING state until the requested amount becomes available (reason: Licenses).
To see the pending reason:
squeue -j <JOBID> -o "%.18i %.9P %.2t %R"
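The same reason is also visible in the full job description:
scontrol show job <JOBID> | grep -i reason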
Creating a Journal file in Ansys Fluent
File -> Write -> Start Journal
Apply the necessary settings using the Text User Interface, run a few iterations of the simulation, then save the results.
File -> Write -> Stop Journal
You can edit and modify the created Journal file using a text editor (e.g. number of iterations, save location, etc.).
Alternative method:
A journal file is simply a text file containing the commands, saved with a .jou extension. Writing one from scratch is therefore possible, and it can be more flexible than the GUI method mentioned above. A simple example can be seen below:
;changing working directory
sync-chdir <Path to working directory>
;Reading in the case file
/rc "<Path to case file>"
; an alternative form of this is "/file/read-case fullcase.cas.gz"
;initialising the system
/solve/init/init
;Setting the number of time-steps (first number) and the max
;number of iterations per step (second number)
/solve/dual-time-iterate 1000 10
;Outputting solver performance data upon completion of the simulation
/parallel timer usage
;Writing the final case file (overwriting if required)
/wc "<Path for case file>"
yes
;Writing the final data file (overwriting if required)
/wd "<Path for data file>"
yes
; an alternative form of this is "/file/write-data output_results.data ok"
/exit ok
For transient simulations with adaptive time steps or more complex setups, the commands have to be recorded in the GUI. Steady-state simulations can be run with the following command with the number of maximum iterations specified:
solve/iterate 100
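Putting it together, a minimal steady-state journal might look like the sketch below (file names are placeholders):
;Reading in the case file
/file/read-case "steady.cas.gz"
;initialising the system
/solve/init/init
;running at most 100 iterations
solve/iterate 100
;Writing case and data together (answering yes to a possible overwrite prompt)
/file/write-case-data "steady_result.cas.gz"
yes
/exit ok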
Other useful commands:
/file/import/<format>
/file/export/<format>
/file/export-to-cfd-post
/file/transient-export/<format>
/file/confirm-overwrite?
/file/read-case
/file/read-data
/file/read-macro
/file/read-profile
/file/read-transient-table
/file/write-case
/file/write-case-data
/mesh/check
/mesh/mesh-info
/mesh/memory-usage
/mesh/quality
/mesh/reorder
/mesh/scale
/mesh/size-info
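For example, the confirm-overwrite? setting can be turned off at the top of a batch journal so that no interactive yes answers are needed later (a small sketch, assuming the default prompting behavior):
;disable overwrite confirmation questions in batch runs
/file/confirm-overwrite? no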
Creating a SLURM script
#!/bin/bash
#SBATCH --job-name=<job-name>
#SBATCH --output=%j.out
#SBATCH --error=%j.err
#SBATCH --partition=cpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --time=00:10:00
#SBATCH --account=<project-name>
#SBATCH --licenses=cfd_base@license:1,anshpc@license:<HPC_LICENSES>
### NOTICE: DO NOT use the flag "--cpus-per-task="
### It will break the job setup, so MPI/OpenMP jobs will not run correctly
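# MPI/launcher environment: Slurm's PMI2, one thread per rank, no SSH spawning,
# and the system libfabric instead of the Intel MPI built-in one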
export SLURM_MPI_TYPE=pmi2
export OMP_NUM_THREADS=1
export SSH_SPAWN=0
export I_MPI_OFI_LIBRARY_INTERNAL=0
export INTELMPI_ROOT=/opt/software/packages/intel/mpi/2021.9.0/
export SLURM_CPU_BIND=none
if [ $SLURM_JOB_NUM_NODES -eq 1 ]
then
export I_MPI_FABRICS=shm
fi
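# Collect the allocated node names into a host file for Fluent (-cnf)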
rm -rf fluentHostfile
FLUENTNODES="$(scontrol show hostnames)"
echo $FLUENTNODES > fluentHostfile
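# Central license server used by the solver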
export ANSYSLMD_LICENSE_FILE=1056@license.komondor.hpc.einfra.hu
/opt/software/packages/ansys/2023r1/ansys_inc/v231/fluent/bin/fluent 3ddp -g -t $SLURM_NTASKS -mpi=intel -mpiopt="-genv FI_PROVIDER cxi -genv I_MPI_PIN 1 -genv I_MPI_PIN_CELL core -genv FI_PROVIDER_PATH /opt/cray/libfabric/1.15.2.0/lib64/" -i <journal-file> -cnf=fluentHostfile -scheduler_tight_coupling -pcheck=0
Change the parts marked as <variable> according to your own setup.
If necessary, change the parameters below:
2d – 2D, single-precision solver
3d – 3D, single-precision solver
2ddp – 2D, double-precision solver
3ddp – 3D, double-precision solver
-g – runs without GUI and graphics
-i <journal> – reads in a Journal file
Other commands: Ansys Fluent User’s Guide (2023/R2) / 4.1.5. Command Line Startup Options
Note
You can issue TUI commands directly after opening Fluent. In this case, the Journal file is unnecessary.
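In batch mode the same idea works by feeding TUI commands to Fluent on standard input instead of -i (a minimal sketch with the launch options shortened; it assumes Fluent reads TUI input from stdin when started with -g):
/opt/software/packages/ansys/2023r1/ansys_inc/v231/fluent/bin/fluent 3ddp -g -t $SLURM_NTASKS << 'EOF'
/solve/init/init
solve/iterate 100
/exit ok
EOF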
Running a job
Upload the necessary files to the directory you want to use (e.g. using WinSCP).
Log in to the server via SSH.
Use the following commands:
cd <working_directory>
sbatch <script_name>.sbatch
You can monitor your submitted jobs with the following command:
squeue -u <user-name>
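After the job has finished, its final state and runtime can also be checked with sacct:
sacct -j <JOBID> --format=JobID,JobName,State,Elapsed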
CFX
Unlike Fluent, CFX works with multiple executables for different tasks, so running a simulation is simpler and does not require a journal file. A simulation prepared in a definition file can be run with the following command:
cfx5solve -batch -def <path to .def file> -chdir <path to working directory on scratch> -parallel -part <number of cores> -start-method "Open MPI Local Parallel" -par-dist <hostfile>
The -chdir option is required; without it the solver will not run.
Creating a SLURM script
#!/bin/bash
#SBATCH --job-name=<job-name>
#SBATCH --output=%j.out
#SBATCH --error=%j.err
#SBATCH --partition=cpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=16
#SBATCH --time=10:10:00
#SBATCH --account=<project-name>
#SBATCH --licenses=cfd_base@license:1,anshpc@license:<HPC_LICENSES>
### NOTICE: DO NOT use the flag "--cpus-per-task="
### It will break the job setup, so MPI/OpenMP jobs will not run correctly
export SLURM_MPI_TYPE=pmi2
export OMP_NUM_THREADS=1
export SSH_SPAWN=0
export I_MPI_OFI_LIBRARY_INTERNAL=0
export INTELMPI_ROOT=/opt/software/packages/intel/mpi/2021.9.0/
export SLURM_CPU_BIND=none
if [ $SLURM_JOB_NUM_NODES -eq 1 ]
then
export I_MPI_FABRICS=shm
fi
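# CFX expects the host list in host*cores format (passed via -par-dist)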
rm -rf cfxHostfile_2
CFXNODES="$(scontrol show hostnames)"
echo "${CFXNODES}*$SLURM_NTASKS," > cfxHostfile_2
export ANSYSLMD_LICENSE_FILE=1056@license.komondor.hpc.einfra.hu
/opt/software/packages/ansys/<version>/ansys_inc/<version>/CFX/bin/cfx5solve -batch -def <Path to .def file> -chdir <Path to working directory> -parallel -part $SLURM_NTASKS -start-method "Open MPI Local Parallel" -par-dist cfxHostfile_2
The results can be exported using cfx5export; the -help flag lists the available formats and their options:
/opt/software/packages/ansys/<version>/ansys_inc/<version>/CFX/bin/cfx5export -help
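For example, exporting a results file to CGNS might look like this (a sketch; the exact flag set depends on the installed version, so verify it against the -help output):
/opt/software/packages/ansys/<version>/ansys_inc/<version>/CFX/bin/cfx5export -cgns <path to .res file>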