Video Tutorials
Code Refactoring Tutorial: Practical Refactoring - How to clean code in many small steps
C++ Recap Playlist: Back to Basics CppCon 2020
Course Materials
Computing at Scale (2025) Course Materials: GitHub Repository
module unuse /opt/scorec/spack/lmod/linux-rhel7-x86_64/Core
module use /opt/scorec/spack/v0154_2/lmod/linux-rhel7-x86_64/Core
module load cmake/3.20.0
Note: Ensure that you follow the steps faithfully to avoid issues and achieve the desired results.
The workflow uses the .obj surface meshes M11_cg_bone_lpelvis.obj and m11_cg_jnt_lsi.obj, opened in Autodesk Meshmixer and saved as .prt part files (e.g. screw-7-74.prt) after removing the Split Bodies and Datum Planes, then exported as a .x_b file.

Importing Geometry:

Preparing for meshing: Save the mesh as a .sms file. (Remember to return to the .smd file for generating a new mesh if required.)

Refining Mesh:
| Type | Sub-Type | Value |
|---|---|---|
| Mesh Size | Absolute | 0.001 |
| Mesh Curvature Refinement | Absolute | 0.005 |
| Volume Shape Metric | Aspect Ratio | 3.0 |
| Allow Refinement For Shape | Relative | 0.0001 |
Quality Check: Check the mesh quality by opening the .sms file and navigating to the Display Tab.

Exporting Mesh: Export the mesh as a (.inp) file.

Meshing Screws Separately:
Download the latest Hugo binary from here or v0.119 directly from here.
Extract the zip file to a folder of your choice, for example C:\Hugo\bin.
Add the path to the Hugo binary (for example C:\Hugo\bin) to your PATH environment variable.
Open a new command prompt and run hugo version to verify that Hugo is installed properly.
You can now delete the zip file.
To work on your site, clone it and start the local server:
git clone your-repository-url
cd your-repository-name
hugo server
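The verification step above can be scripted; a small helper like this (the `have` function name is my own) avoids repeating `command -v`:

```shell
# have: check whether a command is available on PATH.
have() { command -v "$1" >/dev/null 2>&1; }

# Use it to confirm the install before starting the server.
if have hugo; then
  hugo version
else
  echo "hugo is not on PATH; re-check the PATH entry" >&2
fi
```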
.git clone https://github.com/kokkos/kokkos.git
module use /opt/scorec/spack/rhel9/v0201_4/lmod/linux-rhel9-x86_64/Core/
module load gcc/12.3.0-iil3lno mpich/4.1.1-xpoyz4t cuda/12.1.1-zxa4msk
module load cmake/3.20.0
cmake -S . \
-B build \
-DCMAKE_CXX_COMPILER=g++ \
-DBUILD_SHARED_LIBS=ON \
-DCMAKE_INSTALL_PREFIX=/lore/<username>/Kokkos/Install \
-DKokkos_ENABLE_OPENMP=ON
Build and Install: Run the config.sh file with the command . config.sh, go to the build directory (cd build), and run make install.
Set the environment variable: Add the installed Kokkos library directory to the LD_LIBRARY_PATH environment variable with the following command:
export LD_LIBRARY_PATH=/lore/<username>/Kokkos/Install/lib64:$LD_LIBRARY_PATH
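If you put that export in your shell rc, the path gets duplicated on every new shell. A sketch of an idempotent variant (the `ld_path_add` helper name is mine; the path is the install prefix used above):

```shell
# Prepend a directory to LD_LIBRARY_PATH only if it is not already present.
ld_path_add() {
  case ":${LD_LIBRARY_PATH:-}:" in
    *":$1:"*) ;;  # already there, nothing to do
    *) export LD_LIBRARY_PATH="$1${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}" ;;
  esac
}

ld_path_add "/lore/<username>/Kokkos/Install/lib64"
```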
The following instructions are for building a new Kokkos library for each exercise. For more details, check Kokkos Tutorials.
git clone https://github.com/kokkos/kokkos-tutorials.git
module use /opt/scorec/spack/rhel9/v0201_4/lmod/linux-rhel9-x86_64/Core/
module load gcc/12.3.0-iil3lno mpich/4.1.1-xpoyz4t cuda/12.1.1-zxa4msk
Find the GPU architecture of your machine: To find out the GPU architecture of your machine, follow this.
Build with make: Inside the kokkos-tutorials folder, go to the Exercises folder. Each exercise folder contains a Makefile to build that exercise; open it and make the following changes.
KOKKOS_PATH = /path/to/kokkos
KOKKOS_DEVICES = "<your GPU language>"
KOKKOS_ARCH = "<your GPU Architecture>"
For example,
KOKKOS_PATH = /lore/<username>/Kokkos/kokkos
KOKKOS_DEVICES = "Cuda"
KOKKOS_ARCH = "Ada89"
After making changes, run the command make -j8
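To pick the KOKKOS_ARCH value, you can read the compute capability with `nvidia-smi --query-gpu=compute_cap --format=csv,noheader` and map it to a Kokkos architecture name. A sketch covering a few common GPUs (the helper function is mine; check the Kokkos documentation for the full list):

```shell
# Map a CUDA compute capability to the corresponding KOKKOS_ARCH name.
cc_to_kokkos_arch() {
  case "$1" in
    7.0) echo Volta70 ;;
    7.5) echo Turing75 ;;
    8.0) echo Ampere80 ;;
    8.6) echo Ampere86 ;;
    8.9) echo Ada89 ;;
    9.0) echo Hopper90 ;;
    *)   echo unknown; return 1 ;;
  esac
}

cc_to_kokkos_arch 8.9   # prints Ada89, the value used in the example above
```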
MFEM is a free, lightweight, scalable C++ library for finite element methods.
MFEM installation information can be found here. Follow the installation instructions appropriate to your needs.
A more detailed guide can be found in this README file; it may be needed to build MFEM with custom options, for example linking with other libraries.
git clone https://github.com/mfem/mfem.git
or get the tarball from here.
module unuse /opt/scorec/spack/lmod/linux-rhel7-x86_64/Core
module use /opt/scorec/spack/v0181_1/lmod/linux-rhel7-x86_64/Core
module use /opt/scorec/modules
module load gcc/11.2.0 mpich/4.0.2
module load cmake/3.20
Install hypre and metis using these instructions from the MFEM website. The instructions show the installation process for metis-4.0, but I used metis-5.1.0 and it worked fine.
Create a configuration file with the following content:
cmake -S mfem-4.6 -B mfem-build \
-DMETIS_DIR=metis-5.1.0 \
-DMFEM_USE_MPI=ON \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DCMAKE_INSTALL_PREFIX=/lore/<your_username>/mFEM/install/mfem-mpi
Change <your_username> to your username. It's preferable to use the /lore directory for installation.
Read more about storage management on SCOREC machines in the FAQ section.
Omega_h is a C++14 library that implements tetrahedron and triangle mesh adaptivity, with a focus on scalable HPC performance using (optionally) MPI and OpenMP or CUDA. It is intended to provide adaptive functionality to existing simulation codes.
This is a fork of the original Omega_h repository from Sandia National Laboratories. The fork is maintained by SCOREC.
A compiled Omega_h is available at /lore/mersoj/laces-software/build/PASCAL61/omega_h/install/, where PASCAL61 is the GPU architecture. Find your GPU architecture with this tutorial. Another compiled copy is at /lore/hasanm4/Omega_H/, and the Gmsh executable is in /lore/hasanm4/Gmsh/.

Remember to add the shared libraries of gmsh and omega_h to your LD_LIBRARY_PATH environment variable if you are using the compiled version of Omega_h.
# you may also need to remove all the previously loaded modules
#module purge
module unuse /opt/scorec/spack/lmod/linux-rhel7-x86_64/Core
module use /opt/scorec/spack/v0181_1/lmod/linux-rhel7-x86_64/Core
module use /opt/scorec/modules
module load gcc/11.2.0 mpich/4.0.2
module load fftw/3.3.10
module load cuda/11.4
module load cmake/3.20
# if you want to use Symmetrix with it
module load simmetrix-simmodsuite/2023.1-230907dev # or any other version
git clone https://github.com/SCOREC/omega_h.git
cd omega_h
cmake -S . -B build \
  -DCMAKE_CXX_COMPILER=`which mpicxx` \
  -DCMAKE_C_COMPILER=`which mpicc` \
  -DCMAKE_INSTALL_PREFIX=/lore/<yourUsername>/Omega_H_OMP/ \
  -DGmsh_INCLUDE_DIRS=/lore/<yourUsername>/Gmsh/include/ \
  -DKokkos_DIR=/lore/<yourUsername>/Kokkos/kokkosInstall/lib64/cmake/Kokkos \
  -DCMAKE_BUILD_TYPE=Debug \
  -DOmega_h_DBG=OFF \
  -DOmega_h_USE_SimModSuite=on \
  -DSIM_MPI=mpich4.0.2
# The last two flags are only needed if you want to compile with Simmetrix
# support; drop them otherwise. (A comment line cannot sit inside the
# backslash-continued command, so they are listed last here.)
Important notes on the flags:
- Change the CMAKE_INSTALL_PREFIX to your preferred directory.
- If you want to use Gmsh, change the Gmsh_INCLUDE_DIRS to the directory where you have Gmsh compiled and installed; if you don't, remove the line.
- If you want to use CUDA or OpenMP, change the Kokkos_DIR to the directory where you have Kokkos installed; CUDA/OpenMP support depends on the Kokkos installation. If you don't want to use CUDA or OpenMP, remove the line.
- If you want to compile in release mode, change the CMAKE_BUILD_TYPE to Release or RelWithDebInfo.
. <yourConfigFile>
cd build
make -j8 install
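After `make -j8 install`, a quick way to confirm that the installed library resolves all of its shared-library dependencies (the helper is mine; the path below assumes the install prefix used above and a `libomega_h.so` name, so adjust it to what was actually installed):

```shell
# Report whether all shared-library dependencies of a file resolve.
check_resolved() {
  [ -f "$1" ] || { echo "no such file: $1" >&2; return 1; }
  if ldd "$1" | grep -q 'not found'; then
    echo "unresolved dependencies in $1" >&2
    return 1
  fi
  echo "ok: $1"
}

check_resolved "/lore/<yourUsername>/Omega_H_OMP/lib64/libomega_h.so" || true
```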
simmetrixModel2Parasolid is a tool to convert native Simmetrix models (.smd files) to Parasolid (.x_t files). It uses the Simmetrix Parasolid interface. To use it on SCOREC rhel9 machines, load the modules:
module use /opt/scorec/spack/rhel9/v0201_4/lmod/linux-rhel9-x86_64/Core/
module load gcc/12.3.0-iil3lno mpich/4.1.1-xpoyz4t cuda/12.1.1-zxa4msk
module load simmetrix-simmodsuite/2025.0-241016dev-vafjs2q
and then run the tool:
/lore/hasanm4/wsources/simModel2parasolid/build/simModel2parasolid simModelName.smd parasolidName.x_t
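If you have several models to convert, a loop saves retyping. A sketch (the function is mine; it defaults to the tool path above):

```shell
# Convert every .smd model in the current directory to a .x_t Parasolid file.
convert_all_smd() {
  tool="${1:-/lore/hasanm4/wsources/simModel2parasolid/build/simModel2parasolid}"
  for m in *.smd; do
    [ -e "$m" ] || continue          # no .smd files in this directory
    "$tool" "$m" "${m%.smd}.x_t"     # model.smd -> model.x_t
  done
}

convert_all_smd   # uses the default tool path
```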
To convert Parasolid files (.x_t, which can also be exported from NX parts) to Simmetrix models (.smd):
module use /opt/scorec/spack/rhel9/v0201_4/lmod/linux-rhel9-x86_64/Core/
module load simmetrix-simmodsuite/2025.0-241016dev-vafjs2q
module load pumi/develop-simmodsuite-2025.1-250507dev-int32-shared-g2attww
module load simmetrix/simModeler
Open the simmodeler GUI and import the Parasolid file. Go to the Prepare tab and select Create Model, Make Model, or Create Nonmanifold Model depending on the version. Make sure to select Nonmanifold. Now, save the model as a .smd file. This will create a new Parasolid file model-nat.x_t in the same directory. Run

simTranslate model-nat.x_t model.smd model-translated.smd

and use the model-translated.smd file in the Simmetrix tools.
This Jupyter notebook shows how to open the Beams3D and BMW outputs and plot the magnetic fields. It is found that both of them generate the same magnetic field.
Note: To find the RHEL version of the machine you are on, you can use the following command:
cat /etc/redhat-release
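The same check can be scripted if you keep one environment file for both OS generations (the helper function is mine; the module paths are the ones used on this page):

```shell
# Extract the RHEL major version from a redhat-release style file.
rhel_major() {
  sed -n 's/.*release \([0-9][0-9]*\).*/\1/p' "${1:-/etc/redhat-release}"
}

# Example: pick the right module tree based on the detected version.
if [ "$(rhel_major 2>/dev/null)" = 9 ]; then
  echo "use the rhel9 module tree"
else
  echo "use the rhel7 module tree"
fi
```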
This is the recommended way to install BMW since most of the SCOREC machines are now RHEL9. This process only works on RHEL9 machines at SCOREC and was tested on 13th June 2024. Some modules may change in the future.
cd to your working directory (/lore/<username>/ recommended; see more details about space management here) and clone the repository:
git clone https://github.com/ORNL-Fusion/Stellarator-Tools.git
cd Stellarator-Tools
mkdir build
cd build
module use /opt/scorec/spack/rhel9/v0201_4/lmod/linux-rhel9-x86_64/Core/
module load gcc/12.3.0-iil3lno mpich/4.1.1-xpoyz4t cuda/12.1.1-zxa4msk
module load cmake
module load netcdf-c netcdf-fortran openblas netlib-scalapack
cmake -B ./ -S ../ -DBUILD_BMW=ON -DBUILD_MAKEGRID=ON -DCMAKE_INSTALL_PREFIX=/lore/<username>/BMW
Change the installation directory as per your requirement. Then build and install:
make -j4
make install
Before using xbmw and mgrid, the shared library paths need to be updated. Execute the following export commands before using them:
export LD_LIBRARY_PATH=$OPENBLAS_RHEL9_ROOT/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$NETLIB_SCALAPACK_RHEL9_ROOT/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$NETCDF_FORTRAN_RHEL9_ROOT/lib:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$NETCDF_C_RHEL9_ROOT/lib:$LD_LIBRARY_PATH
Note: Every time you want to use xbmw or mgrid in a new terminal, you need to export the shared library paths after loading the necessary modules (the module load step above).
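To avoid retyping the four exports, they can be wrapped in a small sourceable helper (the function is mine; the *_RHEL9_ROOT variables are set by the modules loaded above):

```shell
# Prepend <root>/lib to LD_LIBRARY_PATH for every non-empty root given.
prepend_libdirs() {
  for root in "$@"; do
    if [ -n "$root" ]; then
      export LD_LIBRARY_PATH="$root/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
    fi
  done
}

prepend_libdirs "${OPENBLAS_RHEL9_ROOT:-}" "${NETLIB_SCALAPACK_RHEL9_ROOT:-}" \
                "${NETCDF_FORTRAN_RHEL9_ROOT:-}" "${NETCDF_C_RHEL9_ROOT:-}"
```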
This GitHub repository contains the source code and a brief installation guide of BMW: BMW.
Note: Direct compilation did not work for me. I used Stellarator-Tools supplied by ORNL-Fusion.
git clone https://github.com/ORNL-Fusion/Stellarator-Tools.git
cd Stellarator-Tools
mkdir build
cd build
module unuse /opt/scorec/spack/lmod/linux-rhel7-x86_64/Core
module use /opt/scorec/spack/v0201_4/lmod/linux-rhel7-x86_64/Core
module load gcc/11.2.0-zcqgw mpich/4.1.1-6p32n
module load netcdf-c cmake netcdf-fortran openblas netlib-scalapack
You can use the ccmake method from the repository README, but I recommend the following:
cmake -B ./build -S ./ -DBUILD_BMW=ON -DCMAKE_INSTALL_PREFIX=/lore/<yourUsername>/BMW
make -j4 install
or
make -j4
This will create the bmw
executable in the bin
directory of the installation directory.
Export the shared library paths (xbmw will keep printing error messages until it finds all the shared libraries; these directories may change depending on the modules loaded for installation):
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/scorec/spack/v0201_4/install/linux-rhel7-x86_64/gcc-11.2.0/openblas-0.3.23-4kpgzbwtvvnf4m4f6rqvyclh2khpfepb/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/scorec/spack/v0201_4/install/linux-rhel7-x86_64/gcc-11.2.0/netlib-scalapack-2.2.0-w3lmjdvbshpvqiihwxm2fygyjyzu275t/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/scorec/spack/v0201_4/install/linux-rhel7-x86_64/gcc-11.2.0/netcdf-c-4.9.2-hzgyaz36ol6aqb4o3ne3xjabccpxjlo4/lib
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/scorec/spack/v0201_4/install/linux-rhel7-x86_64/gcc-11.2.0/netcdf-fortran-4.6.0-hwoxscrowy6gh75n5ypnlr3btfver36x/lib
There are two use cases for bmw: the coil-generated magnetic field is either considered or not. If you don't need the coil-generated magnetic field, you just need the wout_...nc file and can follow the steps below.
Go to the example directory (vmec_equillibria/NCSX/free_boundary_from_vmec_wiki):
cd vmec_equillibria/NCSX/free_boundary_from_vmec_wiki
You can create the wout_...nc
file using vmec
or use the provided wout_...nc
file as the input to bmw
.
Run bmw:
/lore/<yourUsername>/BMW/bin/xbmw -num_r=100 -num_p=20 -num_z=100 -rmax=1.5 -rmin=0.9 -zmax=0.5 -zmin=-0.5 -woutf=wout_ncsx_c09r00_free_birth.nc -outf=bmw_ncsx_c09r00_out.nc
This will produce the bmw_ncsx_c09r00_out.nc
file in the current directory.
The output can be read in python using the netCDF4 library. In this Jupyter Notebook file, I have plotted the magnetic field from the output file: GitHub Gist for Analysis.

If you want the coil-generated magnetic field included, you also need the coils.xxxxx file. Set ier_flag in the wout_...nc file to 0 before starting the bmw run. You can modify .nc files in python using the netCDF4 library:

import netCDF4 as nc
wout = nc.Dataset('wout_ncsx_c09r00_free_birth.nc', 'r+')  # 'r+' opens the file writable
wout.variables['ier_flag'][:] = 0
wout.close()
The coil file needs to be in mgrid format. So, convert the coils.xxxxx file to mgrid format using the following command:
/lore/<yourUsername>/BMW/bin/mgrid < input_file.txt
To learn how to create an input file for mgrid, check this STELLOPT MAKEGRID Tutorial. Note that their binary name is xgrid but in the BMW installation, it's mgrid.
Now run bmw with the mgrid file generated in the above step. In this case, you don't have to specify the grid parameters since the grid is already provided by the mgrid file:
/lore/<yourUsername>/BMW/bin/xbmw -woutf=wout_ncsx_c09r00_free_birth.nc -outf=bmw_ncsx_c09r00_out.nc -mgridf=mgrid_xxxx.nc
This will take a long time to run depending on the grid size. You can also do a parallel run using mpirun and the -para flag.
I have not been able to compile the STELLOPT tool on SCOREC yet. The installation instructions given in the STELLOPT README are not working.
If you have access to the PPPL clusters (e.g. stellar), they have compiled STELLOPT tools. You can use them directly.
ssh stellar-intel
module use /home/caoxiang/module
module load stellopt/intel
Learn more about it here
I used the docker image of STELLOPT to run Beams3D, where all the necessary tools are already installed. The docker image is available at Docker Hub.
docker pull zhucaoxiang/stellopt
docker run -it -u root zhucaoxiang/stellopt
According to their documentation, you can avoid NAG and use LSODE
instead. Try changing the NAG flag to F
in the make_debian.inc
file as well as changing the INTEGRATOR
flag to LSODE. If you succeed, please add it to the documentation or create an issue.
source /home/NAG/nll6i293bl/scripts/nagvars.sh int64 vendor dynamic
xbeams3d -vmec ncsx_c09r00_free_birth -coil coils.c09r00 -vessel NCSX_wall_nbiport_acc.dat -field
The output is a .h5 file and it can be read by python. Magnetic field data is read in this Jupyter Notebook.

TOMMS is a tokamak meshing software.
TOMMS (previously Fusion) is a private repository. You need to be added to the repository to access it. Please contact appropriate people to get access to the repository. When you get access, you will be able to clone the repository using the following command:
git clone https://github.com/SCOREC/tomms.git
Complete installation instructions and usage are available in the TOMMS repository wiki. No special measures are required except changing the configuration files to point to the appropriate directories (dependencies and installation). Configuration files depend on the system/OS you are installing on, as pointed out in the wiki; usage is explained in the TOMMS user guide.
XGC is a private repository. Ask Princeton Plasma Physics Laboratory (PPPL) for access to the repository. The documentation can be found here: XGC Documentation
The master branch (latest commit 5d2cb943a) or the d2neutral branch can be installed with spack using the following packages:
# This is a Spack Environment file.
#
# It describes a set of packages to be installed, along with
# configuration settings.
spack:
# add package specs to the `specs` list
specs:
- kokkos@4+openmp+serial
- cabana@0.7.0
- fftw
- netlib-lapack
- googletest
- libszip
- hdf5+hl+mpi
- netcdf-c+mpi
- netcdf-fortran
- catch2
- kokkos-kernels
- cmake
- adios2@2.8.0
- python@3.9
- petsc@3.15.0+fortran+metis+scalapack ^python@3.9
view: true
concretizer:
unify: true
with this compiler configuration:
packages:
cuda:
externals:
- spec: cuda@12.1.105
prefix: /opt/scorec/spack/rhel9/v0201_4/install/linux-rhel9-x86_64/gcc-12.3.0/cuda-12.1.1-zxa4mskqvbkiefzkvnuatlq7skxjnzt6
buildable: false
mpich:
externals:
- spec: mpich@4.1.1+hydra+libxml2+romio~verbs+wrapperrpath device=ch4 netmod=ofi pmi=pmi
prefix: /opt/scorec/spack/rhel9/v0201_4/install/linux-rhel9-x86_64/gcc-12.3.0/mpich-4.1.1-xpoyz4tqgfxtrm6m7qq67q4ccp5pnlre
buildable: false
gcc:
externals:
- spec: gcc@12.3.0 languages:='c,c++,fortran'
prefix: /opt/scorec/spack/rhel9/v0201_4/install/linux-rhel9-x86_64/gcc-7.4.0/gcc-12.3.0-iil3lnovyknyxf7pec36wljem3fntjd5
extra_attributes:
compilers:
c: /opt/scorec/spack/rhel9/v0201_4/install/linux-rhel9-x86_64/gcc-7.4.0/gcc-12.3.0-iil3lnovyknyxf7pec36wljem3fntjd5/bin/gcc
cxx: /opt/scorec/spack/rhel9/v0201_4/install/linux-rhel9-x86_64/gcc-7.4.0/gcc-12.3.0-iil3lnovyknyxf7pec36wljem3fntjd5/bin/g++
fortran: /opt/scorec/spack/rhel9/v0201_4/install/linux-rhel9-x86_64/gcc-7.4.0/gcc-12.3.0-iil3lnovyknyxf7pec36wljem3fntjd5/bin/gfortran
Run spack concretize -f
and spack install
to install the packages. Now, before installing XGC, the following changes have to be made to the source code:
Following the XGC convention, a file called CMake/find_dependencies_scorecrh9-spack.cmake
should be created with the following content:
find_package(FFTW3 REQUIRED)
find_package(PETSC REQUIRED)
find_package(LAPACK REQUIRED)
For a proper install, the following files also had to be modified; these changes have not been added to a pull request yet.
In XGC_core/cpp/file_reader.hpp, add the following line:

#include <iostream>

In libs/camtimers/CMakeLists.txt, replace the following lines:

install (FILES ${CMAKE_BINARY_DIR}/perf_mod.mod DESTINATION include)
install (FILES ${CMAKE_BINARY_DIR}/perf_utils.mod DESTINATION include)

with these lines:

install (FILES ${CMAKE_BINARY_DIR}/libs/camtimers/perf_mod.mod DESTINATION include)
install (FILES ${CMAKE_BINARY_DIR}/libs/camtimers/perf_utils.mod DESTINATION include)
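The CMakeLists.txt change can also be applied mechanically with sed from the XGC source root (a sketch; verify the result with git diff before building):

```shell
# Rewrite both install rules to point at the build-tree location of the .mod files.
if [ -f libs/camtimers/CMakeLists.txt ]; then
  sed -i 's|${CMAKE_BINARY_DIR}/perf_|${CMAKE_BINARY_DIR}/libs/camtimers/perf_|' \
    libs/camtimers/CMakeLists.txt
fi
```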
Now, to install XGC, activate the spack environment, load the necessary packages and run the following commands:
export XGC_PLATFORM=scorecrh9-spack
cmake -S . -B build \
-DCMAKE_Fortran_FLAGS="-fallow-argument-mismatch" \
-DCMAKE_BUILD_TYPE=RelWithDebInfo \
-DCMAKE_INSTALL_PREFIX=build/install \
-DCMAKE_CXX_COMPILER=$MPICXX \
-DCMAKE_C_COMPILER=$MPICC \
-DCMAKE_Fortran_COMPILER=$MPIFC
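The configure command above reads $MPICXX, $MPICC, and $MPIFC, which the module system may not set. A sketch that points them at the MPICH wrappers found on PATH (mpifort is MPICH's Fortran wrapper name; adjust if your environment names it differently):

```shell
# Resolve the MPI compiler wrappers from PATH into the variables cmake expects.
export MPICXX="$(command -v mpicxx || true)"
export MPICC="$(command -v mpicc || true)"
export MPIFC="$(command -v mpifort || true)"
```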