7. Freeware

The list of installed freeware is as follows:

Software name   Description
GAMESS          Computational chemistry software
Tinker          Computational chemistry software
GROMACS         Computational chemistry software
LAMMPS          Computational chemistry software
NAMD            Computational chemistry software
CP2K            Computational chemistry software
OpenFOAM        CFD software
CuDNN           GPU library
NCCL            GPU library
Caffe           Deep learning framework
Chainer         Deep learning framework
TensorFlow      Deep learning framework
R               Statistics interpreter
clang           Compiler
Apache Hadoop   Distributed data processing tool
POV-Ray         Visualization software
ParaView        Visualization software
VisIt           Visualization software
turbovnc        Remote GUI
gnuplot         Data visualization
Tgif            Graphics tool
GIMP            Image display and manipulation
ImageMagick     Image display and manipulation
TeX Live        TeX distribution
Java SDK        Development environment
PETSc           Scientific computation library
FFTW            FFT library
DMTCP           Checkpoint tool

7.1. Computational chemistry software

7.1.1. GAMESS

The following is a sample job script.

#!/bin/bash
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=0:10:0
#$ -N gamess

. /etc/profile.d/modules.sh
module load intel intel-mpi gamess

cat $PE_HOSTFILE | awk '{print $1}' > $TMPDIR/machines
cd $GAMESS_DIR
./rungms exam08 mpi 4 4

For more details, please refer to the following site: https://www.msg.ameslab.gov/gamess/index.html

7.1.2. Tinker

The following is a sample job script.

#!/bin/bash
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=0:10:0
#$ -N tinker

. /etc/profile.d/modules.sh
module load intel tinker

cp -rp $TINKER_DIR/example $TMPDIR
cd $TMPDIR/example
dynamic waterbox.xyz -k waterbox.key 100 1 1 2 300
cp -rp $TMPDIR/example $HOME

For more details, please refer to the following site: https://dasher.wustl.edu/tinker/

7.1.3. GROMACS

The following is a sample job script.

#!/bin/bash
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=0:10:0
#$ -N gromacs

. /etc/profile.d/modules.sh
module load cuda intel-mpi gromacs

cp -rp $GROMACS_DIR/examples/water_GMX50_bare.tar.gz $TMPDIR
cd $TMPDIR
tar xf water_GMX50_bare.tar.gz
cd water-cut1.0_GMX50_bare/3072
gmx_mpi grompp -f pme.mdp
OMP_NUM_THREADS=2 mpirun -np 4 gmx_mpi mdrun
cp -rp $TMPDIR/water-cut1.0_GMX50_bare $HOME

For more details, please refer to the following site: http://www.gromacs.org/

7.1.4. LAMMPS

The following is a sample job script.

#!/bin/bash
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=0:10:0
#$ -N lammps

. /etc/profile.d/modules.sh

module load intel cuda openmpi/3.1.4-opa10.10-t3 ffmpeg lammps
cp -rp $LAMMPS_DIR/examples/accelerate $TMPDIR
cd $TMPDIR/accelerate
mpirun -x PATH -x LD_LIBRARY_PATH -x PSM2_CUDA=1 -np 4 lmp -in in.lj
cp -rp $TMPDIR/accelerate $HOME

For more details, please refer to the following site: http://lammps.sandia.gov/doc/Section_intro.html

7.1.5. NAMD

The following is a sample job script.

#!/bin/bash
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=0:10:0
#$ -N namd

. /etc/profile.d/modules.sh
module load cuda intel namd

cp -rp $NAMD_DIR/examples/stmv.tar.gz $TMPDIR
cd $TMPDIR
tar xf stmv.tar.gz
cd stmv
namd2 +idlepoll +p4 +devices 0,1 stmv.namd
cp -rp $TMPDIR/stmv $HOME

For more details, please refer to the following site: http://www.ks.uiuc.edu/Research/namd/2.12/ug/

7.1.6. CP2K

The following is a sample job script.

#!/bin/bash
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=0:10:0
#$ -N cp2k

. /etc/profile.d/modules.sh
module load cuda openmpi/3.1.4-opa10.10-t3 cp2k
cp -rp $CP2K_DIR/benchmarks/QS $TMPDIR
cd $TMPDIR/QS
mpirun -x PATH -x LD_LIBRARY_PATH -x PSM2_CUDA=1 -np 4 cp2k.popt -i H2O-32.inp -o H2O-32.out
cp -rp $TMPDIR/QS $HOME

For more details, please refer to the following site: https://www.cp2k.org/

7.2. CFD software

7.2.1. OpenFOAM

There are two versions of OpenFOAM: the Foundation version is named "openfoam" and the ESI version is named "openfoam-esi".
The following is a sample job script.

#!/bin/bash
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=0:10:0
#$ -N openfoam

. /etc/profile.d/modules.sh
module load cuda openmpi openfoam

mkdir -p $TMPDIR/$FOAM_RUN
cd $TMPDIR/$FOAM_RUN
cp -rp $FOAM_TUTORIALS .
cd tutorials/incompressible/icoFoam/cavity/cavity
blockMesh
icoFoam
paraFoam

If you want to use the ESI version, replace "module load cuda openmpi openfoam" with "module load cuda openmpi openfoam-esi".

For more details, please refer to the following sites:

https://openfoam.org/resources/
http://www.openfoam.com/documentation/

7.3. Numerical GPU libraries

7.3.1. cuBLAS

cuBLAS is a BLAS (Basic Linear Algebra Subprograms) library for GPUs.

usage

$ module load cuda
$ nvcc -gencode arch=compute_60,code=sm_60 -o sample sample.cu -lcublas

If you need to call cuBLAS from a normal C program, the -I, -L, and -l options are required at compile time.

$ module load cuda
$ gcc -o blas blas.c -I${CUDA_HOME}/include -L${CUDA_HOME}/lib64 -lcublas

7.3.2. cuSPARSE

cuSPARSE is a sparse-matrix computation library for NVIDIA GPUs.

usage

$ module load cuda
$ nvcc -gencode arch=compute_60,code=sm_60 sample.cu -lcusparse -o sample

If you need to call cuSPARSE from a normal C program, the -I, -L, and -l options are required at compile time.

$ module load cuda
$ g++ sample.c -lcusparse_static -I${CUDA_HOME}/include -L${CUDA_HOME}/lib64  -lculibos -lcudart_static -lpthread -ldl -o sample

7.3.3. cuFFT

cuFFT is a parallel FFT (Fast Fourier Transform) library for NVIDIA GPUs.

usage

$ module load cuda
$ nvcc -gencode arch=compute_60,code=sm_60 -o sample sample.cu -lcufft

If you need to call cuFFT from a normal C program, the -I, -L, and -l options are required at compile time.

$ module load cuda
$ gcc -o fft fft.c -I${CUDA_HOME}/include -L${CUDA_HOME}/lib64 -lcufft

7.4. Machine learning, big data analysis software

7.4.1. CuDNN

You can load it with the following command:

$ module load cuda cudnn

7.4.2. NCCL

You can load it with the following command:

$ module load cuda nccl

7.4.3. Caffe

You can load and use interactively with the following commands:

$ module load intel cuda nccl cudnn caffe

For more details, please refer to the following site: http://caffe.berkeleyvision.org/
If you want to use MKL from Caffe, add #define USE_MKL in the code that invokes Caffe, to ensure that libraries are loaded from $MKLROOT.

7.4.4. Chainer

You can load and use interactively with the following commands:

$ module load intel cuda nccl cudnn openmpi/2.1.2-opa10.9-t3 chainer

For more details, please refer to the following site: https://docs.chainer.org/en/stable/

7.4.5. TensorFlow

You can run it interactively as in the following example.

  • python2.7
$ module load python-extension
$ cp -rp $PYTHON_EXTENSION_DIR/examples/tensorflow/examples .
$ cd examples/tutorials/mnist
$ python mnist_deep.py
  • python3.4
$ module load intel cuda/9.0.176 nccl/2.2.13 cudnn/7.1 tensorflow

For more details, please refer to the following site: https://www.tensorflow.org/

7.4.6. R

Rmpi for parallel processing and rpud for GPU are installed.
You can run it interactively as in the following example.

$ module load intel cuda openmpi r
$ mpirun -stdin all -np 2 R --slave --vanilla < test.R
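The mpirun line above expects a script named test.R, whose contents are not part of this guide. As a sketch, a hypothetical minimal script can be created from the shell like this (any R script would do; a real Rmpi test would query the process rank instead):

```shell
# Create a hypothetical minimal test.R for the mpirun example above.
# Each R process simply prints a message.
cat > test.R <<'EOF'
cat("hello from an R process\n")
EOF
```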

7.4.7. clang

clang is a C/C++ compiler that uses LLVM as its backend.
The following is an example of using clang with GPU offloading.

$ module load cuda clang
$ clang -fopenmp -fopenmp-targets=nvptx64-nvidia-cuda --cuda-path=$CUDA_HOME -Xopenmp-target -march=sm_60 test.c

For more details, please refer to the following site:
https://clang.llvm.org/

7.4.8. Apache Hadoop

You can run it interactively as in the following example.

$ module load jdk hadoop
$ mkdir input
$ cp -p $HADOOP_HOME/etc/hadoop/*.xml input
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.0.jar grep input output 'dfs[a-z.]+'
$ cat output/part-r-00000
1       dfsadmin

You could submit a batch job. The following is a sample job script.

#!/bin/bash
#$ -cwd
#$ -l f_node=1
#$ -l h_rt=0:10:0
#$ -N hadoop

. /etc/profile.d/modules.sh
module load jdk hadoop

cd $TMPDIR
mkdir input
cp -p $HADOOP_HOME/etc/hadoop/*.xml input
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.8.0.jar grep input output 'dfs[a-z.]+'
cp -rp output $HOME

7.5. Visualization software

7.5.1. POV-Ray

You can start with the following commands:

$ module load pov-ray
$ povray -benchmark

For more details, please refer to the following site: http://www.povray.org/

7.5.2. ParaView

You can start with the following commands:

$ module load cuda openmpi paraview
$ paraview

7.5.2.1. Visualization with multiple GPUs

It is possible to visualize using multiple nodes/GPUs with paraview/5.8.0 and paraview/5.8.0-egl.
Please note that paraview/5.8.0-egl does not provide the paraview command; it only includes command-line executables.
The following is an example using 8 GPUs with f_node=2.

  • wrap.sh
#!/bin/sh

num_gpus_per_node=4
mod=$((OMPI_COMM_WORLD_RANK%num_gpus_per_node))

# assign one EGL device per local rank (ranks 0..3 -> GPUs 0..3)
export VTK_DEFAULT_EGL_DEVICE_INDEX=$mod

$*
  • job.sh
#!/bin/sh
#$ -cwd
#$ -V
#$ -l h_rt=8:0:0
#$ -l f_node=2

. /etc/profile.d/modules.sh

module purge
module load cuda openmpi paraview/5.8.0-egl

mpirun -x PATH -x LD_LIBRARY_PATH -npernode 4 -np 8 ./wrap.sh pvserver

Do not forget to set the execution permission on wrap.sh (chmod 755 wrap.sh).
Submit the job with the above job script.

qsub -g <group name> job.sh

Then, check that the job is running with qstat.

yyyyyyyy@login0:~> qstat
job-ID     prior   name       user         state submit/start at     queue                          jclass                         slots ja-task-ID
------------------------------------------------------------------------------------------------------------------------------------------------
   xxxxxxx 0.55354 job.sh     yyyyyyyy     r     05/31/2020 09:24:19 all.q@rXiYnZ                                                     56

Log in to the allocated node by ssh with X forwarding.

yyyyyyyy@login0:~> ssh -CY rXiYnZ
yyyyyyyy@rXiYnZ:~> module load cuda paraview/5.8.0
paraview

turbovnc can also be used.
After starting paraview, click "File" -> "Connect", then click "Add Server".
Fill in the "Name" field appropriately ("test", for example) and click "Configure".

Click "Connect".
When the connection is established, test(cs://localhost:11111) is displayed in "Pipeline Browser".

ParaView example data can be downloaded from the ParaView website.

For more details, please refer to the following site: https://www.paraview.org/

7.5.3. VisIt

You can start with the following commands:

$ module load cuda openmpi vtk visit
$ visit

For more details, please refer to the following site: https://wci.llnl.gov/simulation/computer-codes/visit/

7.6. Other freeware

7.6.1. turbovnc

turbovnc is open-source VNC software.
The following is an example of using turbovnc.
Please try it on a compute node via qrsh.

  • allocate a compute node
$ qrsh -g <group name> -l <resource type>=<count> -l h_rt=<time>
  • start vncserver on the node
$ module load turbovnc
$ vncserver

You will require a password to access your desktops.

Password:  # <-- set the password
Verify:
Would you like to enter a view-only password (y/n)? n

Desktop 'TurboVNC: rXiYnZ:1 ()' started on display rXiYnZ:1 # <-- remember the VNC display number ":1"

Creating default startup script /home/n/xxxx/.vnc/xstartup.turbovnc
Starting applications specified in /home/n/xxxx/.vnc/xstartup.turbovnc
Log file is /home/n/xxxx/.vnc/rXiYnZ:1.log

If you want to enlarge the VNC screen, run vncserver -geometry <WIDTH>x<HEIGHT> with the desired size.

  • Download the installer from https://sourceforge.net/projects/turbovnc/files/ and install turbovnc viewer into your local PC
  • From the terminal software with which you connect to TSUBAME, set up port forwarding from local port 5901 to port 5901 on the compute node. (If the display number is rXiYnZ:n, use port 5900+n.)
  • Start vncviewer from your PC, connect to localhost:5901 and input the password you set.
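The display-number-to-port rule above can be sketched as follows (rXiYnZ is the placeholder node name used throughout this section):

```shell
# VNC display :n listens on TCP port 5900+n,
# so display ":1" from the vncserver output maps to port 5901.
display=1
port=$((5900 + display))
echo "$port"    # prints 5901
```

The port forwarding would then target port $port on the compute node.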

7.6.1.1. turbovnc + VirtualGL

For resource types that use one or more GPUs (s_gpu, q_node, h_node, f_node), it is possible to visualize with the GPU via VirtualGL when using turbovnc.
For example, the following shows how to use VirtualGL with s_gpu.

$ qrsh ... -l s_gpu=1
$ . /etc/profile.d/modules.sh
$ module load turbovnc
$ Xorg -config xorg_vgl.conf :99 & # where :99 is an arbitrary display number for VirtualGL
$ vncserver

Warning

Please note that the display number for VirtualGL is different from that of VNC.
If another user is already using the display number, the following error occurs when executing Xorg.

user@r7i7n7:~> Xorg -config xorg_vgl.conf :99 &
user@r7i7n7:~> (EE)
Fatal server error:
(EE) Server is already active for display 99
        If this server is no longer running, remove /tmp/.X99-lock
        and start again.
(EE)
(EE)
Please consult the The X.Org Foundation support
         at http://wiki.x.org
 for help.
(EE)

In this case, change :99 to :100, for example, to use a display number that no other user is using.
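As a sketch, the lowest free display number can be found by checking the X lock files (assuming Xorg records its locks as /tmp/.X<display>-lock, as in the error message above):

```shell
# Start from :99 and skip any display whose X lock file already exists.
display=99
while [ -e "/tmp/.X${display}-lock" ]; do
    display=$((display + 1))
done
echo ":${display}"   # first free display number, e.g. ":99"
```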

  • Connect with the VNC client and run the following:
$ vglrun -d :99 <OpenGL application> # where :99 is the display number for VirtualGL that is set by Xorg above

If you allocated multiple GPUs and want to use the second or subsequent GPUs, add the screen number to the display number.

$ vglrun -d :99.1 <OpenGL application> # where :99 is the display number for VirtualGL that is set by Xorg above, .1 is the screen number

In the above example, the third GPU is used if screen number .2 is set, and the fourth GPU is used if screen number .3 is set.
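The screen-number-to-GPU mapping above can be sketched as follows (:99 is the VirtualGL display from the Xorg example; screen number k selects the (k+1)-th GPU):

```shell
# Build the vglrun display argument selecting the third allocated GPU.
vgl_display=99
screen=2                              # screen .2 -> third GPU
echo ":${vgl_display}.${screen}"      # prints :99.2
```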

7.6.2. gnuplot

In addition to the standard configure options, it is built with support for X11, LaTeX, PDFlib-lite, and Qt4.
You can start with the following commands:

$ module load gnuplot
$ gnuplot

7.6.3. Tgif

You can start with the following commands:

$ module load tgif
$ tgif

(note) If the error "Cannot open the Default (Msg) Font '--courier-medium-r-normal--14-----*-iso8859-1'" occurs and tgif does not start, add the following lines to ~/.Xdefaults.

Tgif.DefFixedWidthFont:      -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*
Tgif.DefFixedWidthRulerFont: -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*
Tgif.MenuFont:               -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*
Tgif.BoldMsgFont:            -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*
Tgif.MsgFont:                -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*

7.6.4. GIMP

You can start with the following commands:

$ module load gimp
$ gimp

7.6.5. ImageMagick

In addition to the standard configure options, it is built with support for X11, HDRI, libwmf, and JPEG. You can start with the following commands:

$ module load imagemagick
$ convert -size 48x1024 -colorspace RGB 'gradient:#000000-#ffffff' -rotate 90 -gamma 0.5 -gamma 2.0 result.jpg

7.6.6. pLaTeX2e

You can start with the following commands:

$ module load texlive
$ platex test.tex
$ dvipdfmx test.dvi

(note) Please use dvipdfmx to create PDFs; dvipdf does not handle Japanese properly.

7.6.7. Java SDK

You can start with the following commands:

$ module load jdk
$ javac Test.java
$ java Test

7.6.8. PETSc

Two different builds are provided: one supporting real numbers and one supporting complex numbers. You can start with the following commands:

$ module load intel intel-mpi
$ module load petsc/3.7.6/real           <-- real number
 OR
$ module load petsc/3.7.6/complex       <-- complex number
$ mpiifort test.F -lpetsc

7.6.9. FFTW

Different versions are installed: the 2.x series and the 3.x series. You can start with the following commands:

$ module load intel intel-mpi fftw        <-- for Intel MPI
 OR
$ module load intel cuda openmpi fftw     <-- for Open MPI
$ ifort test.f90 -lfftw3

7.6.10. DMTCP

An example using DMTCP is as follows.

  • Create the checkpoint
#!/bin/sh
# Descriptions of other options are omitted
#$ -ckpt user
#$ -c sx
module load dmtcp
export DMTCP_CHECKPOINT_DIR=<store directory>
export DMTCP_COORD_HOST=`hostname`
export DMTCP_CHECKPOINT_INTERVAL=<time>
dmtcp_coordinator --quiet --exit-on-last --daemon 2>&1        # start DMTCP
# Test whether this is the first start or a restart
if [ -x $DMTCP_CHECKPOINT_DIR/dmtcp_restart_script.sh ]; then
    $DMTCP_CHECKPOINT_DIR/dmtcp_restart_script.sh   # restart from the checkpoint
else
    dmtcp_launch ./a.out                            # first start: execute program by using DMTCP
fi
  • Restart from the checkpoint
#!/bin/sh
# Descriptions of other options are omitted
#$ -ckpt user
#$ -c sx
module load dmtcp
export DMTCP_CHECKPOINT_DIR=<store directory>
export DMTCP_COORD_HOST=`hostname`
export DMTCP_CHECKPOINT_INTERVAL=<time>
$DMTCP_CHECKPOINT_DIR/dmtcp_restart_script.sh       # restart

For more details, please refer to the following site: http://dmtcp.sourceforge.net/

7.6.11. Singularity

Please try the following in a qrsh session.

  • Start a shell session
$ module load singularity
$ cp -p $SINGULARITY_DIR/image_samples/centos/centos-7-opa.simg .
$ singularity shell --nv -B /gs -B /apps -B /scr centos-7-opa.simg
  • Execute a command in the container image
$ module load singularity
$ cp -p $SINGULARITY_DIR/image_samples/centos/centos-7-opa.simg .
$ singularity exec --nv -B /gs -B /apps -B /scr centos-7-opa.simg <command>
  • Execute a MPI program
$ module load singularity cuda openmpi
$ cp -p $SINGULARITY_DIR/image_samples/centos/centos-7-opa.simg .
$ mpirun -x LD_LIBRARY_PATH -x SINGULARITYENV_LD_LIBRARY_PATH=$LD_LIBRARY_PATH -x SINGULARITYENV_PATH=$PATH -x <environment variables> -npernode <# of processes/node> -np <# of processes> singularity exec --nv -B /apps -B /gs -B /scr/ centos-7-opa.simg <MPI binary>

From singularity/3.4.2, the fakeroot option is available to edit images with user privileges.

Info

The fakeroot feature was introduced in Singularity 3.3.0. However, due to a system-specific problem, it does not work with Singularity 3.4.1 or earlier. To edit an image using the fakeroot option, it must be invoked under $T3TMPDIR.

The following is an example of installing vim into a CentOS image.

$ cd $T3TMPDIR
$ module load singularity
$ singularity build -s centos/ docker://centos:latest
INFO:    Starting build...
Getting image source signatures
...
$ singularity shell -f -w centos # -f is the fakeroot option
Singularity> id
uid=0(root) gid=0(root) groups=0(root)
Singularity> unset TMPDIR # a workaround for the error "Cannot create temporary file - mkstemp: No such file or directory"
Singularity> yum install -y vim
Failed to set locale, defaulting to C.UTF-8
CentOS-8 - AppStream                                                                  6.6 MB/s | 5.8 MB     00:00
CentOS-8 - Base                                                                       5.0 MB/s | 2.2 MB     00:00
CentOS-8 - Extras
...
Installed:
  gpm-libs-1.20.7-15.el8.x86_64            vim-common-2:8.0.1763-13.el8.x86_64  vim-enhanced-2:8.0.1763-13.el8.x86_64
  vim-filesystem-2:8.0.1763-13.el8.noarch  which-2.21-12.el8.x86_64

Complete!
Singularity> which vim
/usr/bin/vim
Singularity> exit
$ singularity build -f centos.sif centos/
INFO:    Starting build...
INFO:    Creating SIF file...
INFO:    Build complete: centos.sif
$ singularity shell centos.sif
Singularity> which vim
/usr/bin/vim # <--- vim has been installed
  • Install the OPA driver CUDA libraries into the container image (example: installing the OPA 10.9.0.1.2 CUDA version into a centos7.5 image)

note: The OPA version of the system might be updated during system maintenance, so please change the OPA version if needed.
The installed OPA version can be checked as follows.

$ rpm -qa |grep opaconfig
opaconfig-10.9.0.1-2.x86_64

Download the OPA installer from this link

$ module load singularity/3.4.2
$ cp -p IntelOPA-IFS.RHEL75-x86_64.10.9.0.1.2.tgz ~
$ singularity build -s centos7.5/ docker://centos:centos7.5.1804
$ find centos7.5/usr/ -mindepth 1 -maxdepth 1 -perm 555 -print0 |xargs -0 chmod 755 # some files in the image do not have write permission, so add it
$ singularity shell -f -w centos7.5
Singularity centos:~> tar xf IntelOPA-IFS.RHEL75-x86_64.10.9.0.1.2.tgz
Singularity centos:~> cd IntelOPA-IFS.RHEL75-x86_64.10.9.0.1.2/IntelOPA-OFA_DELTA.RHEL75-x86_64.10.9.0.1.2/RPMS/redhat-ES75/CUDA/
Singularity centos:~> yum install -y numactl-libs hwloc-libs libfabric libibverbs infinipath-psm
Singularity centos:~> rpm --force -ivh libpsm2-*.rpm
Singularity centos:~> exit
$ find centos7.5/usr/bin -perm 000 -print0 |xargs -0 chmod 755 # after yum install, some files in /usr/bin end up with permission 000, so fix the permissions
$ singularity build centos7.5.sif centos7.5/

Though the IFS version is used in the previous example, the BASIC version can also be used.
For more details, please visit the following page:

https://sylabs.io/docs/