7. Freeware¶
Free software available on this system is listed below.
Software | Description |
---|---|
GAMESS | Solver Simulator |
Tinker | Solver Simulator |
GROMACS | Solver Simulator |
LAMMPS | Solver Simulator |
NAMD | Solver Simulator |
QUANTUM ESPRESSO | Solver Simulator |
CP2K | Solver Simulator |
OpenFOAM | Solver Simulator, Visualization |
CUDA | GPU library |
CuDNN | GPU library |
NCCL | GPU library |
TensorFlow | DeepLearning framework |
DeePMD-kit | MD DeepLearning framework |
PyTorch | Machine learning |
R | Interpreter (Rmpi, rpud) |
Hadoop | Distributed Data Processing Tools |
POV-Ray | Visualization |
ParaView | Visualization |
VisIt | Visualization |
vmd | Visualization |
VESTA | Visualization |
turbovnc | Remote GUI(X11) |
VirtualGL | Remote GUI |
Open OnDemand | Web portal site for HPC |
gnuplot | Data grapher/visualization |
GIMP | Image Viewing and Editing |
Tgif | Image Viewing and Editing |
ImageMagick | Image Viewing and Editing |
TeX Live | TeX distribution |
OpenJDK | Development tool |
python | Development tool |
ruby | Development tool |
perl | Development tool |
PHP | Development tool |
golang | Development tool |
Emacs | Editor |
vim | Editor |
PETSc | Linear system solvers, libraries |
FFTW | Fast Fourier Transform Library |
Apptainer | Container platform |
Spack | Software package management |
miniconda | Software package management |
PyPI | Software package management |
Rbenv | Software package management |
Alphafold | Bio |
tmux | Terminal multiplexer |
NetCDF | Multidimensional data format |
HDF5 | Hierarchical data format |
ffmpeg | Video and audio processing |
Info
For "module" command, please refer to Switch User Environment.
7.1. Quantum chemistry/MD related software¶
7.1.1. GAMESS¶
GAMESS is an open source ab initio molecular quantum chemistry calculation application.
An example of how GAMESS can be used with the batch queue system is shown below.
#!/bin/bash
#$ -cwd
#$ -l node_f=1
#$ -l h_rt=0:10:0
#$ -N gamess
module load gamess
$GAMESS_DIR/rungms exam01 00 2 2 1
A detailed description is provided below.
http://www.msg.ameslab.gov/gamess/index.html
7.1.2. Tinker¶
Tinker is a modeling software for molecular dynamics with special features for biopolymers.
An example of how Tinker can be used with the batch queue system is shown below.
#!/bin/bash
#$ -cwd
#$ -l node_f=1
#$ -l h_rt=0:10:0
#$ -N tinker
module load tinker
cp -rp $TINKER_DIR/example $TMPDIR
cd $TMPDIR/example
dynamic waterbox.xyz -k waterbox.key 100 1 1 2 300
cp -rp $TMPDIR/example $HOME
A detailed description is provided below.
https://dasher.wustl.edu/tinker/
7.1.3. GROMACS¶
GROMACS is an engine for molecular dynamics simulation and energy minimization.
An example of how to use GROMACS with the batch queue system is shown below.
#!/bin/bash
#$ -cwd
#$ -l node_f=1
#$ -l h_rt=0:10:0
#$ -N gromacs
module load gromacs
cp -rp $GROMACS_DIR/examples/water_GMX50_bare.tar.gz $TMPDIR
cd $TMPDIR
tar xf water_GMX50_bare.tar.gz
cd water-cut1.0_GMX50_bare/3072
gmx_mpi grompp -f pme.mdp
OMP_NUM_THREADS=2 mpiexec -np 4 gmx_mpi mdrun
cp -rp $TMPDIR/water-cut1.0_GMX50_bare $HOME
A detailed description is provided below.
https://www.gromacs.org/
Info
GROMACS 2023 and later support the CUDA Graphs feature.
This feature may lead to improved execution performance when using multiple GPUs, but it is your own responsibility to determine whether or not it will speed up your case, as described below.
"Note that this remains an experimental feature which has had limited testing, so care should be taken to ensure the results are as expected."
A detailed description is provided below.
https://developer.nvidia.com/blog/a-guide-to-cuda-graphs-in-gromacs-2023/
Example ( TSUBAME4.0 node_f=1 ) :
mpiexec -x OMP_NUM_THREADS=2 -x GMX_ENABLE_DIRECT_GPU_COMM=1 -np 4 gmx_mpi mdrun -nb gpu -bonded gpu -pme gpu -update gpu -npme 1
7.1.4. LAMMPS¶
LAMMPS is a classical molecular dynamics code that models ensembles of particles in liquid, solid, or gaseous states.
An example of how LAMMPS can be used with a batch queue system is described below.
#!/bin/bash
#$ -cwd
#$ -l node_f=1
#$ -l h_rt=0:10:0
#$ -N lammps
module load lammps
cp -rp $LAMMPS_DIR/examples/VISCOSITY $TMPDIR
cd $TMPDIR/VISCOSITY
mpirun -x PATH -x LD_LIBRARY_PATH -np 4 lmp -sf gpu -in in.gk.2d
cp -rp $TMPDIR/VISCOSITY $HOME
A detailed description is provided below.
https://www.lammps.org/
7.1.5. NAMD¶
NAMD is an object-oriented parallel molecular dynamics code designed for high-performance simulations of large biomolecular systems.
An example of how NAMD can be used with a batch queue system is described below.
#!/bin/bash
#$ -cwd
#$ -l node_f=1
#$ -l h_rt=0:10:0
#$ -N namd
module load namd
cp -rp $NAMD_DIR/examples/stmv.tar.gz $TMPDIR
cd $TMPDIR
tar xf stmv.tar.gz
cd stmv
namd3 +idlepoll +p4 +devices 0,1,2,3 stmv.namd
cp -rp $TMPDIR/stmv $HOME
A detailed description is provided below.
https://www.ks.uiuc.edu/Research/namd/3.0/ug/
7.1.6. CP2K¶
CP2K is a quantum chemistry and solid state physics software package that can run atomic simulations of solid, liquid, molecular, periodic, material, crystalline, and biological systems.
An example of how CP2K can be used with a batch queue system is described below.
#!/bin/bash
#$ -cwd
#$ -l node_f=1
#$ -l h_rt=0:10:0
#$ -N cp2k
module load cp2k
cp -rp $CP2K_DIR/benchmarks/QS $TMPDIR
cd $TMPDIR/QS
export OMP_NUM_THREADS=1
mpirun -x PATH -x LD_LIBRARY_PATH -np 4 cp2k.psmp -i H2O-32.inp -o H2O-32.out
cp -rp $TMPDIR/QS $HOME
A detailed description is provided below.
https://www.cp2k.org/
7.1.7. QUANTUM ESPRESSO¶
QUANTUM ESPRESSO is a suite for first-principles electronic structure calculations and materials modeling.
An example of how to use QUANTUM ESPRESSO with the batch queue system is described below.
#!/bin/sh
#$ -cwd
#$ -l h_rt=00:10:00
#$ -l node_f=1
#$ -N q-e
module purge
module load quantumespresso
cp -p $QUANTUMESPRESSO_DIR/test-suite/pw_scf/scf.in .
cp -p $QUANTUMESPRESSO_DIR/example/Si.pz-vbc.UPF .
mpirun -x ESPRESSO_PSEUDO=$PWD -x PATH -x LD_LIBRARY_PATH -np 4 pw.x < scf.in
Info
The QUANTUM ESPRESSO build provided on TSUBAME4.0 does not support CPU-only execution. Be sure to specify a resource type that provides GPUs.
Also, make sure that the number of MPI processes specified with -np matches the number of GPUs available for the resource type you are using (rounded up).
Example: specify -np 4 when using node_f, and -np 1 when using gpu_h.
In addition, when applying the workaround described in "When running OpenMPI/Intel MPI, an hcoll-related error or segmentation fault occurs.", be sure to run on a single node.
If run on multiple nodes, a kernel panic may occur.
A detailed description is provided below.
https://www.quantum-espresso.org/
7.2. CFD related software¶
7.2.1. OpenFOAM¶
OpenFOAM is an open source fluid/continuum simulation code. There are two versions installed: the Foundation version (openfoam) and the ESI version (openfoam-esi).
An example of how to use OpenFOAM with the batch queue system is described below.
#!/bin/bash
#$ -cwd
#$ -l node_f=1
#$ -l h_rt=0:10:0
#$ -N openfoam
module load openfoam
mkdir -p $TMPDIR/$FOAM_RUN
cd $TMPDIR/$FOAM_RUN
cp -rp $FOAM_TUTORIALS .
cd tutorials/legacy/incompressible/icoFoam/cavity/cavity
blockMesh
icoFoam
paraFoam
If you are using the ESI version of OpenFOAM, replace module load openfoam above with module load openfoam-esi.
A detailed explanation is given below.
https://openfoam.org/resources/
http://www.openfoam.com/documentation/
7.3. Numerical calculation library for GPU¶
7.3.1. cuBLAS¶
cuBLAS is a BLAS (Basic Linear Algebra Subprograms) library that runs on GPUs.
How to use
$ module load cuda
$ nvcc -gencode arch=compute_90,code=sm_90 -o sample sample.cu -lcublas
When calling cuBLAS from an ordinary C program, specify the include path, library path, and library with the -I, -L, and -l options at compile time.
$ module load cuda
$ gcc -o blas blas.c -I${CUDA_HOME}/include -L${CUDA_HOME}/lib64 -lcublas
7.3.2. cuSPARSE¶
cuSPARSE is a library for sparse matrix calculations on NVIDIA GPUs.
How to use
$ module load cuda
$ nvcc -gencode arch=compute_90,code=sm_90 sample.cu -lcusparse -o sample
When calling cuSPARSE from an ordinary C program, specify the include path, library path, and library with the -I, -L, and -l options at compile time.
$ module load cuda
$ g++ sample.c -lcusparse_static -I${CUDA_HOME}/include -L${CUDA_HOME}/lib64 -lculibos -lcudart_static -lpthread -ldl -o sample
7.3.3. cuFFT¶
cuFFT is a library for parallel FFT (Fast Fourier Transform) on NVIDIA GPUs.
How to use
$ module load cuda
$ nvcc -gencode arch=compute_90,code=sm_90 -o sample sample.cu -lcufft
When calling cuFFT from an ordinary C program, specify the include path, library path, and library with the -I, -L, and -l options at compile time.
$ module load cuda
$ gcc -o sample sample.c -I${CUDA_HOME}/include -L${CUDA_HOME}/lib64 -lcufft
7.4. Machine learning and big data analysis related software¶
7.4.1. CuDNN¶
CuDNN is a library for GPU-based Deep Neural Networks.
The usage of CuDNN is described below.
$ module load cuda cudnn
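After loading the modules, a program that calls the cuDNN API can be compiled by linking libcudnn. This is a minimal sketch: sample.cu is a hypothetical source file, and the cudnn module is assumed to set the include and library search paths (if it does not, add -I and -L options as in the cuBLAS example above).
$ nvcc -gencode arch=compute_90,code=sm_90 -o sample sample.cu -lcudnn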
7.4.2. NCCL¶
NCCL is a collective communication library for multiple GPUs.
An example of how to use NCCL is shown below.
$ module load cuda nccl
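After loading the modules, a program that calls NCCL can be compiled by linking libnccl. This is a minimal sketch under the same assumptions as above: sample.cu is a hypothetical source file, and the nccl module is assumed to set the include and library search paths.
$ nvcc -gencode arch=compute_90,code=sm_90 -o sample sample.cu -lnccl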
7.4.3. PyTorch¶
PyTorch is an open source machine learning library available for machine learning in Python.
The installation procedure for PyTorch is described below.
$ python3 -m pip install --user torch
Info
PyTorch is installed in the user environment.
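As a quick check that the installed PyTorch can see the GPUs on a compute node, the following one-liner can be used (a minimal sketch; it only prints whether CUDA is available and how many devices are detected).
$ python3 -c "import torch; print(torch.cuda.is_available(), torch.cuda.device_count())"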
7.4.4. TensorFlow¶
TensorFlow is an open source library for machine learning and AI using data flow graphs.
Instructions for installing TensorFlow are provided below.
$ python3 -m pip install --user tensorflow
Info
TensorFlow is installed in the user environment.
A detailed description is provided below.
Info
To run TensorFlow on a GPU, the versions of python, cudnn, and cuda must match the table linked below.
https://www.tensorflow.org/install/source#gpu
Example)
TensorFlow 2.17.0
Python 3.9-3.12
CuDNN 8.9
CUDA 12.3
Please note that TSUBAME4.0 does not provide CuDNN 8.9, which supports CUDA 12, as a module.
In addition, as a general rule, we do not install versions older than those currently provided.
If you want to run TensorFlow 2.17.0 on a GPU, please install CuDNN 8.9 in your own environment using pip or similar tools.
(We have confirmed that it does not run on GPUs either without CuDNN or with CuDNN 9.0.0.)
In addition, depending on your build environment and version combination, installation may fail even when these conditions are met.
Our verified procedure is described below for reference information.
If you have trouble installing other versions, procedures, or environments, you will need to check the release notes of each application yourself to resolve the problem.
[Reference]
Here is a log of the process of installing TensorFlow 2.17.0 and CuDNN 8.9 with CUDA12 support in a python virtual environment and confirming the use of GPUs.
# Check python version
$ python -V
Python 3.9.18
# Load cuda12.3.2
$ module load cuda/12.3.2
# Create python virtual environment
$ python -m venv venv
# Call python virtual environment
$ source venv/bin/activate
# Upgrade pip
$ pip install --upgrade pip
# Install cudnn8.9 for cuda12 and tensorflow2.17.0
$ pip install nvidia-cudnn-cu12==8.9.7.29 tensorflow==2.17.0
# Check that the GPU is visible
$ python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
<< Omit logs >>
[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
7.4.5. DeePMD-kit¶
DeePMD-kit is a machine learning framework for MD. An example job script for DeePMD-kit is shown below.
7.4.5.1. DeePMD-kit + LAMMPS¶
7.4.5.1.1. DeePMD-kit + LAMMPS (1 node)¶
An example job script for DeePMD-kit + LAMMPS (1 node, 4 GPUs) is shown below.
#!/bin/sh
#$ -l h_rt=6:00:00
#$ -l node_f=1
#$ -cwd
module purge
module load deepmd-kit lammps
module li 2>&1
# enable DeePMD-kit for lammps/2aug2023_u3
export LAMMPS_PLUGIN_PATH=$DEEPMD_KIT_DIR/lib/deepmd_lmp
# https://tutorials.deepmodeling.com/en/latest/Tutorials/DeePMD-kit/learnDoc/Handson-Tutorial%28v2.0.3%29.html
wget https://dp-public.oss-cn-beijing.aliyuncs.com/community/CH4.tar
tar xf CH4.tar
cd CH4/00.data
python3 <<EOF
import dpdata
import numpy as np
data = dpdata.LabeledSystem('OUTCAR', fmt = 'vasp/outcar')
print('# the data contains %d frames' % len(data))
# random choose 40 index for validation_data
index_validation = np.random.choice(200,size=40,replace=False)
# other indexes are training_data
index_training = list(set(range(200))-set(index_validation))
data_training = data.sub_system(index_training)
data_validation = data.sub_system(index_validation)
# all training data put into directory:"training_data"
data_training.to_deepmd_npy('training_data')
# all validation data put into directory:"validation_data"
data_validation.to_deepmd_npy('validation_data')
print('# the training data contains %d frames' % len(data_training))
print('# the validation data contains %d frames' % len(data_validation))
EOF
cd ../01.train
dp train input.json
dp freeze -o graph.pb
dp compress -i graph.pb -o graph-compress.pb
dp test -m graph-compress.pb -s ../00.data/validation_data -n 40 -d results
cd ../02.lmp
ln -s ../01.train/graph-compress.pb
mpirun -x PATH -x LD_LIBRARY_PATH -np 4 lmp -sf gpu -in in.lammps
7.4.5.1.2. DeePMD-kit + LAMMPS (2 nodes)¶
An example job script for DeePMD-kit + LAMMPS (2 nodes, 8 GPUs) is shown below.
#!/bin/sh
#$ -l h_rt=12:00:00
#$ -l node_f=2
#$ -cwd
module purge
module load deepmd-kit lammps
module li 2>&1
# enable DeePMD-kit
export LAMMPS_PLUGIN_PATH=$DEEPMD_KIT_DIR/lib/deepmd_lmp
# https://tutorials.deepmodeling.com/en/latest/Tutorials/DeePMD-kit/learnDoc/Handson-Tutorial%28v2.0.3%29.html
wget https://dp-public.oss-cn-beijing.aliyuncs.com/community/CH4.tar
tar xf CH4.tar
cd CH4/00.data
python3 <<EOF
import dpdata
import numpy as np
data = dpdata.LabeledSystem('OUTCAR', fmt = 'vasp/outcar')
print('# the data contains %d frames' % len(data))
# random choose 40 index for validation_data
index_validation = np.random.choice(200,size=40,replace=False)
# other indexes are training_data
index_training = list(set(range(200))-set(index_validation))
data_training = data.sub_system(index_training)
data_validation = data.sub_system(index_validation)
# all training data put into directory:"training_data"
data_training.to_deepmd_npy('training_data')
# all validation data put into directory:"validation_data"
data_validation.to_deepmd_npy('validation_data')
print('# the training data contains %d frames' % len(data_training))
print('# the validation data contains %d frames' % len(data_validation))
EOF
cd ../01.train
mpirun -x PATH -x LD_LIBRARY_PATH -x PYTHONPATH -x NCCL_BUFFSIZE=1048576 -npernode 4 -np 8 dp train input.json
dp freeze -o graph.pb
dp compress -i graph.pb -o graph-compress.pb
dp test -m graph-compress.pb -s ../00.data/validation_data -n 40 -d results
cd ../02.lmp
ln -s ../01.train/graph-compress.pb
mpirun -x PATH -x LD_LIBRARY_PATH -npernode 4 -np 8 lmp -sf gpu -in in.lammps
A detailed description is provided below. https://docs.deepmodeling.com/projects/deepmd/en/master/index.html
7.4.6. R¶
R is an interpreted programming language for data analysis and graphics.
Rmpi is installed for parallel processing and rpud for GPU.
Examples of how to use R are described below.
$ module load R
$ mpirun -np 2 Rscript test.R
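The following is a minimal sketch of running the same command through the batch queue system, following the pattern of the other job scripts in this chapter (test.R is a hypothetical user script).
#!/bin/bash
#$ -cwd
#$ -l node_f=1
#$ -l h_rt=0:10:0
#$ -N rmpi
module load R
mpirun -np 2 Rscript test.R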
7.4.7. Apache Hadoop¶
The Apache Hadoop software library is a framework for distributed processing of large data sets using a simple programming model.
Examples of how to use Apache Hadoop are described below.
$ module load hadoop
$ mkdir input
$ cp -p $HADOOP_HOME/etc/hadoop/*.xml input
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar grep input output 'dfs[a-z.]+'
$ cat output/part-r-00000
1 dfsadmin
An example of how to use Apache Hadoop with the batch queue system is shown below.
#!/bin/bash
#$ -cwd
#$ -l node_f=1
#$ -l h_rt=0:10:0
#$ -N hadoop
module load hadoop
cd $TMPDIR
mkdir input
cp -p $HADOOP_HOME/etc/hadoop/*.xml input
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.6.jar grep input output 'dfs[a-z.]+'
cp -rp output $HOME
7.5. Visualization related software¶
7.5.1. POV-Ray¶
POV-Ray is a free ray-tracing software.
Examples of how POV-Ray can be used are described below.
$ module load pov-ray
$ povray -benchmark
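A minimal sketch of running the same benchmark through the batch queue system, following the pattern of the other job scripts in this chapter, is shown below.
#!/bin/bash
#$ -cwd
#$ -l node_f=1
#$ -l h_rt=0:10:0
#$ -N povray
module load pov-ray
povray -benchmark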
A detailed description is provided below.
https://www.povray.org/
7.5.2. ParaView¶
ParaView is an open source, multi-platform data analysis and visualization application.
Examples of how ParaView can be used are described below.
$ module load paraview
$ paraview
7.5.2.1. For visualization with multiple GPUs¶
You can use paraview/5.12.0 or paraview/5.12.0-egl to visualize with multiple GPUs on multiple nodes.
Note that paraview/5.12.0-egl does not include the paraview command.
Here is an example of using 8 GPUs with node_f=2.
- wrap.sh
#!/bin/sh
num_gpus_per_node=4
mod=$((OMPI_COMM_WORLD_RANK%num_gpus_per_node))
if [ $mod -eq 0 ];then
export VTK_DEFAULT_EGL_DEVICE_INDEX=0
elif [ $mod -eq 1 ];then
export VTK_DEFAULT_EGL_DEVICE_INDEX=1
elif [ $mod -eq 2 ];then
export VTK_DEFAULT_EGL_DEVICE_INDEX=2
elif [ $mod -eq 3 ];then
export VTK_DEFAULT_EGL_DEVICE_INDEX=3
fi
$*
- job.sh
#!/bin/sh
#$ -cwd
#$ -V
#$ -l h_rt=8:0:0
#$ -l node_f=2
module purge
module load paraview
mpirun -x PATH -x LD_LIBRARY_PATH -npernode 4 -np 8 ./wrap.sh pvserver
Don't forget to give wrap.sh execute permission (chmod 755 wrap.sh).
Submit the above job script with
qsub -g <group name> job.sh
and confirm that the job is running with qstat.
yyyyyyyy@login1:~> qstat
job-ID prior name user state submit/start at queue jclass slots ja-task-ID
------------------------------------------------------------------------------------------------------------------------------------------------
xxxxxxx 0.55354 job.sh yyyyyyyy r 05/31/2024 09:24:19 all.q@rXnY 56
SSH to the node where the job is running with X forwarding enabled, and start paraview.
yyyyyyyy@login1:~> ssh -CY rXnY
yyyyyyyy@rXnY:~> module load paraview
paraview
turbovnc can also be used.
After starting, click "File"->"Connect" and click "Add Server".
Enter "Name" appropriately (in this case, "test") and click "Configure".
Then click "Connect".
Once connected, test (cs://localhost:11111) will appear in the "Pipeline Browser" field.
The sample data of paraview can be downloaded from here.
A detailed description is provided below.
https://www.paraview.org/
7.5.3. VisIt¶
VisIt is an open source visualization application.
Examples of how VisIt can be used are described below.
$ module load visit
$ visit
A detailed description is provided below.
https://wci.llnl.gov/simulation/computer-codes/visit/
7.6. Other freeware¶
7.6.1. turbovnc¶
turbovnc is open source VNC software. An example of how to use turbovnc is shown below.
Please allocate a compute node with qrsh and run the following on that compute node.
- Get compute nodes
$ qrsh -g <TSUBAME group> -l <Resource type>=<number> -l h_rt=<Maximum run time>
- Execute the following on the secured compute node and start vncserver
$ module load turbovnc
$ vncserver -xstartup xfce.sh
You will require a password to access your desktops.
Password: # <-- Enter a password
Verify:
Would you like to enter a view-only password (y/n)? n
Desktop 'TurboVNC: rXnY:1 ()' started on display rXnY:1 # <-- Make a note of the VNC display number (:1)
Creating default startup script /home/n/xxxx/.vnc/xstartup.turbovnc
Starting applications specified in /home/n/xxxx/.vnc/xstartup.turbovnc
Log file is /home/n/xxxx/.vnc/rXiYnZ:1.log
If you want to increase the screen size, specify the size with vncserver -geometry <WIDTH>x<HEIGHT>.
- Then download the installer from https://sourceforge.net/projects/turbovnc/files/ and install the TurboVNC Viewer on your own PC.
- From the terminal software connected to the compute node, configure SSH port forwarding so that local port 5901 is forwarded to port 5901 on the compute node (if the display number is rXiYnZ:n, the port to forward is 5900+n); an example command is shown after this list.
- Start turbovnc viewer from your own PC, connect to localhost:5901 and enter the password you set.
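A minimal sketch of the port forwarding command, run from your own PC: rXnY stands for the compute node you secured, the display number is assumed to be :1 (adjust both ports to 5900+n for other display numbers), and <TSUBAME account> is your login name.
$ ssh -l <TSUBAME account> -L 5901:rXnY:5901 login.t4.gsic.titech.ac.jp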
Tips
The VNC display number counts up each time vncserver is started; check the port number in the SSH port forwarding configuration each time.
7.6.1.1. How to use VNC client from MobaXterm¶
MobaXterm has a built-in VNC client, so you can use VNC connections without installing a VNC client.
- After securing the node with qrsh, select "Sessions"->"New session"->"VNC" from MobaXterm.
- Then, in "Basic Vnc settings", enter the hostname of the secured compute node in "Remote hostname or IP address" and 5900+n in "Port". In "Network settings", click "Connect through SSH gateway (jump host)", enter login.t4.gsic.titech.ac.jp in "Gateway SSH server", leave "Port" as 22, enter your TSUBAME login name in "User", check "Use private key", and specify your private key.
Click OK to start the VNC client.
7.6.1.2. turbovnc + VirtualGL¶
If you are using a resource type (node_f, node_h, node_q, gpu_1) that allocates one or more GPUs when using turbovnc, you can use VirtualGL to visualize using GPUs. As an example, the use of VirtualGL in the case of gpu_1 is shown below.
$ qrsh ... -l gpu_1=1
$ module load turbovnc
$ vncserver -xstartup xfce.sh
- Connect with a VNC client and perform the following
$ vglrun -d /dev/dri/card<N> <OpenGL application>
<N> is a number from 1 to 4, corresponding to GPUs 0 to 3.
The correspondence between the GPU number and card<N> is as follows.
Device name | GPU number |
---|---|
/dev/dri/card1 | GPU1(64:00) |
/dev/dri/card2 | GPU0(04:00) |
/dev/dri/card3 | GPU3(E4:00) |
/dev/dri/card4 | GPU2(84:00) |
7.6.2. gnuplot¶
gnuplot is a command line interactive graph drawing program.
In addition to the standard features, it is built to support X11, latex, PDFlib-lite and Qt4.
Examples of how to use gnuplot are given below.
$ gnuplot
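Non-interactive use is also possible. The following minimal sketch plots sin(x) with the text-based dumb terminal, so it works even without an X11 display.
$ gnuplot -e "set terminal dumb; plot sin(x)"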
7.6.3. Tgif¶
tgif is an open source drawing tool.
The following is a description of how to use tgif.
$ module load tgif
$ tgif
If the error "Cannot open the Default(Msg) Font '-*-courier-medium-r-normal-*-14-*-*-*-*-*-iso8859-1'" appears, add the following lines to ~/.Xdefaults:
Tgif.DefFixedWidthFont: -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*
Tgif.DefFixedWidthRulerFont: -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*
Tgif.MenuFont: -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*
Tgif.BoldMsgFont: -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*
Tgif.MsgFont: -*-fixed-medium-r-semicondensed--13-*-*-*-*-*-*-*
7.6.4. GIMP¶
GIMP is an open source image manipulation program.
Examples of how to use GIMP are described below.
$ gimp
7.6.5. ImageMagick¶
ImageMagick is an image processing tool.
In addition to the standard features, it is built to support X11, HDRI, libwmf, and jpeg.
An example of how to use ImageMagick is shown below.
$ module load imagemagick
$ convert -size 48x1024 -colorspace RGB 'gradient:#000000-#ffffff' -rotate 90 -gamma 0.5 -gamma 2.0 result.jpg
7.6.6. TeX Live¶
TeX Live is an integrated TeX package.
An example of how to use TeX Live is shown below.
$ lualatex test.tex
Info
A PDF file will be created.
7.6.7. Java SDK¶
The following versions of OpenJDK are installed as Java SDK.
- openjdk version "1.8.0_402"
- openjdk version "11.0.22" (Default)
- openjdk version "21.0.2"
The module command allows you to switch between versions.
The method of switching is as follows
module unload openjdk
module load openjdk/<version>
OpenJDK version | module version |
---|---|
1.8.0_402 | 1.8.0 |
11.0.22 | 11.0.22 |
21.0.2 | 21.0.2 |
The current version can be checked by following the steps below.
java -version
An example of how to use the Java SDK is shown below.
$ javac Test.java
$ java Test
7.6.8. PETSc¶
PETSc is an open source parallel numerical library. It can perform linear equation solving, etc.
Two versions, one for real numbers and the other for complex numbers, are installed.
Examples of how to use PETSc are described below.
$ module load petsc/3.20.4-real ← real number usage
or
$ module load petsc/3.20.4-complex ← for complex numbers
$ mpiifort test.F -lpetsc
7.6.9. FFTW¶
FFTW is an open source library for Fast Fourier Transform.
Since the FFTW 2.x and 3.x series are incompatible, both the series 2 and series 3 versions are installed.
Examples of how to use FFTW are described below.
$ module load fftw/3.3.10-intel intel-mpi/2021.11 ← Intel MPI
or
$ module load fftw/3.3.10-gcc ← Open MPI
$ gfortran test.f90 -lfftw3
7.6.10. Apptainer (Singularity)¶
Apptainer (Singularity) is a Linux container for HPC.
For more information, see Use containers.
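As a minimal sketch, assuming apptainer is available on the compute node and that the node can reach Docker Hub, a container image can be pulled and a command run inside it as follows; see Use containers for the supported workflow on this system.
$ apptainer pull ubuntu.sif docker://ubuntu:22.04
$ apptainer exec ubuntu.sif cat /etc/os-release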
7.6.11. Alphafold2¶
Alphafold2 is a protein structure prediction program using machine learning. An example of using Alphafold is shown below.
- Initialization (login node or compute node)
module purge
module load alphafold
cp -pr $ALPHAFOLD_DIR .
cd alphafold
- At runtime (example job script for alphafold/2.3.2)
#!/bin/sh
#$ -l h_rt=24:00:00
#$ -l node_f=1
#$ -cwd
module purge
module load alphafold
module li
cd alphafold
./run_alphafold.sh -a 0,1,2,3 -d $ALPHAFOLD_DATA_DIR -o dummy_test/ -m model_1 -f ./example/query.fasta -t 2020-05-14
Due to the large size of the database files, please avoid individual downloads whenever possible.
For a detailed description of Alphafold, please see below. https://github.com/deepmind/alphafold
7.6.12. miniconda¶
miniconda is software for creating Python virtual environments. An example of using miniconda is shown below.
module load miniconda
eval "$(/apps/t4/rhel9/free/miniconda/24.1.2/bin/conda shell.bash hook)"
conda create -n test
conda activate test
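As a follow-up sketch, a package can then be installed into the environment and the environment deactivated when finished (numpy is only an example package name).
conda install -n test numpy
conda deactivate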
For a detailed description of miniconda, please see below. https://docs.anaconda.com/free/miniconda/index.html
7.6.13. spack¶
spack is a package manager for HPC. An example of using spack is shown below.
module load spack
spack install tree
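After installation, the package can be made available in the current shell with spack load; a minimal usage sketch is shown below.
spack load tree
tree --version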
For more information on spack, please see below. https://spack.io/
7.6.14. Automake¶
Automake is a programming tool to automate parts of the compilation process.
You can switch to another version of Automake with the module command.
$ automake --version
automake (GNU automake) 1.16.2
$ module load automake
$ automake --version
automake (GNU automake) 1.17
For more information on Automake, please see below.
https://www.gnu.org/software/automake/
7.6.15. Autoconf¶
Autoconf is a tool for producing configure scripts for building, installing, and packaging software on computer systems where a Bourne shell is available.
You can switch to another version of Autoconf with the module command.
$ autoconf --version
autoconf (GNU Autoconf) 2.69
$ module load autoconf
$ autoconf --version
autoconf (GNU Autoconf) 2.72
For more information on Autoconf, please see below.
https://www.gnu.org/software/autoconf/
7.7. Databases for software¶
We have prepared databases on TSUBAME for use with some software. Because of the large size of these database files, please avoid downloading them individually if possible.
7.7.1. Database for Alphafold2¶
Databases for use with Alphafold2 are available and can be loaded as follows.
module load alphafold2_database
When using Alphafold2 prepared by TSUBAME, the database is also set automatically, so there is no need to specify ALPHAFOLD2_DATABASE.
The database is scheduled to be updated about once a month. By loading the module as shown above, the latest database on TSUBAME4.0 is always used.
If you want to pin the database to a specific snapshot, specify the version when running module load.
[ Version check procedure ]
$ module load alphafold2_database
$ module list
Currently Loaded Modulefiles:
1) alphafold2_database/202411
7.7.2. Database for Alphafold3¶
Databases for use with Alphafold3 are available and can be loaded as follows.
module load alphafold3_database
You need to prepare the execution environment of Alphafold3 by yourself.
For how to use it on TSUBAME4.0, please refer to the Qiita article written by Associate Professor Yoshitaka Moriwaki of the Medical Research Laboratory, Institute for Integrated Research, National University Corporation Institute of Science Tokyo.
Info
AlphaFold3 source code and model parameters are each covered by different licenses.
There are restrictions such as prohibition of commercial use, so please check Licence and Disclaimer and comply with the license before building the Alphafold3 execution environment.
The database is scheduled to be updated about once a month. By loading the module as shown above, the latest database on TSUBAME4.0 is always used.
If you want to pin the database to a specific snapshot, specify the version when running module load.
[ Version check procedure ]
$ module load alphafold3_database
$ module list
Currently Loaded Modulefiles:
1) alphafold3_database/202411
7.7.3. Database for LocalColabfold¶
Databases for use with LocalColabfold are available and can be loaded as follows.
module load colabfold_database
You need to prepare the execution environment of LocalColabfold by yourself.
For more information on LocalColabfold, please refer to the Qiita article written by Associate Professor Yoshitaka Moriwaki of the Medical Research Laboratory, Institute for Integrated Research, National University Corporation Institute of Science Tokyo.
Info
The Colabfold license applies to the use of LocalColabfold.
Before building an execution environment for LocalColabfold, please check the LICENSE and comply with the license.
The database is scheduled to be updated about once a month. By loading the module as shown above, the latest database on TSUBAME4.0 is always used.
If you want to pin the database to a specific snapshot, specify the version when running module load.
[ Version check procedure ]
$ module load colabfold_database
$ module list
Currently Loaded Modulefiles:
1) colabfold_database/202411