1. How to use¶
This software is installed on TSUBAME3.0 experimentally, so NO official support is provided.
Support is offered on a best-effort basis only.
ParaView is an open-source, multi-platform, scalable data analysis and visualization application.
With a single process, ParaView often cannot visualize large-scale data because of insufficient memory. The parallel version of ParaView on TSUBAME3.0 can handle such large-scale data.
The configuration of the parallel ParaView is shown below.
By starting pvserver on multiple compute nodes and connecting to it from paraview on a login node, large-scale data can be visualized. This setup is called the "parallel version of ParaView."
1.1. Node allocation and booting server¶
Since the ParaView client displays its GUI via the X Window protocol, log in to TSUBAME with X forwarding enabled, using "ssh -Y" or an equivalent, referring to the TSUBAME3.0 User's Guide.
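For example, a login command might look like the following. The account and host names are placeholders (check the TSUBAME3.0 User's Guide for the actual login node address), and `ssh -G` is used here only to print the options that `-Y` would apply, without actually connecting:

```shell
# Hypothetical login with trusted X11 forwarding; replace the user name,
# and verify the login host against the User's Guide:
#   ssh -Y YOUR_ACCOUNT@login.t3.gsic.titech.ac.jp
# Inspect what -Y sets without connecting (no network access needed):
ssh -G -Y YOUR_ACCOUNT@login.t3.gsic.titech.ac.jp | grep forwardx11
```

The `forwardx11` lines in the output confirm that `-Y` turns X11 forwarding on, which is what lets the ParaView GUI display on your local machine.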
First, use qrsh to allocate compute nodes and start pvserver.
Please refer to the TSUBAME3.0 User's Guide for details on interactive job execution.
In the following example, 10 q_node nodes are allocated for 30 minutes.
Log in to the allocated node rXiYnZ, load the necessary modules, and start pvserver.
mpirun is executed with 64 processes out of the 70 cores allocated (7 cores x 10 nodes).
login0:~> qrsh -g tga-GROUP -l q_node=10 -l h_rt=00:30:00
rXiYnZ:~> module load cuda openmpi paraview/0_5.2.0
rXiYnZ:~> mpirun -x LD_LIBRARY_PATH -np 64 pvserver --use-offscreen-rendering --disable-xdisplay-test &
Waiting for client...
Connection URL: cs://rXiYnZ:11111
Accepting connection(s): rXiYnZ:11111
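The process count passed to mpirun follows from the allocation arithmetic; a minimal sketch, using the figures from the example above (7 cores per q_node is taken from that example):

```shell
NODES=10             # q_node=10 in the qrsh request
CORES_PER_NODE=7     # cores provided by each q_node
TOTAL=$((NODES * CORES_PER_NODE))
echo "allocated cores: $TOTAL"       # 70 cores in total
NP=64                                # value given to mpirun -np; must not exceed TOTAL
echo "mpirun -np $NP pvserver ..."
```

Any process count up to the total allocated cores would work; 64 is simply the value used in the example session.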
As shown in the above example, the server processes start in the background because of the & at the end of the mpirun invocation. The URL for accessing the pvserver is also displayed.
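The & here is ordinary shell job control, so the usual job-control commands apply to the server. A generic sketch, in which sleep stands in for the mpirun ... pvserver command (which needs allocated nodes to run):

```shell
sleep 30 &             # stands in for: mpirun ... pvserver ... &
echo "background PID: $!"
jobs                   # the backgrounded server is listed here
kill $!                # stop the server this way when you are finished with it
```

Remember to terminate pvserver (or let the job's h_rt limit expire) so the allocated nodes are released.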
1.2. Starting GUI client¶
Next, start paraview as the GUI client.
Using the server URL obtained in the previous step, connect to the pvserver launched earlier.
rXiYnZ:~> paraview --server-url=cs://rXiYnZ:11111
Now you can use ParaView. For more details, please refer to the following URL.