Introduction to Running CFX on U2
Dec 18, 2015

Introduction to Running CFX on U2
Introduction to the U2 Cluster: Getting Help, Hardware Resources, Software Resources, Computing Environment, Data Storage
Login and File Transfer: UBVPN, Login and Logout, More about X-11 Display, File Transfer
Unix Commands: Short List of Basic Unix Commands, Reference Card, Paths and Using Modules
Starting the CFX Solver: Launching CFX, Monitoring
Running CFX on the Cluster: PBS Batch Scheduler, Interactive Jobs, Batch Jobs
Information and Getting Help
Getting help: CCR uses an email problem ticket system.
Users send their questions and descriptions of problems to [email protected]
The technical staff receives the email and responds to the user.
• Usually within one business day.
This system allows staff to monitor and contribute their expertise to the problem.
CCR website: http://www.ccr.buffalo.edu
Cluster Computing
The u2 cluster is the major computational platform of the Center for Computational Research.
Login (front-end) and cluster machines run the Linux operating system.
Requires a CCR account.
Accessible from the UB domain.
The login machine is u2.ccr.buffalo.edu
Compute nodes are not accessible from outside the cluster.
Traditional UNIX-style command-line interface; a few basic commands are necessary.
Cluster Computing
The u2 cluster consists of 1056 dual processor DELL SC1425 compute nodes.
The compute nodes have Intel Xeon processors.
Most of the cluster machines are 3.2 GHz with 2 GB of memory.
There are 64 compute nodes with 4 GB of memory and 32 with 8 GB.
All nodes are connected to a gigabit Ethernet network.
756 nodes are also connected to Myrinet, a high-speed fibre network.
Cluster Computing
Data Storage
Home directories: /san/user/UBITusername/u2
The default user quota for a home directory is 2 GB.
• Users requiring more space should contact the CCR staff.
Data in home directories is backed up.
• CCR retains data backups for one month.
Project directories: /san/projects[1-3]/research-group-name
UB faculty can request additional disk space for use by the members of their research group.
The default group quota for a project directory is 100 GB.
Data in project directories is NOT backed up by default.
Data Storage
Scratch spaces are available for TEMPORARY use by jobs running on the cluster.
/san/scratch provides 2 TB of space.
• Accessible from the front-end and all compute nodes.
/ibrix/scratch provides 25 TB of high-performance storage.
• Accessible from the front-end and all compute nodes.
• Applications with high IO that share data files benefit the most from using IBRIX.
/scratch provides a minimum of 60 GB of storage.
• The front-end and each compute node has its own local scratch space, accessible from that machine only.
• Applications with high IO that do not share data files benefit the most from using local scratch.
• Jobs must copy files to and from local scratch.
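The last point, copying files to and from local scratch, follows a stage-in / run / stage-out pattern in a job script. The sketch below illustrates it: the input file and solver step are stand-ins, and $PBSTMPDIR (CCR's per-job local scratch directory, described later) falls back to a temporary directory so the pattern can be tried anywhere.

```shell
# Stage-in / run / stage-out through node-local scratch (sketch).
JOBDIR="${PBS_O_WORKDIR:-$PWD}"        # directory the job was submitted from
SCRATCH="${PBSTMPDIR:-$(mktemp -d)}"   # per-job local scratch; CCR sets $PBSTMPDIR

echo "sample input" > "$JOBDIR/input.def"      # stand-in for a real input file
cp "$JOBDIR/input.def" "$SCRATCH/"             # 1. copy inputs to local scratch
( cd "$SCRATCH" && cp input.def result.out )   # 2. run the application here (stand-in)
cp "$SCRATCH/result.out" "$JOBDIR/"            # 3. copy results back before the job ends
```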
Software
CCR provides a wide variety of scientific and visualization software. Some examples: BLAST, MrBayes, iNquiry, WebMO,
ADF, GAMESS, TurboMole, CFX, Star-CD, Espresso, IDL, TecPlot, and Totalview.
The CCR website provides a complete listing of application software, as well as compilers and numerical libraries.
The GNU, INTEL, and PGI compilers are available on the U2 cluster.
A version of MPI (MPICH) is available for each compiler and network combination.
Note: U2 has two networks: gigabit ethernet and Myrinet. Myrinet performs at twice the speed of gigabit ethernet.
Accessing the U2 Cluster
The u2 cluster front-end is accessible from the UB domain (.buffalo.edu).
Use VPN for access from outside the University.
The UBIT website provides a VPN client for Linux, Mac, and Windows machines.
• http://ubit.buffalo.edu/software
The VPN client connects the machine to the UB domain, from which u2 can be accessed.
Telnet access is not permitted.
Login and X-Display
LINUX/UNIX workstation: ssh u2.ccr.buffalo.edu
• ssh [email protected]
The –X or –Y flag enables an X-Display from u2 to the workstation.
• ssh –X u2.ccr.buffalo.edu
Windows workstation: download and install the X-Win32 client from ubit.buffalo.edu/software/win/XWin32
• Use the configuration to set up ssh to u2. Set the command to xterm -ls
Logout: logout or exit in the login window.
File Transfer
FileZilla is available for Windows, Linux, and Mac machines; check the UBIT software pages. It is a drag-and-drop graphical interface. Please use port 22 for secure file transfer.
Command-line file transfer for Unix: sftp u2.ccr.buffalo.edu
• put, get, mput, and mget are used to upload and download data files.
• The wildcard “*” can be used with mput and mget.
scp filename u2.ccr.buffalo.edu:filename
Basic Unix Commands
Using the U2 cluster requires knowledge of some basic UNIX commands.
The CCR Reference Card provides a list of the basic commands. The Reference Card is a PDF linked from
www.ccr.buffalo.edu/display/WEB/Unix+Commands
These will get you started; you can learn more commands as you go.
List files:
• ls
• ls –la (long listing that shows all files)
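A quick way to see the difference: the plain listing hides dot files, while ls –la shows them (the directory and file names below are just examples):

```shell
# Create a throwaway directory with a visible and a hidden file.
mkdir -p lsdemo
touch lsdemo/visible.txt lsdemo/.hidden

ls lsdemo       # shows only visible.txt
ls -la lsdemo   # long listing also includes .hidden, ., and ..
```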
Basic Unix Commands
View files:
• cat filename (displays a file to the screen)
• more filename (displays a file with page breaks)
Change directory:
• cd directory-pathname
• cd (go to the home directory)
• cd .. (go up one level)
Show the directory pathname:
• pwd (shows the current directory pathname)
Copy files and directories:
• cp old-file new-file
• cp –R old-directory new-directory
Basic Unix Commands
Move files and directories:
• mv old-file new-file
• mv old-directory new-directory
• NOTE: a move is a copy and remove.
Create a directory:
• mkdir new-directory
Remove files and directories:
• rm filename
• rm –R directory (removes the directory and its contents)
• rmdir directory (the directory must be empty)
• Note: be careful when using the wildcard “*”
More about a command: man command
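These commands can be tried safely in a throwaway directory (the names below are examples):

```shell
mkdir -p unixdemo                    # create a directory
echo "hello" > unixdemo/a.txt
mv unixdemo/a.txt unixdemo/b.txt     # a rename: the old name is removed
mkdir unixdemo/sub
rm -R unixdemo/sub                   # removes the directory and its contents
```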
Basic Unix Commands
View file and directory permissions using the ls command:
• ls –l
Permissions have the following format:
• -rwxrwxrwx … filename
  – user, group, other
Change permissions of files and directories using the chmod command.
• The arguments for chmod are u g o (user, group, other), + - (add, remove), and r w x (read, write, execute).
• chmod g+r filename
  – adds read permission for the group
• chmod –R o-rwx directory-name
  – removes read, write, and execute permissions for others from the directory and its contents
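A short demonstration of the permission commands (the file name is an example):

```shell
echo "data" > report.txt

chmod u=rw,g=r,o= report.txt   # owner read/write, group read, others nothing
ls -l report.txt               # long listing shows -rw-r-----

chmod g+w report.txt           # add write permission for the group
chmod o-rwx report.txt         # make sure others have no access
```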
Basic Unix Commands
There are a number of editors available: emacs, vi, nano, pico
• emacs defaults to a GUI if you are logged in with X-Display enabled.
Files edited on Windows PCs may have embedded characters that can create runtime problems.
Check the type of a file:
• file filename
Convert a DOS file to Unix; this removes the Windows/DOS characters:
• dos2unix –n old-file new-file
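The problem and the fix can be reproduced even without dos2unix installed; tr strips the same carriage-return (\r) characters (the file names are examples):

```shell
# Build a file with DOS/Windows CRLF line endings.
printf 'line one\r\nline two\r\n' > dosfile.txt
file dosfile.txt           # reports "ASCII text, with CRLF line terminators"

# Strip the carriage returns (same effect as: dos2unix -n dosfile.txt unixfile.txt).
tr -d '\r' < dosfile.txt > unixfile.txt
file unixfile.txt          # reports plain "ASCII text"
```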
Modules
Modules are available to set variables and paths for application software, communication protocols, compilers, and numerical libraries.
module avail (lists all available modules)
module load module-name (loads a module)
• Updates the PATH variable with the path of the application.
module unload module-name (unloads a module)
• Removes the path of the application from the PATH variable.
module list (lists the loaded modules)
module show module-name
• Shows what the module sets.
Modules can be loaded in the user’s .bashrc file.
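For example, a user who always runs CFX might add a load line like this to .bashrc (a fragment; confirm the exact module name with module avail):

```shell
# ~/.bashrc fragment: load frequently used modules at login.
module load cfx
```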
Starting the CFX Solver
Create a subdirectory:
mkdir bluntbody
Change directory to bluntbody:
cd bluntbody
Copy the BluntBody.def file to the bluntbody directory:
cp /util/cfx-ub/CFX110/ansys_inc/v110/CFX/examples/BluntBody.def .
ls -l
Starting the CFX Solver
Load the CFX module:
module load cfx
Launch CFX: cfx5
• The CFX solver GUI will display on the workstation.
• To launch it detached from the command line: cfx5 &
Click on CFX-Solver 11.0
Starting the CFX Solver
Click on File > Define Run.
Select BluntBody.def.
Run mode is Serial.
In another window on u2, start top to monitor the memory and CPU.
Click Start Run in the CFX Define Run window.
After the solver has completed, click No for post-processing.
Running on the U2 Cluster
The compute machines are assigned to user jobs by the PBS (Portable Batch System) scheduler.
The qsub command submits jobs to the scheduler
Interactive jobs depend on the connection from the workstation to u2.
If the workstation is shut down or disconnected from the network, then the job will terminate.
PBS Execution Model
PBS executes a login as the user on the master host, and then proceeds according to one of two modes, depending on how the user requested that the job be run.
Script – the user executes the command:
qsub [options] job-script
• where job-script is a standard UNIX shell script containing some PBS directives along with the commands that the user wishes to run (examples later).
Interactive – the user executes the command:
qsub [options] –I
• The job is run “interactively,” in the sense that standard output and standard error are connected to the terminal session of the initiating qsub command. Note that the job is still scheduled and run as any other batch job, so you may wait a while for your prompt to come back “inside” your batch job.
Execution Model Schematic
(Diagram: qsub myscript sends the job to pbs_server and the SCHEDULER, which decides whether to run it. Once scheduled, the nodes listed in $PBS_NODEFILE (node1 … nodeN) run the prologue, a $USER login, myscript, and the epilogue.)
PBS Queues
The PBS queues defined for the U2 cluster are CCR and debug.
The CCR queue is the default.
The debug queue can be requested by the user; it is used to test applications.
qstat –q
• Shows the queues defined for the scheduler and their availability.
qmgr
• Shows details of the queues and scheduler.
PBS Queues
Do you even need to specify a queue?
You probably don’t need (and may not even be able) to specify a specific queue destination.
Most of our PBS servers use a routing queue. The exception is the debug queue on u2, which requires direct submission.
The debug queue has a number of compute nodes set aside for its use during peak times, usually 32.
The queue is always available; however, it has dedicated nodes Monday through Friday, from 9:00am to 5:00pm.
Use -q debug to specify the debug queue on the u2 cluster.
Batch Scripts - Resources
The “-l” options are used to request resources for a job, in both batch scripts and interactive jobs.
-l walltime=01:00:00
• Wall-clock time limit of the batch job; this example requests a 1-hour limit.
• If the job does not complete before this limit, it is terminated by the scheduler and all tasks are removed from the nodes.
-l nodes=8:ppn=2
• Number of cluster nodes, with optional processors per node; this example requests 8 nodes with 2 processors per node.
• All compute nodes in the u2 cluster have 2 processors per node. If you request 1 processor per node, you may share that node with another job.
Environmental Variables
$PBS_O_WORKDIR - directory from which the job was submitted.
By default, a PBS job starts from the user’s $HOME directory.
Note that you can change this default in your .cshrc or .bashrc file.
add the following to your .cshrc file:
if ( $?PBS_ENVIRONMENT ) then
  cd $PBS_O_WORKDIR
endif
or this to your .bashrc file:
if [ -n "$PBS_ENVIRONMENT" ]; then
  cd $PBS_O_WORKDIR
fi
In practice, many users change directory to the $PBS_O_WORKDIR directory in their scripts.
Environmental Variables
$PBSTMPDIR - reserved scratch space, local to each host (this is a CCR definition, not part of the PBS package).
This scratch directory is created in /scratch and is unique to the job.
The $PBSTMPDIR is created on every compute node running a particular job.
$PBS_NODEFILE - name of the file containing a list of nodes assigned to the current batch job.
Used to allocate parallel tasks in a cluster environment.
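A common use of $PBS_NODEFILE is counting the processor slots assigned to a job, for example to pass to mpirun. A sketch (outside a job the variable is unset, so this falls back to an empty list):

```shell
# Count assigned processors from the node file (one line per processor slot).
NODEFILE="${PBS_NODEFILE:-/dev/null}"   # set by PBS inside a job
NP=$(wc -l < "$NODEFILE")
echo "job has $NP processor slots"
# Typical use: mpirun -np $NP -machinefile $NODEFILE ./a.out
```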
Sample Interactive Job
Example: qsub -I -X -q debug -lnodes=1:ppn=2 -lwalltime=01:00:00
Sample Script – Cluster
Example of a PBS script for the cluster: /util/pbs-scripts/pbsCFXu2-sample
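The sample script itself lives on the cluster; the sketch below shows the general shape of such a script. The directives mirror the -l options discussed earlier, but the module name and solver invocation are illustrative assumptions, not the contents of the actual sample file:

```shell
# Write a sketch of a CFX batch script (contents are illustrative, not the
# actual /util/pbs-scripts/pbsCFXu2-sample).
cat > pbsCFX-sketch <<'EOF'
#!/bin/bash
#PBS -l walltime=01:00:00
#PBS -l nodes=1:ppn=2
#PBS -m e

cd $PBS_O_WORKDIR              # start in the submission directory
module load cfx                # assumed module name
cfx5solve -def BluntBody.def   # serial solver run
EOF
```

The script would then be submitted with qsub and monitored with qstat, as the next slide shows.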
Submitting a Batch Job
qsub pbsCFXu2-sample
qstat –an –u username
qstat –an jobid