Adenine User Guide

Please check the Adenine Administrator Guide if you are looking for information on how to configure or repair Adenine (only accessible from the clemson.edu domain).
Adenine.parl.clemson.edu is a 66-node Linux cluster (one head node and 65
slaves), located in 352 EIB. Each node is a dual 1 GHz Pentium III with 1
GByte of RAM. The nodes are connected by channel-bonded Fast Ethernet,
providing a theoretical bandwidth of 200 Mbit/sec between each node.
Each node has two IDE hard disk drives. The current kernel version can
be determined by running "uname -a" on the head node. The operating
system is based on Debian.
All jobs should be submitted to the batch scheduler for execution.
Users are not permitted to run mpirun, rsh, or rlogin directly; the only way to access the compute nodes is through the scheduler.
There are no regular backups of Adenine at this time. It is the responsibility of the user, and only the user, to maintain backups of any data stored on Adenine.
Your account should already be configured to allow you to access the parallel
programming tools on Adenine. You may verify this with the following commands:
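For example (a sketch; the exact tool names are an assumption, but mpicc and qsub are the usual entry points for MPI and PBS installations):

    which mpicc     # should print the path to the MPI C compiler wrapper
    which qsub      # should print the path to the PBS job submission tool

If either command prints nothing, your environment is probably not set up correctly.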
Compiling and submitting MPI jobs
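As a minimal sketch (assuming an MPICH-style mpicc/mpirun installation; the names myprog and myjob.pbs are hypothetical), compiling and submitting a batch MPI job might look like this:

    # compile with the MPI compiler wrapper
    mpicc -o myprog myprog.c

    # contents of myjob.pbs:
    #!/bin/sh
    #PBS -l nodes=4:ppn=2
    cd $PBS_O_WORKDIR
    mpirun -np 8 -machinefile $PBS_NODEFILE ./myprog

    # submit the script to the scheduler
    qsub myjob.pbs

Note that nodes=4:ppn=2 allocates 8 processors total, matching the -np 8 given to mpirun. Output from the job is typically written back to the submission directory in files named after the job (for example, myjob.pbs.o<jobid> and myjob.pbs.e<jobid>).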
It is possible to submit a job to the scheduler that will result in the
creation of an interactive session, where you can manually run any commands
that you wish. This may be useful for debugging sessions, or for manually
sweeping through various parameters on the command line of your program.
To ask for an interactive session, run this command: "qsub -l nodes=4:ppn=2 -I". Substitute the number of nodes and the number of processors per node (ppn) as necessary for your desired usage. When the job starts, you will be presented with a login shell on one of the nodes that you were assigned. You can run any commands that you wish at this point. Simply type "exit" when you are done in order to release the nodes.
You can find the list of nodes that have been assigned to your job (for example, as input to an mpirun command or a script) in the file named by the PBS_NODEFILE environment variable.
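For example, from within an interactive session:

    cat $PBS_NODEFILE       # one line per allocated processor
    sort -u $PBS_NODEFILE   # unique node names only
    wc -l $PBS_NODEFILE     # total number of processors assigned

With ppn=2, each node typically appears twice in this file, once per allocated processor.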
Most of the slave nodes are named a1, a2, a3, a4, and so on. Occasionally some
of them will be offline for maintenance.
To see a list of nodes that are currently operational on the system, run this command: "pbsnodes -a", or check the contents of the /etc/hosts-cluster file.
Your home directory, /home/<username>, is visible (mounted via NFS) from every node in the cluster. /tmp may be used for local temporary storage on each node. PVFS (a high-performance parallel file system) will be added at a later date.
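Since /tmp is local to each node (and an NFS-mounted home directory can be slower under heavy I/O), a job that reads or writes large files might stage data locally, as a sketch (the file and program names are hypothetical):

    cp $HOME/input.dat /tmp/                  # stage input from NFS home to node-local disk
    ./myprog /tmp/input.dat /tmp/output.dat   # do the heavy I/O locally
    cp /tmp/output.dat $HOME/                 # copy results back before the job ends
    rm -f /tmp/input.dat /tmp/output.dat      # clean up local scratch space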