Cesca
Revision as of 17:34, 4 March 2010

The main objective of the Centre de Supercomputació de Catalunya (CESCA) is to manage a large complex of computing and communications systems that serves the university and scientific community. Its activity covers three areas: scientific computing and university information systems; communications, through the management of the Anella Científica and the CATNIX exchange point; and the promotion of the use of these technologies.

Access

ssh -X -p 2122 user@cadi.cesca.es
ssh -X -p 2122 user@obacs.cesca.es
etc


To move files you can use scp or sftp, again on port 2122 (scp takes the port as uppercase -P; sftp takes it via -oPort):

scp -P 2122 -r user@cadi.cesca.es:/cescascratch/user/ .

sftp -oPort=2122 user@cadi.cesca.es

User details

The home directory is subject to a group disk quota. If you (or one of your running jobs) fill it, all of your jobs will crash, along with those of every other user in the same group; all ICIQ users are currently in the same group.

To avoid this problem, it is mandatory to do all disk-consuming work (job submission, chk saving, etc.) in /cescascratch/$USER.
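As a sketch of that workflow (the directory name scratch_demo stands in for /cescascratch/$USER so it runs anywhere, and the job name job1 is made up), keep every large file under the scratch area rather than $HOME:

```shell
# Stand-in for /cescascratch/$USER so the sketch runs anywhere;
# on CESCA the root would be /cescascratch/$USER itself.
SCRATCH=./scratch_demo
mkdir -p "$SCRATCH/job1"

# Point the Gaussian checkpoint (a big, disk-consuming file) at scratch, not $HOME:
echo '%chk='"$SCRATCH"'/job1/job1.chk' > "$SCRATCH/job1/job1.com"
cat "$SCRATCH/job1/job1.com"
```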

Queues

An updated list of queues is in the PDF available at the top of this page: http://www.cesca.es/sistemes/supercomputacio/usrecursos/index.html


To submit a job:

  bsub < script.file

To view running jobs, use either bjobs or qstat.

To delete a job, use qdel.

Gaussian03 scriptfile

  • large queue

#!/usr/local/bin/ksh
echo '#!/bin/ksh' >$1.cmd
echo '#BSUB -J '$1'.com' >>$1.cmd
echo '#BSUB -q large' >>$1.cmd
echo '#BSUB -o '$1'.log' >>$1.cmd
echo '#BSUB -e '$1'.err' >>$1.cmd
echo '#BSUB -B -N -u mbesora@iciq.es' >>$1.cmd
echo '#BSUB -n 1' >>$1.cmd
echo ' ' >>$1.cmd
echo 'date' >>$1.cmd
echo 'cd '$PWD' ' >>$1.cmd
echo ' ' >>$1.cmd
echo 'g03c2 '$1'.com '$1'.log ' >>$1.cmd
bsub -R g03c2 < $1.cmd

  • parallel queue

#!/usr/local/bin/ksh
echo '#!/bin/ksh' >$1.cmd
echo '#BSUB -J '$1'.com' >>$1.cmd
echo '#BSUB -q parallel4' >>$1.cmd
echo '#BSUB -o '$1'.log' >>$1.cmd
echo '#BSUB -e '$1'.err' >>$1.cmd
echo '#BSUB -B -N -u mbesora@iciq.es' >>$1.cmd
echo '#BSUB -n 4' >>$1.cmd
echo ' ' >>$1.cmd
echo 'date' >>$1.cmd
echo 'cd '$PWD' ' >>$1.cmd
echo ' ' >>$1.cmd
echo 'g03c2 '$1'.com '$1'.log ' >>$1.cmd
bsub -R g03c2 < $1.cmd
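A minimal local sketch of what this wrapper produces (the job name job1 is made up, the mail and date lines are left out for brevity, and the final bsub submission is omitted so the sketch runs anywhere):

```shell
# Reproduce the wrapper's generated file for a hypothetical job "job1".
# On CESCA the real wrapper would then submit it with: bsub -R g03c2 < job1.cmd
make_cmd() {
  echo '#!/bin/ksh'               >  $1.cmd
  echo '#BSUB -J '$1'.com'        >> $1.cmd   # job name from the input file
  echo '#BSUB -q large'           >> $1.cmd   # target queue
  echo '#BSUB -o '$1'.log'        >> $1.cmd   # stdout goes to the Gaussian log
  echo '#BSUB -e '$1'.err'        >> $1.cmd   # stderr file
  echo '#BSUB -n 1'               >> $1.cmd   # one core on the large queue
  echo 'cd '$PWD                  >> $1.cmd   # run from the submission directory
  echo 'g03c2 '$1'.com '$1'.log'  >> $1.cmd   # launch Gaussian03
}
make_cmd job1
cat job1.cmd
```

The wrapper takes the job name (without the .com extension) as its single argument, so a typical call would be `./g03_large.sh job1` from /cescascratch/$USER.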

Gaussian09 scriptfile

  • large queue

#!/usr/local/bin/ksh
echo '#!/bin/ksh' >$1.cmd
echo '#BSUB -J '$1'.com' >>$1.cmd
echo '#BSUB -q large' >>$1.cmd
echo '#BSUB -o '$1'.log' >>$1.cmd
echo '#BSUB -e '$1'.err' >>$1.cmd
echo '#BSUB -B -N -u mbesora@iciq.es' >>$1.cmd
echo '#BSUB -n 1' >>$1.cmd
echo ' ' >>$1.cmd
echo 'date' >>$1.cmd
echo 'cd '$PWD' ' >>$1.cmd
echo ' ' >>$1.cmd
echo 'g09a2 '$1'.com '$1'.log ' >>$1.cmd
bsub -R g09a2 < $1.cmd


HARDWARE

CESCA has seven high-performance computers at its disposal:

Hewlett-Packard N4000: 8 PA8500 processors (440 MHz), 4 GB of main memory, 227 GB of hard disk,
with a peak performance (Rpeak) of 14.08 Gflop/s and a maximum performance (Rmax) of 10.22 Gflop/s.

Compaq AlphaServer HPC320: 8 ES40 nodes (4 EV68, 833 MHz, 64 KB/8 MB), 28 GB of main memory, 892 GB
of hard disk, with Rpeak of 53.31 Gflop/s and Rmax of 40.84 Gflop/s, interconnected by a 100 MB/s
Memory Channel II.

Compaq Beowulf cluster: 8 DS10 nodes (1 EV67, 600 MHz, 64 KB/2 MB), 4 GB of main memory, 291 GB of hard
disk, with Rpeak of 9.60 Gflop/s and an estimated Rmax of 7.68 Gflop/s, interconnected by a
2 Gbps Myrinet.

HP AlphaServer GS1280: 16 Alpha 21364 (EV7) processors (1,150 MHz, 64 KB/1.75 MB), 32 GB of main memory,
655 GB of hard disk, with Rpeak of 36.80 Gflop/s and Rmax of 31.28 Gflop/s.

HP rx2600: 2 Itanium2 processors (1,000 MHz, 32 KB/256 KB/3 MB), 2 GB of main memory, 146 GB of
hard disk, with Rpeak of 8.00 Gflop/s and an estimated Rmax of 7.20 Gflop/s.

SGI Altix 3700 Bx2: 128 Itanium2 processors (1.6 GHz, 16 KB/256 KB/6 MB), 384 GB of main memory,
6.13 TB of hard disk, with Rpeak of 819.20 Gflop/s and an estimated Rmax of 720.60 Gflop/s.

HP CP4000: 16 DL145 G2 nodes (2 dual-core AMD Opteron 275, 2.2 GHz, 64 KB/1 MB per core), 256 GB
of main memory, 4.56 TB of hard disk, with Rpeak of 281.60 Gflop/s and an estimated Rmax of 177.41
Gflop/s, interconnected by three Gigabit Ethernet networks (one external and two internal, one of
the latter for computing and the other for management).

http://www.cesca.es/sistemes/supercomputacio/que/index.html

SOFTWARE

Gaussian: see the currently installed versions at

http://www.cesca.es/sistemes/supercomputacio/programari/soft.html

HOWTOs

How to convert a file.chk into file.fchk

To do this, go to your directory under /cescascratch/ and check on which machine the job ran (hint: look at the end of the log file). Then connect to that particular machine (cadi...) and run the following:

 module avail
 module load gaussian/g03c2 
 formchk file.chk file.fchk

This produces file.fchk, the formatted version of the checkpoint file; it is the file you need to generate density plots.

How to submit a job on Prades nodes

An additional line is needed to submit jobs to the Prades nodes:

echo '#BSUB -R select[g09a2&&prades] span[hosts=1]' >>$1.cmd

The span[hosts=1] directive means the job waits until enough processors are free on a single Prades node, rather than being spread across several nodes. Also make sure you submit only to the 4- or 8-core parallel queues, since Prades nodes have only 8 cores.
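As a sketch, the extra line simply slots in alongside the other #BSUB directives in the generated .cmd file (the job name job1 and the two surrounding lines are illustrative, and the bsub submission is left out so this runs anywhere):

```shell
# Build a minimal .cmd fragment and append the Prades resource request.
echo '#BSUB -q parallel4' >  job1.cmd   # 4-core parallel queue
echo '#BSUB -n 4'         >> job1.cmd   # fits on one 8-core Prades node
echo '#BSUB -R select[g09a2&&prades] span[hosts=1]' >> job1.cmd
cat job1.cmd
```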