Marenostrum

Latest revision as of 10:37, 30 September 2011

go back to Main Page, Computational Resources, Clusters, External Resources


Early in 2004 the Ministry of Education and Science (Spanish Government), the Generalitat de Catalunya (Catalan Government) and the Technical University of Catalonia (UPC) took the initiative of creating a National Supercomputing Center in Barcelona. BSC-CNS (Barcelona Supercomputing Center – Centro Nacional de Supercomputación) is the national supercomputing facility in Spain and was officially constituted in April 2005. BSC-CNS manages MareNostrum, one of the most powerful supercomputers in Europe (http://www.top500.org), located in the Torre Girona chapel. The mission of BSC-CNS is to investigate, develop and manage information technology in order to facilitate scientific progress. To this end, particular attention is devoted to areas such as Computational Sciences, Life Sciences and Earth Sciences.

User's Guide Manual: http://10.0.7.240/wiki/images/files/Marenostrum/BSC.pdf


How to connect to your BSC account:

ssh -X {username}@mn4.bsc.es (or mn3.bsc.es or mn2.bsc.es or mn1.bsc.es)

Usually the username is "iciqNNNNN", where NNNNN is a number specific to each user.
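For convenience you can add an entry to ~/.ssh/config on your local machine so that "ssh mn1" works directly. A sketch; the alias mn1 and the username iciqNNNNN are placeholders to adapt, and ForwardX11 corresponds to the -X flag above:

```
Host mn1
    HostName mn1.bsc.es
    User iciqNNNNN
    ForwardX11 yes
```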

To kill a job:

mncancel <jobid>

To check the status of your jobs:

mnq

To check a job:

checkjob <job_id>

This prints detailed information about a specific job, including the assigned nodes and any reasons preventing the job from running.

To know the estimated time to start execution:

mnstart <job_id>

To block a job:

mnhold -j <job_id>
To release a job, run the same command with the -r option.

To see your disk space usage:

quota -v


ADF

  • For Carles Bo's group:

How to check the status of your jobs

llme

How to check the status of the entire group's jobs

llgr

On the first connection (and only then), remember to set up your environment

cat /gpfs/projects/iciq38/BASHRC >> .bashrc

How to submit jobs

qs -n <Numbers_Proc> <Input_Name> <time_in_hours> (for example: qs -n 16 test.in 36h)

It seems better to use a multiple of 4 for the number of processors.
36h is currently the upper limit, given our low priority.
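Since multiples of 4 seem to schedule best, a small hypothetical helper (round4 is not a MareNostrum command, just local shell) can round a processor count up before submitting:

```shell
# Hypothetical helper (not a MareNostrum command): round a processor
# count up to the next multiple of 4 using integer arithmetic.
round4() {
    echo $(( ($1 + 3) / 4 * 4 ))
}

round4 13   # prints 16
round4 16   # prints 16
```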

How to transfer an input file from your local computer:

scp <input-file> {username}@mn1.bsc.es:'/home/iciq38/{username}/DEST_MN_directory'

WATCH OUT!!

Unlike on Kimik, you must use the "real" ADF input:

#! /bin/sh
$ADFBIN/adf << eor
    Here you put your input like you would do on Kimik
eor

If you need to keep the TAPE21 file, copy it back at the end of the job, i.e. after the closing eor.
(for example: cp TAPE21 $HOME/example.t21)
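The << eor construct in the script above is a shell here-document: everything up to the line "eor" is fed to the program on standard input. A minimal illustration, with cat standing in for $ADFBIN/adf and a placeholder input line:

```shell
# cat stands in for $ADFBIN/adf here; the input line is a placeholder.
# The here-document delivers the text between "<< eor" and "eor" on stdin.
result=$(cat << eor
ATOMS and other ADF keywords go here
eor
)
echo "$result"   # prints the placeholder input line back
```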

NWChem

To submit NWCHEM calculations on MareNostrum you need to submit a .cmd file. You can automatically create the .cmd file and submit the job using the script Send_mn.sh.

Copy it into your own bin directory and call it by typing send_mn.sh namefile n XX:XX:XX, where n is the number of nodes you would like to use and XX:XX:XX is the time you will allow the job to run.
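The XX:XX:XX argument is an hours:minutes:seconds walltime. A small hypothetical shell helper (to_walltime is not part of send_mn.sh) can build it from a whole number of hours:

```shell
# Hypothetical helper (not part of send_mn.sh): build the XX:XX:XX
# walltime string from a whole number of hours, zero-padded to two digits.
to_walltime() {
    printf '%02d:00:00\n' "$1"
}

to_walltime 36   # prints 36:00:00
to_walltime 4    # prints 04:00:00
```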

Be aware that beyond the options that appear in send_mn.sh, additional options can be specified in the .cmd file, such as the type of job.

NWChem is quite slow, so it is recommended to use at least 64 nodes.

DLPOLY

GAMESS

Last update: March 18, 2009

1 - Available versions:

24 MAR 2007 (R6): /gpfs/apps/GAMESS/2008-03-19/gamess.00.x
12 JAN 2009 (R1): /gpfs/apps/GAMESS/2009-03-09/gamess.00.x

2 - Submitting script example:

Paste the following script into a .cmd file and submit it with the usual command.

mnsubmit gamess_file.cmd
------------------------------------------------------------------
#! /bin/bash
#
# Scheduler directives: working directory, output/error files,
# number of tasks and wall-clock limit (replace <NbProc> and hh:mm:ss).
# @ initialdir = .
# @ output = gamess_file.out
# @ error = gamess_file.err
# @ total_tasks = <NbProc>
# @ wall_clock_limit = hh:mm:ss

EXE=/gpfs/apps/GAMESS/2009-03-09/bin/rungms
INPUT=gamess_file

# Build the node list, then a copy with "-myrinet1" appended to each
# hostname so that communication uses the Myrinet interface.
sl_get_machine_list > node_list$SLURM_JOBID
rm -f node_list-myri$SLURM_JOBID

while read -r node; do
  echo ${node}-myrinet1 >> node_list-myri$SLURM_JOBID
done < node_list$SLURM_JOBID

# Run GAMESS, then clean up the node lists.
${EXE} ${INPUT} 00 $SLURM_NPROCS node_list-myri$SLURM_JOBID

rm node_list$SLURM_JOBID
rm node_list-myri$SLURM_JOBID
------------------------------------------------------------------
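The loop in the middle of the script simply appends -myrinet1 to each hostname so that traffic goes over the Myrinet interface. The same transformation, applied to a made-up two-node list:

```shell
# Sample node list (hostnames are invented for illustration)
nodes='s01c1b01
s01c1b02'

# Append -myrinet1 to every hostname, as the GAMESS script does
myri=$(printf '%s\n' "$nodes" | while read -r node; do
    echo "${node}-myrinet1"
done)
echo "$myri"
```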

rungms will run the gamess.00.x executable.

If no scratch directory is specified, you can recover your .dat file from the following directory:

/gpfs/scratch/usergroup/username/tmp/

3 - Manual:

The GAMESS manual can be found at the following website:
http://www.msg.ameslab.gov/GAMESS/documentation.html

Careful: some keywords may differ between the two available versions.

VASP

Useful scripts


Molden

You can use Molden 4.8 by including /gpfs/apps/MOLDEN/4.8/ in your path (PATH="${PATH}":/home/iciq26/iciq26280/bin/:/gpfs/apps/MOLDEN/4.8/)
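The PATH addition is usually placed in ~/.bashrc; a minimal sketch (the MOLDEN directory need not exist on your local machine for the line itself to work):

```shell
# Line to append to ~/.bashrc so the molden executable is found
export PATH="${PATH}:/gpfs/apps/MOLDEN/4.8/"

# Verify the directory is now on the search path
echo "$PATH" | grep -o 'MOLDEN/4.8' | head -n1   # prints MOLDEN/4.8
```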

Contact

BSC Support

e-mail : support@bsc.es

Fax : 934137721 

User Support

David Vicente 

e-mail : david.vicente@bsc.es


Phone : 934054226

Links

Barcelona Supercomputing Center