QUEUES for FELIU MASERAS group


go back to Main_Page, Computational Resources, Clusters, Local Clusters, Kimik2


QUEUE SYSTEM MANAGEMENT

kimik2 automatically manages the calculations sent from your computer:

1.- The maximum number of calculations each user can run at a time is 25; if you submit more than that, they will wait in the queue until one of your slots is free.

2.- To keep cluster sharing fair, we decided that users cannot have more than a certain number of processes running at the same time.

Each member has a credit quota that may change depending on the group's circumstances: the number of users and how active they are. If you notice that the queues are too full or too empty for longer than two weeks, ask Feliu's permission to increase/decrease the quota.

As of October 2018, each member has a quota of 1080 credits. The cost (price in credits) of each calculation depends on the queue it is submitted to.

The prices of the queues are:


 queue       --- cores x credits --- credits per calc.
 c36m192     ---  36x3 credits   --- total 108
 c28m128     ---  28x4 credits   --- total 112
 c24m128     ---  24x5 credits   --- total 120
 c20m48      ---  20x6 credits   --- total 120
 c12m24      ---  12x5 credits   --- total  60
 c8m24       ---   8x0 credits   --- total   0
 c4m8        ---   4x0 credits   --- total   0
 cq4m4       ---   4x0 credits   --- total   0
 c12m128gpu8 ---  12x0 credits   --- total   0

Your calculations will enter empty queues as long as you have free credits. For example, with a quota of 1080 you can have 9 calculations in c24m128 or 18 in c12m24, or anything in between: 2 in c28m128 + 4 in c20m48 + 6 in c12m24 (1064 credits).

3.- If you have no quota left, your calculations will wait even if there are empty machines.

4.- You can put calculations on hold (qhold JOBID to hold, qalter -h U JOBID to release) if you need other calculations to finish first.

5.- The small (zero-credit) queues can be used freely up to the full 25-slot allowance.

6.- Please use the c12m128gpu8 queue for calculations that need the GPUs.
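The credit bookkeeping described above is easy to sanity-check yourself. The sketch below hardcodes the per-job prices from the table (which may change over time) and verifies the example mix of jobs against the 1080-credit quota:

```shell
#!/bin/sh
# Per-job cost in credits, taken from the price table above (values may change).
QUOTA=1080
COST_C28M128=112   # 28 cores x 4 credits
COST_C20M48=120    # 20 cores x 6 credits
COST_C12M24=60     # 12 cores x 5 credits

# Example mix from the text: 2 jobs in c28m128, 4 in c20m48, 6 in c12m24.
TOTAL=$((2*COST_C28M128 + 4*COST_C20M48 + 6*COST_C12M24))
echo "Total cost: $TOTAL credits"
if [ "$TOTAL" -le "$QUOTA" ]; then
    echo "Fits within the $QUOTA-credit quota"
else
    echo "Over quota: the extra jobs will wait"
fi
```

Running it prints a total of 1064 credits, which fits within the quota, matching the example above.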

QUEUES

Currently calculations can be sent to the following queues:

c4m8.q (18 NODES) MACHINE TYPE

               1 x Xeon X3360 2.83GHz 2x6MB cache (4 cores)
               4x2GB RAM DDR2 800MHz (8GB mem)
  PE
               c4m8_mpi
               c4m8_smp

cq4m4.q (26 NODES) MACHINE TYPE

               1 x Core2 Quad Q6600 2.4GHz 8M cache (4 cores)
               4x1GB DDR2 667MHz (4GB mem)
   PE
               cq4m4_mpi
               cq4m4_smp


c8m24.q (27 NODES) MACHINE TYPE

               2 x Xeon E5530 2.4GHz 8M cache (8 cores)
               6x4GB RAM DDR3 1333MHz ECC Registered (24GB mem)
   PE
               c8m24_mpi
               c8m24_smp

c12m24.q (18 NODES) MACHINE TYPE

               2 x Intel(R) Xeon(R) CPU E5645 2.40GHz 12M cache (12 cores)
               6x4GB RAM DDR3 1333MHz ECC Registered (24GB mem)
   PE
               c12m24_mpi
               c12m24_smp


c20m48.1 (4 NODES) MACHINE TYPE

               ???? (20 cores)
               ??? (48 GB mem)
   PE
               c20m48_mpi
               c20m48_smp

c24m128.1 (23 NODES) MACHINE TYPE

               ???? (24 cores)
               ??? (128 GB mem)
   PE
               c24m128_mpi
               c24m128_smp

c28m128.1 (4 NODES) MACHINE TYPE

               ???? (28 cores)
               ??? (128 GB mem)
   PE
               c28m128_mpi
               c28m128_smp

c36m192.1 (3 NODES) MACHINE TYPE

               ???? (36 cores)
               ??? (192 GB mem)
   PE
               c36m192_mpi
               c36m192_smp

GPU NODES

  • c12m128gpu8.1

There are some GPU nodes that can be used. TeraChem, ORCA, and other types of calculations can be run on them.


SENDING CALCULATIONS

  • Gaussian jobs
qs TYPE_OF_CALCULATION inputfile.in

where TYPE_OF_CALCULATION can be:



for g16:

g16.c36m192 (36 cores smp pe)
g16.c28m128 (28 cores smp pe)
g16.c24m128 (24 cores smp pe)
g16.c20m48 (20 cores smp pe)
g16.c12m24 (12 cores smp pe)
g16.cq4m4 (4 cores smp pe)

for g09:

g09.c36m192 (36 cores smp pe)
g09.c28m128 (28 cores smp pe)
g09.c24m128 (24 cores smp pe)
g09.c20m48 (20 cores smp pe)
g09.c12m24 (12 cores smp pe)
g09.c8m24 (8 cores smp pe)
g09.cq4m4 (4 cores smp pe)       
g09.c4m8 (4 cores smp pe) 

for g03

g03.cq4m4 (4 cores smp pe)
g03.c4m8 (4 cores smp pe)


NOTE: remember to set the correct parameters (%nproc=... and %mem=...) in your inputfile.in according to the TYPE_OF_CALCULATION.
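As an illustration, a g16 job submitted with g16.c12m24 should request all 12 cores of the node and leave some headroom below its 24 GB of memory. The route section and file contents here are only hypothetical placeholders, not a recommended method:

```text
%nproc=12
%mem=20GB
#p b3lyp/6-31g(d) opt

Title card (hypothetical example)

0 1
...coordinates...
```

Requesting the full 24 GB in %mem would leave nothing for the operating system, so staying a few GB below the node's total is the safer choice.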

  • Orca jobs
qs orcaVERSION.QUEUE NMACHINES INPUTNAME

where:

VERSION can be 27 (for version 2.7), 28 (for version 2.8) or 40 (for version 4.0)
QUEUE for versions before 4.0 can be cq4m4, c4m8, c8m24 or c12m24.
      For version 4.0 the available queues are: cq4m4.q, c20m48.q, c24m128.q, c28m128.q and c12m128gpu8.q
NMACHINES is the number of machines to use (this parameter is directly related to the ! PALX keyword of the input file; see the ORCA page)
INPUTNAME is the name of your input file

If, when using ORCA 4, you don't want too many files copied into your home directory, you can use the submitting script submit instead of "qs orcaVERSION.QUEUE NMACHINES INPUTNAME".
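The NMACHINES argument has to agree with the parallelism requested inside the ORCA input itself. A minimal sketch, assuming a hypothetical input named water.inp submitted with 4 machines (the route keywords are placeholders, not a recommendation):

```text
# submit command (hypothetical input name):
#   qs orca40.c24m128 4 water.inp
#
# water.inp should then contain a matching PAL keyword:
! PAL4 B3LYP def2-SVP Opt
* xyzfile 0 1 water.xyz
```

If the number after PAL and NMACHINES disagree, the job will either under-use its reservation or try to start more processes than were requested; see the ORCA page for details.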

  • GRRM jobs

Currently, GRRM calculations can only be submitted to the c20m48.q queue:

qs grrm14.c20m48 infile.in
  • Other jobs

Other types of jobs may need a customized script, such as MECP searches (via Harvey's method or via easyMECP) and others. To submit them use:

qsub scriptfile

The available queues for XTB 6.2.3 are:

c36m192 
c28m128
c24m128
c20m48
c12m24
cq4m4

SENDING THE CALCULATIONS TO SPECIFIC NODES

To send calculations to a specific group of nodes in a certain queue, we just need to set the hostname in the .in.sub files. We will use a generic g09 calculation in c12m24 as an example.

  • Generate the .in.sub file without running the calculation. (This step must be done in an SSH connection to kimik2; it can't be done from your computer.)

For a single file

qs g09.c12m24 0 infile01.in 

For all files in the directory

for File in infile*.in; do qs g09.c12m24 0 ${File}; done 

Ensure that on the "#$ -pe" line the number at the end matches the number of processors of the node (c12m24 -> 12, c20m48 -> 20, c24m128 -> 24, ...). Note the line number on which "#$ -l credits" appears.

  • Let's assume that it is line 13; the next line is therefore line 14.

Ensure that the number of credits on that line is correct (it should be 5 for c12m24). Then add a line below it to set the valid hostnames.

  • Let's imagine that we want to use the nodes: kimik2136, kimik2137 and kimik2138. We can indicate it as:
  1. hostname=kimik2136|kimik2137|kimik2138
  2. hostname=kimik213[6-8]
  • If we wanted to also include the nodes kimik2139 and kimik2140 we can do it as:
  1. hostname=kimik2136|kimik2137|kimik2138|kimik2139|kimik2140
  2. hostname=kimik213[6-9]|kimik2140

Check that it works properly for one file.

sed '14i#$ -l hostname=kimik2136|kimik2137|kimik2138' infile01.in.sub

where 14i means "insert the following text as line 14". Since -i is not used, the result is only printed to the screen and the file is not modified.

If it does work properly, apply the change to all the files in the directory

sed -i '14i#$ -l hostname=kimik2136|kimik2137|kimik2138' infile*.in.sub
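To see exactly what the insertion does without touching real job files, you can rehearse it on a throwaway file. The file name and hostnames below are only illustrative:

```shell
#!/bin/sh
# Build a dummy 15-line stand-in for a .in.sub file.
seq -f "line %g" 15 > dummy.in.sub

# Insert the hostname request as a new line 14 (GNU sed, in place).
sed -i '14i#$ -l hostname=kimik2136|kimik2137|kimik2138' dummy.in.sub

# Show line 14: it should now be the inserted hostname directive,
# and the file should have grown from 15 to 16 lines.
sed -n '14p' dummy.in.sub
wc -l < dummy.in.sub

rm dummy.in.sub
```

This prints `#$ -l hostname=kimik2136|kimik2137|kimik2138` followed by `16`, confirming that the directive lands on line 14 and the original lines are pushed down by one.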

Submit the calculations. (This step must be done in an SSH connection to kimik2; it can't be done from your computer.)

  • For a single file
qsub infile.in.sub
  • For all the files in the directory
for File in infile*.in.sub; do qsub ${File}; done