Tmole.sub HowTo
Latest revision as of 10:07, 21 September 2011

Go back to TURBOMOLE

HowTo: modify the script

In the tmole.sub script there are three key lines where you define exactly which queue and how many processors the job will use:

1.) In the SGE parallel environment setup

#$ -pe cq4m4_mpi 8

This defines the MPI parallel environment for the queue you are using, followed by the number of processors you want SGE to allocate. The available parallel environments are:

cq4m4_mpi    (as shown above, the P4-quad-core nodes with 4GB ram)
mpi          (the Xeon-quad-cores with 8GB of ram)
FG_c8m24_mpi (the new octa-core nodes with 24GB ram)
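For example, to run on the octa-core nodes instead, the parallel environment line might look like this (a sketch; the slot count of 16 is an illustrative choice, not a requirement):

```shell
# Request the FG_c8m24_mpi parallel environment with 16 slots,
# i.e. two of the octa-core nodes. This must be paired with
# -masterq FG_c8m24.q in the queue setup line described below.
#$ -pe FG_c8m24_mpi 16
```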

2.) In the SGE queue setup line

#$ -masterq cq4m4.q

This defines precisely which queue is being requested. The available queues are:

cq4m4.q    (as shown above, the P4-quad-core nodes with 4GB ram)
kimik2.q   (the Xeon-quad-cores with 8GB of ram)
FG_c8m24.q (the new octa-core nodes with 24GB ram)

If you request the kimik2.q queue here, you must request "mpi" in the parallel environment line (1.), or the job will not run.
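As a sketch, a consistent pairing for the Xeon quad-core nodes would be:

```shell
# The kimik2.q queue uses the plain "mpi" parallel environment,
# not kimik2_mpi -- the two lines below belong together.
#$ -pe mpi 8
#$ -masterq kimik2.q
```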

3.) In the Turbomole environment setup...

export PARNODES=8

This defines the number of processes that Turbomole will start, while letting the queue system decide where they run and how they are split over the nodes. The number of processors you ask for here must be the same as in (1.), above, i.e. 8 in this case.


Generally, ask for multiples of 4 processors for the quad-core machines, and multiples of 8 for the octa-core machines.
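Putting the three settings together, the top of a tmole.sub script might look like the following minimal sketch (other SGE options your site requires are omitted):

```shell
#!/bin/bash
# Minimal sketch of the three key lines in tmole.sub for an
# 8-process job on the P4 quad-core nodes.

#$ -pe cq4m4_mpi 8    # (1.) parallel environment + slot count
#$ -masterq cq4m4.q   # (2.) queue matching the environment above

# (3.) Turbomole's process count -- must equal the slot count in (1.)
export PARNODES=8
```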

Calling the correct module/link in tmole.sub

Turbomole can run individual parts of a calculation separately, much like running a single link in Gaussian to do just the SCF, but the most common usage is a script that chains all of the links (or modules, as they are called in Turbomole) together to run an optimization.

In the script, this is done in the final line, using the command jobex

In its simplest form, it is called as:

jobex > jobex.out

However, jobex accepts many options (see the manual for details). The sample tmole.sub script shows how to call it for an RI-DFT job with a maximum of 100 SCF iterations per step, using the command:

jobex -ri -c 100 > jobex.out

You can name the output file anything you like; for example, jobex.out could be called water_test.out.
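Putting it together, the final lines of the script might look like the sketch below. On completion jobex leaves a marker file in the working directory (GEO_OPT_CONVERGED on success, GEO_OPT_FAILED otherwise), which the script can check:

```shell
# Run an RI-DFT optimization (max. 100 SCF iterations per step),
# sending the log to a custom file name instead of jobex.out.
jobex -ri -c 100 > water_test.out

# jobex signals success by creating GEO_OPT_CONVERGED in the
# working directory, so the script can report the outcome.
if [ -f GEO_OPT_CONVERGED ]; then
    echo "Optimization converged"
else
    echo "Optimization did not converge - check water_test.out"
fi
```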