Pablo Picasso
Latest revision as of 10:22, 12 June 2009
User details
The scratch directory to use is /scratch/usergroup/username (/gpfs/scratch/usergroup/username is also available, but jobs run slower there).
usergroup is usually the first six characters of your username; you can check it by listing the /scratch directory with ls.
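As a sketch, the usergroup and scratch path can be derived like this (the username below is hypothetical; on the real machine, confirm the group name by listing /scratch):

```shell
USERNAME=abc123xy            # hypothetical username
USERGROUP=${USERNAME:0:6}    # usually the first six characters of the username
echo /scratch/$USERGROUP/$USERNAME
# → /scratch/abc123/abc123xy
```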
Queues
% mnsubmit <job_script>
submits a job script to the queue system (see the script example below).
% mnq
shows all the jobs submitted.
% mncancel <job_id>
removes your job from the queue system, cancelling the execution of its processes if they are already running.
% checkjob <job_id>
obtains detailed information about a specific job, including the assigned nodes and the possible reasons preventing the job from running.
% mnstart <job_id>
shows information about the estimated time for the specified job to be executed.
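Put together, a typical session looks like the sketch below. The job id 12345 is hypothetical, and these commands exist only on the cluster's login nodes, so this is an illustration rather than something runnable elsewhere:

```
% mnsubmit job.sh      # submit the job script; the system assigns a job id
% mnq                  # list all submitted jobs
% checkjob 12345       # detailed information on job 12345
% mnstart 12345        # estimated start time for job 12345
% mncancel 12345       # cancel job 12345
```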
Available Programs
DLPOLY: scales well up to 64-128 CPUs on big systems.
Submission script
#!/bin/bash
# @ job_name = job
# @ initialdir = .
# @ output = OUTPUT/mpi_%j.out
# @ error = OUTPUT/mpi_%j.err
# @ total_tasks = 64
# @ wall_clock_limit = 12:00:00
srun /gpfs/apps/NWCHEM/bin/LINUX64_POWERPC/nwchem job.nw >& job.out
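As a sketch, the same template can be adapted for another installed code such as DLPOLY; the binary path below is hypothetical and should be checked against the actual install location under /gpfs/apps:

```
#!/bin/bash
# @ job_name = dlpoly_job
# @ initialdir = .
# @ output = OUTPUT/mpi_%j.out
# @ error = OUTPUT/mpi_%j.err
# @ total_tasks = 64
# @ wall_clock_limit = 12:00:00
# NOTE: hypothetical path; verify the real DLPOLY location under /gpfs/apps
srun /gpfs/apps/DLPOLY/bin/DLPOLY.X >& job.out
```

Here %j in the output and error file names expands to the job id, so each run writes to its own files, and total_tasks sets the number of MPI tasks (64 is within the range where DLPOLY scales well).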