[phenixbb] suggested usage of qsub?
d.rodionov at gmail.com
Thu Mar 28 10:16:06 PDT 2013
I have used mr_rosetta on a MOAB cluster, which should be fairly similar to MAUI submission-wise.
First of all, to submit jobs to a remote cluster you will need Phenix to be installed there.
Most likely, you can't submit jobs directly from your own computer (that was my case); you probably need to SSH into one of the control nodes that are set up for job submission.
If Phenix is not installed, install it in your home directory or wherever the cluster admins tell you to. You also need to copy your starting files there via SFTP or SCP.
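For example, copying the inputs over might look like this (the user, host, file names, and destination path below are placeholders for illustration, not from my actual run):

```shell
# Copy starting files (data, model, sequence) to the cluster in one go.
# Everything here is a placeholder - substitute your own names.
scp data.mtz model.pdb sequence.fasta me@cluster.example.org:/home/me/project/
```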
You can use the GUI on your office computer to generate the parameter file and then transfer it to the control node; copy-pasting the text in a terminal works fine, SFTP otherwise.
Then you need to change the paths to your files and executables (in the parameter file) to reflect their locations on the cluster. Avoid relative paths; most likely they won't work.
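A quick way to do that path rewrite is a sed one-liner on the control node. The file name and both path prefixes below are placeholders for illustration:

```shell
# Example parameter file containing a local (office-machine) path - placeholder content:
printf 'model = /Users/me/project/model.pdb\n' > mr_rosetta_params.eff
# Rewrite the local prefix to its cluster equivalent; keeps a .bak copy of the original:
sed -i.bak 's|/Users/me/project|/home/me/project|g' mr_rosetta_params.eff
# Confirm the substitution took effect:
grep '/home/me/project' mr_rosetta_params.eff
```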
Fill in your queue-specific parameters in the parameter file. Something like this:
group_run_command = "ssh gm-1r14-n05 msub " ## the queue command the master job uses (from an execution node) to submit subjobs. Probably just qsub in your case, but check with your admins.
queue_commands = "#PBS -N mr_rosetta" ## name of your run
queue_commands = "#PBS -q sw" ## queue name
queue_commands = "#PBS -S /bin/bash" ## shell path
queue_commands = "#PBS -j oe" ## redirect STDOUT and STDERR to a single file; can be useful for troubleshooting (note -j takes no path; name the file with -o /path/to/where/to/dump/logs if you want)
queue_commands = "#PBS -l walltime=240:00:00" ## the amount of time your job can run before being killed. Requires fine-tuning to specific cases; I started with the longest time allowed (check).
queue_commands = "#PBS -l nodes=1:ppn=1" ## node requirements for a single job: just one node with one processor here, since these are "embarrassingly parallel" jobs
queue_commands = "#PBS -V" ## export all your environment variables
queue_commands = "#PBS -m a" ## when to send e-mail: "a" for abort, "e" for end, "b" for begin
queue_commands = "#PBS -M your at email.here" ## where to send the e-mails
Make a submission file for your top-level job. Something like this (the same commands as above):
#PBS -q sw
#PBS -l nodes=1:ppn=1
#PBS -l walltime=720:00:00
#PBS -o /sb/project/hdd-365-aa/moablogs/top_output.log ## top-level log
#PBS -e /sb/project/hdd-365-aa/moablogs/top_error.log ## top-level error log
#PBS -N rosetta-wsmr
#PBS -S /bin/bash
#PBS -m abe ## I wanted to know when the job starts and finishes or aborts
#PBS -M your at email.here
cd /sb/project/hdd-365-aa/moablogs ## where I wanted the top-level logs to be. I don't remember anymore why it was necessary
phenix.mr_rosetta /sb/project/hdd-365-aa/files/mr_rosetta_params.eff ## actual top-level job
Then qsub it.
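Assuming the script above was saved under a name like mr_rosetta_top.sh (my choice of name here is arbitrary), submission and a quick status check look like this:

```shell
# Submit the top-level job script; qsub prints the job id on success.
qsub mr_rosetta_top.sh
# Watch your jobs (the master plus the subjobs it spawns) in the queue:
qstat -u $USER
```

This won't run outside a TORQUE/MAUI (or MOAB) cluster, so treat it as a sketch of the workflow rather than something to copy verbatim.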
On 2013-03-27, at 3:16 PM, Scott Classen wrote:
> Hi all,
> We have a HPC cluster configured with TORQUE/MAUI. I see in the Phenix GUI that there is some sort of ability to queue jobs to run via qsub. I can't find any documentation on how to set this up.
> If I'm working at my office computer and I want to submit to a remote server cluster for refinement, is this possible? Or is it assumed that qsub is available on the local machine?
> phenixbb mailing list
> phenixbb at phenix-online.org