Hoffman2: Submitting Jobs

From Center for Cognitive Neuroscience
Revision as of 15:11, 15 March 2012

Back to all things Hoffman2

If you remember from Anatomy of the Computing Cluster, the Sun Grid Engine on Hoffman2 is the scheduler for all computing jobs. It takes your computing job request, considers what resources you are asking for and then puts your job in a line waiting for those resources to become available.

Ask for a modest 1 GB of memory, a single computing core, and a short time window, and your job will likely be placed near the front of the line and start running soon, if not immediately. For the vast majority of people, this will be the case.

Ask for a lot of memory or many computing cores, and your job will get put further back in the line because it will have to wait for more things to become available. If your job needs these types of resources, you are probably at a level where reading this tutorial isn't very helpful.

So how does one submit a computing job request? You've got some options:

  1. job.q - Use a simple yet effective tool that ATS wrote. It has a great menu and walks you through submitting things.
  2. qsub - Get under the hood and do it yourself. It can get messy but it can also be faster and you have more flexibility with options.


job.q

Once you've identified or written a script you'd like to run, SSH into Hoffman2 and enter job.q. Then it is just a matter of following its step-by-step instructions.

From the tool's main menu, you can type Info to read up about how to use it and we highly encourage you to do so.

But we know patience is a virtue that most of us aren't blessed with. So we'll walk you through submitting a basic job so you can hit the ground running.

Example

  1. Once on Hoffman2, you'll need to edit one file, so pull out your favorite text editor and open the file
    ~/.queuerc
  2. Add the line
    set qqodir = ~/job-output
  3. You've just set the default directory where your job command files will be created. Save the configuration file and close your text editor.
  4. Make that directory using the command
    mkdir ~/job-output
  5. Now run
    job.q
  6. Press enter to acknowledge the message about some files that get created (READ IT FIRST THOUGH).
  7. Type Build <ENTER> to begin creating an SGE command file.
  8. The program now asks which script you'd like to run. Enter the following path to use our example script:
    /u/home/FMRI/apps/examples/qsub/gather.sh
  9. The program now asks how much memory the job will need (in megabytes). This script is really simple, so let's go with the minimum and enter 64.
  10. The program now asks how long the job will take (in hours). Go with the minimum of 1 hour; the job will complete in much less time than that.
  11. The program now asks if your job should be limited to only your resource group's cores. Answer n; you do not need to limit yourself here, and the job is not going to run for more than 24 hours.
  12. Soon, the program will tell you that gather.sh.cmd has been built and saved.
  13. When it asks you if you would like to submit your job, say no. Then type Quit to leave the program.
  14. Now you should be able to run
    ls ~/job-output
    and see gather.sh.cmd. This file will stay there until you delete it and can be run over and over again. Making a command file like this is especially useful if there is a task you'll be running repeatedly on Hoffman2. But if this is something you only need to run once, you should delete the file so you don't needlessly approach your quota.
  15. The time has come to actually run the program (thought we'd never get to that, didn't you?). Type
    qsub job-output/gather.sh.cmd
    and after hitting enter, a message similar to this will pop up:
    Your job 1882940 ("gather.sh.cmd") has been submitted
    where the number is your JobID, a unique numerical identifier for the computer job you have submitted to the queue.
  16. Now you can check if the job has finished running by doing
    ls ~/job-output
    When two files named gather.sh.output.[JOBID] and gather.sh.joblog.[JOBID] (where JOBID is your job's unique identifier) appear, your job has run.
    gather.sh.output.[JOBID]
    This file has all the standard output generated by your script. In this case it will just have the line
    Standard output would appear here.
    gather.sh.joblog.[JOBID]
    This file has all the details about when, where, and how your job was processed. This is useful information if you are going to run this job repeatedly and need to fine-tune the resources it uses.
  17. The script you ran is an aggregator. It looks in a list of directories, each assumed to contain a specifically named file, and gathers the contents of each of those files into one central file in your home directory. This file is named gather-[TIMESTAMP].txt where TIMESTAMP is when the script was run and follows ISO 8601 style encoding. You are encouraged to type
    /u/home/FMRI/apps/examples/qsub/gather.sh -h
    or
    /u/home/FMRI/apps/examples/qsub/gather.sh --help
    to see how this script works.
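To make the aggregation idea in step 17 concrete, here is a minimal sketch of what a script like gather.sh might do. This is an illustration only, not the real script: the file name data.txt and the directory arguments are assumptions, and the actual gather.sh documents its own behavior via --help.

```shell
#!/bin/sh
# Minimal sketch of an aggregator in the spirit of gather.sh.
# ASSUMPTION: each directory given as an argument holds a file named
# "data.txt"; the real script's expected file name may differ.

# gather_into OUTFILE DIR...  -- append each DIR/data.txt to OUTFILE
gather_into() {
    out="$1"; shift
    for dir in "$@"; do
        if [ -f "$dir/data.txt" ]; then
            cat "$dir/data.txt" >> "$out"
        fi
    done
}

# Hypothetical usage: a timestamped output file, ISO 8601 style,
# as the tutorial describes (subj01 and subj02 are made-up names).
gather_into "gather-$(date +%Y-%m-%dT%H-%M-%S).txt" subj01 subj02
```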


qsub

Everything that job.q did can be done on the command line.

Example

I have a script called gather.sh that takes a list of directories and aggregates the contents of a specific file in each directory into a single text file. This file actually exists and can be found in /u/home/FMRI/apps/examples/qsub/gather.sh

If it needs to go through a bunch of directories and the files are large, this would be a good job to submit to the queue. The command to do this would be:

qsub -cwd -V -N J1 -l express,time=0:05:00 /u/home/FMRI/apps/examples/qsub/gather.sh

And something like the following will be printed out:

Your job 1875395 ("J1") has been submitted

Where the number is your JOBID, a unique numerical identifier for your job.

Let's break down the arguments in the script.

-cwd
Change working directory
When your script runs, the working directory is set to wherever you were in the filesystem when you submitted the job.
e.g. If you were in the directory /u/home/mscohen/data/ when you ran the command, the queue will change directories to that location and then execute the script you gave it. This means the output and error files for that job will be placed there.
-V
Exports all the environment variables from your current shell session to the context of the job. Useful if your script depends on a variable you have set.
-N J1
Names your job "J1." When you look at the queue, this will be the text that shows up in the "name" column. This will also be the beginning of the output (J1.o[JOBID]) and error (J1.e[JOBID]) files for your job.
-l
This is the resource-request flag: the comma-separated list immediately after it asks for things like:
  • a certain amount of memory (mem=1024MB)
  • a certain number of processors (pe=8), or
  • a certain length of time (time=HH:MM:SS)
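Putting these flags together, a request for more substantial resources might look like the following. The memory amount, time limit, and job name here are illustrative only; the resource names your queue accepts may differ, so check with job.q or your cluster documentation.

  qsub -cwd -V -N J2 -l mem=1024MB,time=2:00:00 /u/home/FMRI/apps/examples/qsub/gather.sh

You can then watch the job in the queue with qstat -u $USER; the name column will show J2.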