Hoffman2:Introduction

Back to all things Hoffman2

What is Hoffman2?

The Hoffman2 Cluster is a campus computing resource at UCLA and is named for Paul Hoffman (1947-2003). It is maintained by the Academic Technology Services Department at UCLA, which hosts a webpage about it here. With many high-end processors plus data storage and backup technologies, it is a useful tool for running research computations, especially on large datasets. More than 1000 users are currently registered, and the cluster sees heavy use. Click here to find out how to join that user group. In February 2012 alone, more than 4 million compute hours were logged. See more usage statistics here.


Anatomy of the Computing Cluster

What does Hoffman2 consist of?

  • Login Nodes
  • Computing Nodes
  • Storage Space
  • Sun Grid Engine (a brain of sorts)

[Figure: Hoffman2layout.png, the layout of the Hoffman2 Cluster. Image taken from a previous ATS "Using Hoffman2 Cluster" slide deck and modified for our point.]


Login Nodes

There are four login nodes which allow you to access and interact with the Hoffman2 Cluster. These are essentially four dedicated computers that you can SSH into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit). It is important to remember that these are four computers being shared by ALL the Hoffman2 users. Doing ANY type of heavy computing on these nodes is frowned upon. If you are:

  • moving lots of files
  • calculating the inverse solution to an EEG signal, or
  • running a bunch of python scripts to extract tractography of a brain

you should NOT be doing this on a login node. If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.
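
As a quick, hedged sketch, connecting from a UNIX-like terminal looks like this. The hostname below is an assumption (use whatever login address the official instructions give you), and jbruin is a placeholder username:

  ssh jbruin@hoffman2.idre.ucla.edu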


Computing Nodes

As of November 2012, Hoffman2 is made up of more than 9000 processors across three data centers, and this number continues to grow as the cluster is expanded. The individual cores of these processors are where your programs get executed when you submit a job to the cluster. There are ways to request a single core or many cores for your job.

There is also a GPU cluster with more than 300 nodes, but access to it must be requested separately, in addition to a normal Hoffman2 account. Look here for how to request access.

The number of computing cores continues to grow because more resource groups (such as individual research labs) join Hoffman2 and buy nodes that are integrated into the cluster. Nodes contributed by a resource group are guaranteed to that group and can be used to run longer jobs (up to 14 days). As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:

  • 6 nodes (installed pre-2010), each with
    • 8 cores
    • 8GB RAM
  • 3 nodes (installed Fall 2012), each with
    • 16 cores
    • 48GB RAM

Use the command mygroup to see what resources you have available.


Storage Space

For official and up-to-date information about storage space, click here. If you want a quick overview, see below.

Home Directory

When you log in to Hoffman2, you are dropped into your home directory immediately. Home directory locations follow the pattern

/u/home/[u]/[username]

where [u] is the first letter of the username, e.g.

/u/home/j/jbruin
/u/home/t/ttrojan

Your home directory is where you can keep your personal files and the data and scripts you work with. Data in your home directory is accessible on all login and computing nodes.
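
To see this in practice, you can ask the shell where your home directory is; a minimal example:

  echo $HOME   # prints something like /u/home/j/jbruin
  cd           # with no argument, cd returns you to your home directory from anywhere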

ATS maintains high-end storage systems (BlueArc and Panasas) for your home directory. These have built-in redundancies and are fault tolerant. On top of that, ATS does tape backups regularly. If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and they take great pains to make sure your data is safe.

Every user is allowed to store up to 20GB of data files in their home directory. If you are part of a cluster-contributing group, you can also store data files in that group's common space

/u/home/[GROUPNAME]

so long as the group is within its quota limits for file count and size.
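
A quick, hedged way to check how much of that allowance your own files consume is the standard UNIX du command:

  du -sh $HOME   # total size of everything under your home directory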

Find out how much space your group is using on Hoffman2.

Temporary Storage

When running a computing job on Hoffman2, reading and writing a bunch of files in your home directory can be slow, so faster temporary storage is available for use by running jobs. Read the official description here.

/work
Each computing node has its own unique "work" directory, accessible only by jobs running on that specific node. Any data your job puts there is removed as soon as the job finishes. There is at least 100GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).
Every job is given a unique subdirectory under /work where it can read and write files rapidly. The UNIX environment variable $TMPDIR points to this directory.
If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory when the job completes so it is not deleted.
/u/scratch/[UserID]
Here [UserID] is replaced with your Hoffman2 username. Data here is accessible on all login and computing nodes. You can use up to 2TB of space, but data is not kept for more than 7 days and can be overwritten sooner if there is high demand for scratch space. Use the UNIX environment variable $SCRATCH to access your personal scratch directory.
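
Putting the pieces together, here is a minimal sketch of the staging pattern described above (the file and program names are hypothetical):

  # inside a job script: stage a heavily used file into fast local storage
  cp $HOME/data/input.nii $TMPDIR/
  cd $TMPDIR
  my_analysis input.nii -o results.out   # hypothetical analysis program
  # copy results home before the job ends, since $TMPDIR is wiped afterwards
  cp results.out $HOME/results/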


Sun Grid Engine

The Sun Grid Engine (SGE) is the brains behind how jobs get executed on the cluster. When you request that a script be run on Hoffman2, the SGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements. Less demanding jobs generally start sooner, while more demanding ones must wait for adequate resources to free up. The SGE tries to schedule jobs on computing nodes in order to make the most efficient use of the resources available.
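
As a hedged illustration of what "the resources you requested" look like, a job script typically declares them in SGE directives at the top. The limits below are illustrative values, not prescribed ones:

  #!/bin/bash
  #$ -cwd                        # run the job from the directory it was submitted from
  #$ -l h_rt=2:00:00,h_data=1G   # request 2 hours of runtime and 1GB of memory per core
  #$ -pe shared 4                # request 4 cores on a single node
  echo "Running on $(hostname)"

You would then hand this script (call it myjob.sh, a hypothetical name) to the scheduler with:

  qsub myjob.sh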

Queues

There is more than one queue on Hoffman2. Each is for a slightly different purpose:

express
For jobs requesting at most 2 hours of computing time.
interactive
For jobs requesting at most 24 hours of computing time that require the user to interact with the running program.
highp
For jobs requesting at most 14 days of computing time. These jobs must run on nodes owned by your group.

And there are others. Read about them here.
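
In practice you usually do not name a queue directly; the SGE routes your job based on the resources you request. As a hedged example (with illustrative time limits), compare a request short enough for express with one that needs your group's highp nodes:

  #$ -l h_rt=2:00:00             # short job, eligible for the express queue
  #$ -l highp,h_rt=200:00:00     # long job, runs on nodes your group owns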


Find out how to submit computing jobs to the Hoffman2 Cluster.


External Links