Center for Cognitive Neuroscience wiki: user contributions by Elau (feed retrieved 2024-03-29, MediaWiki 1.39.3)

Hoffman2:Software Tools, revision of 2014-10-10 (Elau: Updated FSL to 5.0.7)
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
There is an FMRI user group on Hoffman2 maintained for groups doing neuroimaging work at UCLA. Tools such as FSL, FreeSurfer, AFNI and Nibabel are maintained for this group separately from the standard Hoffman2 programs. To take advantage of these tools, you need to set up your bash profile [[Hoffman2:Profile|properly]].<br />
<br />
Below is a list of the available software tools. We will do our best to update it as changes are made.<br />
<br />
<br />
==AFNI==<br />
[http://afni.nimh.nih.gov/afni/ Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 2011_12_21_1014 || circa 2012.03 || Current<br />
|}<br />
<br />
<br />
==BrainSuite==<br />
[http://brainsuite.org/ Official Website]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Install Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| 13a4 || 2014.03.05 || Current<br />
|}<br />
<br />
<br />
==Caret==<br />
[http://brainvis.wustl.edu/wiki/index.php/Caret:About Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 5.65 (2012.01.27) || 2013.07.15 || Current, not folded into the main profile<br />
|}<br />
<br />
<br />
==Chronux==<br />
[http://www.chronux.org Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 2.10 || 2013.02.26 || Current<br />
|}<br />
<br />
<br />
==dcm2nii==<br />
[http://www.mccauslandcenter.sc.edu/mricro/mricron/dcm2nii.html Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 2013.06.06 || 2014.03.06 ||<br />
|- class="ccn-table-even"<br />
| 2011.11.11 || circa 2011 || Current<br />
|}<br />
<br />
<br />
==EEGLAB==<br />
[http://sccn.ucsd.edu/eeglab/ Official Website]<br />
<br />
[http://sccn.ucsd.edu/wiki/EEGLAB_revision_history Release Notes]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 13.1.1b || 2014.01.29 || Current<br />
|- class="ccn-table-even"<br />
| 12.0.2.5b || 2013.11.14 || <br />
|- class="ccn-table-odd"<br />
| 11.0.5.4b || 2013.11.14 || <br />
|- class="ccn-table-even"<br />
| 12.0.0.0b || 2012.12.10 || <br />
|- class="ccn-table-odd"<br />
| 11.0.0.0b || 2012.02.21 || <br />
|- class="ccn-table-even"<br />
| 10.2.5.8b || 2012.02.21 ||<br />
|}<br />
<br />
<br />
==FreeSurfer==<br />
[http://surfer.nmr.mgh.harvard.edu/ Official Website]<br />
<br />
[http://freesurfer.net/fswiki/ReleaseNotes Release Notes]<br />
{| class="wikitable" <br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd" <br />
| 5.3.0 || 2013.06.18 || Current<br />
|- class="ccn-table-even" <br />
| 5.2.0 || 2013.03.27 ||<br />
|- class="ccn-table-odd" <br />
| 5.1.0 || 2011.11.14 ||<br />
|- class="ccn-table-even" <br />
| 5.0.0 || circa 2010 ||<br />
|- class="ccn-table-odd" <br />
| 4.4.0 || circa 2009 ||<br />
|- class="ccn-table-even" <br />
| 4.0.5 || circa 2008 ||<br />
|}<br />
<br />
<br />
==FSL==<br />
[http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/ Official Website]<br />
<br />
[http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/WhatsNew Revision History]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Install Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-even"<br />
| 5.0.7 || 2014.10.10 || Current (2014.10.10-)<br />
|- class="ccn-table-odd"<br />
| 5.0.6 || 2013.12.18 || (2013.12.18-2014.10.10)<br />
|- class="ccn-table-even"<br />
| 5.0.5 || 2013.10.17 || (2013.10.17-2013.12.17)<br />
|-class="ccn-table-odd"<br />
| 5.0.4 || 2013.06.18 ||<br />
|- class="ccn-table-even"<br />
| 5.0.2 || 2013.02.19 ||<br />
|- class="ccn-table-odd"<br />
| 5.0.1 || 2012.10.01 ||<br />
|- class="ccn-table-even"<br />
| 5.0.0 || 2012.09.14 ||<br />
|- class="ccn-table-odd"<br />
| 4.1.9 || 2011.12.01 ||<br />
|- class="ccn-table-even"<br />
| 4.1.8 || circa 2011.06 ||<br />
|- class="ccn-table-odd" <br />
| 4.1.7 || circa 2011.11 ||<br />
|- class="ccn-table-even"<br />
| 4.1.4 || circa 2009 ||<br />
|- class="ccn-table-odd"<br />
| 4.1.3 || circa 2009 ||<br />
|- class="ccn-table-even"<br />
| 4.1.1 || circa 2008 ||<br />
|- class="ccn-table-odd"<br />
| 4.1.0 || circa 2008 ||<br />
|- class="ccn-table-even"<br />
| 4.0.4 || circa 2008 ||<br />
|}<br />
<br />
<br />
==ITKGray==<br />
[http://vistalab.stanford.edu/newlm/index.php/ItkGray Official Website]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Install Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| 080803 || 2009.11.19 || Current<br />
|- class="ccn-table-even"<br />
| 080128 || 2009.11.13 ||<br />
|}<br />
<br />
<br />
==SPM==<br />
[http://www.fil.ion.ucl.ac.uk/spm/ Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header"| Version<br />
! class="ccn-table-header"| Last Patch Applied<br />
! class="ccn-table-header"| Last Checked Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| SPM8 || 5236 || 2014.01 || Current<br />
|- class="ccn-table-even"<br />
| SPM5 || Unknown || N/A || No longer supported<br />
|}<br />
<br />
<br />
==TrackVis/Diffusion Toolkit==<br />
[http://trackvis.org/ Official Website 1]<br />
<br />
[http://trackvis.org/dtk/ Official Website 2]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Tool<br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Last Checked Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| TrackVis || 0.5.2.2 || 2014.03.06 || <br />
|- class="ccn-table-even"<br />
| Diffusion Toolkit || 0.6.2.2 || 2014.03.06 ||<br />
|}<br />
<br />
<br />
==WEKA==<br />
[http://www.cs.waikato.ac.nz/ml/weka/ Official Website]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Install Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| 3.7.10 || 2014.03.03 || Current<br />
|- class="ccn-table-even"<br />
| 3.6.5 || circa 2011.08 ||<br />
|}<br />
<br />
<br />
<br />
==Python2.7==<br />
Gentoo Prefix build of Python 2.7. The site packages listed below were installed with pip unless otherwise noted.<br />
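A quick way to check which of the site packages listed below are actually importable in your environment (a sketch using <code>python3</code>; on the cluster the Gentoo Prefix interpreter is invoked as <code>python</code>, and the package subset shown is just an example):<br />

```shell
# Try importing a few of the listed packages and report the result
for pkg in numpy scipy nibabel nipype; do
    if python3 -c "import $pkg" 2>/dev/null; then
        echo "$pkg: available"
    else
        echo "$pkg: not installed"
    fi
done
```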
===CVXOPT===<br />
===Cython===<br />
===Gnuplot===<br />
===IPython===<br />
===matplotlib===<br />
:Installed by Gentoo prefix<br />
===nibabel===<br />
===nifti===<br />
===nimfa===<br />
:Non-negative Matrix Factorization<br />
:<br />
:[http://nimfa.biolab.si/ http://nimfa.biolab.si/]<br />
===nipype===<br />
===nose===<br />
===numpy===<br />
===(p)lsa===<br />
:(probabilistic) Latent Semantic Analysis. Note that it failed its bundled tests.py.<br />
<br />
:[http://www.mblondel.org/journal/2010/06/13/lsa-and-plsa-in-python/ http://www.mblondel.org/journal/2010/06/13/lsa-and-plsa-in-python/]<br />
===pydicom===<br />
===pygments===<br />
===PyMF===<br />
:Python Matrix Factorization module. Note that it failed its tests.<br />
<br />
:[http://pymf.googlecode.com http://pymf.googlecode.com]<br />
===pypr===<br />
===PyQt4===<br />
===pytz===<br />
===pywt===<br />
===pyximport===<br />
===scikits===<br />
===scipy===<br />
===sklearn===<br />
===sparsesvd===<br />
:Singular Value Decomposition. Passed both tests.<br />
<br />
:[http://pypi.python.org/pypi/sparsesvd http://pypi.python.org/pypi/sparsesvd]<br />
===sphinx===<br />
===sympy===<br />
===traits===<br />
===virtualenv===<br />
===xcbgen===<br />
<br />
<br />
==GCC==<br />
==LAPACK==<br />
==BLAS==<br />
==GLIB==<br />
==C++==<br />
==CMake==<br />
==CPACK==<br />
==MPI Kmeans==<br />
See this website for how to cite the MPI Kmeans tool:<br />
[http://mloss.org/software/view/48/]</div>

Hoffman2:Profile, revision of 2014-09-29 (Elau: Updated paths to account for switch from "home" to "project")
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
In UNIX systems, there are certain configuration files that get executed every time you login. If you are using the Bash shell (default), you have a file called <code>.bash_profile</code> which is processed when you log in. In order to make the FMRI toolset available to you on Hoffman2 and so you can work well with others, we recommend that you follow the instructions in the [[Hoffman2:Profile#Basics|Basics]] section. Read [[Hoffman2:Profile#Extras|Extras]] for some bells and whistles.<br />
<br />
<br />
==Basics==<br />
Your account needs one last edit before it is ready to use.<br />
<br />
# [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]]<br />
# Use your favorite [[Text Editors|text editor]] to edit the file <code>~/.bash_profile</code><br />
=====vim=====<br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <pre>$ vim ~/.bash_profile</pre><br />
# Insert these lines at the '''bottom''' of the file<br />
#:* Type <code>G</code> - (capital G) to go to the end of the file<br />
#:* Type <code>A</code> - (capital A) to go to the end of the line and enter insert mode<br />
#:* Type <code>ENTER</code> - to insert a newline<br />
#:* Type or paste in the lines below.<br />
#:* <pre>source /u/project/FMRI/apps/etc/profile&#10;umask 007</pre><br />
# Save the file by typing<br />
#:* <code>ESC + ":wq" + ENTER</code><br />
# Log out of Hoffman2 and the next time you log in, everything will be set for you to start working.<br />
<br />
=====emacs=====<br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]<br />
#:* <pre>$ emacs ~/.bash_profile</pre><br />
# Insert these lines at the '''bottom''' of the file<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines below.<br />
#:* <pre>source /u/project/FMRI/apps/etc/profile&#10;umask 007</pre><br />
# Save the File by typing:<br />
#:* <code>CTRL+x, CTRL+c</code><br />
# Log out of Hoffman2 and the next time you log in, everything will be set for you to start working.<br />
<br />
=====nedit/gedit=====<br />
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]<br />
#:* <pre>$ nedit ~/.bash_profile</pre><br />
# Use the menu to insert these lines at the '''bottom''' of the file<br />
#:* <pre>source /u/project/FMRI/apps/etc/profile&#10;umask 007</pre><br />
#: Click the Save menu button.<br />
# Log out of Hoffman2 and the next time you log in, everything will be set for you to start working.<br />
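If you would rather not open an editor at all, the same two lines can be appended from the command line (run this only once, or duplicate entries will accumulate):<br />

```shell
# Append the profile lines directly; quoting 'EOF' prevents any shell expansion
cat >> ~/.bash_profile <<'EOF'
source /u/project/FMRI/apps/etc/profile
umask 007
EOF
tail -n 2 ~/.bash_profile   # confirm the two lines are now at the bottom
```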
<br />
<br />
===Curious?===<br />
For those who care: the first line tells your shell to read and execute the file<br />
/u/project/FMRI/apps/etc/profile<br />
every time you login. This file modifies your PATH variable so you have access to the FMRI toolset.<br />
<br />
The last line<br />
umask 007<br />
makes it so that files and directories you create cannot be read, written, or executed by anyone outside your group. Note that it does not automatically grant read, write, and execute permissions to you and your group.<br />
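You can see the umask at work in a scratch directory:<br />

```shell
cd "$(mktemp -d)"     # throwaway directory
umask 007
touch example.txt     # files: 666 & ~007 = 660 (rw-rw----)
mkdir example_dir     # directories: 777 & ~007 = 770 (rwxrwx---)
ls -ld example.txt example_dir
```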
<br />
<br />
<br />
==Extras==<br />
===Collaboration===<br />
By default, any files and directories you create will not necessarily have permissions that allow your group to write on them. This can be a problem if other people are supposed to build on data you processed. We have a script ([[Hoffman2:Scripts:fix_perms.sh |fix_perms.sh]]) that will kindly find any files you own in a specified directory that don't have read/write/execute permissions for the group and make it so they do.<br />
<br />
You can build this script into your bash profile so that every time you log into Hoffman2, it will run in the background. It is also recommended that you run this script at the end of jobs to make results immediately available to collaborators.<br />
<br />
Adding the line<br />
fix_perms.sh -q /u/project/[GROUP]/data &<br />
to the end of your bash profile will run the permission fixer on your group's common data directory in the background quietly each time you log in. '''Make sure to replace [GROUP] with the name of your Hoffman2 group (e.g. mscohen, sbook, cbearden, laltshul, jfeusner or mgreen).'''<br />
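For the curious, the effect of such a permission fixer can be approximated with <code>find</code> and <code>chmod</code>. This is only an illustrative sketch; the real fix_perms.sh is maintained by CCN and may behave differently:<br />

```shell
# Grant group rw (and x on directories) on everything you own under a directory
fix_group_perms() {
    find "$1" -user "$(id -un)" -type d ! -perm -g=rwx -exec chmod g+rwx {} +
    find "$1" -user "$(id -un)" -type f ! -perm -g=rw  -exec chmod g+rw  {} +
}
```

e.g. <code>fix_group_perms /u/project/[GROUP]/data</code>, with [GROUP] replaced as above.<br />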
<br />
<br />
===Colors===<br />
You can change the content and color of your command prompt by editing your bash_profile. There is a great explanation of how to do this [http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html here].<br />
<br />
Some of the content you can include in the command prompt:<br />
;Current time<br />
: You can format this however you want. This helps when looking back through your Terminal to find when you made certain changes to files.<br />
;Current working directory<br />
: So you always know where you are in a filesystem and don't need to constantly retype <code>pwd</code>.<br />
;Username<br />
: Who you are. Helpful if you are logged into multiple servers under multiple accounts and need help keeping track.<br />
;Host<br />
: The name of the computer you are logged into. This also helps you know where you are at all times.<br />
<br />
Line to add to your bash profile<br />
export PS1="\[\e[0;31m\]\h\[\e[1;37m\]:\[\e[1;34m\]\w\n\[\e[1;37m\]\D{%Y-%m-%d-%H-%M-%S} \[\e[22;32m\]\u \$ "<br />
Resulting prompt (on a black background)<br/><br />
<code style="background:#000000; padding:5pt"><span style="color:#FF0000">HOST</span><span style="color:#000000">:</span><span style="color:#0000FF">CURRENT WORKING DIRECTORY</span><br/><br />
<span style="color:#FFFFFF"> DATETIME IN ISO8601 FORMAT</span> <span style="color:#00FF00">USERNAME $</span></code><br />
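Piece by piece, the prompt string above breaks down as follows:<br />

```shell
# \[ ... \]  mark non-printing sequences so bash computes the prompt width correctly
# \e[0;31m \h                       red hostname
# \e[1;37m :                        white separator
# \e[1;34m \w                       blue working directory, then \n starts a new line
# \e[1;37m \D{%Y-%m-%d-%H-%M-%S}    white ISO-8601-style timestamp
# \e[22;32m \u \$                   green username and prompt symbol
export PS1="\[\e[0;31m\]\h\[\e[1;37m\]:\[\e[1;34m\]\w\n\[\e[1;37m\]\D{%Y-%m-%d-%H-%M-%S} \[\e[22;32m\]\u \$ "
```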
<br />
<br />
<br />
==Example Bash Profile==<br />
<nowiki>#.bash_profile<br />
<br />
# Get the aliases and functions<br />
if [ -f ~/.bashrc ]; then<br />
. ~/.bashrc<br />
fi<br />
<br />
# Source to use FMRI Apps<br />
source /u/project/FMRI/apps/etc/profile<br />
<br />
# Umask (Revoke Permissions)<br />
umask 007<br />
<br />
# Collaborative permissions (Replace collabDirectory with your project Directory and Uncomment<br />
#fix_perms.sh -q /u/project/sbook/data/collabDirectory &<br />
<br />
# Happy Colors<br />
export PS1="\[\e[0;31m\]\h\[\e[1;37m\]:\[\e[1;34m\]\w\n\[\e[1;37m\]\D{%Y-%m-%d-%H-%M-%S} \[\e[22;32m\]\u \$ "<br />
<br />
# Fix for QRSH when consolidating job output files<br />
alias qrsh='qrsh -o /dev/null'<br />
</nowiki><br />
<br />
<br />
== Changing Passwords ==<br />
Use the command below to change your password. It will prompt you for your old password and then the new one.<br />
$ passwd<br />
Changing password for user joebruin.<br />
Please enter your current password:<br />
Please enter your new password:<br />
<br />
<br />
== Password-less Logins ==<br />
The steps below show you how to log in without typing your password every time!<br />
<br />
On your local computer:<br />
a@A:-> '''ssh-keygen -t rsa'''<br />
Generating public/private rsa key pair.<br />
Enter file in which to save the key (/Users/a/.ssh/id_rsa): '''[ENTER]'''<br />
Created directory '/Users/a/.ssh'.<br />
Enter passphrase (empty for no passphrase): '''[ENTER]'''<br />
Enter same passphrase again: '''[ENTER]'''<br />
Your identification has been saved in /Users/a/.ssh/id_rsa.<br />
Your public key has been saved in /Users/a/.ssh/id_rsa.pub.<br />
The key fingerprint is:<br />
3e:4f:05:79:3a:9f:96:7c:3b:ad:e9:58:37:bc:37:e4 a@A<br />
<br />
Now use ssh to create the directory ~/.ssh under your account on Hoffman2 (the directory may already exist, which is fine):<br />
a@A:~> ssh user@hoffman2.idre.ucla.edu mkdir -p .ssh<br />
user@hoffman2.idre.ucla.edu's password: '''[PASSWORD]'''<br />
<br />
Finally, append your new public key to .ssh/authorized_keys on Hoffman2 and enter your password one last time:<br />
a@A:~> cat .ssh/id_rsa.pub | ssh user@hoffman2.idre.ucla.edu 'cat >> .ssh/authorized_keys'<br />
user@hoffman2.idre.ucla.edu's password: '''[PASSWORD]'''<br />
<br />
Now you can log in to Hoffman2 from your local computer without a password!<br />
a@A:~> '''ssh user@hoffman2.idre.ucla.edu'''<br />
<br />
<br />
==External Links==<br />
*[http://ss64.com/bash/period.html Explanation of source]<br />
*[http://linux.die.net/man/2/umask Man for umask]<br />
*[http://www.cyberciti.biz/tips/understanding-linux-unix-umask-value-usage.html Better explanation of umask]<br />
*[http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html Coloration]<br />
*[http://en.wikipedia.org/wiki/ISO_8601 ISO 8601 Datetime format]</div>

Hoffman2:FSL, revision of 2014-06-19 (Elau: XQuartz)
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
FSL is a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data. FSL is written mainly by members of the Analysis Group, FMRIB, Oxford, UK. <br />
<br />
<br />
Multiple versions are maintained on the Hoffman2 cluster so that researchers can stay on the same version of FSL for all analyses within a single study. You can either:<br />
* do nothing, and always use the "current" version of FSL on the cluster<br />
* [[Hoffman2:FSL#switch_fsl|actively choose]] which version of FSL you would like to run<br />
We recommend the latter for data integrity and reproducibility.<br />
<br />
<br />
<br />
==FSL GUI==<br />
Make sure you source the FMRI Path in your [[Hoffman2:Profile | Profile]] before doing anything, or else you won't be able to access FSL.<br />
<br />
<br />
To run FSL using a GUI on hoffman2, use the following command:<br />
$ fsl &<br />
<br />
If you receive this message when opening FSL<br />
DISPLAY is not set. Please set your DISPLAY environment variable!<br />
<br />
It means you did not open X11 along with your ssh connection. See [[Hoffman2:Accessing_the_Cluster#GUI-Enabled_SSH_.5BRecommended.5D | here]] for more information.<br />
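A quick check before launching any GUI:<br />

```shell
if [ -n "$DISPLAY" ]; then
    echo "DISPLAY=$DISPLAY - X11 forwarding is active, GUIs should work"
else
    echo "DISPLAY is not set - reconnect with X11 forwarding (e.g. ssh -Y)"
fi
```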
<br />
<br />
<br />
==FSL TOOLS==<br />
A complete list of tools can be found [http://www.fmrib.ox.ac.uk/fsl/fsl/list.html here]<br />
<br />
Functional MRI (command line only)<br />
{| class="wikitable"<br />
|-<br />
! Tool<br />
! Explanation<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/feat5/index.html feat]<br />
| Model-based FMRI analysis: data preprocessing (including MCFLIRT motion correction); first-level FILM GLM timeseries analysis; higher-level FLAME Bayesian mixed effects analysis.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/melodic/index.html melodic]<br />
| Model-free FMRI analysis using Probabilistic Independent Component Analysis (PICA). MELODIC automatically estimates the number of interesting noise and signal sources in the data and because of the associated "noise model", is able to assign significance ("p-values") to the output spatial maps. MELODIC can also analyse multiple subjects or sessions simultaneously using Tensor-ICA.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/fabber/index.html fabber]<br />
| Fast ASL & BOLD Bayesian Estimation Routine. Efficient nonlinear modelling and estimation of BOLD and CBF from dual-echo ASL data, using Variational Bayes.<br />
|}<br />
<br />
Structural MRI (command line only)<br />
{| class="wikitable"<br />
|-<br />
! Tool<br />
! Explanation<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/bet2/index.html bet]<br />
| Brain Extraction Tool - segments brain from non-brain in structural and functional data, and models skull and scalp surfaces.<br />
|- <br />
| [http://www.fmrib.ox.ac.uk/fsl/fast4/index.html fast]<br />
| FMRIB's Automated Segmentation Tool - brain segmentation (into different tissue types) and bias field correction.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/first/index.html first]<br />
| FMRIB's Integrated Registration and Segmentation Tool. FIRST uses mesh models trained with a large amount of rich hand-segmented training data to segment subcortical brain structures.<br />
|}<br />
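As an example, a typical skull-strip followed by tissue segmentation from the command line might look like this (hypothetical filenames; see each tool's page for the full option lists):<br />

```shell
$ bet subj01_T1.nii.gz subj01_T1_brain.nii.gz -f 0.5
$ fast -t 1 -n 3 -o subj01_seg subj01_T1_brain.nii.gz
```

Here <code>-f</code> sets bet's fractional intensity threshold (0.5 is the default), <code>-t 1</code> tells fast the input is T1-weighted, and <code>-n 3</code> requests three tissue classes.<br />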
<br />
GUI Commands/Tools (make sure X11 forwarding is on)<br />
{| class="wikitable"<br />
|-<br />
! Tool<br />
! Explanation<br />
|-<br />
| fsl<br />
| Bring you to the FSL menu where you can choose what type of analysis.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/fdt/index.html Fdt]<br />
| FMRIB's Diffusion Toolbox - tools for low-level diffusion parameter reconstruction and probabilistic tractography, including crossing-fibre modelling.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/flirt/index.html Flirt]<br />
| FMRIB's Linear Image Registration Tool - linear inter- and intra-modal registration.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/feat5/index.html Feat]<br />
| Model-based FMRI analysis: data preprocessing (including MCFLIRT motion correction); first-level FILM GLM timeseries analysis; higher-level FLAME Bayesian mixed effects analysis.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/feat5/featquery.html Featquery]<br />
| A program which allows you to interrogate FEAT results by defining a mask or set of co-ordinates (in standard-space, highres-space or loweres-space) and get mean stats values and time-series. <br />
|-<br />
| Glm<br />
| A GUI for setting up just the design matrix and contrasts, in the same way as in FEAT, for use with other modelling/inference programs such as randomise. <br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/melodic/index.html Melodic]<br />
| Model-free FMRI analysis using Probabilistic Independent Component Analysis (PICA). MELODIC automatically estimates the number of interesting noise and signal sources in the data and because of the associated "noise model", is able to assign significance ("p-values") to the output spatial maps. MELODIC can also analyse multiple subjects or sessions simultaneously using Tensor-ICA.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/possum/index.html Possum]<br />
| Physics-Oriented Simulated Scanner for Understanding MRI. An FMRI data simulator that produces realistic simulated images and FMRI time series given a gradient echo pulse sequence, a segmented object with known tissue parameters, and a motion sequence.<br />
|-<br />
| Renderhighres<br />
| Transforms all thresholded stats images in a FEAT directory into high resolution or standard space and overlays these onto the high resolution or standard space images. This then produces PNG format pictures of the overlays and, by default, deletes the 3D AVW colour overlay images. <br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/miscvis/index.html Renderstats]<br />
| This tool allows you to combine a background image (raw FMRI or high resolution MRI) with one or two statistics images. The statistics image(s) must be in registration with the background image.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/susan/index.html Susan]<br />
| Nonlinear noise reduction.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/fslview/index.html fslview]<br />
| Interactive display tool for 3D and 4D data.<br />
|}<br />
<br />
<br />
<br />
==Cluster==<br />
{| class="wikitable"<br />
|-<br />
! Scripts that self-submit<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/fdt/index.html fdt]<br />
| [http://www.fmrib.ox.ac.uk/fsl/feat5/index.html feat]<br />
| [http://www.fmrib.ox.ac.uk/fsl/first/index.html first]<br />
| [http://www.fmrib.ox.ac.uk/fsl/fslvbm/index.html fslvbm]<br />
| [http://www.fmrib.ox.ac.uk/fsl/possum/index.html possum]<br />
| [http://www.fmrib.ox.ac.uk/fsl/randomise/index.html randomise]<br />
| [http://www.fmrib.ox.ac.uk/fsl/tbss/index.html tbss]<br />
|-<br />
! GUIs that self-submit<br />
| [http://www.fmrib.ox.ac.uk/fsl/fdt/index.html Fdt]<br />
| [http://www.fmrib.ox.ac.uk/fsl/feat5/index.html Feat]<br />
| [http://www.fmrib.ox.ac.uk/fsl/flirt/index.html Flirt]<br />
| [http://www.fmrib.ox.ac.uk/fsl/possum/index.html Possum]<br />
|-<br />
|}<br />
<br />
The scripts and GUIs listed above submit their work to the scheduler through FSL's <code>fsl_sub</code> wrapper script.<br />
<br />
<br />
<br />
==switch_fsl==<br />
After you have [[Hoffman2:Profile | properly configured your profile]] so you have access to FSL and the other FMRI tools on Hoffman2, you also have access to the handy <code>switch_fsl</code> tool. It lets you choose which version of FSL to use for your analyses, so you can stay locked to one version for the duration of a project and switch only when starting a new one.<br />
<br />
See its documentation [[Hoffman2:Scripts:switch_fsl | here]].<br />
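Whichever version you select, FSL's shell setup exports the <code>FSLDIR</code> environment variable, so you can always confirm which installation is currently active. This check is a general FSL convention, not specific to <code>switch_fsl</code>:

```shell
# Print the active FSL installation directory; FSLDIR is exported by
# FSL's environment setup. Falls back to a notice if FSL is not configured.
echo "Active FSL: ${FSLDIR:-<FSL not configured>}"
```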
<br />
<br />
<br />
<br />
==NO_FSL_JOBS==<br />
Sometimes FSL does not allocate appropriate resources for the jobs it submits. In particular, we have found that the FEAT tool often fails to do so for group analyses and other complex tasks. So we did some tinkering with FSL to allow you to override its job submission on Hoffman2 and run it as if it were just on your laptop. '''The trick is to set <code>NO_FSL_JOBS=true</code> in your environment and FSL will not submit jobs.'''<br />
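The override behaves roughly like the sketch below. <code>run_fsl_step</code> is a made-up stand-in for FSL's internal submission logic, shown only to illustrate how the environment variable changes the behavior:

```shell
#!/bin/bash
# Illustrative stand-in for FSL's internal job submission: when
# NO_FSL_JOBS=true, the command runs directly in the current session;
# otherwise it would be handed off to the scheduler.
run_fsl_step() {
    if [ "${NO_FSL_JOBS:-}" = "true" ]; then
        "$@"                      # run right here, no job submission
    else
        echo "would submit to scheduler: $*"
    fi
}

export NO_FSL_JOBS=true
run_fsl_step echo "running FEAT directly"   # prints: running FEAT directly
```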
<br />
<br />
===Interactive Session===<br />
If you want to watch FEAT run (kinda like paint drying, but to each their own), you can do the following<br />
#[[Hoffman2:Accessing_the_Cluster#SSH_-_Command_Line|SSH]] into the cluster<br />
#Check out an [[Hoffman2:Interactive_Sessions|interactive node]] with the necessary time and memory<br />
#*<code>qrsh -l i,time=3:00:00,mem=3G</code><br />
#Set the environment variable<br />
#*<code>export NO_FSL_JOBS=true</code><br />
#Run your FSL commands. '''This means not using qsub, or command files, but simply executing the FSL command'''<br />
#The commands will just run and not submit any jobs.<br />
<br />
<br />
===Submitting a Job===<br />
If you don't want to watch FEAT run (why would you?), you can do the following<br />
<br />
Create a shell script (e.g. myshellscript.sh) with the following contents<br />
#!/bin/bash<br />
export NO_FSL_JOBS=true<br />
feat design.fsf<br />
# any other FSL commands you want<br />
<br />
And make sure to run <code>chmod 755</code> to make the script executable<br />
chmod 755 myshellscript.sh<br />
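The create-and-chmod steps above can also be done in one go from the command line with a heredoc (the script contents are the same as in the example):

```shell
# Write the job script and make it executable in one step.
cat > myshellscript.sh <<'EOF'
#!/bin/bash
export NO_FSL_JOBS=true
feat design.fsf
EOF
chmod 755 myshellscript.sh
```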
<br />
Submit the shell script [[Hoffman2:Submitting_Jobs|as a job]] but with the adequate time and memory allocations<br />
qsub -l time=23:00:00,mem=4G -V -m bea -cwd /path/to/myshellscript.sh<br />
<br />
And the FSL commands will be sent into the queue to run with your time and memory constraints rather than FSL's. This may take some playing with to get the time and memory allocations correct, but at least you have the ability to tweak them.<br />
<br />
<br />
<br />
<br />
==External Links==<br />
* Official FSL website http://www.fmrib.ox.ac.uk/fsl/</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2&diff=2570Hoffman22014-04-22T01:21:19Z<p>Elau: SPM8 info linked</p>
<hr />
<div>A compilation of lab know-how regarding the Hoffman2 Computing Cluster.<br />
<br />
Anyone new to the lab and using Hoffman2 NEEDS to read the first section to have adequate working knowledge of the system.<br />
<br />
<br />
== Getting Started ==<br />
=== Introduction ===<br />
Hoffman2 is a Computing Cluster at UCLA, find out how it generally works so you know how to use it.<br />
: [[Hoffman2:Introduction]]<br />
<br />
=== Getting an Account ===<br />
You know what it is, now you want to use it. First you need an account.<br />
: [[Hoffman2:Getting an Account]]<br />
<br />
=== Accessing the Cluster ===<br />
Now how do you use that account to access the cluster?<br />
: [[Hoffman2:Accessing the Cluster]]<br />
<br />
=== Working in a UNIX Environment ===<br />
Never heard of a command line before today? Vaguely know what "permissions" are and have no idea how to navigate a filesystem? This page is meant to take the scary out of the words "command line" so you can actually use Hoffman2, because no matter how many GUIs there are, you will still need to use the command line sometimes.<br />
: [[Hoffman2:UNIX Tutorial]]<br />
<br />
=== Quotas ===<br />
Resources are not infinite, and disk space is a resource. Find out how to manage your disk space usage to stay under quota.<br />
: [[Hoffman2:Quotas]]<br />
<br />
=== Profile ===<br />
You have an account, know how to get there, and now you need to take one last step for your account to be fully usable.<br />
: [[Hoffman2:Profile]]<br />
<br />
<br />
<br />
== Computing ==<br />
You can find your way through Hoffman2, now it is time to start making things happen.<br />
<br />
=== Software Tools ===<br />
You've got your account, you are logged on, now how do you get to using a real software tool?<br />
: [[Hoffman2:Software Tools]]<br />
<br />
=== Submitting Jobs ===<br />
Now you have the tools, but how does one ask Hoffman2 to run them for you as a job? Since you aren't supposed to be running them on a login node...<br />
: [[Hoffman2:Submitting Jobs]]<br />
<br />
=== Monitoring Jobs ===<br />
Right after they zap their monster to life, every mad scientist wishes they had the tools to check on or stop their creation. Now that you can submit jobs, you need to be able to check on them and stop them if they start terrorizing downtown Tokyo.<br />
: [[Hoffman2:Monitoring Jobs]]<br />
<br />
=== Interactive Sessions ===<br />
Some software tools need you to interact with them while they work. Other times you just need to be able to run your script over and over while you work to eradicate all of its bugs. Enter ''Interactive'' Sessions.<br />
: [[Hoffman2:Interactive Sessions]]<br />
<br />
<br />
<br />
== Software ==<br />
=== MATLAB ===<br />
How to use MATLAB on the cluster. It is easier than you think.<br />
: [[Hoffman2:MATLAB]]<br />
<br />
==== Compiling MATLAB ====<br />
So you have a MATLAB script, but you don't need the GUI open all night to have it process your data. How to submit MATLAB jobs to Hoffman2.<br />
: [[Hoffman2:Compiling MATLAB]]<br />
<br />
==== EEGLAB ====<br />
We try to maintain the three most recent versions of EEGLAB for your convenience. Make sure to add it to your MATLAB path.<br />
: [[Hoffman2:MATLAB:EEGLAB]]<br />
<br />
===== EEGLAB Jobs =====<br />
Processing multiple subjects through EEGLAB can be tiring and inconvenient if you do it by hand. Learn how to make scripts that run as jobs leveraging the power of Hoffman2.<br />
: [[Hoffman2:MATLAB:EEGLAB:Jobs]]<br />
<br />
==== SPM Compiled (Batch) ====<br />
Maybe FSL isn't your cup of tea for neuroimaging work. SPM is a capable alternative and, even though it is MATLAB based, it has a compiled version that will let you leverage the power of the cluster.<br />
: [[Hoffman2:MATLAB:SPM]]<br />
<br />
=== R ===<br />
You are probably a statistician, or you just prefer open source software. Here's how to run R on Hoffman2.<br />
: [[Hoffman2:R]]<br />
<br />
=== WEKA ===<br />
If machine learning is your thing, maybe you've heard of WEKA. If not, maybe it will be your new best friend.<br />
: [[Hoffman2:WEKA]]<br />
<br />
=== LONI Pipeline ===<br />
A Workflow application to make things easier.<br />
: [[Hoffman2:LONI]]<br />
<br />
=== FSL ===<br />
FSL is a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data.<br />
: [[Hoffman2:FSL]]<br />
<br />
<br />
<br />
== Productivity ==<br />
How about streamlining some of those tasks, or getting more things done.<br />
<br />
=== Scripts ===<br />
All of the difficulties you are experiencing now have probably been experienced before by someone else. And for that reason we already have scripts to simplify your life.<br />
: [[Hoffman2:Scripts]]<br />
<br />
=== Data Transfer ===<br />
All dressed up with nowhere to go? That's how Hoffman2 feels if you don't give it any data to work with. Find out how to avoid hurting the Cluster's feelings.<br />
: [[Hoffman2:Data Transfer]]<br />
<br />
=== Sharing Filesystems ===<br />
All you want to do is be able to look at your precious data. But it is locked up on Hoffman2 and you want to use tools on your computer to look at it. There's an app for that.<br />
: [[Hoffman2:Sharing Filesystems]]<br />
<br />
<br />
<br />
== FAQ ==<br />
Wesley's Usage, so you can plan around it and ask him to stop beating the cluster up.<br />
: [[Hoffman2:WTK Usage]]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Accessing_the_Cluster&diff=2563Hoffman2:Accessing the Cluster2014-04-09T03:09:32Z<p>Elau: </p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
Here are some of our favorite ways to access the Hoffman2 Cluster login nodes.<br />
<br />
==SSH - Command Line==<br />
: ''The official description of how to do this is found [http://www.ats.ucla.edu/clusters/common/head_node_access/access.htm here]''<br />
SSH stands for ''Secure Shell'' and is a method of remotely logging into a computer using an encrypted connection. It is a command line tool and is available on most *nix-based operating systems with ports available for Windows.<br />
<br />
<br />
===If you are on a Mac/Linux/Unix...===<br />
Modern Macs (anything with Operating systems newer than Snow Leopard 10.6.x) no longer come with a X Window System Server pre-installed.<br />
<br />
'''Before doing the following steps, please install [http://xquartz.macosforge.org/ XQuartz] and restart your computer.'''<br />
<br />
For more information about XQuartz, read [http://support.apple.com/kb/ht5293 here].<br />
# Open up X11/XQuartz or Terminal. Both are under ''Applications > Utilities'' on Macs.<br />
# Type the command<br />
#: <pre>$ ssh -X [USERNAME]@hoffman2.idre.ucla.edu</pre><br />
#: filling in your Hoffman2 username.<br />
#: The <code>-X</code> enables X11 forwarding, so that any graphics rendered on Hoffman2 are forwarded to the screen of your computer. The <code>-Y</code> flag enables ''trusted'' X11 forwarding, which skips the X11 security extension restrictions; both run over the encrypted SSH connection.<br />
# Press enter and type in your password when it asks for it. No characters or asterisks will show up while you type.<br />
# Provided your typing was good, you will be greeted by the Hoffman2 login message and have successfully SSHd into a login node.<br />
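Once you are logged in, a quick way to confirm that X11 forwarding is actually working is to check the <code>DISPLAY</code> variable, which sshd sets when forwarding is active:

```shell
# If X11 forwarding succeeded, sshd sets DISPLAY to a forwarded display
# such as localhost:10.0; if it is empty, graphics will not work.
if [ -n "${DISPLAY:-}" ]; then
    echo "X11 forwarding active (DISPLAY=$DISPLAY)"
else
    echo "DISPLAY is unset - X11 forwarding is not active"
fi
```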
<br />
WARNING: Please note that as of OS X 10.8 (Mountain Lion), X11 is no longer supported by Apple. Instead, use [http://xquartz.macosforge.org/landing/ XQuartz]<br />
<br />
===If you are on Windows...===<br />
# Go [http://hpc.ucla.edu/hoffman2/access/access.php here] and follow the instructions under ''Windows''. We recommend PuTTY or Cygwin.<br />
# Once you have that set up, the process is the same as if you were on a Mac or Linux/Unix machine.<br />
<br />
<br />
==NX Client - GUI==<br />
: ''The official description of how to do this is found [http://hpc.ucla.edu/hoffman2/access/nx.php here]''<br />
The NX Client program allows you to set up a Virtual Network Computing (VNC)-like session with Hoffman2. This session will keep running even if your Internet connection drops in and out (much like [[Using Screen|screen]] on the command line).<br />
<br />
<br />
===PCs or Mac OS X 10.6 and earlier===<br />
# Go to the [http://www.nomachine.com/ No Machine website] and navigate to the ''Download'' tab.<br />
# Find the section titled ''NX Client Products'' and click on the one for the operating system you are running.<br />
# Download the appropriate installation file and install it on your computer.<br />
# Once it is installed on your computer, start up NX Client.<br />
# The window that appears will ask for:<br />
#*Login -- your Hoffman2 username<br />
#*Password -- your Hoffman2 password<br />
#*Session -- type ''Hoffman2''<br />
# Then click the ''Configure...'' button and fill out the necessary information<br />
#*Under the ''General'' tab<br />
#**In the ''Server'' section<br />
#***Host -- ''hoffman2.idre.ucla.edu''<br />
#***Port -- 22<br />
#***Key -- Click this button and delete the contents of the window that appears. Open up an [[Hoffman2:Accessing the Cluster#SSH_-_Command_Line|SSH session]] to Hoffman2 and run the command<br />
#***:<pre>$ cat /etc/nxserver/client.id_dsa.key</pre><br />
#***:The output is the key. Copy everything that was printed out and paste it into the Key window in NX Client and click ''Save''.<br />
#**In the ''Desktop'' section -- Use the drop down menus to select ''Unix'' and ''GNOME''<br />
#*Click ''Save''<br />
# Now you can click ''Login'' on the main window and a GUI environment connection will be established with a Hoffman2 login node.<br />
<br />
<br />
===Mac OS X 10.7 and newer===<br />
# Go to the ''Download Preview'' section of the [http://www.nomachine.com/download-preview.php No Machine website] and download the NX Client 4 preview.<br />
# Open the DMG that you downloaded, copy "No Machine Player.app" into your "Applications" directory and open it up for the first time.<br />
# A window titled "New connection" will appear. Fill out the fields accordingly<br />
#* Name -- Something like "Hoffman2"<br />
#* Host -- "hoffman2.idre.ucla.edu" since this is the server you are connecting to<br />
#* Port -- 22<br />
#* Select "Use the NoMachine login" and click the button on the right labeled with "..."<br />
#** In the new window labeled "NoMachine login" check the box next to "Use an alternate server key"<br />
#** Open up a Terminal and run the following command (replacing USERNAME with your Hoffman2 username)<br />
#**:<code>$ scp USERNAME@login2.hoffman2.idre.ucla.edu:/etc/nxserver/client.id_dsa.key ~/Documents/</code><br />
#**and enter your Hoffman2 password when prompted.<br />
#** Back in NoMachine, click on the button labeled "..." and find the file you just downloaded (it is in your Documents folder labeled "client.id_dsa.key").<br />
#** Click on the X in the top right corner to return to the previous window.<br />
#* Click on the X in the top right corner to finish setting up the connection parameters.<br />
# Double click on the connection you just created (it should be the only one in the list).<br />
# A circular progress indicator will show up for a bit before giving way to an authentication screen asking for username and password.<br />
# Enter your Hoffman2 username and password and click "OK" (You may also check the box labeled "Save this setting in the configuration file" to avoid retyping this in the future)<br />
# A circular progress indicator will show up again until a menu appears. Select "Create a new session".<br />
# In the next menu, '''select GNOME'''.<br />
# After another circular progress indicator, a virtual desktop should appear.<br />
# Reconnections in this client are not currently supported for Hoffman2, so please make sure to logout and close your connections properly. [http://hpc.ucla.edu/hoffman2/access/nx.php#logout]<br />
<br />
<br />
<br />
===Troubleshooting===<br />
If your NX Client session freezes and you are unable to close it properly, open ''NX Session Administrator'' and disconnect your session from there. This freezing often occurs when your Internet connection is lost abruptly. Another possible cause for freezing is scrolling on certain Windows touchpads.<br />
<br />
==UCLA Grid Portal==<br />
: ''The official description of how to do this is found [http://www.ats.ucla.edu/clusters/grid_portal/ here]''<br />
<br />
We haven't used this one much yet, but we'll be trying it out and get back to you about our experiences.<br />
<br />
<br />
<br />
==External Links==<br />
*[http://www.ats.ucla.edu/clusters/common/head_node_access/access.htm Accessing Hoffman2 via Command Line]<br />
*[http://hpc.ucla.edu/hoffman2/access/nx.php Accessing Hoffman2 via NX Client]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/head_node_access/ Information about Hoffman2 Login Nodes] -- RSA Fingerprints, node addresses<br />
*[http://www.ats.ucla.edu/clusters/grid_portal/ Accessing Hoffman2 through UCLA Grid Portal]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB:Jobs&diff=2524Hoffman2:MATLAB:Jobs2014-03-22T00:59:45Z<p>Elau: </p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
[[Hoffman2:MATLAB|Back to all things MATLAB]]<br />
<br />
If you have a very large M-file script that needs to run for a long time and doesn't need you to:<br />
*click things<br />
*type inputs<br />
*observe pictures<br />
or generally interact with the script, you can run that M-file as a job on Hoffman2 leveraging the full computational power of the system.<br />
<br />
An example would be if you had created an [[Hoffman2:MATLAB:EEGLAB:Jobs | EEGLAB script]] for processing lots of subjects' EEG data.<br />
<br />
If you are trying to do lots of SPM work, follow the guidelines [[Hoffman2:MATLAB:SPM#Jobs | here]].<br />
<br />
<br />
==Running the script==<br />
Once you have an M-file to run, you will now need to create a BASH command script to submit as a job. This script will setup MATLAB and tell it to run your M-file.<br />
<br />
Something like the following will do<br />
/u/home/FMRI/apps/examples/eeglab/run_eeglab_example_job.sh<br />
<br />
<pre><br />
#!/bin/bash<br />
#<br />
# run_eeglab_example_job.sh<br />
#<br />
# Edward Lau<br />
# eplau[at]ucla[dot]edu<br />
# 2014.03.06<br />
#<br />
# Runs the example EEGLAB script in a headless job.<br />
<br />
<br />
<br />
<br />
# Setup the environment to have modules like MATLAB<br />
source /u/local/Modules/default/init/modules.sh<br />
<br />
# Load the MATLAB module<br />
module load matlab<br />
<br />
# Run MATLAB and your script<br />
matlab -nosplash -nojvm -nodesktop -singleCompThread -r eeglab_example_job<br />
</pre><br />
<br />
<br />
<br />
==Submit the Job==<br />
All that remains is to [[Hoffman2:Submitting_Jobs | submit the job]] based on what you've already learned about submitting jobs.<br />
<br />
Be mindful of the time and memory that you request and then wait patiently as your job is queued and run on the cluster.<br />
<br />
In our example case, you would [[Hoffman2:Accessing_the_Cluster#SSH_-_Command_Line | SSH into Hoffman2]] and execute the following commands:<br />
<pre><br />
$ cd /u/home/FMRI/apps/examples/eeglab<br />
$ ./q.sh ./run_eeglab_example_job.sh<br />
</pre><br />
<br />
<br />
<br />
==Wait for the Job to Run==<br />
Remember how to [[Hoffman2:Monitoring_Jobs | check on job status?]]<br />
<br />
<br />
<br />
<br />
==Look at the Outputs==<br />
Fire up MATLAB afterward and look at the results.</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB:SPM&diff=2523Hoffman2:MATLAB:SPM2014-03-22T00:59:03Z<p>Elau: SPM jobs</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
[[Hoffman2:MATLAB|Back to all things MATLAB]]<br />
<br />
If you have a bunch of subjects that need to be processed through SPM, why waste your laptop using the GUI interface to slowly process each subject? And '''why waste a MATLAB license''' doing rote work that can be accomplished with compiled MATLAB code that works without a license?<br />
<br />
Use Hoffman2's strengths and run that processing as one (or more!) jobs to get it done quickly and efficiently.<br />
<br />
<br />
==Create the Batch File(s)==<br />
#Launch SPM<br />
#Click on the "Batch" button (to the left of "Quit").<br />
#In the new window, go "File" > "New batch" to start with a new batch file.<br />
#Using the "SPM" menu at the top of the window, add and modify the appropriate steps you would like to take and specify which data should be worked with.<br />
#When you are satisfied with how things are set up, click "File" > "Save batch" and give your batch file a name (ending with ".mat").<br />
#This is the basis for your SPM job.<br />
<br />
After creating a Batch file for a single subject, those individuals comfortable with MATLAB programming should be capable of reading the contents of the Batch file and understanding how to apply its settings to N subjects with a few "for" loops and directory listings. We highly recommend that you save yourself the carpal tunnel from clicking and leverage the power of scripting.<br />
<br />
<br />
<br />
<br />
==Create the Wrapper Script==<br />
SPM is a wonderful tool because it is capable of being compiled into an executable using SPM8's own "make_exec" script. On Hoffman2, we have compiled it and placed it at<br />
/u/home/FMRI/apps/spm_exec/spm8<br />
Now you need to create a wrapper script that will call this executable properly. The example shown below may be found at<br />
/u/home/FMRI/apps/examples/spm/run_spm_job.sh<br />
<br />
<pre><br />
qsub <<CMD<br />
#!/bin/bash<br />
# No spaces before each pound (#) sign<br />
# Use current working directory<br />
#$ -cwd<br />
# Error stream is merged with the standard output<br />
#$ -j y<br />
# Use the bash shell for job execution<br />
#$ -S /bin/bash<br />
# Use your normal environment variables in the job<br />
#$ -V<br />
# Use 1GB of RAM and the main queue, estimating 2 hours for completion<br />
#$ -l h_data=1G,h_rt=2:00:00<br />
# Only email on abort<br />
#$ -m a<br />
# Name the job "spm"<br />
#$ -N spm<br />
#<br />
<br />
# Load the module environment<br />
. /u/local/Modules/default/init/modules.sh<br />
<br />
# Load the newest MATLAB<br />
module load matlab<br />
<br />
# Run the compiled version of SPM8 on your batch file<br />
/u/home/FMRI/apps/spm_exec/spm8 "batch" "PATH_TO_YOUR_BATCH_FILE"<br />
<br />
CMD<br />
</pre><br />
<br />
As always, normal [[Hoffman2:Submitting_Jobs | job submission guidelines]] should be followed when creating this file. Such as setting the time and memory limits appropriately.<br />
<br />
The words "PATH_TO_YOUR_BATCH_FILE" should also be replaced with the actual full path to the .mat batch file you saved previously and want processed.<br />
<br />
'''Make this wrapper script executable''' using the command<br />
$ chmod 755 /path/to/your/wrapper/script<br />
filling in the appropriate path to your script.<br />
<br />
<br />
<br />
==Submit the Job==<br />
Now just execute the wrapper script to submit the job. If we were running our example, we would do<br />
<pre><br />
$ cd /u/home/FMRI/apps/examples/spm<br />
$ ./run_spm_job.sh<br />
</pre><br />
<br />
<br />
<br />
==Wait for the Job to Run==<br />
Remember how to [[Hoffman2:Monitoring_Jobs | check on job status?]]<br />
<br />
<br />
<br />
==View the Results==<br />
Use SPM or whichever tool is appropriate to view the results.<br />
<br />
<br />
<br />
==Going Further==<br />
If you created a single batch file to process 100 subjects, that would be fine and dandy, but it would take a very long time to go through each one in sequence.<br />
<br />
But if you created 10 batch files with 10 subjects in each, and ran 10 separate jobs (since the compiled version of SPM does not use MATLAB licenses), the processing would be finished 10 times as quickly.<br />
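As a hypothetical sketch of that split: the batch file names below are made up, and the stock <code>run_spm_job.sh</code> hard-codes its batch file path, so you would need to adapt it to read the path from its first argument (<code>$1</code>) before swapping it in for the <code>echo</code>:

```shell
# Hypothetical per-group batch files created from the SPM Batch editor.
batches="group01_batch.mat group02_batch.mat group03_batch.mat"

# Submit one job per batch file; replace "echo" with the real wrapper,
# e.g. ./run_spm_job.sh "$batch", once it accepts the path as $1.
for batch in $batches; do
    echo "would submit: $batch"
done
```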
<br />
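That split can be sketched as a submission loop. The batch file names and paths below are hypothetical, and the loop is shown as a dry run that only prints the `qsub` commands; on the cluster you would drop the leading `echo` so the jobs are actually submitted.<br />

```shell
#!/bin/bash
# Dry run: print one qsub command per (hypothetical) batch file instead of
# submitting, so the loop can be inspected safely before use.
submitted=0
for i in 01 02 03 04 05 06 07 08 09 10; do
    echo qsub -cwd -V -l h_data=1G,h_rt=2:00:00 -N "spm_batch_$i" \
        run_spm_wrapper.sh "/path/to/batches/batch_$i.mat"
    submitted=$((submitted + 1))
done
echo "queued $submitted jobs (dry run)"
```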
Consider parallelizing your jobs when working at scale.</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB:EEGLAB:Jobs&diff=2522Hoffman2:MATLAB:EEGLAB:Jobs2014-03-21T23:20:25Z<p>Elau: Adding back in a portion that shouldn't have been cut out.</p>
<hr />
<div>[[Hoffman2 | Back to all things Hoffman2]]<br />
<br />
[[Hoffman2:MATLAB | Back to all things MATLAB]]<br />
<br />
[[Hoffman2:MATLAB:EEGLAB | Back to all things EEGLAB]]<br />
<br />
<br />
Normally, if you want to run something in EEGLAB, you start up MATLAB on Hoffman2, wait to check out a node, wait for a license, and then start crunching numbers.<br />
<br />
But what if your analysis in EEGLAB is going to take 10 hours, and you need to shut your computer down and leave in five hours? Or you don't want your MATLAB job to crash just because your flaky WiFi decides to die in hour nine of the analysis?<br />
<br />
Well, if you have run this analysis before and have the "EEG.history" contents, you should be able to create a MATLAB script (*.m file) from said history and run it as a headless job on Hoffman2.<br />
<br />
{| class="wikitable"<br />
|+ Pros and Cons of Running your EEGLAB analysis as a job<br />
! Pros<br />
! Cons<br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Learn how to submit command line scripts and feel powerful controlling lots of computers<br />
| style="width:30em;" | Have to learn command line scripts and how to submit jobs correctly<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to have your analysis run without tying up your computer for hours or days at a time<br />
| style="width:30em;" | Have to run the analysis through at least once partially to create your script<br />
|- class="ccn-table-odd ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs longer than 24 hours<br />
| style="width:30em;" | Have to be comfortable editing MATLAB code to create your *.m file<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs that use far more memory than your laptop has.<br />
| style="width:30em;" | <br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Possibly be able to run multiple analyses at once if you take the extra step and compile your code (so it doesn't need a MATLAB license)<br />
| style="width:30em;" | <br />
|}<br />
<br />
If you think the Pros outweigh the Cons, and they certainly do if you are processing more than one subject, then '''this is the solution for you'''.<br />
<br />
<br />
<br />
==Identify the processing commands==<br />
Using the EEGLAB GUI, process a data file the way you want your job to process it: go from loading the file, to processing, to saving the file.<br />
<br />
Once that is complete, keep EEGLAB open and, at the MATLAB command line, display the contents of the processing history:<br />
>> disp(EEG.history)<br />
<br />
Those commands you see are the ones you will need to put into your script in various places.<br />
<br />
<br />
<br />
==Make your script==<br />
If you know the exact commands you need to run, or have the contents of an "EEG.history" from a previous analysis, then the general *.m file to create will be like the following example, found at<br />
/u/home/FMRI/apps/examples/eeglab/eeglab_example_job.m<br />
<pre><br />
% eeglab_example_job.m<br />
%<br />
% Edward Lau<br />
% eplau[at]ucla[dot]edu<br />
% 2014.03.06<br />
% Example EEGLAB script that can be submitted as a job on Hoffman2.<br />
%<br />
% This M-file will:<br />
% 0. Set the paths for the input and output files.<br />
% 1. Load the EEGLAB "set" file<br />
% 2. Bandpass filter it to 1-45Hz<br />
% 3. Average re-reference the data<br />
% 4. Save the new data file with "preproc" appended to the old name (e.g. worked on "Subj1.set" resulting in "Subj1_preproc.set")<br />
%<br />
%<br />
% WARNING:<br />
% This example does not necessarily reflect the best practices for how EEG should be pre-processed and<br />
% should only be viewed as an example of how to script basic EEGLAB steps.<br />
<br />
<br />
<br />
%% 0. Variables to set<br />
% The directory where the data to be processed lives.<br />
INPUT_DIR = '/u/home/FMRI/apps/eeglab/current/sample_data/';<br />
<br />
% The EEGLAB ".set" file that should be processed<br />
INPUT_FILE = 'eeglab_data.set';<br />
<br />
% The directory where the processed data should be saved<br />
OUTPUT_DIR = getenv('SCRATCH');<br />
<br />
% The EEGLAB ".set" filename that should be saved after processing<br />
OUTPUT_FILE = sprintf('%s_%s.%s', INPUT_FILE(1:end-4), 'preproc', 'set');<br />
<br />
<br />
<br />
%% Make sure EEGLAB is in your PATH, change "current" to whichever version of EEGLAB you prefer<br />
addpath(genpath('/u/home/FMRI/apps/eeglab/current')); <br />
<br />
<br />
<br />
%% Load up EEGLAB in headless mode<br />
eeglab('nogui');<br />
<br />
<br />
<br />
%% 1. Load up your data<br />
EEG = pop_loadset('filename', INPUT_FILE, 'filepath', INPUT_DIR);<br />
<br />
% Set EEG variables appropriately<br />
EEG = eeg_checkset( EEG );<br />
ALLEEG = EEG;<br />
CURRENTSET = 1;<br />
<br />
<br />
<br />
%% Do your analyses<br />
% This could be built from the contents of an EEG.history array from a previous analysis<br />
<br />
% 2. Bandpass filter 1-45Hz<br />
EEG = pop_eegfiltnew(EEG, 1, 45, 424, 0, [], 0);<br />
EEG = eeg_checkset( EEG );<br />
<br />
% 3. Average re-reference the data<br />
EEG = pop_reref( EEG, []);<br />
EEG = eeg_checkset( EEG );<br />
<br />
<br />
<br />
%% 4. Don't forget to save your data afterward<br />
EEG = pop_saveset( EEG, 'filename', OUTPUT_FILE, 'filepath', OUTPUT_DIR);<br />
EEG = eeg_checkset( EEG );<br />
</pre><br />
<br />
<br />
<br />
==Submit it as a Job==<br />
Now that you have a proper M-file to run in MATLAB, go [[Hoffman2:MATLAB:Jobs | follow the steps for how to submit this as a job to the cluster]].<br />
<br />
<br />
<br />
==Look at the Outputs==<br />
Make sure that was all worthwhile and look at the outputs. If you are still on Hoffman2, go ahead and launch MATLAB and run the following commands<br />
<pre><br />
>> addpath(genpath('/u/home/FMRI/apps/eeglab/current/')); % Make sure EEGLAB is in the MATLAB PATH<br />
>> eeglab; % Launch EEGLAB<br />
</pre><br />
<br />
And then using the GUI, load up the resulting processed file. If you followed along with our example, this output file will be in your [[Hoffman2:Introduction#scratch | SCRATCH directory]].</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB:EEGLAB:Jobs&diff=2521Hoffman2:MATLAB:EEGLAB:Jobs2014-03-21T22:54:18Z<p>Elau: Moving the common "matlab job" part to a separate page.</p>
<hr />
<div>[[Hoffman2 | Back to all things Hoffman2]]<br />
<br />
[[Hoffman2:MATLAB | Back to all things MATLAB]]<br />
<br />
[[Hoffman2:MATLAB:EEGLAB | Back to all things EEGLAB]]<br />
<br />
<br />
Normally, if you want to run something in EEGLAB, you start up MATLAB on Hoffman2, wait to check out a node, wait for a license, and then start crunching numbers.<br />
<br />
But what if your analysis in EEGLAB is going to take 10 hours, and you need to shut your computer down and leave in five hours? Or you don't want your MATLAB job to crash just because your flaky WiFi decides to die in hour nine of the analysis?<br />
<br />
Well, if you have run this analysis before and have the "EEG.history" contents, you should be able to create a MATLAB script (*.m file) from said history and run it as a headless job on Hoffman2.<br />
<br />
{| class="wikitable"<br />
|+ Pros and Cons of Running your EEGLAB analysis as a job<br />
! Pros<br />
! Cons<br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Learn how to submit command line scripts and feel powerful controlling lots of computers<br />
| style="width:30em;" | Have to learn command line scripts and how to submit jobs correctly<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to have your analysis run without tying up your computer for hours or days at a time<br />
| style="width:30em;" | Have to run the analysis through at least once partially to create your script<br />
|- class="ccn-table-odd ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs longer than 24 hours<br />
| style="width:30em;" | Have to be comfortable editing MATLAB code to create your *.m file<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs that use far more memory than your laptop has.<br />
| style="width:30em;" | <br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Possibly be able to run multiple analyses at once if you take the extra step and compile your code (so it doesn't need a MATLAB license)<br />
| style="width:30em;" | <br />
|}<br />
<br />
If you think the Pros outweigh the Cons, and they certainly do if you are processing more than one subject, then '''this is the solution for you'''.<br />
<br />
<br />
<br />
==Identify the processing commands==<br />
Using the EEGLAB GUI, process a data file the way you want your job to process it: go from loading the file, to processing, to saving the file.<br />
<br />
Once that is complete, keep EEGLAB open and, at the MATLAB command line, display the contents of the processing history:<br />
>> disp(EEG.history)<br />
<br />
Those commands you see are the ones you will need to put into your script in various places.<br />
<br />
<br />
<br />
==Make your script==<br />
If you know the exact commands you need to run, or have the contents of an "EEG.history" from a previous analysis, then the general *.m file to create will be like the following example, found at<br />
/u/home/FMRI/apps/examples/eeglab/eeglab_example_job.m<br />
<pre><br />
% eeglab_example_job.m<br />
%<br />
% Edward Lau<br />
% eplau[at]ucla[dot]edu<br />
% 2014.03.06<br />
% Example EEGLAB script that can be submitted as a job on Hoffman2.<br />
%<br />
% This M-file will:<br />
% 0. Set the paths for the input and output files.<br />
% 1. Load the EEGLAB "set" file<br />
% 2. Bandpass filter it to 1-45Hz<br />
% 3. Average re-reference the data<br />
% 4. Save the new data file with "preproc" appended to the old name (e.g. worked on "Subj1.set" resulting in "Subj1_preproc.set")<br />
%<br />
%<br />
% WARNING:<br />
% This example does not necessarily reflect the best practices for how EEG should be pre-processed and<br />
% should only be viewed as an example of how to script basic EEGLAB steps.<br />
<br />
<br />
<br />
%% 0. Variables to set<br />
% The directory where the data to be processed lives.<br />
INPUT_DIR = '/u/home/FMRI/apps/eeglab/current/sample_data/';<br />
<br />
% The EEGLAB ".set" file that should be processed<br />
INPUT_FILE = 'eeglab_data.set';<br />
<br />
% The directory where the processed data should be saved<br />
OUTPUT_DIR = getenv('SCRATCH');<br />
<br />
% The EEGLAB ".set" filename that should be saved after processing<br />
OUTPUT_FILE = sprintf('%s_%s.%s', INPUT_FILE(1:end-4), 'preproc', 'set');<br />
<br />
<br />
<br />
%% Make sure EEGLAB is in your PATH, change "current" to whichever version of EEGLAB you prefer<br />
addpath(genpath('/u/home/FMRI/apps/eeglab/current')); <br />
<br />
<br />
<br />
%% Load up EEGLAB in headless mode<br />
eeglab('nogui');<br />
<br />
<br />
<br />
%% 1. Load up your data<br />
EEG = pop_loadset('filename', INPUT_FILE, 'filepath', INPUT_DIR);<br />
<br />
% Set EEG variables appropriately<br />
EEG = eeg_checkset( EEG );<br />
ALLEEG = EEG;<br />
CURRENTSET = 1;<br />
<br />
<br />
<br />
%% Do your analyses<br />
% This could be built from the contents of an EEG.history array from a previous analysis<br />
<br />
% 2. Bandpass filter 1-45Hz<br />
EEG = pop_eegfiltnew(EEG, 1, 45, 424, 0, [], 0);<br />
EEG = eeg_checkset( EEG );<br />
<br />
% 3. Average re-reference the data<br />
EEG = pop_reref( EEG, []);<br />
EEG = eeg_checkset( EEG );<br />
<br />
<br />
<br />
%% 4. Don't forget to save your data afterward<br />
EEG = pop_saveset( EEG, 'filename', OUTPUT_FILE, 'filepath', OUTPUT_DIR);<br />
EEG = eeg_checkset( EEG );<br />
</pre><br />
<br />
<br />
<br />
==Submit it as a Job==<br />
Now that you have a proper M-file to run in MATLAB, go [[Hoffman2:MATLAB:Jobs | follow the steps for how to submit this as a job to the cluster]].</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB:Jobs&diff=2520Hoffman2:MATLAB:Jobs2014-03-21T22:50:19Z<p>Elau: Make MATLAB jobs.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
[[Hoffman2:MATLAB|Back to all things MATLAB]]<br />
<br />
If you have a very large M-file script that needs to run for a long time and doesn't need you to:<br />
*click things<br />
*type inputs<br />
*observe pictures<br />
or generally interact with the script, you can run that M-file as a job on Hoffman2, leveraging the full computational power of the cluster.<br />
<br />
An example would be if you had created an [[Hoffman2:MATLAB:EEGLAB:Jobs | EEGLAB script]] for processing lots of subjects' EEG data.<br />
<br />
If you are trying to do lots of SPM work, follow the guidelines [[Hoffman2:MATLAB:SPM#Jobs | here]].<br />
<br />
<br />
==Running the script==<br />
Once you have an M-file to run, you will now need to create a BASH command script to submit as a job. This script will set up MATLAB and tell it to run your M-file.<br />
<br />
Something like the following will do<br />
/u/home/FMRI/apps/examples/eeglab/run_eeglab_example_job.sh<br />
<br />
<pre><br />
#!/bin/bash<br />
#<br />
# run_eeglab_example_job.sh<br />
#<br />
# Edward Lau<br />
# eplau[at]ucla[dot]edu<br />
# 2014.03.06<br />
#<br />
# Runs the example EEGLAB script in a headless job.<br />
<br />
<br />
<br />
<br />
# Setup the environment to have modules like MATLAB<br />
source /u/local/Modules/default/init/modules.sh<br />
<br />
# Load the MATLAB module<br />
module load matlab<br />
<br />
# Run MATLAB and your script<br />
matlab -nosplash -nojvm -nodesktop -singleCompThread -r eeglab_example_job<br />
</pre><br />
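One caveat with `-r`: if the script throws an error, MATLAB can be left sitting at its prompt until the job's time limit kills it. A common defensive variant wraps the call in try/catch with an explicit exit, so MATLAB always terminates. This is a sketch shown as a dry run that only prints the command; the script name matches the example above.<br />

```shell
# Build the defensive -r argument and print the full matlab invocation.
# try/catch reports the error, and the unconditional exit ends MATLAB either way.
MATLAB_CMD='try, eeglab_example_job; catch err, disp(getReport(err)); end; exit'
echo matlab -nosplash -nojvm -nodesktop -singleCompThread -r "$MATLAB_CMD"
```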
<br />
<br />
<br />
==Submit the Job==<br />
All that remains is to [[Hoffman2:Submitting_Jobs | submit the job]] based on what you've already learned about submitting jobs.<br />
<br />
Be mindful of the time and memory that you request and then wait patiently as your job is queued and run on the cluster.<br />
<br />
In our example case, you would [[Hoffman2:Accessing_the_Cluster#SSH_-_Command_Line | SSH into Hoffman2]] and execute the following commands:<br />
<pre><br />
$ cd /u/home/FMRI/apps/examples/eeglab<br />
$ ./q.sh ./run_eeglab_example_job.sh<br />
</pre><br />
<br />
<br />
<br />
==Wait for the Job to Run==<br />
Remember how to [[Hoffman2:Monitoring_Jobs | check on job status?]]<br />
<br />
<br />
<br />
==Look at the Outputs==<br />
Make sure that was all worthwhile and look at the outputs. If you are still on Hoffman2, go ahead and launch MATLAB and run the following commands<br />
<pre><br />
>> addpath(genpath('/u/home/FMRI/apps/eeglab/current/')); % Make sure EEGLAB is in the MATLAB PATH<br />
>> eeglab; % Launch EEGLAB<br />
</pre><br />
<br />
And then using the GUI, load up the resulting processed file. If you followed along with our example, this output file will be in your [[Hoffman2:Introduction#scratch | SCRATCH directory]].</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB:EEGLAB:Jobs&diff=2478Hoffman2:MATLAB:EEGLAB:Jobs2014-03-07T17:22:38Z<p>Elau: Added reminder about how to check on jobs.</p>
<hr />
<div>[[Hoffman2 | Back to all things Hoffman2]]<br />
<br />
[[Hoffman2:MATLAB | Back to all things MATLAB]]<br />
<br />
[[Hoffman2:MATLAB:EEGLAB | Back to all things EEGLAB]]<br />
<br />
<br />
Normally, if you want to run something in EEGLAB, you start up MATLAB on Hoffman2, wait to check out a node, wait for a license, and then start crunching numbers.<br />
<br />
But what if your analysis in EEGLAB is going to take 10 hours, and you need to shut your computer down and leave in five hours? Or you don't want your MATLAB job to crash just because your flaky WiFi decides to die in hour nine of the analysis?<br />
<br />
Well, if you have run this analysis before and have the "EEG.history" contents, you should be able to create a MATLAB script (*.m file) from said history and run it as a headless job on Hoffman2.<br />
<br />
{| class="wikitable"<br />
|+ Pros and Cons of Running your EEGLAB analysis as a job<br />
! Pros<br />
! Cons<br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Learn how to submit command line scripts and feel powerful controlling lots of computers<br />
| style="width:30em;" | Have to learn command line scripts and how to submit jobs correctly<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to have your analysis run without tying up your computer for hours or days at a time<br />
| style="width:30em;" | Have to run the analysis through at least once partially to create your script<br />
|- class="ccn-table-odd ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs longer than 24 hours<br />
| style="width:30em;" | Have to be comfortable editing MATLAB code to create your *.m file<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs that use far more memory than your laptop has.<br />
| style="width:30em;" | <br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Possibly be able to run multiple analyses at once if you take the extra step and compile your code (so it doesn't need a MATLAB license)<br />
| style="width:30em;" | <br />
|}<br />
<br />
If you think the Pros outweigh the Cons, and they certainly do if you are processing more than one subject, then '''this is the solution for you'''.<br />
<br />
<br />
<br />
==Identify the processing commands==<br />
Using the EEGLAB GUI, process a data file the way you want your job to process it: go from loading the file, to processing, to saving the file.<br />
<br />
Once that is complete, keep EEGLAB open and, at the MATLAB command line, display the contents of the processing history:<br />
>> disp(EEG.history)<br />
<br />
Those commands you see are the ones you will need to put into your script in various places.<br />
<br />
<br />
<br />
==Make your script==<br />
If you know the exact commands you need to run, or have the contents of an "EEG.history" from a previous analysis, then the general *.m file to create will be like the following example, found at<br />
/u/home/FMRI/apps/examples/eeglab/eeglab_example_job.m<br />
<pre><br />
% eeglab_example_job.m<br />
%<br />
% Edward Lau<br />
% eplau[at]ucla[dot]edu<br />
% 2014.03.06<br />
% Example EEGLAB script that can be submitted as a job on Hoffman2.<br />
%<br />
% This M-file will:<br />
% 0. Set the paths for the input and output files.<br />
% 1. Load the EEGLAB "set" file<br />
% 2. Bandpass filter it to 1-45Hz<br />
% 3. Average re-reference the data<br />
% 4. Save the new data file with "preproc" appended to the old name (e.g. worked on "Subj1.set" resulting in "Subj1_preproc.set")<br />
%<br />
%<br />
% WARNING:<br />
% This example does not necessarily reflect the best practices for how EEG should be pre-processed and<br />
% should only be viewed as an example of how to script basic EEGLAB steps.<br />
<br />
<br />
<br />
%% 0. Variables to set<br />
% The directory where the data to be processed lives.<br />
INPUT_DIR = '/u/home/FMRI/apps/eeglab/current/sample_data/';<br />
<br />
% The EEGLAB ".set" file that should be processed<br />
INPUT_FILE = 'eeglab_data.set';<br />
<br />
% The directory where the processed data should be saved<br />
OUTPUT_DIR = getenv('SCRATCH');<br />
<br />
% The EEGLAB ".set" filename that should be saved after processing<br />
OUTPUT_FILE = sprintf('%s_%s.%s', INPUT_FILE(1:end-4), 'preproc', 'set');<br />
<br />
<br />
<br />
%% Make sure EEGLAB is in your PATH, change "current" to whichever version of EEGLAB you prefer<br />
addpath(genpath('/u/home/FMRI/apps/eeglab/current')); <br />
<br />
<br />
<br />
%% Load up EEGLAB in headless mode<br />
eeglab('nogui');<br />
<br />
<br />
<br />
%% 1. Load up your data<br />
EEG = pop_loadset('filename', INPUT_FILE, 'filepath', INPUT_DIR);<br />
<br />
% Set EEG variables appropriately<br />
EEG = eeg_checkset( EEG );<br />
ALLEEG = EEG;<br />
CURRENTSET = 1;<br />
<br />
<br />
<br />
%% Do your analyses<br />
% This could be built from the contents of an EEG.history array from a previous analysis<br />
<br />
% 2. Bandpass filter 1-45Hz<br />
EEG = pop_eegfiltnew(EEG, 1, 45, 424, 0, [], 0);<br />
EEG = eeg_checkset( EEG );<br />
<br />
% 3. Average re-reference the data<br />
EEG = pop_reref( EEG, []);<br />
EEG = eeg_checkset( EEG );<br />
<br />
<br />
<br />
%% 4. Don't forget to save your data afterward<br />
EEG = pop_saveset( EEG, 'filename', OUTPUT_FILE, 'filepath', OUTPUT_DIR);<br />
EEG = eeg_checkset( EEG );<br />
</pre><br />
<br />
<br />
<br />
==Running the script==<br />
You will now need to create a BASH command script to submit as a job. This script will set up MATLAB and tell it to run your M-file.<br />
<br />
Something like the following will do<br />
/u/home/FMRI/apps/examples/eeglab/run_eeglab_example_job.sh<br />
<br />
<pre><br />
#!/bin/bash<br />
#<br />
# run_eeglab_example_job.sh<br />
#<br />
# Edward Lau<br />
# eplau[at]ucla[dot]edu<br />
# 2014.03.06<br />
#<br />
# Runs the example EEGLAB script in a headless job.<br />
<br />
<br />
<br />
<br />
# Setup the environment to have modules like MATLAB<br />
source /u/local/Modules/default/init/modules.sh<br />
<br />
# Load the MATLAB module<br />
module load matlab<br />
<br />
# Run MATLAB and your script<br />
matlab -nosplash -nojvm -nodesktop -singleCompThread -r eeglab_example_job<br />
</pre><br />
<br />
<br />
<br />
==Submit the Job==<br />
All that remains is to [[Hoffman2:Submitting_Jobs | submit the job]] based on what you've already learned about submitting jobs.<br />
<br />
Be mindful of the time and memory that you request and then wait patiently as your job is queued and run on the cluster.<br />
<br />
In our example case, you would [[Hoffman2:Accessing_the_Cluster#SSH_-_Command_Line | SSH into Hoffman2]] and execute the following commands:<br />
<pre><br />
$ cd /u/home/FMRI/apps/examples/eeglab<br />
$ ./q.sh ./run_eeglab_example_job.sh<br />
</pre><br />
<br />
<br />
<br />
==Wait for the Job to Run==<br />
Remember how to [[Hoffman2:Monitoring_Jobs | check on job status?]]<br />
<br />
<br />
<br />
==Look at the Outputs==<br />
Make sure that was all worthwhile and look at the outputs. If you are still on Hoffman2, go ahead and launch MATLAB and run the following commands<br />
<pre><br />
>> addpath(genpath('/u/home/FMRI/apps/eeglab/current/')); % Make sure EEGLAB is in the MATLAB PATH<br />
>> eeglab; % Launch EEGLAB<br />
</pre><br />
<br />
And then using the GUI, load up the resulting processed file. If you followed along with our example, this output file will be in your [[Hoffman2:Introduction#scratch | SCRATCH directory]].</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2&diff=2477Hoffman22014-03-07T01:59:51Z<p>Elau: Added EEGLAB Jobs section</p>
<hr />
<div>A compilation of lab know-how regarding the Hoffman2 Computing Cluster.<br />
<br />
Anyone new to the lab and using Hoffman2 NEEDS to read the first section to have adequate working knowledge of the system.<br />
<br />
<br />
== Getting Started ==<br />
=== Introduction ===<br />
Hoffman2 is a Computing Cluster at UCLA. Find out how it generally works so you know how to use it.<br />
: [[Hoffman2:Introduction]]<br />
<br />
=== Getting an Account ===<br />
You know what it is, now you want to use it. First you need an account.<br />
: [[Hoffman2:Getting an Account]]<br />
<br />
=== Accessing the Cluster ===<br />
Now how do you use that account to access the cluster?<br />
: [[Hoffman2:Accessing the Cluster]]<br />
<br />
=== Working in a UNIX Environment ===<br />
Never heard of a command line before today? Vaguely know what "permissions" are and have no idea how to navigate a filesystem? This page is meant to take the scary out of the words "command line" so you can actually use Hoffman2, because no matter how many GUIs there are, you will still need to use the command line sometimes.<br />
: [[Hoffman2:UNIX Tutorial]]<br />
<br />
=== Quotas ===<br />
Resources are not infinite, and disk space is a resource. Find out how to manage your disk space usage to stay under quota.<br />
: [[Hoffman2:Quotas]]<br />
<br />
=== Profile ===<br />
You have an account and know how to get there; now you need to take one last step for your account to be fully usable.<br />
: [[Hoffman2:Profile]]<br />
<br />
<br />
<br />
== Computing ==<br />
You can find your way through Hoffman2, now it is time to start making things happen.<br />
<br />
=== Software Tools ===<br />
You've got your account, you are logged on, now how do you get to using a real software tool?<br />
: [[Hoffman2:Software Tools]]<br />
<br />
=== Submitting Jobs ===<br />
Now you have the tools, but how do you ask Hoffman2 to run them for you as a job? After all, you aren't supposed to be running them on a login node...<br />
: [[Hoffman2:Submitting Jobs]]<br />
<br />
=== Monitoring Jobs ===<br />
Right after they zap their monster to life, every mad scientist wishes they had the tools to check on or stop their creation. Now that you can submit jobs, you need to be able to check on them and stop them if they start terrorizing downtown Tokyo.<br />
: [[Hoffman2:Monitoring Jobs]]<br />
<br />
=== Interactive Sessions ===<br />
Some software tools need you to interact with them while they work. Other times you just need to be able to run your script over and over while you work to eradicate all of its bugs. Enter ''Interactive'' Sessions.<br />
: [[Hoffman2:Interactive Sessions]]<br />
<br />
<br />
<br />
== Software ==<br />
=== MATLAB ===<br />
How to use MATLAB on the cluster. It is easier than you think.<br />
: [[Hoffman2:MATLAB]]<br />
<br />
==== Compiling MATLAB ====<br />
So you have a MATLAB script, but you don't need the GUI open all night to have it process your data. How to submit MATLAB jobs to Hoffman2.<br />
: [[Hoffman2:Compiling MATLAB]]<br />
<br />
==== EEGLAB ====<br />
We try to maintain the three most recent versions of EEGLAB for your convenience. Make sure to add it to your MATLAB path.<br />
: [[Hoffman2:MATLAB:EEGLAB]]<br />
<br />
===== EEGLAB Jobs =====<br />
Processing multiple subjects through EEGLAB can be tiring and inconvenient if you do it by hand. Learn how to make scripts that run as jobs leveraging the power of Hoffman2.<br />
: [[Hoffman2:MATLAB:EEGLAB:Jobs]]<br />
<br />
=== R ===<br />
You are probably a statistician, or you just prefer open source software. Here's how to run R on Hoffman2.<br />
: [[Hoffman2:R]]<br />
<br />
=== WEKA ===<br />
If machine learning is your thing, maybe you've heard of WEKA. If not, maybe it will be your new best friend.<br />
: [[Hoffman2:WEKA]]<br />
<br />
=== LONI Pipeline ===<br />
A Workflow application to make things easier.<br />
: [[Hoffman2:LONI]]<br />
<br />
=== FSL ===<br />
FSL is a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data.<br />
: [[Hoffman2:FSL]]<br />
<br />
<br />
<br />
== Productivity ==<br />
How about streamlining some of those tasks, or getting more things done.<br />
<br />
=== Scripts ===<br />
All of the difficulties you are experiencing now have probably been experienced before by someone else. And for that reason we already have scripts to simplify your life.<br />
: [[Hoffman2:Scripts]]<br />
<br />
=== Data Transfer ===<br />
All dressed up with nowhere to go? That's how Hoffman2 feels if you don't give it any data to work with. Find out how to avoid hurting the Cluster's feelings.<br />
: [[Hoffman2:Data Transfer]]<br />
<br />
=== Sharing Filesystems ===<br />
All you want to do is be able to look at your precious data. But it is locked up on Hoffman2 and you want to use tools on your computer to look at it. There's an app for that.<br />
: [[Hoffman2:Sharing Filesystems]]<br />
<br />
<br />
<br />
== FAQ ==<br />
Wesley's Usage, so you can plan around it and ask him to stop beating the cluster up.<br />
: [[Hoffman2:WTK Usage]]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB:EEGLAB:Jobs&diff=2476Hoffman2:MATLAB:EEGLAB:Jobs2014-03-07T01:57:07Z<p>Elau: Command style updated</p>
<hr />
<div>[[Hoffman2 | Back to all things Hoffman2]]<br />
<br />
[[Hoffman2:MATLAB | Back to all things MATLAB]]<br />
<br />
[[Hoffman2:MATLAB:EEGLAB | Back to all things EEGLAB]]<br />
<br />
<br />
Normally, if you want to run something in EEGLAB, you start up MATLAB on Hoffman2, wait to check out a node, wait for a license, and then start crunching numbers.<br />
<br />
But what if your analysis in EEGLAB is going to take 10 hours, and you need to shut your computer down and leave in five hours? Or you don't want your MATLAB job to crash just because your flaky WiFi decides to die in hour nine of the analysis?<br />
<br />
Well, if you have run this analysis before and have the "EEG.history" contents, you should be able to create a MATLAB script (*.m file) from said history and run it as a headless job on Hoffman2.<br />
<br />
{| class="wikitable"<br />
|+ Pros and Cons of Running your EEGLAB analysis as a job<br />
! Pros<br />
! Cons<br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Learn how to submit command line scripts and feel powerful controlling lots of computers<br />
| style="width:30em;" | Have to learn command line scripts and how to submit jobs correctly<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to have your analysis run without tying up your computer for hours or days at a time<br />
| style="width:30em;" | Have to run the analysis through at least once partially to create your script<br />
|- class="ccn-table-odd ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs longer than 24 hours<br />
| style="width:30em;" | Have to be comfortable editing MATLAB code to create your *.m file<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs that use far more memory than your laptop has.<br />
| style="width:30em;" | <br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Possibly be able to run multiple analyses at once if you take the extra step and compile your code (so it doesn't need a MATLAB license)<br />
| style="width:30em;" | <br />
|}<br />
<br />
If you think the Pros outweigh the Cons, and they certainly do if you are processing more than one subject, then '''this is the solution for you'''.<br />
<br />
<br />
<br />
===Identify the processing commands===<br />
Using the EEGLAB GUI, process a data file the way you want your job to process it: go from loading the file, to processing, to saving the file.<br />
<br />
Once that is complete, keep EEGLAB open and, at the MATLAB command line, display the contents of the processing history:<br />
>> disp(EEG.history)<br />
<br />
Those commands you see are the ones you will need to put into your script in various places.<br />
<br />
<br />
<br />
===Make your script===<br />
If you know the exact commands you need to run, or have the contents of an "EEG.history" from a previous analysis, then the general *.m file to create will be like the following example, found at<br />
/u/home/FMRI/apps/examples/eeglab/eeglab_example_job.m<br />
<pre><br />
% eeglab_example_job.m<br />
%<br />
% Edward Lau<br />
% eplau[at]ucla[dot]edu<br />
% 2014.03.06<br />
% Example EEGLAB script that can be submitted as a job on Hoffman2.<br />
%<br />
% This M-file will:<br />
% 0. Set the paths for the input and output files.<br />
% 1. Load the EEGLAB "set" file<br />
% 2. Bandpass filter it to 1-45Hz<br />
% 3. Average re-reference the data<br />
% 4. Save the new data file with "preproc" appended to the old name (e.g. worked on "Subj1.set" resulting in "Subj1_preproc.set")<br />
%<br />
%<br />
% WARNING:<br />
% This example does not necessarily reflect the best practices for how EEG should be pre-processed and<br />
% should only be viewed as an example of how to script basic EEGLAB steps.<br />
<br />
<br />
<br />
%% 0. Variables to set<br />
% The directory where the data to be processed lives.<br />
INPUT_DIR = '/u/home/FMRI/apps/eeglab/current/sample_data/';<br />
<br />
% The EEGLAB ".set" file that should be processed<br />
INPUT_FILE = 'eeglab_data.set';<br />
<br />
% The directory where the processed data should be saved<br />
OUTPUT_DIR = getenv('SCRATCH');<br />
<br />
% The EEGLAB ".set" filename that should be saved after processing<br />
OUTPUT_FILE = sprintf('%s_%s.%s', INPUT_FILE(1:end-4), 'preproc', 'set');<br />
<br />
<br />
<br />
%% Make sure EEGLAB is in your PATH, change "current" to whichever version of EEGLAB you prefer<br />
addpath(genpath('/u/home/FMRI/apps/eeglab/current')); <br />
<br />
<br />
<br />
%% Load up EEGLAB in headless mode<br />
eeglab('nogui');<br />
<br />
<br />
<br />
%% 1. Load up your data<br />
EEG = pop_loadset('filename', INPUT_FILE, 'filepath', INPUT_DIR);<br />
<br />
% Set EEG variables appropriately<br />
EEG = eeg_checkset( EEG );<br />
ALLEEG = EEG;<br />
CURRENTSET = 1;<br />
<br />
<br />
<br />
%% Do your analyses<br />
% This could be built from the contents of an EEG.history array from a previous analysis<br />
<br />
% 2. Bandpass filter 1-45Hz<br />
EEG = pop_eegfiltnew(EEG, 1, 45, 424, 0, [], 0);<br />
EEG = eeg_checkset( EEG );<br />
<br />
% 3. Average re-reference the data<br />
EEG = pop_reref( EEG, []);<br />
EEG = eeg_checkset( EEG );<br />
<br />
<br />
<br />
%% 4. Don't forget to save your data afterward<br />
EEG = pop_saveset( EEG, 'filename', OUTPUT_FILE, 'filepath', OUTPUT_DIR);<br />
EEG = eeg_checkset( EEG );<br />
</pre><br />
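The <code>OUTPUT_FILE</code> line in the example above strips the ".set" extension (the last four characters) and appends "_preproc". The same renaming, sketched in shell purely as an illustration of what the MATLAB <code>sprintf</code> call produces:<br />

```shell
# Mirror MATLAB's sprintf('%s_%s.%s', INPUT_FILE(1:end-4), 'preproc', 'set')
INPUT_FILE='eeglab_data.set'
OUTPUT_FILE="${INPUT_FILE%.set}_preproc.set"
echo "$OUTPUT_FILE"   # prints "eeglab_data_preproc.set"
```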
<br />
<br />
<br />
===Running the script===<br />
You will now need to create a BASH command script to submit as a job. This script will set up MATLAB and tell it to run your M-file.<br />
<br />
Something like the following will do:<br />
/u/home/FMRI/apps/examples/eeglab/run_eeglab_example_job.sh<br />
<br />
<pre><br />
#!/bin/bash<br />
#<br />
# run_eeglab_example_job.sh<br />
#<br />
# Edward Lau<br />
# eplau[at]ucla[dot]edu<br />
# 2014.03.06<br />
#<br />
# Runs the example EEGLAB script in a headless job.<br />
<br />
<br />
<br />
<br />
# Set up the environment to have modules like MATLAB<br />
source /u/local/Modules/default/init/modules.sh<br />
<br />
# Load the MATLAB module<br />
module load matlab<br />
<br />
# Run MATLAB and your script<br />
matlab -nosplash -nojvm -nodesktop -singleCompThread -r eeglab_example_job<br />
</pre><br />
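Before submitting, you can sanity-check the command script without running anything: <code>bash -n</code> parses a script but does not execute it. A sketch (the heredoc simply recreates the script above; on Hoffman2 the file already exists):<br />

```shell
# Recreate the job script, then parse it without executing any commands
cat > run_eeglab_example_job.sh <<'EOF'
#!/bin/bash
source /u/local/Modules/default/init/modules.sh
module load matlab
matlab -nosplash -nojvm -nodesktop -singleCompThread -r eeglab_example_job
EOF
bash -n run_eeglab_example_job.sh && echo "syntax OK"
```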
<br />
<br />
<br />
===Submit the Job===<br />
All that remains is to [[Hoffman2:Submitting_Jobs | submit the job]] based on what you've already learned about submitting jobs.<br />
<br />
Be mindful of the time and memory that you request and then wait patiently as your job is queued and run on the cluster.<br />
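If you bypass the <code>q.sh</code> helper and call SGE's <code>qsub</code> directly, resource requests use the same <code>h_rt</code>/<code>h_data</code> syntax shown for interactive sessions elsewhere on this wiki. The snippet below only assembles and prints such a command; treat it as a hedged sketch, not the documented <code>q.sh</code> interface:<br />

```shell
# Build a qsub command requesting 8 hours of runtime and 2GB of RAM (illustrative)
H_RT='8:00:00'
H_DATA='2G'
echo "qsub -l h_rt=${H_RT},h_data=${H_DATA} run_eeglab_example_job.sh"
```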
<br />
In our example case, you would [[Hoffman2:Accessing_the_Cluster#SSH_-_Command_Line | SSH into Hoffman2]] and execute the following commands:<br />
<pre><br />
$ cd /u/home/FMRI/apps/examples/eeglab<br />
$ ./q.sh ./run_eeglab_example_job.sh<br />
</pre><br />
<br />
<br />
<br />
===Look at the outputs===<br />
Make sure it was all worthwhile by looking at the outputs. If you are still on Hoffman2, go ahead and launch MATLAB and run the following commands:<br />
<pre><br />
>> addpath(genpath('/u/home/FMRI/apps/eeglab/current/')); % Make sure EEGLAB is in the PATH<br />
>> eeglab; % Launch EEGLAB<br />
</pre><br />
<br />
And then using the GUI, load up the resulting processed file. If you followed along with our example, this output file will be in your [[Hoffman2:Introduction#scratch | SCRATCH directory]].</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB&diff=2475Hoffman2:MATLAB2014-03-07T01:54:56Z<p>Elau: Update about RAM constraints.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
MATLAB is not a small program and it can handle some fairly complex graphics. As such, this is not something suitable to be used on a login node of Hoffman2. But that's already been thought of by the great people at ATS.<br />
<br />
<br />
==GUI==<br />
To run a full GUI session of MATLAB, execute<br />
$ matlab<br />
That's it, no flags, no frills, nothing else. Hoffman2 will automatically check out an appropriate interactive node for you to run MATLAB on. All you have to do is provide a time limit (in hours) when prompted:<br />
Enter a time limit for your session, in hours (default 2)<br />
<or quit>: <br />
<br />
<br />
<br />
==Command Line==<br />
If you don't need the fancy GUI and just want the command line, execute<br />
$ matlab -nodesktop<br />
and then supply a time limit when asked.<br />
<br />
'''Since this uses interactive nodes, the maximum time limit you can request is 24 hours.'''<br />
<br />
<br />
<br />
==License Check==<br />
With so many people using Hoffman2 and MATLAB, sometimes licenses run out. Using this helpful script will give you some insight as to the license situation.<br />
<br />
[[Hoffman2:Scripts:matlab_license_check.sh|matlab_license_check.sh]]<br />
<br />
<br />
<br />
==Large Computations==<br />
If you are doing a larger computation, '''running MATLAB normally will probably not work well.'''<br />
<br />
Using the default method of launching MATLAB on Hoffman2 checks out an [[Hoffman2:Interactive_Sessions | interactive node]] with only 1GB of RAM. This is woefully small if you are working with ten minutes of dense-array EEG. Use the following steps to launch a more capable MATLAB session.<br />
<br />
<pre><br />
$ # Request an interactive node with time and memory required. In this case, 10 hours and 4GB RAM<br />
$ qrsh -l i,h_rt=10:00:00,h_data=4G<br />
$ # Load the module MATLAB<br />
$ # You can also load different versions of MATLAB:<br />
$ # module load matlab/7.14<br />
$ # or<br />
$ # module load matlab/8.1<br />
$ # Try using "module help" for more information<br />
$ module load matlab<br />
$ # Launch MATLAB<br />
$ matlab<br />
</pre><br />
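As a rough guide for choosing <code>h_data</code>, estimate the size of your data in RAM. The figures below (256 channels, 10 minutes at 1000 Hz, 8-byte doubles) are assumed examples, not a specification of any particular recording system:<br />

```shell
# Back-of-the-envelope: channels x seconds x sampling rate x bytes per sample
awk 'BEGIN { printf "%.1f GB\n", 256 * 10 * 60 * 1000 * 8 / 1e9 }'
```

MATLAB's own overhead and intermediate copies of the data push the real requirement higher, so request more than the raw estimate.<br />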
<br />
<br />
<br />
==External Links==<br />
*[http://www.ats.ucla.edu/clusters/common/software/engineering/matlab.htm MATLAB on Hoffman2]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB:EEGLAB:Jobs&diff=2474Hoffman2:MATLAB:EEGLAB:Jobs2014-03-07T01:46:07Z<p>Elau: EEGLAB Jobs more fleshed out.</p>
<hr />
<div>[[Hoffman2 | Back to all things Hoffman2]]<br />
<br />
[[Hoffman2:MATLAB | Back to all things MATLAB]]<br />
<br />
[[Hoffman2:MATLAB:EEGLAB | Back to all things EEGLAB]]<br />
<br />
Normally, if you wanted to run something in EEGLAB, you start up MATLAB on Hoffman2, wait for a node to check out, wait for a license, and then start crunching numbers.<br />
<br />
But what if your analysis in EEGLAB is going to take 10 hours, and you need to shut your computer down and leave in five hours? Or you don't want your MATLAB job to crash just because your flaky WiFi decides to die in hour nine of the analysis?<br />
<br />
Well, if you have run this analysis before and have the "EEG.history" contents, you should be able to create a MATLAB script (*.m file) from said history and run it as a headless job on Hoffman2.<br />
<br />
{| class="wikitable"<br />
|+ Pros and Cons of Running your EEGLAB analysis as a job<br />
! Pros<br />
! Cons<br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Learn how to submit command line scripts and feel powerful controlling lots of computers<br />
| style="width:30em;" | Have to learn command line scripts and how to submit jobs correctly<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to have your analysis run without tying up your computer for hours or days at a time<br />
| style="width:30em;" | Have to run the analysis through at least once partially to create your script<br />
|- class="ccn-table-odd ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs longer than 24 hours<br />
| style="width:30em;" | Have to be comfortable editing MATLAB code to create your *.m file<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs that use way more memory than your laptop can handle.<br />
| style="width:30em;" | <br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Possibly be able to run multiple analyses at once if you take the extra step and compile your code (so it doesn't need a MATLAB license)<br />
| style="width:30em;" | <br />
|}<br />
<br />
If you think the Pros outweigh the Cons, and they certainly do if you are processing more than one subject, then this is [[Hoffman2:MATLAB:EEGLAB:Jobs | the solution for you.]]<br />
<br />
<br />
<br />
===Identify the processing commands===<br />
Using the EEGLAB GUI, process a data file the way you want your job to process it, going from loading the file, to processing, to saving the file.<br />
<br />
Once that is complete, keep EEGLAB open and, on the MATLAB command line, look at the contents of the history field:<br />
>> disp(EEG.history)<br />
<br />
Those commands you see are the ones you will need to put into your script in various places.<br />
<br />
<br />
<br />
===Make your script===<br />
If you know the exact commands you need to run, or have the contents of an "EEG.history" from a previous analysis, then the general *.m file to create will be like the following example, found at<br />
/u/home/FMRI/apps/examples/eeglab/eeglab_example_job.m<br />
<pre><br />
% eeglab_example_job.m<br />
%<br />
% Edward Lau<br />
% eplau[at]ucla[dot]edu<br />
% 2014.03.06<br />
% Example EEGLAB script that can be submitted as a job on Hoffman2.<br />
%<br />
% This M-file will:<br />
% 0. Set the paths for the input and output files.<br />
% 1. Load the EEGLAB "set" file<br />
% 2. Bandpass filter it to 1-45Hz<br />
% 3. Average re-reference the data<br />
% 4. Save the new data file with "preproc" appended to the old name (e.g. worked on "Subj1.set" resulting in "Subj1_preproc.set")<br />
%<br />
%<br />
% WARNING:<br />
% This example does not necessarily reflect the best practices for how EEG should be pre-processed and<br />
% should only be viewed as an example of how to script basic EEGLAB steps.<br />
<br />
<br />
<br />
%% 0. Variables to set<br />
% The directory where the data to be processed lives.<br />
INPUT_DIR = '/u/home/FMRI/apps/eeglab/current/sample_data/';<br />
<br />
% The EEGLAB ".set" file that should be processed<br />
INPUT_FILE = 'eeglab_data.set';<br />
<br />
% The directory where the processed data should be saved<br />
OUTPUT_DIR = getenv('SCRATCH');<br />
<br />
% The EEGLAB ".set" filename that should be saved after processing<br />
OUTPUT_FILE = sprintf('%s_%s.%s', INPUT_FILE(1:end-4), 'preproc', 'set');<br />
<br />
<br />
<br />
%% Make sure EEGLAB is in your PATH, change "current" to whichever version of EEGLAB you prefer<br />
addpath(genpath('/u/home/FMRI/apps/eeglab/current')); <br />
<br />
<br />
<br />
%% Load up EEGLAB in headless mode<br />
eeglab('nogui');<br />
<br />
<br />
<br />
%% 1. Load up your data<br />
EEG = pop_loadset('filename', INPUT_FILE, 'filepath', INPUT_DIR);<br />
<br />
% Set EEG variables appropriately<br />
EEG = eeg_checkset( EEG );<br />
ALLEEG = EEG;<br />
CURRENTSET = 1;<br />
<br />
<br />
<br />
%% Do your analyses<br />
% This could be built from the contents of an EEG.history array from a previous analysis<br />
<br />
% 2. Bandpass filter 1-45Hz<br />
EEG = pop_eegfiltnew(EEG, 1, 45, 424, 0, [], 0);<br />
EEG = eeg_checkset( EEG );<br />
<br />
% 3. Average re-reference the data<br />
EEG = pop_reref( EEG, []);<br />
EEG = eeg_checkset( EEG );<br />
<br />
<br />
<br />
%% 4. Don't forget to save your data afterward<br />
EEG = pop_saveset( EEG, 'filename', OUTPUT_FILE, 'filepath', OUTPUT_DIR);<br />
EEG = eeg_checkset( EEG );<br />
</pre><br />
<br />
<br />
<br />
===Running the script===<br />
You will now need to create a BASH command script to submit as a job. This script will set up MATLAB and tell it to run your M-file.<br />
<br />
Something like the following will do:<br />
/u/home/FMRI/apps/examples/eeglab/run_eeglab_example_job.sh<br />
<br />
<pre><br />
#!/bin/bash<br />
#<br />
# run_eeglab_example_job.sh<br />
#<br />
# Edward Lau<br />
# eplau[at]ucla[dot]edu<br />
# 2014.03.06<br />
#<br />
# Runs the example EEGLAB script in a headless job.<br />
<br />
<br />
<br />
<br />
# Set up the environment to have modules like MATLAB<br />
source /u/local/Modules/default/init/modules.sh<br />
<br />
# Load the MATLAB module<br />
module load matlab<br />
<br />
# Run MATLAB and your script<br />
matlab -nosplash -nojvm -nodesktop -singleCompThread -r eeglab_example_job<br />
</pre><br />
<br />
<br />
<br />
===Submit the Job===<br />
All that remains is to [[Hoffman2:Submitting_Jobs | submit the job]] based on what you've already learned about submitting jobs.<br />
<br />
Be mindful of the time and memory that you request and then wait patiently as your job is queued and run on the cluster.<br />
<br />
In our example case, you would [[Hoffman2:Accessing_the_Cluster#SSH_-_Command_Line | SSH into Hoffman2]] and execute the following commands:<br />
<pre><br />
cd /u/home/FMRI/apps/examples/eeglab<br />
./q.sh ./run_eeglab_example_job.sh<br />
</pre><br />
<br />
<br />
<br />
===Look at the outputs===<br />
Make sure it was all worthwhile by looking at the outputs. If you are still on Hoffman2, go ahead and launch MATLAB and run the following commands:<br />
<pre><br />
>> addpath(genpath('/u/home/FMRI/apps/eeglab/current/')); % Make sure EEGLAB is in the PATH<br />
>> eeglab; % Launch EEGLAB<br />
</pre><br />
<br />
And then using the GUI, load up the resulting processed file. If you followed along with our example, this output file will be in your [[Hoffman2:Introduction#scratch | SCRATCH directory]].</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB:EEGLAB&diff=2473Hoffman2:MATLAB:EEGLAB2014-03-07T01:03:22Z<p>Elau: Adding information and link to EEGLAB Jobs.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
[[Hoffman2:MATLAB|Back to all things MATLAB]]<br />
<br />
Three of the most recent versions of EEGLAB are maintained on Hoffman2 for the FMRI group in the directory<br />
/u/home/FMRI/apps/eeglab<br />
<br />
<br />
<br />
==Adding EEGLAB to your MATLAB path==<br />
Choose one of the versions from<br />
/u/home/FMRI/apps/eeglab<br />
and add it to your MATLAB path as follows.<br />
<br />
# [[Hoffman2:MATLAB#GUI|Start MATLAB on Hoffman2]]<br />
# Go to<br />
#: ''"File"'' > ''"Set Path..."''<br />
#* Click ''"Add with Subfolders..."''<br />
#* Navigate to <br />
#*: <pre>/u/home/FMRI/apps/eeglab</pre><br />
#*: choose which version you'd like to use and click on it.<br />
#* Click ''"Ok"''<br />
#* To save this for future sessions, click ''"Save."''<br />
#* If you don't want to save this path, click ''"Close"'' and then click ''"No"'' on the window that pops up.<br />
# Go to the MATLAB command line and start EEGLAB by typing<br />
#: <pre>>> eeglab</pre><br />
<br />
<br />
<br />
==Running jobs that involve EEGLAB==<br />
Normally, if you wanted to run something in EEGLAB, you start up MATLAB on Hoffman2, wait for a node to check out, wait for a license, and then start crunching numbers.<br />
<br />
But what if your analysis in EEGLAB is going to take 10 hours, and you need to shut your computer down and leave in five hours? Or you don't want your MATLAB job to crash just because your flaky WiFi decides to die in hour nine of the analysis?<br />
<br />
Well, if you have run this analysis before and have the "EEG.history" contents, you should be able to create a MATLAB script (*.m file) from said history and run it as a headless job on Hoffman2.<br />
<br />
{| class="wikitable"<br />
|+ Pros and Cons of Running your EEGLAB analysis as a job<br />
! Pros<br />
! Cons<br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Learn how to submit command line scripts and feel powerful controlling lots of computers<br />
| style="width:30em;" | Have to learn command line scripts and how to submit jobs correctly<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to have your analysis run without tying up your computer for hours or days at a time<br />
| style="width:30em;" | Have to run the analysis through at least once partially to create your script<br />
|- class="ccn-table-odd ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs longer than 24 hours<br />
| style="width:30em;" | Have to be comfortable editing MATLAB code to create your *.m file<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs that use way more memory than your laptop can handle.<br />
| style="width:30em;" | <br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Possibly be able to run multiple analyses at once if you take the extra step and compile your code (so it doesn't need a MATLAB license)<br />
| style="width:30em;" | <br />
|}<br />
<br />
If you think the Pros outweigh the Cons, and they certainly do if you are processing more than one subject, then this is [[Hoffman2:MATLAB:EEGLAB:Jobs | the solution for you.]]<br />
<br />
<br />
<br />
==CIDAR ADHD==<br />
For steps on how to process CIDAR data in EEGLAB, [[Hoffman2:MATLAB:EEGLAB:CIDAR|click here]].<br />
<br />
<br />
<br />
==binica==<br />
If you want to run ICA decomposition on your data, it can often be a very time-intensive process. Doing this through the EEGLAB GUI (runica) is almost an order of magnitude slower than doing it with <code>binica</code> from the command line. So let's walk through how to do this the fastest (most parallelizable) way by submitting jobs on Hoffman2.<br />
<br />
<br />
===Export your data from NetStation===<br />
# If you are going to preprocess your data in NetStation, do so first.<br />
# Export your data in the '''NetStation simple binary''' format which produces files with the ''.raw'' extension. [[Hoffman2:Data Transfer|Upload this to Hoffman2 by your choice method]].<br />
<br />
<br />
===Prep your data for ICA===<br />
# [[Hoffman2:MATLAB|Start up MATLAB on Hoffman2]] and start EEGLAB<br />
# Go to<br />
#: ''"File"'' > ''"Import data"'' > ''"From Netstation binary simple file"''<br />
#* Find your ''.raw'' file and select it.<br />
#* Click ''"Ok"'' on the first pop-up window, and in the second give your dataset a name before clicking ''"Ok."''<br />
# When EEGLAB finishes importing your data ''Done'' will appear above the command line.<br />
# At this time, do any preprocessing you'd like on the data in EEGLAB/MATLAB.<br />
# Add this special directory to your MATLAB path,<br />
#: <pre>>> addpath('/u/home/FMRI/apps/examples/binica');</pre><br />
# Use the command,<br />
#: <pre>>> prep4binica(ALLEEG.data, ALLEEG.nbchan, ALLEEG.trials*ALLEEG.pnts, 'verbose', 'on', 'filenum', FILENAME);</pre><br />
#: replacing '''FILENAME''' with your own value (it is a placeholder in this example).<br />
# This will save out the necessary files to run ICA:<br />
#* data in a float-point file, ''.fdt''<br />
#* parameters for ICA in a configuration file, ''.sc''<br />
# Exit out of EEGLAB (''"File"'' > ''"Quit"'').<br />
# Exit out of MATLAB<br />
<br />
<br />
===Run ICA===<br />
If you prepared multiple datasets for ICA, you could submit many jobs in parallel to speed up the processing time.<br />
# Use a [[Text Editors|text editor]] to make a script (e.g. icaScript.sh) on Hoffman2 with the following contents<br />
#: <pre>#!/bin/bash&#10;&#13;/u/home/FMRI/apps/eeglab/current/functions/resources/ica_linux < path/to/sc/file </pre><br />
#: where you replace '''/path/to/sc/file''' with the full path to the ''.sc'' file created by <code>prep4binica</code>.<br />
# Make the script executable<br />
#: <pre>chmod 750 /path/to/script</pre><br />
#: replacing '''/path/to/script''' with the path to your script file.<br />
# [[Hoffman2:Submitting Jobs|Submit this script as a job]], we recommend demanding at least 8 hours (<code>time=8:00:00</code>) and 2GB of RAM (<code>mem=2048M</code>) for your job.<br />
# Wait for the script to complete.<br />
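Written out plainly, steps 1 and 2 amount to creating a two-line script and making it executable. A sketch, with the <code>ica_linux</code> path taken from step 1 and <code>/path/to/sc/file</code> left as the placeholder you must fill in:<br />

```shell
# Create the ICA job script (replace /path/to/sc/file with your real .sc file)
cat > icaScript.sh <<'EOF'
#!/bin/bash
/u/home/FMRI/apps/eeglab/current/functions/resources/ica_linux < /path/to/sc/file
EOF
# Make it executable for you and your group
chmod 750 icaScript.sh
ls -l icaScript.sh | cut -c1-10   # prints "-rwxr-x---"
```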
<br />
<br />
===Import ICA data back to EEGLAB===<br />
# [[Hoffman2:MATLAB|Start up MATLAB on Hoffman2]] and start EEGLAB<br />
# Go to<br />
#: ''"File"'' > ''"Import data"'' > ''"From Netstation binary simple file"''<br />
#* Find your ''.raw'' file and select it.<br />
#* Click ''"Ok"'' on the first pop-up window, and in the second give your dataset a name before clicking ''"Ok."''<br />
# When EEGLAB finishes importing your data ''Done'' will appear above the command line.<br />
# Go to <br />
#: ''"Edit"'' > ''"Dataset info"''<br />
#* Click ''"Browse"'' next to ''"ICA weights array or text/binary file (if any):"'' and find the ICA weight file (''.wts'' extension).<br />
#* Click ''"Browse"'' next to ''"ICA sphere array or text/binary file (if any):"'' and find the ICA sphere file (''.sph'' extension).<br />
#* Click ''"Ok"''.<br />
# You should now be able to view the ICA information within EEGLAB.<br />
<br />
<br />
<br />
==Only Using EGI's "Good" Segments in EEGLAB==<br />
NetStation preprocessing has built-in methods of classifying data segments as "good" or "bad" depending on a variety of criteria related to noise measurements. Please refer to their pertinent documentation to understand how this is done. In the case where you do your preprocessing in NetStation but would like to use EEGLAB's ICA tool, you probably don't want all those "bad" segments to be used for ICA calculations or other work.<br />
<br />
Sadly, your only option for exporting just the "good" segments from NetStation directly to MATLAB is in the form of a ''.mat'' file. '''But EEGLAB doesn't play nicely with these files!'''<br />
<br />
In comes the tedious translation process to make your data usable...<br />
# Export your preprocessed data from NetStation as a "Net Station Simple Binary" file.<br />
#: '''Do not export them as "Net Station Simple Binary (Ignore Events)"'''<br />
#: '''Do not export them as "Net Station Simple Binary (Epoch Marked)"'''<br />
# Import the data file into EEGLAB<br />
# Open NetStation, and in the top menu bar go to ''"Tools"'' > ''"Browse Files..."'' which will open a new data explorer window.<br />
# In Finder, locate your final preprocessed data (the one that you ran the Export tool on earlier).<br />
# Drag this file into the left-hand bar labeled ''"Source Files."''<br />
# In the top right area of the window, click on the ''"Categories and Subjects"'' tab and you should see a grid of numbers and labels.<br />
#: The segment types will have their labels at the beginning of each row (e.g. Belief, Disbelief)<br />
#: Each source file will have a dedicated column labeled with something (I believe it to be the subject linked to the recording).<br />
# Click on the first segment type label and in the bottom right area of the window you will see a list of all segments of that type.<br />
# If you click on the column label ''"Time"'' it will order the segments chronologically which is the same way they are ordered when you export them as a "Net Station Simple Binary" file.<br />
# Back in EEGLAB, use the menu to go to ''"Edit"'' > ''"Select data"'' which opens another window.<br />
#: In this window, you can use different criteria for selecting portions of data. The method pertinent here is by ''"Epoch range."''<br />
#: I would recommend clicking the little check box to the right of the input box for ''"Epoch range"'' so you only have to specify which epochs you want removed. '''There should be fewer epochs to remove than there are epochs you want to keep...otherwise your data is really messy and using it might not be recommended...'''<br />
# Have fun counting your way down the lines of the NetStation File Browser to determine exactly which segments are "bad."<br />
# Say the sixth segment of the first category is bad (these are the ones with a red circle and line through them at the beginning of the line instead of a green circle). In the EEGLAB data selection window, you need to add ''"6"'' to the ''"Epoch range"'' input box.<br />
# Once you've gone through and told EEGLAB all the epochs you want removed, click the ''"Ok"'' button and it will start making a new dataset.<br />
# Run [[Hoffman2:MATLAB:EEGLAB#Prep_your_data_for_ICA|prep4binica]] on the resulting dataset.<br />
<br />
<br />
===Pro Tip===<br />
If you have two or more types of segments in NetStation, counting and adding can be a dangerous game. So make use of MATLAB to simplify things.<br />
<br />
e.g.<br />
* You have two segment types: B and D<br />
* There are 40 type B segments, of which Net Station claims only 37 are good.<br />
* There are 60 type D segments, of which Net Station claims only 50 are good.<br />
* In the EEGLAB ''"Select data"'' window, you make sure to click the check box to the right of the ''"Epoch range"'' input box so that you only have to list the epochs you want removed from the dataset.<br />
* In NetStation, you click on the "B" category and, scrolling through the segments, you find that segments 1, 21, and 37 have the little red crossed circle indicating they are bad.<br />
* In the EEGLAB ''"Select data"'' window's ''"Epoch range"'' input box you type<br />
1 21 37<br />
* In NetStation, you then click on the "D" category (since it is the second category). Scrolling through the segments, you find that the 40th through the 49th listed segments all have the red crossed circle indicating they are bad.<br />
* In the EEGLAB ''"Select data"'' window's ''"Epoch range"'' input box, you then edit it to read<br />
1 21 37 40+[40:49]<br />
* This will work because MATLAB is going to process these numbers and see<br />
1 21 37 80 81 82 83 84 85 86 87 88 89<br />
* The "40+[" part is important because the first "D" category segment is actually the 41st segment overall. If you have more than two categories, you'd have to do similar arithmetic to properly indicate which epochs you want removed.<br />
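The offset arithmetic above is easy to check mechanically. A shell sketch of the same expansion (40 type "B" segments overall, bad type "D" segments 40 through 49):<br />

```shell
# MATLAB's "1 21 37 40+[40:49]" expands to these overall epoch numbers
epochs="1 21 37"
for i in $(seq 40 49); do epochs="$epochs $((40 + i))"; done
echo "$epochs"   # prints "1 21 37 80 81 82 83 84 85 86 87 88 89"
```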
<br />
Happy counting.<br />
<br />
<br />
<br />
==External Links==<br />
*[http://sccn.ucsd.edu/eeglab/allfunctions/binica.html binica]<br />
*[http://sccn.ucsd.edu/wiki/A01:_Importing_Continuous_and_Epoched_Data#Importing_Netstation.2FEGI_files What type of EGI files EEGLAB understands]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB:EEGLAB:Jobs&diff=2472Hoffman2:MATLAB:EEGLAB:Jobs2014-03-07T01:02:44Z<p>Elau: Starting the EEGLAB Jobs page.</p>
<hr />
<div>[[Hoffman2 | Back to all things Hoffman2]]<br />
<br />
[[Hoffman2:MATLAB | Back to all things MATLAB]]<br />
<br />
[[Hoffman2:MATLAB:EEGLAB | Back to all things EEGLAB]]<br />
<br />
Normally, if you wanted to run something in EEGLAB, you start up MATLAB on Hoffman2, wait for a node to check out, wait for a license, and then start crunching numbers.<br />
<br />
But what if your analysis in EEGLAB is going to take 10 hours, and you need to shut your computer down and leave in five hours? Or you don't want your MATLAB job to crash just because your flaky WiFi decides to die in hour nine of the analysis?<br />
<br />
Well, if you have run this analysis before and have the "EEG.history" contents, you should be able to create a MATLAB script (*.m file) from said history and run it as a headless job on Hoffman2.<br />
<br />
{| class="wikitable"<br />
|+ Pros and Cons of Running your EEGLAB analysis as a job<br />
! Pros<br />
! Cons<br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Learn how to submit command line scripts and feel powerful controlling lots of computers<br />
| style="width:30em;" | Have to learn command line scripts and how to submit jobs correctly<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to have your analysis run without tying up your computer for hours or days at a time<br />
| style="width:30em;" | Have to run the analysis through at least once partially to create your script<br />
|- class="ccn-table-odd ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs longer than 24 hours<br />
| style="width:30em;" | Have to be comfortable editing MATLAB code to create your *.m file<br />
|- class="ccn-table-even ccn-table-left"<br />
| style="width:30em;" | Be able to run MATLAB jobs that use way more memory than your laptop can handle.<br />
| style="width:30em;" | <br />
|- class="ccn-table-odd ccn-table-left" <br />
| style="width:30em;" | Possibly be able to run multiple analyses at once if you take the extra step and compile your code (so it doesn't need a MATLAB license)<br />
| style="width:30em;" | <br />
|}<br />
<br />
If you think the Pros outweigh the Cons, and they certainly do if you are processing more than one subject, then this is [[Hoffman2:MATLAB:EEGLAB:Jobs | the solution for you.]]<br />
<br />
If you know the exact commands you need to run, or have the contents of an "EEG.history" from a previous analysis, then the general *.m file to create will be like the following example, found at<br />
/u/home/FMRI/apps/examples/eeglab/eeglab_example_job.m<br />
<pre><br />
% eeglab_example_job.m<br />
%<br />
% Edward Lau<br />
% eplau[at]ucla[dot]edu<br />
% 2014.03.06<br />
% Example EEGLAB script that can be submitted as a job on Hoffman2.<br />
%<br />
% This M-file will:<br />
% 0. Set the paths for the input and output files.<br />
% 1. Load the EEGLAB "set" file<br />
% 2. Bandpass filter it to 1-45Hz<br />
% 3. Average re-reference the data<br />
% 4. Save the new data file with "preproc" appended to the old name (e.g. worked on "Subj1.set" resulting in "Subj1_preproc.set")<br />
%<br />
%<br />
% WARNING:<br />
% This example does not necessarily reflect the best practices for how EEG should be pre-processed and<br />
% should only be viewed as an example of how to script basic EEGLAB steps.<br />
<br />
<br />
<br />
%% 0. Variables to set<br />
% The directory where the data to be processed lives.<br />
INPUT_DIR = '/u/home/FMRI/apps/eeglab/current/sample_data/';<br />
<br />
% The EEGLAB ".set" file that should be processed<br />
INPUT_FILE = 'eeglab_data.set';<br />
<br />
% The directory where the processed data should be saved<br />
OUTPUT_DIR = getenv('SCRATCH');<br />
<br />
% The EEGLAB ".set" filename that should be saved after processing<br />
OUTPUT_FILE = sprintf('%s_%s.%s', INPUT_FILE(1:end-4), 'preproc', 'set');<br />
<br />
<br />
<br />
%% Make sure EEGLAB is in your PATH, change "current" to whichever version of EEGLAB you prefer<br />
addpath(genpath('/u/home/FMRI/apps/eeglab/current')); <br />
<br />
<br />
<br />
%% Load up EEGLAB in headless mode<br />
eeglab('nogui');<br />
<br />
<br />
<br />
%% 1. Load up your data<br />
EEG = pop_loadset('filename', INPUT_FILE, 'filepath', INPUT_DIR);<br />
<br />
% Set EEG variables appropriately<br />
EEG = eeg_checkset( EEG );<br />
ALLEEG = EEG;<br />
CURRENTSET = 1;<br />
<br />
<br />
<br />
%% Do your analyses<br />
% This could be built from the contents of an EEG.history array from a previous analysis<br />
<br />
% 2. Bandpass filter 1-45Hz<br />
EEG = pop_eegfiltnew(EEG, 1, 45, 424, 0, [], 0);<br />
EEG = eeg_checkset( EEG );<br />
<br />
% 3. Average re-reference the data<br />
EEG = pop_reref( EEG, []);<br />
EEG = eeg_checkset( EEG );<br />
<br />
<br />
<br />
%% 4. Don't forget to save your data afterward<br />
EEG = pop_saveset( EEG, 'filename', OUTPUT_FILE, 'filepath', OUTPUT_DIR);<br />
EEG = eeg_checkset( EEG );<br />
</pre></div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Software_Tools&diff=2471Hoffman2:Software Tools2014-03-06T21:24:12Z<p>Elau: Added in some more tools.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
There is an FMRI user group on Hoffman2 maintained for groups doing neuroimaging work at UCLA. Tools like FSL, FreeSurfer, AFNI, and Nibabel are maintained for this group separately from the standard Hoffman2 software. To take advantage of these tools, you need to set up your bash profile [[Hoffman2:Profile|properly]].<br />
<br />
Below is a list of the available software tools. We will do our best to update it in real time.<br />
<br />
<br />
==List of Tools==<br />
Under Construction...<br />
<br />
<br />
===AFNI===<br />
[http://afni.nimh.nih.gov/afni/ Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 2011_12_21_1014 || circa 2012.03 || Current<br />
|}<br />
<br />
<br />
===BrainSuite===<br />
[http://brainsuite.org/ Official Website]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Install Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| 13a4 || 2014.03.05 || Current<br />
|}<br />
<br />
<br />
===Caret===<br />
[http://brainvis.wustl.edu/wiki/index.php/Caret:About Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 5.65 (2012.01.27) || 2013.07.15 || Current, not folded into the main profile<br />
|}<br />
<br />
<br />
===Chronux===<br />
[http://www.chronux.org Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 2.10 || 2013.02.26 || Current<br />
|}<br />
<br />
<br />
===dcm2nii===<br />
[http://www.mccauslandcenter.sc.edu/mricro/mricron/dcm2nii.html Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 2013.06.06 || 2014.03.06 ||<br />
|- class="ccn-table-even"<br />
| 2011.11.11 || circa 2011 || Current<br />
|}<br />
<br />
<br />
===EEGLAB===<br />
[http://sccn.ucsd.edu/eeglab/ Official Website]<br />
<br />
[http://sccn.ucsd.edu/wiki/EEGLAB_revision_history Release Notes]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 13.1.1b || 2014.01.29 || Current<br />
|- class="ccn-table-even"<br />
| 12.0.2.5b || 2013.11.14 || <br />
|- class="ccn-table-odd"<br />
| 11.0.5.4b || 2013.11.14 || <br />
|- class="ccn-table-even"<br />
| 12.0.0.0b || 2012.12.10 || <br />
|- class="ccn-table-odd"<br />
| 11.0.0.0b || 2012.02.21 || <br />
|- class="ccn-table-even"<br />
| 10.2.5.8b || 2012.02.21 ||<br />
|}<br />
<br />
<br />
===FreeSurfer===<br />
[http://surfer.nmr.mgh.harvard.edu/ Official Website]<br />
<br />
[http://freesurfer.net/fswiki/ReleaseNotes Release Notes]<br />
{| class="wikitable" <br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd" <br />
| 5.3.0 || 2013.06.18 || Current<br />
|- class="ccn-table-even" <br />
| 5.2.0 || 2013.03.27 ||<br />
|- class="ccn-table-odd" <br />
| 5.1.0 || 2011.11.14 ||<br />
|- class="ccn-table-even" <br />
| 5.0.0 || circa 2010 ||<br />
|- class="ccn-table-odd" <br />
| 4.4.0 || circa 2009 ||<br />
|- class="ccn-table-even" <br />
| 4.0.5 || circa 2008 ||<br />
|}<br />
<br />
<br />
===FSL===<br />
[http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/ Official Website]<br />
<br />
[http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/WhatsNew Revision History]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Install Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| 5.0.6 || 2013.12.18 || Current<br />
|- class="ccn-table-even"<br />
| 5.0.5 || 2013.10.17 ||<br />
|-class="ccn-table-odd"<br />
| 5.0.4 || 2013.06.18 ||<br />
|- class="ccn-table-even"<br />
| 5.0.2 || 2013.02.19 ||<br />
|- class="ccn-table-odd"<br />
| 5.0.1 || 2012.10.01 ||<br />
|- class="ccn-table-even"<br />
| 5.0.0 || 2012.09.14 ||<br />
|- class="ccn-table-odd"<br />
| 4.1.9 || 2011.12.01 ||<br />
|- class="ccn-table-even"<br />
| 4.1.8 || circa 2011.06 ||<br />
|- class="ccn-table-odd" <br />
| 4.1.7 || circa 2011.11 ||<br />
|- class="ccn-table-even"<br />
| 4.1.4 || circa 2009 ||<br />
|- class="ccn-table-odd"<br />
| 4.1.3 || circa 2009 ||<br />
|- class="ccn-table-even"<br />
| 4.1.1 || circa 2008 ||<br />
|- class="ccn-table-odd"<br />
| 4.1.0 || circa 2008 ||<br />
|- class="ccn-table-even"<br />
| 4.0.4 || circa 2008 ||<br />
|}<br />
<br />
<br />
===ITKGray===<br />
[http://vistalab.stanford.edu/newlm/index.php/ItkGray Official Website]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Install Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| 080803 || 2009.11.19 || Current<br />
|- class="ccn-table-even"<br />
| 080128 || 2009.11.13 ||<br />
|}<br />
<br />
<br />
===SPM===<br />
[http://www.fil.ion.ucl.ac.uk/spm/ Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header"| Version<br />
! class="ccn-table-header"| Last Patch Applied<br />
! class="ccn-table-header"| Last Checked Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| SPM8 || 5236 || 2014.01 || Current<br />
|- class="ccn-table-even"<br />
| SPM5 || Unknown || N/A || No longer supported<br />
|}<br />
<br />
<br />
===TrackVis/Diffusion Toolkit===<br />
[http://trackvis.org/ Official Website 1]<br />
<br />
[http://trackvis.org/dtk/ Official Website 2]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Tool<br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Last Checked Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| TrackVis || 0.5.2.2 || 2014.03.06 || <br />
|- class="ccn-table-even"<br />
| Diffusion Toolkit || 0.6.2.2 || 2014.03.06 ||<br />
|}<br />
<br />
<br />
===WEKA===<br />
[http://www.cs.waikato.ac.nz/ml/weka/ Official Website]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Install Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| 3.7.10 || 2014.03.03 || Current<br />
|- class="ccn-table-even"<br />
| 3.6.5 || circa 2011.08 ||<br />
|}<br />
<br />
<br />
===GCC===<br />
===LAPACK===<br />
===BLAS===<br />
===GLIB===<br />
===C++===<br />
===CMake===<br />
===CPACK===<br />
===MPI Kmeans===<br />
See this website for how to cite your use of the MPI Kmeans tool.<br />
[http://mloss.org/software/view/48/]<br />
<br />
===Python2.7===<br />
====Packages====<br />
=====CVXOPT=====<br />
=====Cython=====<br />
=====Gnuplot=====<br />
=====IPython=====<br />
=====matplotlib=====<br />
=====nibabel=====<br />
=====nifti=====<br />
=====nimfa=====<br />
:Non-negative Matrix Factorization<br />
:<br />
:[http://nimfa.biolab.si/ http://nimfa.biolab.si/]<br />
=====nipype=====<br />
=====nose=====<br />
=====numpy=====<br />
=====(p)lsa=====<br />
:(probabilistic) Latent Semantic Analysis. Failed its tests.py though.<br />
<br />
:[http://www.mblondel.org/journal/2010/06/13/lsa-and-plsa-in-python/ http://www.mblondel.org/journal/2010/06/13/lsa-and-plsa-in-python/]<br />
=====pydicom=====<br />
=====pygments=====<br />
=====PyMF=====<br />
:Python Matrix Factorization Module. Failed its tests though.<br />
<br />
:[http://pymf.googlecode.com http://pymf.googlecode.com]<br />
=====pypr=====<br />
=====PyQt4=====<br />
=====pytz=====<br />
=====pywt=====<br />
=====pyximport=====<br />
=====scikits=====<br />
=====scipy=====<br />
=====sklearn=====<br />
=====sparsesvd=====<br />
:Singular Value Decomposition. Passed both tests.<br />
<br />
:[http://pypi.python.org/pypi/sparsesvd http://pypi.python.org/pypi/sparsesvd]<br />
=====sphinx=====<br />
=====sympy=====<br />
=====traits=====<br />
=====virtualenv=====<br />
=====xcbgen=====</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Software_Tools&diff=2470Hoffman2:Software Tools2014-03-06T20:40:07Z<p>Elau: Re-sorted tools and finished switch to table method of listing.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
There is an FMRI user group on Hoffman2 maintained for groups doing neuroimaging work at UCLA. Tools like FSL, FreeSurfer, AFNI, and Nibabel are maintained for this group separately from the standard Hoffman2 software. To take advantage of these tools, you need to set up your bash profile [[Hoffman2:Profile|properly]].<br />
<br />
Below is a list of the available software tools. We will do our best to update it in real time.<br />
<br />
<br />
==List of Tools==<br />
Under Construction...<br />
<br />
<br />
===AFNI===<br />
[http://afni.nimh.nih.gov/afni/ Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 2011_12_21_1014 || circa 2012.03 || Current<br />
|}<br />
<br />
<br />
===BrainSuite===<br />
[http://brainsuite.org/ Official Website]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Install Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| 13a4 || 2014.03.05 || Current<br />
|}<br />
<br />
<br />
===Caret===<br />
[http://brainvis.wustl.edu/wiki/index.php/Caret:About Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 5.65 (2012.01.27) || 2013.07.15 || Current, not folded into the main profile<br />
|}<br />
<br />
<br />
===Chronux===<br />
[http://www.chronux.org Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 2.10 || 2013.02.26 || Current<br />
|}<br />
<br />
<br />
===dcm2nii===<br />
[http://www.mccauslandcenter.sc.edu/mricro/mricron/dcm2nii.html Official Website]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 2013.06.06 || 2014.03.06 ||<br />
|- class="ccn-table-even"<br />
| 2011.11.11 || circa 2011 || Current<br />
|}<br />
<br />
<br />
===EEGLAB===<br />
[http://sccn.ucsd.edu/eeglab/ Official Website]<br />
<br />
[http://sccn.ucsd.edu/wiki/EEGLAB_revision_history Release Notes]<br />
{| class="wikitable"<br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd"<br />
| 13.1.1b || 2014.01.29 || Current<br />
|- class="ccn-table-even"<br />
| 12.0.2.5b || 2013.11.14 || <br />
|- class="ccn-table-odd"<br />
| 11.0.5.4b || 2013.11.14 || <br />
|- class="ccn-table-even"<br />
| 12.0.0.0b || 2012.12.10 || <br />
|- class="ccn-table-odd"<br />
| 11.0.0.0b || 2012.02.21 || <br />
|- class="ccn-table-even"<br />
| 10.2.5.8b || 2012.02.21 ||<br />
|}<br />
<br />
<br />
===FreeSurfer===<br />
[http://surfer.nmr.mgh.harvard.edu/ Official Website]<br />
<br />
[http://freesurfer.net/fswiki/ReleaseNotes Release Notes]<br />
{| class="wikitable" <br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
! class="ccn-table-header" | Notes<br />
|- class="ccn-table-odd" <br />
| 5.3.0 || 2013.06.18 || Current<br />
|- class="ccn-table-even" <br />
| 5.2.0 || 2013.03.27 ||<br />
|- class="ccn-table-odd" <br />
| 5.1.0 || 2011.11.14 ||<br />
|- class="ccn-table-even" <br />
| 5.0.0 || circa 2010 ||<br />
|- class="ccn-table-odd" <br />
| 4.4.0 || circa 2009 ||<br />
|- class="ccn-table-even" <br />
| 4.0.5 || circa 2008 ||<br />
|}<br />
<br />
<br />
===FSL===<br />
[http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/ Official Website]<br />
<br />
[http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/WhatsNew Revision History]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Install Date<br />
! class="ccn-table-header"| Notes<br />
|- class="ccn-table-odd"<br />
| 5.0.6 || 2013.12.18 || Current<br />
|- class="ccn-table-even"<br />
| 5.0.5 || 2013.10.17 ||<br />
|-class="ccn-table-odd"<br />
| 5.0.4 || 2013.06.18 ||<br />
|- class="ccn-table-even"<br />
| 5.0.2 || 2013.02.19 ||<br />
|- class="ccn-table-odd"<br />
| 5.0.1 || 2012.10.01 ||<br />
|- class="ccn-table-even"<br />
| 5.0.0 || 2012.09.14 ||<br />
|- class="ccn-table-odd"<br />
| 4.1.9 || 2011.12.01 ||<br />
|- class="ccn-table-even"<br />
| 4.1.8 || circa 2011.06 ||<br />
|- class="ccn-table-odd" <br />
| 4.1.7 || circa 2011.11 ||<br />
|- class="ccn-table-even"<br />
| 4.1.4 || circa 2009 ||<br />
|- class="ccn-table-odd"<br />
| 4.1.3 || circa 2009 ||<br />
|- class="ccn-table-even"<br />
| 4.1.1 || circa 2008 ||<br />
|- class="ccn-table-odd"<br />
| 4.1.0 || circa 2008 ||<br />
|- class="ccn-table-even"<br />
| 4.0.4 || circa 2008 ||<br />
|}<br />
<br />
<br />
<br />
===GCC===<br />
===LAPACK===<br />
===BLAS===<br />
===GLIB===<br />
===C++===<br />
===CMake===<br />
===CPACK===<br />
===MPI Kmeans===<br />
See this website for how to cite your use of the MPI Kmeans tool.<br />
[http://mloss.org/software/view/48/]<br />
<br />
===Python2.7===<br />
====Packages====<br />
=====CVXOPT=====<br />
=====Cython=====<br />
=====Gnuplot=====<br />
=====IPython=====<br />
=====matplotlib=====<br />
=====nibabel=====<br />
=====nifti=====<br />
=====nimfa=====<br />
:Non-negative Matrix Factorization<br />
:<br />
:[http://nimfa.biolab.si/ http://nimfa.biolab.si/]<br />
=====nipype=====<br />
=====nose=====<br />
=====numpy=====<br />
=====(p)lsa=====<br />
:(probabilistic) Latent Semantic Analysis. Failed its tests.py though.<br />
<br />
:[http://www.mblondel.org/journal/2010/06/13/lsa-and-plsa-in-python/ http://www.mblondel.org/journal/2010/06/13/lsa-and-plsa-in-python/]<br />
=====pydicom=====<br />
=====pygments=====<br />
=====PyMF=====<br />
:Python Matrix Factorization Module. Failed its tests though.<br />
<br />
:[http://pymf.googlecode.com http://pymf.googlecode.com]<br />
=====pypr=====<br />
=====PyQt4=====<br />
=====pytz=====<br />
=====pywt=====<br />
=====pyximport=====<br />
=====scikits=====<br />
=====scipy=====<br />
=====sklearn=====<br />
=====sparsesvd=====<br />
:Singular Value Decomposition. Passed both tests.<br />
<br />
:[http://pypi.python.org/pypi/sparsesvd http://pypi.python.org/pypi/sparsesvd]<br />
=====sphinx=====<br />
=====sympy=====<br />
=====traits=====<br />
=====virtualenv=====<br />
=====xcbgen=====</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Software_Tools&diff=2469Hoffman2:Software Tools2014-03-06T19:58:26Z<p>Elau: Updating versions listed and changing to table layout.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
There is an FMRI user group on Hoffman2 maintained for groups doing neuroimaging work at UCLA. Tools like FSL, FreeSurfer, AFNI, and Nibabel are maintained for this group separately from the standard Hoffman2 software. To take advantage of these tools, you need to set up your bash profile [[Hoffman2:Profile|properly]].<br />
<br />
Below is a list of the available software tools. We will do our best to update it in real time.<br />
<br />
<br />
==List of Tools==<br />
Under Construction...<br />
===FSL===<br />
[http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/ Official Website]<br />
<br />
[http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/WhatsNew Revision History]<br />
{| class="wikitable" <br />
! class="ccn-table-header"| Version Number<br />
! class="ccn-table-header"| Install Date<br />
|- class="ccn-table-odd"<br />
| 5.0.6 || 2013.12.18<br />
|- class="ccn-table-even"<br />
| 5.0.5 || 2013.10.17<br />
|-class="ccn-table-odd"<br />
| 5.0.4 || 2013.06.18<br />
|- class="ccn-table-even"<br />
| 5.0.2 || 2013.02.19<br />
|- class="ccn-table-odd"<br />
| 5.0.1 || 2012.10.01<br />
|- class="ccn-table-even"<br />
| 5.0.0 || 2012.09.14<br />
|- class="ccn-table-odd"<br />
| 4.1.9 || 2011.12.01<br />
|- class="ccn-table-even"<br />
| 4.1.8 || circa 2011.06<br />
|- class="ccn-table-odd"<br />
| 4.1.7 || circa 2011.11<br />
|- class="ccn-table-even"<br />
| 4.1.4 || circa 2009<br />
|- class="ccn-table-odd"<br />
| 4.1.3 || circa 2009<br />
|- class="ccn-table-even"<br />
| 4.1.1 || circa 2008<br />
|- class="ccn-table-odd"<br />
| 4.1.0 || circa 2008<br />
|- class="ccn-table-even"<br />
| 4.0.4 || circa 2008<br />
|}<br />
<br />
===FreeSurfer===<br />
[http://surfer.nmr.mgh.harvard.edu/ Official Website]<br />
<br />
[http://freesurfer.net/fswiki/ReleaseNotes Release Notes]<br />
<br />
{| class="wikitable" <br />
! class="ccn-table-header" | Version<br />
! class="ccn-table-header" | Install Date<br />
|- class="ccn-table-odd" <br />
| 5.3.0 || 2013.06.18<br />
|- class="ccn-table-even" <br />
| 5.2.0 || 2013.03.27<br />
|- class="ccn-table-odd" <br />
| 5.1.0 || 2011.11.14<br />
|- class="ccn-table-even" <br />
| 5.0.0 || circa 2010<br />
|- class="ccn-table-odd" <br />
| 4.4.0 || circa 2009<br />
|- class="ccn-table-even" <br />
| 4.0.5 || circa 2008<br />
|}<br />
<br />
===AFNI===<br />
[http://afni.nimh.nih.gov/afni/ Official Website]<br />
*vAFNI_2011_12_21_1014<br />
**Install Date: 2012.03<br />
<br />
===Chronux===<br />
[http://www.chronux.org Official Website]<br />
*v2.10<br />
**Install Date: 2013.02.26<br />
<br />
===EEGLAB===<br />
[http://sccn.ucsd.edu/eeglab/ Official Website]<br />
*v12.0.0.0b<br />
**Install Date: 2012.12.10<br />
*v11.0.0.0b<br />
**Install Date: 2012.02.21<br />
*v10.2.5.8b<br />
**Install Date: 2012.02.21<br />
<br />
===Caret===<br />
[http://brainvis.wustl.edu/wiki/index.php/Caret:About Official Website]<br />
*v5.65 2012.01.27<br />
**Install Date: 2013.07.15<br />
***Not folded into main profile yet.<br />
<br />
===GCC===<br />
===LAPACK===<br />
===BLAS===<br />
===GLIB===<br />
===C++===<br />
===CMake===<br />
===CPACK===<br />
===MPI Kmeans===<br />
See this website for how to cite using the MPI Kmeans tool.<br />
[http://mloss.org/software/view/48/]<br />
<br />
===Python2.7===<br />
====Packages====<br />
=====CVXOPT=====<br />
=====Cython=====<br />
=====Gnuplot=====<br />
=====IPython=====<br />
=====matplotlib=====<br />
=====nibabel=====<br />
=====nifti=====<br />
=====nimfa=====<br />
:Non-negative Matrix Factorization<br />
:<br />
:[http://nimfa.biolab.si/ http://nimfa.biolab.si/]<br />
=====nipype=====<br />
=====nose=====<br />
=====numpy=====<br />
=====(p)lsa=====<br />
:(probabilistic) Latent Semantic Analysis. Failed its tests.py though.<br />
<br />
:[http://www.mblondel.org/journal/2010/06/13/lsa-and-plsa-in-python/ http://www.mblondel.org/journal/2010/06/13/lsa-and-plsa-in-python/]<br />
=====pydicom=====<br />
=====pygments=====<br />
=====PyMF=====<br />
:Python Matrix Factorization Module. Failed its tests though.<br />
<br />
:[http://pymf.googlecode.com http://pymf.googlecode.com]<br />
=====pypr=====<br />
=====PyQt4=====<br />
=====pytz=====<br />
=====pywt=====<br />
=====pyximport=====<br />
=====scikits=====<br />
=====scipy=====<br />
=====sklearn=====<br />
=====sparsesvd=====<br />
:Singular Value Decomposition. Passed both tests.<br />
<br />
:[http://pypi.python.org/pypi/sparsesvd http://pypi.python.org/pypi/sparsesvd]<br />
=====sphinx=====<br />
=====sympy=====<br />
=====traits=====<br />
=====virtualenv=====<br />
=====xcbgen=====</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Profile&diff=886Hoffman2:Profile2013-12-11T00:21:07Z<p>Elau: Modifying example BASH profile for qrsh fix when consolidating job output files.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
In UNIX systems, certain configuration files are executed every time you log in. If you are using the Bash shell (the default), you have a file called <code>.bash_profile</code> which is processed when you log in. To make the FMRI toolset available to you on Hoffman2, and so you can work well with others, we recommend that you follow the instructions in the [[Hoffman2:Profile#Basics|Basics section]]. Read [[Hoffman2:Profile#Extras|Extras]] for some bells and whistles.<br />
<br />
<br />
==Basics==<br />
Your account needs one last edit before it is usable.<br />
<br />
# [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]]<br />
# Use your favorite [[Text Editors|text editor]] to edit the file <code>~/.bash_profile</code><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <pre>$ vim ~/.bash_profile</pre><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]<br />
#:* <pre>$ emacs ~/.bash_profile</pre><br />
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]<br />
#:* <pre>$ nedit ~/.bash_profile</pre><br />
# Insert these lines at the '''bottom''' of the file<br />
#:* <pre>source /u/home/FMRI/apps/etc/profile&#10;umask 007</pre><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* Type <code>G</code> - capital G - to go to the end of the file<br />
#:* Type <code>A</code> - capital A - to go to the end of the line and enter insert mode<br />
#:* Type <code>ENTER</code> - to insert a newline<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines.<br />
# Save the file<br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <code>ESC + ":wq" + ENTER</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]<br />
#:* <code>CTRL+x, CTRL+c</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]<br />
#:* <code>CTRL+x, CTRL+c, y</code><br />
#:* or use the menu system<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the menu.<br />
# Log out of Hoffman2 and the next time you log in, everything will be set for you to start working.<br />
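If you prefer the command line to an editor, the same two lines can be appended in one step (a sketch using standard shell tools; it assumes your login shell reads <code>~/.bash_profile</code>):<br />

```shell
# Append the two profile lines to ~/.bash_profile in one step
# (equivalent to the editor-based instructions above).
profile="$HOME/.bash_profile"
printf '%s\n' 'source /u/home/FMRI/apps/etc/profile' 'umask 007' >> "$profile"
# Show what was appended
tail -n 2 "$profile"
```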
<br />
<br />
===Curious?===<br />
For those who care: you are asking the shell to execute the file<br />
 /u/home/FMRI/apps/etc/profile<br />
every time you log in. This file modifies your PATH variable so you have access to the FMRI toolset.<br />
<br />
The last line<br />
umask 007<br />
ensures that files and directories you create do not allow anyone outside your group to read, write, or execute them. Note that it only removes permissions from others; it does not automatically grant read, write, and execute privileges to you and your group.<br />
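As a concrete illustration (a throwaway example using standard shell tools, not part of the Hoffman2 setup), with <code>umask 007</code> in effect a newly created file comes out as <code>rw-rw----</code>:<br />

```shell
# With umask 007, the default file mode 666 becomes 666 & ~007 = 660,
# i.e. read/write for user and group, nothing for others.
umask 007
tmpdir=$(mktemp -d)
touch "$tmpdir/example.txt"
ls -l "$tmpdir/example.txt" | cut -c1-10   # -rw-rw----
rm -r "$tmpdir"
```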
<br />
<br />
<br />
==Extras==<br />
===Collaboration===<br />
By default, files and directories you create will not necessarily have permissions that allow your group to write to them. This can be a problem if other people are supposed to build on data you processed. We have a script ([[Hoffman2:Scripts:fix_perms.sh |fix_perms.sh]]) that finds any files you own in a specified directory that lack group read/write/execute permissions and grants them.<br />
<br />
You can build this script into your bash profile so that every time you log into Hoffman2, it will run in the background. It is also recommended that you run this script at the end of jobs to make results immediately available to collaborators.<br />
<br />
Adding the line<br />
fix_perms.sh -q /u/home/[GROUP]/data &<br />
to the end of your bash profile will run the permission fixer on your group's common data directory in the background quietly each time you log in. '''Make sure to replace [GROUP] with the name of your Hoffman2 group (e.g. mscohen, sbook, cbearden, laltshul, jfeusner or mgreen).'''<br />
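The fix_perms.sh script itself is specific to Hoffman2, but the core idea can be sketched with standard <code>find</code> and <code>chmod</code> (a hypothetical illustration run on a throwaway directory, not the actual script):<br />

```shell
# Hypothetical sketch of a fix_perms-style pass (NOT the real fix_perms.sh):
# grant the group rw on files you own and rwx on directories you own.
target=$(mktemp -d)                      # stand-in for your group's data dir
touch "$target/result.txt"
chmod 700 "$target"
chmod 600 "$target/result.txt"           # group currently locked out
find "$target" -user "$(id -un)" -type f -exec chmod g+rw {} +
find "$target" -user "$(id -un)" -type d -exec chmod g+rwx {} +
ls -l "$target/result.txt" | cut -c1-10  # -rw-rw----
rm -r "$target"
```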
<br />
<br />
===Colors===<br />
You can change the content and color of your command prompt by editing your bash_profile. There is a great explanation of how to do this [http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html here].<br />
<br />
Some of the content you can include in the command prompt:<br />
;Current time<br />
: You can format this however you want. This helps when looking back through your Terminal to find when you made certain changes to files.<br />
;Current working directory<br />
: So you always know where you are in a filesystem and don't need to constantly retype <code>pwd</code>.<br />
;Username<br />
: Who you are. Helpful if you are logged into multiple servers under multiple accounts and need help keeping track.<br />
;Host<br />
: The name of the computer you are logged into. This also helps you know where you are at all times.<br />
<br />
Line to add to your bash profile<br />
export PS1="\[\e[0;31m\]\h\[\e[1;37m\]:\[\e[1;34m\]\w\n\[\e[1;37m\]\D{%Y-%m-%d-%H-%M-%S} \[\e[22;32m\]\u \$ "<br />
Resulting prompt (on a black background)<br/><br />
<code style="background:#000000; padding:5pt"><span style="color:#FF0000">HOST</span><span style="color:#000000">:</span><span style="color:#0000FF">CURRENT WORKING DIRECTORY</span><br/><br />
<span style="color:#FFFFFF"> DATETIME IN ISO8601 FORMAT</span> <span style="color:#00FF00">USERNAME $</span></code><br />
<br />
<br />
<br />
==Example Bash Profile==<br />
<nowiki>#.bash_profile<br />
<br />
# Get the aliases and functions<br />
if [ -f ~/.bashrc ]; then<br />
. ~/.bashrc<br />
fi<br />
<br />
# Source to use FMRI Apps<br />
source /u/home/FMRI/apps/etc/profile<br />
<br />
# Umask (Revoke Permissions)<br />
umask 007<br />
<br />
# Collaborative permissions (Replace collabDirectory with your project Directory)<br />
fix_perms.sh -q /u/home/sbook/data/collabDirectory &<br />
<br />
# Happy Colors<br />
export PS1="\[\e[0;31m\]\h\[\e[1;37m\]:\[\e[1;34m\]\w\n\[\e[1;37m\]\D{%Y-%m-%d-%H-%M-%S} \[\e[22;32m\]\u \$ "<br />
<br />
# Fix for QRSH when consolidating job output files<br />
alias qrsh='qrsh -o /dev/null'<br />
</nowiki><br />
<br />
==External Links==<br />
*[http://ss64.com/bash/period.html Explanation of source]<br />
*[http://linux.die.net/man/2/umask Man for umask]<br />
*[http://www.cyberciti.biz/tips/understanding-linux-unix-umask-value-usage.html Better explanation of umask]<br />
*[http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html Coloration]<br />
*[http://en.wikipedia.org/wiki/ISO_8601 ISO 8601 Datetime format]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Profile&diff=884Hoffman2:Profile2013-09-24T21:12:26Z<p>Elau: Forgot a double quote at the end of the Happy Colors bit.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
In UNIX systems, certain configuration files are executed every time you log in. If you are using the Bash shell (the default), you have a file called <code>.bash_profile</code> which is processed when you log in. To make the FMRI toolset available to you on Hoffman2, and so you can work well with others, we recommend that you follow the instructions in the [[Hoffman2:Profile#Basics|Basics section]]. Read [[Hoffman2:Profile#Extras|Extras]] for some bells and whistles.<br />
<br />
<br />
==Basics==<br />
Your account needs one last edit before it is usable.<br />
<br />
# [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]]<br />
# Use your favorite [[Text Editors|text editor]] to edit the file <code>~/.bash_profile</code><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <pre>$ vim ~/.bash_profile</pre><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]<br />
#:* <pre>$ emacs ~/.bash_profile</pre><br />
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]<br />
#:* <pre>$ nedit ~/.bash_profile</pre><br />
# Insert these lines at the '''bottom''' of the file<br />
#:* <pre>source /u/home/FMRI/apps/etc/profile&#10;umask 007</pre><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* Type <code>G</code> - capital G - to go to the end of the file<br />
#:* Type <code>A</code> - capital A - to go to the end of the line and enter insert mode<br />
#:* Type <code>ENTER</code> - to insert a newline<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines.<br />
# Save the file<br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <code>ESC + ":wq" + ENTER</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]<br />
#:* <code>CTRL+x, CTRL+c</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]<br />
#:* <code>CTRL+x, CTRL+c, y</code><br />
#:* or use the menu system<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the menu.<br />
# Log out of Hoffman2 and the next time you log in, everything will be set for you to start working.<br />
<br />
<br />
===Curious?===<br />
For those who care: you are asking the shell to execute the file<br />
 /u/home/FMRI/apps/etc/profile<br />
every time you log in. This file modifies your PATH variable so you have access to the FMRI toolset.<br />
<br />
The last line<br />
umask 007<br />
ensures that files and directories you create do not allow anyone outside your group to read, write, or execute them. Note that it only removes permissions from others; it does not automatically grant read, write, and execute privileges to you and your group.<br />
<br />
<br />
<br />
==Extras==<br />
===Collaboration===<br />
By default, files and directories you create will not necessarily have permissions that allow your group to write to them. This can be a problem if other people are supposed to build on data you processed. We have a script ([[Hoffman2:Scripts:fix_perms.sh |fix_perms.sh]]) that finds any files you own in a specified directory that lack group read/write/execute permissions and grants them.<br />
<br />
You can build this script into your bash profile so that every time you log into Hoffman2, it will run in the background. It is also recommended that you run this script at the end of jobs to make results immediately available to collaborators.<br />
<br />
Adding the line<br />
fix_perms.sh -q /u/home/[GROUP]/data &<br />
to the end of your bash profile will run the permission fixer on your group's common data directory in the background quietly each time you log in. '''Make sure to replace [GROUP] with the name of your Hoffman2 group (e.g. mscohen, sbook, cbearden, laltshul, jfeusner or mgreen).'''<br />
<br />
<br />
===Colors===<br />
You can change the content and color of your command prompt by editing your bash_profile. There is a great explanation of how to do this [http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html here].<br />
<br />
Some of the content you can include in the command prompt:<br />
;Current time<br />
: You can format this however you want. This helps when looking back through your Terminal to find when you made certain changes to files.<br />
;Current working directory<br />
: So you always know where you are in a filesystem and don't need to constantly retype <code>pwd</code>.<br />
;Username<br />
: Who you are. Helpful if you are logged into multiple servers under multiple accounts and need help keeping track.<br />
;Host<br />
: The name of the computer you are logged into. This also helps you know where you are at all times.<br />
<br />
Line to add to your bash profile<br />
export PS1="\[\e[0;31m\]\h\[\e[1;37m\]:\[\e[1;34m\]\w\n\[\e[1;37m\]\D{%Y-%m-%d-%H-%M-%S} \[\e[22;32m\]\u \$ "<br />
Resulting prompt (on a black background)<br/><br />
<code style="background:#000000; padding:5pt"><span style="color:#FF0000">HOST</span><span style="color:#000000">:</span><span style="color:#0000FF">CURRENT WORKING DIRECTORY</span><br/><br />
<span style="color:#FFFFFF"> DATETIME IN ISO8601 FORMAT</span> <span style="color:#00FF00">USERNAME $</span></code><br />
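For reference, here is the same PS1 line again with the escape sequences annotated (a commented copy of the line above, not a different prompt):<br />

```shell
# Annotated copy of the PS1 line. \[ and \] bracket non-printing color
# codes so bash can compute the prompt's visible length correctly.
export PS1="\[\e[0;31m\]\h\[\e[1;37m\]:\[\e[1;34m\]\w\n\[\e[1;37m\]\D{%Y-%m-%d-%H-%M-%S} \[\e[22;32m\]\u \$ "
# \e[0;31m   red            \h       hostname (short form)
# \e[1;37m   bright white   \w       current working directory
# \e[1;34m   bright blue    \n       literal newline in the prompt
# \e[22;32m  green          \D{...}  strftime-formatted date/time
#                           \u       username;  \$  '$' ('#' for root)
```
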
<br />
<br />
<br />
==Example Bash Profile==<br />
<nowiki>#.bash_profile<br />
<br />
# Get the aliases and functions<br />
if [ -f ~/.bashrc ]; then<br />
. ~/.bashrc<br />
fi<br />
<br />
# Source to use FMRI Apps<br />
source /u/home/FMRI/apps/etc/profile<br />
<br />
# Umask (Revoke Permissions)<br />
umask 007<br />
<br />
# Collaborative permissions<br />
fix_perms.sh -q /u/home/sbook/data/collabDirectory &<br />
<br />
# Happy Colors<br />
export PS1="\[\e[0;31m\]\h\[\e[1;37m\]:\[\e[1;34m\]\w\n\[\e[1;37m\]\D{%Y-%m-%d-%H-%M-%S} \[\e[22;32m\]\u \$ "<br />
</nowiki><br />
<br />
<br />
<br />
==External Links==<br />
*[http://ss64.com/bash/period.html Explanation of source]<br />
*[http://linux.die.net/man/2/umask Man for umask]<br />
*[http://www.cyberciti.biz/tips/understanding-linux-unix-umask-value-usage.html Better explanation of umask]<br />
*[http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html Coloration]<br />
*[http://en.wikipedia.org/wiki/ISO_8601 ISO 8601 Datetime format]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Getting_an_Account&diff=805Hoffman2:Getting an Account2013-09-17T00:14:56Z<p>Elau: </p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
==UCLA Grid Portal==<br />
The UCLA Grid Portal provides access to many clusters hosted at UCLA. By registering for an account, not only do you gain access to the Hoffman2 cluster and any other member clusters by request, but you also gain access to a variety of shared resources including MATLAB, R, Octave, Mathematica, FSL, and many more at UCLA and across the UC system. Yes, the Grid Portal is a multi-university effort spanning from Northern to Southern California.<br />
<br />
You will probably be amazed at the resources you gain simply by signing up for the Grid Portal. For further information, please visit the [https://grid.ucla.edu:9443/gridsphere/gridsphere?cid=home&JavaScript=enabled Grid Portal] home page.<br />
<br />
<br />
<br />
==Requesting Hoffman2 Account==<br />
===What you need===<br />
A UCLA BOL account, available for free to any UCLA staff, student, or faculty member. If you do not have a BOL account, head to the [http://www.bol.ucla.edu/ UCLA BOL] services page. '''Click on "Create UCLA Logon ID" under "UCLA Logon ID Utilities"'''<br />
<br />
===Applying for the Account===<br />
ATTENTION: If you are a member of another lab or are a PI interested in obtaining Hoffman access, please see the section<br />
[[#Becoming A Faculty Sponsor | Becoming a Faculty Sponsor]]<br />
#Navigate to the [http://hpc.ucla.edu/hoffman2/getting-started/getting-started.php Account Applications] page.<br />
#Read over the application summary<br />
#Click "New User Registration"<br />
#Authenticate using your UCLA BOL username and password<br />
#Fill out the form with appropriate information. Please read the information at the top of the form to familiarize yourself with the process of gaining an account. For Hoffman2, your Faculty Sponsor should be Mark Cohen, Alison Burggren (for Susan Bookheimer's lab), or your respective PI if they have created an account on Hoffman.<br />
;Proposed UserName<br />
:This will be the name you use to sign into the cluster<br />
;Select a Resource<br />
:For the Mark Cohen/Susan Bookheimer labs, choose "Hoffman2". However, you can request access to any cluster that is a member of the Grid Portal. <br />
<br />
Click Submit and you are done.<br />
<br />
==Becoming A Faculty Sponsor==<br />
If you are a PI or Lab Manager interested in the Hoffman2 server, you will want to create a Faculty Sponsor account first. Also, if you are a member of another lab collaborating with the Cohen or Bookheimer labs, you may want to forward this information to your PI or Lab Manager. Faculty Sponsors can approve (or deny) applications for membership to their group. They also receive a group folder and a unique group id so their users can work and share data easily with each other.<br />
<br />
#Navigate to the [http://www.ats.ucla.edu/clusters/common/account_applications/cluster.htm Account Applications] page.<br />
#Read over the application summary<br />
#Click "Request to become faculty sponsor"<br />
#Fill out the form with appropriate information.<br />
<br />
Under 'Reason', almost any generic reason is appropriate for faculty members. For example, "To perform fMRI analysis" will likely suffice.<br />
<br />
<br />
<br />
==External Links==<br />
*[https://grid.ucla.edu:9443/gridsphere/gridsphere?cid=home&JavaScript=enabled UCLA Grid Portal]<br />
*[http://www.bol.ucla.edu/ UCLA BOL Home Page]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/ Hoffman2 Home Page]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Interactive_Sessions&diff=813Hoffman2:Interactive Sessions2013-09-11T23:33:20Z<p>Elau: Added info about RAM requests.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
Interactive sessions on Hoffman2 let you have access to a computing node for up to 24 hours. This is ideal for:<br />
* running an intensive program like MATLAB (in fact that's how it [[Hoffman2:MATLAB|works]]), [[Hoffman2:WEKA|WEKA]], [[Hoffman2:R|R]], or FSLView<br />
* debugging a script you will be submitting to the queue later<br />
* moving/tar'ing/untar'ing lots of files<br />
* any other computing or graphics intensive operations<br />
since you aren't supposed to use the login nodes for such heavy lifting.<br />
<br />
<br />
<br />
==Basic Command==<br />
To get one, you need to use the <code>qrsh</code> command with the <code>-l i</code> flag.<br />
<br />
For example<br />
$ qrsh -l i<br />
will try to get you an interactive node. That dash-elle flag followed by the "i" is specifying that you want an interactive resource.<br />
<br />
Because you didn't specify a time limit, this session will only last two hours, after which you will be kicked off of the interactive node and back to a login node.<br />
<br />
And because you didn't specify a memory limit, as of September 2013 job memory enforcement is strict so you will be kicked off if you cross ATS's default memory limit. As of 2013.09.09 this default was 1GB.<br />
<br />
And if all the interactive nodes are busy (there are only so many of them), then you will be told it was unable to secure one for you.<br />
<br />
If you successfully get a node, your prompt will change from something like<br />
[joebruin@login4 ~] $<br />
to something like<br />
[joebruin@n1234 ~] $<br />
indicating you are on node 1234.<br />
<br />
<br />
==Longer Time==<br />
If you want to specify a time limit for your interactive session (anything less than 24 hours), use the resource flag again and specify time in the HH:MM:SS format.<br />
<br />
For example<br />
$ qrsh -l i,h_rt=4:00:00<br />
will try securing an interactive node for four hours with the default amount of RAM, but if they are all taken you will be kindly told you are out of luck.<br />
<br />
<br />
<br />
==More Memory==<br />
Doing something memory intensive? Like working with a lot of visualizations or multiple datasets? Use the resource flag again and add a memory request with <code>h_data</code>.<br />
<br />
For example<br />
$ qrsh -l i,h_rt=4:00:00,h_data=4G<br />
will try securing an interactive node for four hours with four gigabytes of RAM, but if no such node is available the cluster will deny your request.<br />
<br />
<br />
<br />
==Now!==<br />
If you absolutely need an interactive session now and can't take no for an answer, use a special flag <code>-now no</code>.<br />
<br />
For example<br />
$ qrsh -l i,h_rt=4:00:00 -now no<br />
will try securing an interactive node for four hours with the default amount of RAM. But if all of the interactive nodes are used up, it will put you in a queue waiting for one until you get it.<br />
<br />
<br />
<br />
==External Links==<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/sge_qrsh.htm Getting an interactive node]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Submitting_Jobs&diff=972Hoffman2:Submitting Jobs2013-09-11T23:25:43Z<p>Elau: Updating the mem/h_data and time/h_rt information.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
If you remember from [[Hoffman2:Introduction#Sun Grid Engine|Anatomy of the Computing Cluster]], the Sun Grid Engine on Hoffman2 is the scheduler for all computing jobs. It takes your computing job request, considers what resources you are asking for and then puts your job in a line waiting for those resources to become available.<br />
<br />
Ask for a simple 1GB of memory and a single computing core with a short time window, and your job will likely get placed at the front of the line and start running soon if not immediately. And for the vast majority of people, this will be the case.<br />
<br />
Ask for a lot of memory or many computing cores, and your job will get put further back in the line because it will have to wait for more things to become available. If your job needs these types of resources, you are probably at a level where reading this tutorial isn't very helpful.<br />
<br />
Ask for too little RAM or too little time and your job will be killed or end prematurely leaving you with no results to examine.<br />
<br />
Choose wisely.<br />
<br />
So how does one submit a computing job request? You've got some options:<br />
# '''job.q'''<br />
#: Use a simple tool that ATS wrote. It has a menu and walks you through submitting things, but it has been known to omit certain necessary flags.<br />
# '''qsub'''<br />
#: Get under the hood and do it yourself. It can get messy but it can also be faster and you have more flexibility with options.<br />
# '''command files'''<br />
#: You've graduated to a higher level of operations, but we can help you get there with examples of our own command files.<br />
# '''job arrays'''<br />
#: You've got a lot of repetitive tasks to run, these will be your friend.<br />
<br />
<br />
<br />
==Aggregating Output Files==<br />
By default, whenever you submit a job, the standard output and error files get created in whichever directory you submitted the job from unless you tell qsub differently with the "-o" and "-e" arguments. '''This can be very annoying when trying to reduce your file count as output files can be everywhere.'''<br />
<br />
This is how you can avoid running around looking for these files:<br />
# [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]]<br />
# Use your favorite [[Text Editors|text editor]] to edit the file <code>~/.sge_request</code><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <pre>$ vim ~/.sge_request</pre><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]<br />
#:* <pre>$ emacs ~/.sge_request</pre><br />
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]<br />
#:* <pre>$ nedit ~/.sge_request</pre><br />
# Insert this line into the file<br />
#:* <pre>-o $HOME/job-output-files/</pre><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* Type <code>A</code> - capital A - to go to the end of the line and enter insert mode<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Type or paste in the specified lines.<br />
# Save the file<br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <code>ESC + ":wq" + ENTER</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]<br />
#:* <code>CTRL+x, CTRL+c</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]<br />
#:* <code>CTRL+x, CTRL+c, y</code><br />
#:* or use the menu system<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the menu.<br />
# Now use the following command to create the special directory that will receive all of the output and error files for the jobs you run.<br />
#: <pre>mkdir ~/job-output-files</pre><br />
# Make an edit to your ~/.bash_profile so that you can run [[Hoffman2:Interactive Sessions]] without a problem<br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <pre>$ vim ~/.bash_profile</pre><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]<br />
#:* <pre>$ emacs ~/.bash_profile</pre><br />
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]<br />
#:* <pre>$ nedit ~/.bash_profile</pre><br />
# Insert this line at the '''bottom''' of the file<br />
#:* <pre>alias qrsh='qrsh -o /dev/null'</pre><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* Type <code>G</code> - capital G - to go to the end of the file<br />
#:* Type <code>A</code> - capital A - to go to the end of the line and enter insert mode<br />
#:* Type <code>ENTER</code> - to insert a newline<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines.<br />
# Save the file<br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <code>ESC + ":wq" + ENTER</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]<br />
#:* <code>CTRL+x, CTRL+c</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]<br />
#:* <code>CTRL+x, CTRL+c, y</code><br />
#:* or use the menu system<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the menu.<br />
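If you prefer to do the whole thing non-interactively, the editor steps above amount to the three commands below. The sketch points everything at a scratch directory standing in for your home directory so nothing real is modified; on Hoffman2 you would use <code>~</code> instead:<br />

```shell
# Non-interactive sketch of the edits above, run against a scratch
# directory standing in for $HOME so nothing real is touched.
fake_home=$(mktemp -d)
mkdir -p "$fake_home/job-output-files"
printf '%s\n' '-o $HOME/job-output-files/' >> "$fake_home/.sge_request"
printf '%s\n' "alias qrsh='qrsh -o /dev/null'" >> "$fake_home/.bash_profile"
sge_line=$(cat "$fake_home/.sge_request")
echo "$sge_line"
```

Note the single quotes around the <code>.sge_request</code> line: <code>$HOME</code> is meant to be expanded by SGE at job time, not by your shell when writing the file.<br />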
<br />
<br />
<br />
==job.q==<br />
Once you've identified or written a script you'd like to run, [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]] and enter <code>job.q</code>. Then it is just a matter of following its step-by-step instructions.<br />
<br />
From the tool's main menu, you can type ''Info'' to read up about how to use it and we highly encourage you to do so.<br />
<br />
But we know patience is a virtue that most of us aren't blessed with. So we'll walk you through submitting a basic job so you can hit the ground running.<br />
<br />
===Example===<br />
# Once on Hoffman2, you'll need to edit one file so pull out your favorite [[Text Editors|text editor]] and edit the file<br />
#: <pre>~/.queuerc</pre><br />
# Add the line<br />
#: <pre>set qqodir = ~/job-output</pre><br />
# You've just set the default directory where your job command files will be created. Save the configuration file and close your text editor.<br />
# Make that directory using the command<br />
#: <pre>$ mkdir ~/job-output</pre><br />
# Now execute<br />
#:<pre>$ job.q</pre><br />
# Press enter to acknowledge the message about some files that get created (READ IT FIRST THOUGH).<br />
# Type ''Build <ENTER>'' to begin creating an SGE command file.<br />
# The program now asks you which script you'd like to run, enter the following text to use our example script<br />
#: <pre>/u/home/FMRI/apps/examples/qsub/gather.sh</pre><br />
# The program now asks how much memory the job will need (in [http://en.wikipedia.org/wiki/Megabyte Megabytes]). This script is really simple, so let's go with the minimum and enter ''64''.<br />
# The program now asks how long will the job take (in hours). Go with the minimum 1 hour; it will complete in much less than this.<br />
# The program now asks if your job should be limited to only your resource group's cores. Answer ''n'' because you do not need to be limiting yourself here and the job is not going to be running for more than 24 hours.<br />
# Soon, the program will tell you that ''gather.sh.cmd'' has been built and saved.<br />
# When it asks you if you would like to submit your job, say no. Then type ''Quit <ENTER>'' to leave the program.<br />
# Now you should be able to run<br />
#: <pre>ls ~/job-output</pre><br />
#: and see ''gather.sh.cmd''. This file will stay there until you delete it and can be run over and over again. Making a command file like this is especially useful if there is a task you'll be running repeatedly on Hoffman2. But if this is something you only need to run once, you should delete the file so you don't needlessly approach your [[Hoffman2:Quotas|quota]].<br />
# The time has come to actually run the program (thought we'd never get to that, didn't you?). Type<br />
#: <pre>qsub job-output/gather.sh.cmd</pre><br />
#: and after hitting enter, a message similar to this will pop up:<br />
#: <pre>Your job 1882940 ("gather.sh.cmd") has been submitted</pre><br />
#: where the number is your JobID, a unique numerical identifier for the computer job you have submitted to the queue.<br />
# Now you can check if the job has finished running by doing<br />
#: <pre>ls ~/job-output</pre><br />
# When two files named ''gather.sh.output.[JOBID]'' and ''gather.sh.joblog.[JOBID]'' (where JOBID is your job's unique identifier) appear, your job has run.<br />
#: ''gather.sh.output.[JOBID]''<br />
#:: This file has all the standard output generated by your script. In this case it will just have the line<br />
#::: ''Standard output would appear here.''<br />
#: ''gather.sh.joblog.[JOBID]''<br />
#:: This file has all the details about when, where, and how your job was processed. Useful information if you are going to be running this job over and over and need to fine tune the resources it uses.<br />
# Better ways of checking on your job can be found [[Hoffman2:Monitoring Jobs|here]].<br />
# The script you ran is an aggregator. It looks in a list of directories, each assumed to contain a specifically named file, and gathers the contents of each of those files into one central file in your home directory. This file is named ''gather-[TIMESTAMP].txt'' where TIMESTAMP is when the script was run and follows [http://en.wikipedia.org/wiki/ISO_8601 ISO 8601] style encoding. You are encouraged to type<br />
#: <pre>/u/home/FMRI/apps/examples/qsub/gather.sh -h</pre><br />
#: or<br />
#: <pre>/u/home/FMRI/apps/examples/qsub/gather.sh --help</pre><br />
#: to see how this script works.<br />
# Finally, go check the inbox of the email you used to sign up for your Hoffman2 account. There will be two emails from "root@mail.hoffman2.idre.ucla.edu" that indicate when the job was started and when the job was completed. This is one of the neat features of the queue so that you can be alerted about the progress of your job without having to stay logged into Hoffman2 and checking on it constantly.<br />
<br />
<br />
<br />
==qsub==<br />
Everything that job.q did can be done on the command line. And it can be done better.<br />
<br />
===Example===<br />
Run the command:<br />
$ qsub -cwd -V -N J1 -l h_data=64M,express,h_rt=00:05:00 -M eplau -m bea /u/home/FMRI/apps/examples/qsub/gather.sh<br />
<br />
And something like the following will be printed out:<br />
Your job 1875395 ("J1") has been submitted<br />
<br />
Where the number is your JOBID, a unique numerical identifier for your job.<br />
<br />
Let's break down the arguments in that command.<br />
<br />
;<code>-cwd</code><br />
: Change working directory<br />
: When your script runs, change the working directory to where you currently are in the filesystem.<br />
:: e.g. If you were in the directory /u/home/mscohen/data/ when you ran the command, the queue will change directories to that location and then execute the script you gave it. This means output and error files will be placed here for that job.<br />
<br />
;<code>-V</code><br />
: Export environment variables<br />
: Exports all the environment variables to the context of the job. Useful if you have extra environment variables that are needed in your script.<br />
:: e.g. If you had defined the variable SUBJECT_ID in your session on Hoffman2 (<code>export SUBJECT_ID=42</code>) before submitting a job and that variable was called on by your script, then you would need to use this flag. Tools like FreeSurfer look for certain environment variables to be set.<br />
<br />
;<code>-N J1</code><br />
: Name my job<br />
: Names your job "J1." When you [[Hoffman2:Monitoring Jobs#qstat|look at the queue]], this will be the text that shows up in the "name" column. This will also be the beginning of the output (<code>J1.o[JOBID]</code>) and error (<code>J1.e[JOBID]</code>) files for your job.<br />
<br />
;<code>-l h_data=64M,express,h_rt=00:05:00</code><br />
: Resource allocation (that's a lower case "elle")<br />
: This is the resources flag meaning that the text immediately after it will ask for things like:<br />
:* certain amount of memory, in [http://en.wikipedia.org/wiki/Megabyte Megabytes], or [http://en.wikipedia.org/wiki/Gigabyte Gigabytes]<br />
:** h_data=64M (64 MB RAM) or h_data=1G (1 GB RAM)<br />
:** "mem" no longer works <br />
: In this case, our demands for RAM are really low, so we are requesting only 64MB.<br />
: '''Edit (2013.09)''' - If your job uses more RAM than it requested, your job WILL be killed in order to avoid it hurting other jobs running on the same node. It is imperative that you set this RAM request properly.<br />
:* certain length of computing time, in the form HH:MM:SS<br />
:** h_rt=00:05:00 or<br />
:** time=00:05:00<br />
: In this case the script will complete its task rapidly, hence we are only asking for 5 minutes of computing time.<br />
:* queue type, only a few options here<br />
:** [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#express express]<br />
:**: Time limit of 2 hours, and it tends to be overloaded so it isn't recommended<br />
:** [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#highp highp]<br />
:**: Job length maximum of 14 days but can only be run on nodes belonging to your resource group (type <code>mygroup</code> to see what type of resources you have available). If you are in the mscohen or sbook usergroups on Hoffman2, you have access to some of these highp nodes.<br />
:** [blank] (nothing, nada, zilch)<br />
:**: [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#day Standard queue], which has a maximum job length of 24 hours<br />
: In this case, we are asking to be put on the express queue since this is such a short job, but the standard queue would have worked just as well if not better.<br />
<br />
;<code>-M eplau</code><br />
: Define mailing list<br />
: This defines the list of users that will be mailed if email updates are requested. The default address is that of the job-owner, but multiple emails can be specified using a comma separated list.<br />
:: e.g. In this case, the email will be sent to the address on file for the user "eplau"<br />
<br />
;<code>-m bea</code><br />
: Define mailing rules<br />
: This defines when Hoffman2 should email you about your job. There are five options here<br />
:* b - when the job begins<br />
:* e - when the job ends<br />
:* a - when the job is aborted<br />
:* s - when the job is suspended<br />
:* n - never<br />
: The first four can be used in any combination, but the last obviously nullifies the others.<br />
<br />
There are many other flags that you could use, but these are the basics that will get you through most of your computing. Feel free to explore the others in the [http://www.ats.ucla.edu/clusters/common/computing/batch/man_submit.htm qsub Man page].<br />
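If you find yourself typing the same flags repeatedly, a small shell function can assemble them for you. The sketch below is hypothetical (the name <code>submit_job</code> is made up, and <code>echo</code> stands in for <code>qsub</code> so it runs without SGE; drop the echo on the cluster):<br />

```shell
# Hypothetical helper that assembles the qsub flags discussed above.
# "echo" stands in for qsub so this sketch runs anywhere.
submit_job() {
  local name=$1 mem=$2 runtime=$3 script=$4
  echo qsub -cwd -V -N "$name" \
       -l "h_data=${mem},h_rt=${runtime}" \
       -m ae "$script"
}
cmd=$(submit_job J1 64M 00:05:00 /u/home/FMRI/apps/examples/qsub/gather.sh)
echo "$cmd"
```
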
<br />
<br />
<br />
==Command Files==<br />
Typing accurately can be difficult at times, so why put yourself through the trouble of having to retype the same arguments over and over if you will always be using about the same values? Enter command files.<br />
<br />
You already have experience making a command file (~/job-output/gather.sh.cmd) from when you used the tool <code>job.q</code>. But did you know that you can edit that command file to make changes to how it runs, or write your own?<br />
<br />
The command files generated by <code>job.q</code> are fairly well commented, so if you take a look at them with your favorite [[Text Editors|text editor]] you should be able to change their behavior. For instance, if you go into the command file from the job.q example, find the lines that say<br />
# Notify at beginning and end of job<br />
#$ -m bea<br />
You'll recognize this as the flag controlling when email messages are sent. Go ahead and change it to<br />
# Notify at the end and on abort<br />
#$ -m ae<br />
Now you will receive email only when the job ends or is aborted, not when it begins.<br />
<br />
<br />
<br />
===q.sh===<br />
You could make a generic command file that contains all the basic flags that you care about. We've even got an example ready and available for you at<br />
/u/home/FMRI/apps/examples/qsub/q.sh<br />
The script contents are shown below:<br />
qsub <<CMD<br />
#!/bin/bash<br />
# Use current working directory<br />
#$ -cwd<br />
# Error stream is merged with the standard output<br />
#$ -j y<br />
# Use the bash shell for job execution<br />
#$ -S /bin/bash<br />
# Use your normal environment variables in the job<br />
#$ -V<br />
# Use 1GB of RAM and the main queue, with a maximum of 2 hours computing time<br />
#$ -l h_data=1024M,h_rt=2:00:00<br />
$@<br />
CMD<br />
To use this command file to submit the ''gather.sh'' example script, you would execute the command:<br />
$ q.sh gather.sh<br />
You can do this because if you have [[Hoffman2:Software Tools#Setting Up Your Account to Access the Tools|set up your Bash profile correctly]], they are in your [[Hoffman2:UNIX Tutorial#PATH|Unix PATH variable]]. You can replace ''gather.sh'' with any script you want executed and it will be submitted as a job on the cluster. We recommend that you make your own copy of ''q.sh'' and keep it in your local ''bin'' directory (~/bin) so that you can edit it to suit your needs.<br />
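The heredoc trick in ''q.sh'' is worth understanding on its own: everything between <code><<CMD</code> and the closing <code>CMD</code> is fed to the command's standard input, with variables like <code>$@</code> (the wrapper's arguments) expanded on the way. The miniature below shows the same pattern with <code>cat</code> standing in for <code>qsub</code> so it can be tried anywhere (the name <code>fake_submit</code> is made up):<br />

```shell
# The q.sh pattern in miniature: a heredoc piped into a command.
# cat stands in for qsub so this runs without SGE.
fake_submit() {
  cat <<CMD
#!/bin/bash
#$ -cwd
#$ -l h_data=1024M,h_rt=2:00:00
$@
CMD
}
# $@ expands to the wrapper's arguments, i.e. the script to run:
script_body=$(fake_submit gather.sh)
echo "$script_body"
```
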
<br />
<br />
<br />
==Job Arrays==<br />
There is an SGE qsub argument that allows you to submit multiple jobs in parallel that use the same script. It is<br />
-t lower-upper:interval<br />
where<br />
;<code>lower</code><br />
: is replaced with the starting number<br />
;<code>upper</code><br />
: is replaced with the ending number<br />
;<code>interval</code><br />
: is replaced with the step interval<br />
So adding the argument<br />
-t 10-100:5<br />
will step through the numbers 10, 15, 20, 25, ..., 100 submitting a job for each one.<br />
<br />
In jobs that are called with this flag, there will be an [[Hoffman2:UNIX Tutorial#Environment Variables|environment variable]] called <code>SGE_TASK_ID</code> whose value will be incremented over the range you specified. Each possible value of SGE_TASK_ID will be submitted as its own job, so your work will be parallelized.<br />
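You can preview which SGE_TASK_ID values a given <code>-t</code> range will generate using <code>seq</code>, which takes the same start/step/stop idea (note the argument order differs: seq takes FIRST INCREMENT LAST):<br />

```shell
# Preview the SGE_TASK_ID values that -t 10-100:5 would produce.
# seq takes FIRST INCREMENT LAST, so the step goes in the middle.
ids=$(seq 10 5 100)
echo "$ids" | head -n 3     # first values: 10, 15, 20 (one per line)
echo "$ids" | tail -n 1     # last value: 100
echo "$ids" | wc -l         # 19 values in total
```
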
<br />
<br />
===Examples===<br />
Why would anyone use this? Here are some examples<br />
<br />
====Lots of numbers====<br />
Let's say you have a script, '''myFunc.sh''', that takes one numerical input and computes a bunch of values based on that input. But you need to run <code>myFunc.sh</code> for input values 1 to 100000. One solution would be to write a wrapper script '''myFuncSlowWrapper.sh''' as<br />
#!/bin/bash<br />
# myFuncSlowWrapper.sh<br />
for i in {1..100000};<br />
do<br />
myFunc.sh $i;<br />
done<br />
<br />
The only drawback is that this will take quite a while since all 100000 iterations will be done on a single processor. With job arrays, the computations will be split among many processors and can finish much more quickly. You would instead write a wrapper script called '''myFuncFastWrapper.sh''' as<br />
#!/bin/bash<br />
# myFuncFastWrapper.sh<br />
echo $SGE_TASK_ID<br />
myFunc.sh $SGE_TASK_ID<br />
<br />
And submit it with<br />
qsub -cwd -V -N PJ -l h_data=1024M,express,h_rt=01:00:00 -M eplau -m bea -t 1-100000:1 myFuncFastWrapper.sh<br />
<br />
<br />
====Lots of files====<br />
Let's say you have a script, '''myFunc2.sh''', that takes the name of a file as input and opens that file and runs a bunch of computations on its contents. But you have 100000 such files to process. One solution would be to write a wrapper script '''myFunc2SlowWrapper.sh''' as<br />
#!/bin/bash<br />
# myFunc2SlowWrapper.sh<br />
for FILE in `ls dir/of/files`;<br />
do<br />
myFunc2.sh $FILE<br />
done<br />
<br />
But this will take quite a while since all 100000 iterations will be done on a single processor. With job arrays, the computations will be split among many processors since they are submitted as their own jobs and can finish much more quickly. You could instead create a file that contains a list of all 100000 files that need to be processed and call it '''filesToProcess'''. Then write a wrapper script called '''myFunc2FastWrapper.sh''' as<br />
#!/bin/bash<br />
# myFunc2FastWrapper.sh<br />
echo $SGE_TASK_ID<br />
myFunc2.sh `sed -n ${SGE_TASK_ID}p /path/to/list/of/files`<br />
<br />
where you replace ''/path/to/list/of/files'' with the path to '''filesToProcess'''. The code<br />
`sed -n ${SGE_TASK_ID}p /path/to/list/of/files`<br />
uses <code>sed</code> to grab the ${SGE_TASK_ID}'th line from the file '''/path/to/list/of/files''' and substitutes it in place (thanks to the backticks, which perform command substitution).<br />
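The sed line-grab is easy to test locally. Here is a self-contained run against a throwaway list file (the file names are made up for illustration):<br />

```shell
# Self-contained demo of the sed line-grab used above.
list=$(mktemp)
printf '%s\n' sub-01.nii sub-02.nii sub-03.nii > "$list"
SGE_TASK_ID=2                        # pretend the scheduler set this
picked=$(sed -n "${SGE_TASK_ID}p" "$list")
echo "$picked"                       # -> sub-02.nii
rm -f "$list"
```
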
<br />
Then you'd submit it with<br />
qsub -cwd -V -N PJ -l h_data=1024M,express,h_rt=01:00:00 -M eplau -m bea -t 1-100000:1 myFunc2FastWrapper.sh<br />
<br />
If your files were named regularly with a '-number' at the end (e.g. 'file-1', 'file-2', 'file-3', ... 'file-n'), you could just make '''myFunc2FastWrapperB.sh''' as<br />
#!/bin/bash<br />
# myFunc2FastWrapperB.sh<br />
echo $SGE_TASK_ID<br />
myFunc2.sh file-${SGE_TASK_ID}<br />
and submit it the same way.<br />
<br />
<br />
<br />
==External Links==<br />
*[http://www.ats.ucla.edu/clusters/common/computing/batch/man_submit.htm qsub Man page]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm Hoffman2 Types of Queues]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Submitting_Jobs&diff=971Hoffman2:Submitting Jobs2013-09-06T01:30:57Z<p>Elau: Aggregating job output</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
If you remember from [[Hoffman2:Introduction#Sun Grid Engine|Anatomy of the Computing Cluster]], the Sun Grid Engine on Hoffman2 is the scheduler for all computing jobs. It takes your computing job request, considers what resources you are asking for and then puts your job in a line waiting for those resources to become available.<br />
<br />
Ask for a simple 1GB of memory and a single computing core with a short time window, and your job will likely get placed at the front of the line and start running soon if not immediately. And for the vast majority of people, this will be the case.<br />
<br />
Ask for a lot of memory or many computing cores, and your job will get put further back in the line because it will have to wait for more things to become available. If your job needs these types of resources, you are probably at a level where reading this tutorial isn't very helpful.<br />
<br />
Ask for too little RAM or too little time and your job will be killed or end prematurely leaving you with no results to examine.<br />
<br />
Choose wisely.<br />
<br />
So how does one submit a computing job request? You've got some options:<br />
# '''job.q'''<br />
#: Use a simple tool that ATS wrote. It has a menu and walks you through submitting a job, but it has been known to omit certain necessary flags.<br />
# '''qsub'''<br />
#: Get under the hood and do it yourself. It can get messy but it can also be faster and you have more flexibility with options.<br />
# '''command files'''<br />
#: You've graduated to a higher level of operations, but we can help you get there with examples of our own command files.<br />
# '''job arrays'''<br />
#: You've got a lot of repetitive tasks to run, these will be your friend.<br />
<br />
<br />
<br />
==Aggregating Output Files==<br />
By default, whenever you submit a job, the standard output and error files get created in whichever directory you submitted the job from unless you tell qsub differently with the "-o" and "-e" arguments. '''This can be very annoying when trying to reduce your file count as output files can be everywhere.'''<br />
<br />
This is how you can avoid running around looking for these files:<br />
# [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]]<br />
# Use your favorite [[Text Editors|text editor]] to edit the file <code>~/.sge_request</code><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <pre>$ vim ~/.sge_request</pre><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]<br />
#:* <pre>$ emacs ~/.sge_request</pre><br />
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]<br />
#:* <pre>$ nedit ~/.sge_request</pre><br />
# Insert this line into the file<br />
#:* <pre>-o $HOME/job-output-files/</pre><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* Type <code>A</code> - capital A - to go to the end of the line and enter insert mode<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Type or paste in the specified lines.<br />
# Save the file<br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <code>ESC + ":wq" + ENTER</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]<br />
#:* <code>CTRL+x, CTRL+c</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]<br />
#:* <code>CTRL+x, CTRL+c, y</code><br />
#:* or use the menu system<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the menu.<br />
# Now use the following command to create the special directory that will receive all of the output and error files for the jobs you run.<br />
#: <pre>mkdir ~/job-output-files</pre><br />
# Make an edit to your ~/.bash_profile so that you can run [[Hoffman2:Interactive Sessions]] without a problem<br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <pre>$ vim ~/.bash_profile</pre><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]<br />
#:* <pre>$ emacs ~/.bash_profile</pre><br />
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]<br />
#:* <pre>$ nedit ~/.bash_profile</pre><br />
# Insert this line at the '''bottom''' of the file<br />
#:* <pre>alias qrsh='qrsh -o /dev/null'</pre><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* Type <code>G</code> - capital G - to go to the end of the file<br />
#:* Type <code>A</code> - capital A - to go to the end of the line and enter insert mode<br />
#:* Type <code>ENTER</code> - to insert a newline<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines.<br />
# Save the file<br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <code>ESC + ":wq" + ENTER</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]<br />
#:* <code>CTRL+x, CTRL+c</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]<br />
#:* <code>CTRL+x, CTRL+c, y</code><br />
#:* or use the menu system<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the menu.<br />
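Taken together, the edits above leave one new line in each file. The sketch below shows the end state (the directory name matches the one created above); it is an illustration, not a script to run:<br />

```shell
# ~/.sge_request -- default options applied to every job you submit
-o $HOME/job-output-files/

# ~/.bash_profile -- appended at the bottom so interactive qrsh sessions
# ignore the -o default above
alias qrsh='qrsh -o /dev/null'
```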
<br />
<br />
<br />
==job.q==<br />
Once you've identified or written a script you'd like to run, [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]] and enter <code>job.q</code>. Then it is just a matter of following its step-by-step instructions.<br />
<br />
From the tool's main menu, you can type ''Info'' to read up about how to use it and we highly encourage you to do so.<br />
<br />
But we know patience is a virtue that most of us aren't blessed with. So we'll walk you through submitting a basic job so you can hit the ground running.<br />
<br />
===Example===<br />
# Once on Hoffman2, you'll need to edit one file so pull out your favorite [[Text Editors|text editor]] and edit the file<br />
#: <pre>~/.queuerc</pre><br />
# Add the line<br />
#: <pre>set qqodir = ~/job-output</pre><br />
# You've just set the default directory where your job command files will be created. Save the configuration file and close your text editor.<br />
# Make that directory using the command<br />
#: <pre>$ mkdir ~/job-output</pre><br />
# Now execute<br />
#:<pre>$ job.q</pre><br />
# Press enter to acknowledge the message about some files that get created (READ IT FIRST THOUGH).<br />
# Type ''Build <ENTER>'' to begin creating an SGE command file.<br />
# The program now asks which script you'd like to run; enter the following text to use our example script<br />
#: <pre>/u/home/FMRI/apps/examples/qsub/gather.sh</pre><br />
# The program now asks how much memory the job will need (in [http://en.wikipedia.org/wiki/Megabyte Megabytes]). This script is really simple, so let's go with the minimum and enter ''64''.<br />
# The program now asks how long the job will take (in hours). Go with the minimum of 1 hour; it will complete in far less time than that.<br />
# The program now asks if your job should be limited to only your resource group's cores. Answer ''n'' because you do not need to be limiting yourself here and the job is not going to be running for more than 24 hours.<br />
# Soon, the program will tell you that ''gather.sh.cmd'' has been built and saved.<br />
# When it asks you if you would like to submit your job, say no. Then type ''Quit <ENTER>'' to leave the program.<br />
# Now you should be able to run<br />
#: <pre>ls ~/job-output</pre><br />
#: and see ''gather.sh.cmd''. This file will stay there until you delete it and can be run over and over again. Making a command file like this is especially useful if there is a task you'll be running repeatedly on Hoffman2. But if this is something you only need to run once, you should delete the file so you don't needlessly approach your [[Hoffman2:Quotas|quota]].<br />
# The time has come to actually run the program (you thought we'd never get to that, didn't you?). Type<br />
#: <pre>qsub job-output/gather.sh.cmd</pre><br />
#: and after hitting enter, a message similar to this will pop up:<br />
#: <pre>Your job 1882940 ("gather.sh.cmd") has been submitted</pre><br />
#: where the number is your JobID, a unique numerical identifier for the computer job you have submitted to the queue.<br />
# Now you can check if the job has finished running by doing<br />
#: <pre>ls ~/job-output</pre><br />
# When two files named ''gather.sh.output.[JOBID]'' and ''gather.sh.joblog.[JOBID]'' (where JOBID is your job's unique identifier) appear, your job has run.<br />
#: ''gather.sh.output.[JOBID]''<br />
#:: This file has all the standard output generated by your script. In this case it will just have the line<br />
#::: ''Standard output would appear here.''<br />
#: ''gather.sh.joblog.[JOBID]''<br />
#:: This file has all the details about when, where, and how your job was processed. Useful information if you are going to be running this job over and over and need to fine tune the resources it uses.<br />
# Better ways of checking on your job can be found [[Hoffman2:Monitoring Jobs|here]].<br />
# The script you ran is an aggregator. It looks in a list of directories, each assumed to contain a specifically named file, and gathers the contents of each of those files into one central file in your home directory. This file is named ''gather-[TIMESTAMP].txt'' where TIMESTAMP is when the script was run and follows [http://en.wikipedia.org/wiki/ISO_8601 ISO 8601] style encoding. You are encouraged to type<br />
#: <pre>/u/home/FMRI/apps/examples/qsub/gather.sh -h</pre><br />
#: or<br />
#: <pre>/u/home/FMRI/apps/examples/qsub/gather.sh --help</pre><br />
#: to see how this script works.<br />
# Finally, go check the inbox of the email you used to sign up for your Hoffman2 account. There will be two emails from "root@mail.hoffman2.idre.ucla.edu" that indicate when the job was started and when the job was completed. This is one of the neat features of the queue: you can be alerted about the progress of your job without having to stay logged into Hoffman2 and check on it constantly.<br />
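The aggregation pattern described above can be sketched in a few lines of bash. This toy version is '''not''' the actual gather.sh; it just mimics the idea, with made-up directory and file names, in a scratch area:<br />

```shell
#!/bin/bash
# Toy aggregator in the spirit of gather.sh (not the real script): collect a
# specifically named file from each of several directories into one
# timestamped output file.
WORK=$(mktemp -d)
for d in subj-a subj-b; do                        # made-up directory names
    mkdir -p "$WORK/$d"
    echo "result from $d" > "$WORK/$d/result.txt" # the specifically named file
done
OUT="$WORK/gather-$(date +%Y%m%dT%H%M%S).txt"     # ISO 8601-style timestamp
cat "$WORK"/subj-*/result.txt > "$OUT"            # gather everything
cat "$OUT"
```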
<br />
<br />
<br />
==qsub==<br />
Everything that job.q did can be done on the command line. And it can be done better.<br />
<br />
===Example===<br />
Run the command:<br />
$ qsub -cwd -V -N J1 -l mem=64M,express,time=00:05:00 -M eplau -m bea /u/home/FMRI/apps/examples/qsub/gather.sh<br />
<br />
And something like the following will be printed out:<br />
Your job 1875395 ("J1") has been submitted<br />
<br />
Where the number is your JOBID, a unique numerical identifier for your job.<br />
<br />
Let's break down the arguments in that command.<br />
<br />
;<code>-cwd</code><br />
: Change working directory<br />
: When your script runs, change the working directory to where you currently are in the filesystem.<br />
:: e.g. If you were in the directory /u/home/mscohen/data/ when you ran the command, the queue will change directories to that location and then execute the script you gave it. This means the output and error files for that job will be placed there.<br />
<br />
;<code>-V</code><br />
: Export environment variables<br />
: Exports all the environment variables to the context of the job. Useful if you have extra environment variables that are needed in your script.<br />
:: e.g. If you had defined the variable SUBJECT_ID in your session on Hoffman2 (<code>export SUBJECT_ID=42</code>) before submitting a job and that variable was called on by your script, then you would need to use this flag. Tools like FreeSurfer look for certain environment variables to be set.<br />
<br />
;<code>-N J1</code><br />
: Name my job<br />
: Names your job "J1." When you [[Hoffman2:Monitoring Jobs#qstat|look at the queue]], this will be the text that shows up in the "name" column. This will also be the beginning of the output (<code>J1.o[JOBID]</code>) and error (<code>J1.e[JOBID]</code>) files for your job.<br />
<br />
;<code>-l mem=64M,express,time=00:05:00</code><br />
: Resource allocation (that's a lower case "elle")<br />
: This is the resources flag: the comma-separated text immediately after it requests things like:<br />
:* certain amount of memory, in [http://en.wikipedia.org/wiki/Megabyte Megabytes]<br />
:** mem=64M or<br />
:** h_data=64M<br />
: In this case, our demands for RAM are really low, so we are requesting only 64MB.<br />
: '''Edit (2013.09)''' - If your job uses more RAM than it requested, your job WILL be killed in order to avoid it hurting other jobs running on the same node. It is imperative that you set this RAM request properly.<br />
:* certain length of computing time, in the form HH:MM:SS<br />
:** time=00:05:00 or<br />
:** h_rt=00:05:00<br />
: In this case the script will complete its task rapidly, hence we are only asking for 5 minutes of computing time.<br />
:* queue type, only a few options here<br />
:** [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#express express]<br />
:**: Time limit of 2 hours, and it tends to be overloaded so it isn't recommended<br />
:** [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#highp highp]<br />
:**: Job length maximum of 14 days but can only be run on nodes belonging to your resource group (type <code>mygroup</code> to see what type of resources you have available). If you are in the mscohen or sbook usergroups on Hoffman2, you have access to some of these highp nodes.<br />
:** [blank] (nothing, nada, zilch)<br />
:**: [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#day Standard queue], which has a maximum job length of 24 hours<br />
: In this case, we are asking to be put on the express queue since this is such a short job, but the standard queue would have worked just as well if not better.<br />
<br />
;<code>-M eplau</code><br />
: Define mailing list<br />
: This defines the list of users who will be mailed if email updates are requested. The default address is that of the job owner, but multiple addresses can be specified as a comma-separated list.<br />
:: e.g. In this case, the email will be sent to the address on file for the user "eplau"<br />
<br />
;<code>-m bea</code><br />
: Define mailing rules<br />
: This defines when Hoffman2 should email you about your job. There are five options here<br />
:* b - when the job begins<br />
:* e - when the job ends<br />
:* a - when the job is aborted<br />
:* s - when the job is suspended<br />
:* n - never<br />
: The first four can be used in any combination, but the last obviously nullifies the others.<br />
<br />
There are many other flags that you could use, but these are the basics that will get you through most of your computing. Feel free to explore the others in the [http://www.ats.ucla.edu/clusters/common/computing/batch/man_submit.htm qsub Man page].<br />
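If you find yourself retyping these flags, one low-tech trick is to build the command from variables in a small script, so each flag is spelled out once and is easy to change. This is just a sketch using the example values from this page:<br />

```shell
#!/bin/bash
# Assemble the qsub invocation shown above from named variables
# (values are the examples from this page).
JOB_NAME="J1"
MEM="64M"          # -l mem=...  : RAM request
RUNTIME="00:05:00" # -l time=... : computing-time request, HH:MM:SS
SCRIPT="/u/home/FMRI/apps/examples/qsub/gather.sh"
CMD="qsub -cwd -V -N $JOB_NAME -l mem=$MEM,express,time=$RUNTIME -M eplau -m bea $SCRIPT"
echo "$CMD"        # on Hoffman2 you would submit with: eval "$CMD"
```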
<br />
<br />
<br />
==Command Files==<br />
Typing accurately can be difficult at times, so why put yourself through the trouble of having to retype the same arguments over and over if you will always be using about the same values? Enter command files.<br />
<br />
You already have experience making a command file (~/job-output/gather.sh.cmd) from when you used the tool <code>job.q</code>. But did you know that you can edit that command file to make changes to how it runs, or write your own?<br />
<br />
The command files generated by <code>job.q</code> are fairly well commented, so if you take a look at them with your favorite [[Text Editors|text editor]] you should be able to change their behavior. For instance, open the command file from the job.q example and find the lines that say<br />
# Notify at beginning and end of job<br />
#$ -m bea<br />
You'll recognize this as the flag that controls when email messages are sent. Go ahead and change it to<br />
# Notify at the end and on abort<br />
#$ -m ae<br />
And you will now receive email only when your job ends or is aborted.<br />
<br />
<br />
<br />
===q.sh===<br />
You could make a generic command file that contains all the basic flags that you care about. We've even got an example ready and available for you at<br />
/u/home/FMRI/apps/examples/qsub/q.sh<br />
The script contents are shown below:<br />
qsub <<CMD<br />
#!/bin/bash<br />
# Use current working directory<br />
#$ -cwd<br />
# Error stream is merged with the standard output<br />
#$ -j y<br />
# Use the bash shell for job execution<br />
#$ -S /bin/bash<br />
# Use your normal environment variables in the job<br />
#$ -V<br />
# Use 1GB of RAM and the main queue, with a maximum of 2 hours computing time<br />
#$ -l h_data=1024M,h_rt=2:00:00<br />
$@<br />
CMD<br />
To use this command file to submit the ''gather.sh'' example script, you would execute the command:<br />
$ q.sh gather.sh<br />
You can do this because, if you have [[Hoffman2:Software Tools#Setting Up Your Account to Access the Tools|set up your Bash profile correctly]], these scripts are on your [[Hoffman2:UNIX Tutorial#PATH|Unix PATH]]. You can replace ''gather.sh'' with any script you want executed and it will be submitted as a job on the cluster. We recommend that you make your own copy of ''q.sh'' and keep it in your local ''bin'' directory (~/bin) so that you can edit it to suit your needs.<br />
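The heredoc trick in ''q.sh'' (everything between <<CMD and CMD becomes the submitted script) can be tried anywhere by handing the heredoc to bash instead of qsub. Bash treats the "#$" lines as ordinary comments, while qsub would read them as flags:<br />

```shell
#!/bin/bash
# Stand-in for q.sh: feed a heredoc to bash rather than qsub so it runs on
# any machine. The "#$" directives are inert comments to bash.
OUTPUT=$(bash <<CMD
#$ -cwd
#$ -l h_data=1024M,h_rt=2:00:00
echo "job body runs here"
CMD
)
echo "$OUTPUT"
```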
<br />
<br />
<br />
==Job Arrays==<br />
There is an SGE qsub argument that allows you to submit multiple jobs in parallel that use the same script. It is<br />
-t lower-upper:interval<br />
where<br />
;<code>lower</code><br />
: is replaced with the starting number<br />
;<code>upper</code><br />
: is replaced with the ending number<br />
;<code>interval</code><br />
: is replaced with the step interval<br />
So adding the argument<br />
-t 10-100:5<br />
will step through the numbers 10, 15, 20, 25, ..., 100 submitting a job for each one.<br />
<br />
In jobs that are called with this flag, there will be an [[Hoffman2:UNIX Tutorial#Environment Variables|environment variable]] called <code>SGE_TASK_ID</code> whose value will be incremented over the range you specified. Each possible value of SGE_TASK_ID will be submitted as its own job, so your work will be parallelized.<br />
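You can preview which task IDs a given ''-t'' range produces with <code>seq</code>. The sketch below lists the values from the ''-t 10-100:5'' example; SGE would set SGE_TASK_ID to each of these in a separate job:<br />

```shell
# seq FIRST INCREMENT LAST mirrors -t lower-upper:interval; each printed
# value is an SGE_TASK_ID that would get its own job.
seq 10 5 100
```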
<br />
<br />
===Examples===<br />
Why would anyone use this? Here are some examples<br />
<br />
====Lots of numbers====<br />
Let's say you have a script, '''myFunc.sh''', that takes one numerical input and computes a bunch of values based on that input. But you need to run <code>myFunc.sh</code> for input values 1 to 100000. One solution would be to write a wrapper script '''myFuncSlowWrapper.sh''' as<br />
#!/bin/bash<br />
# myFuncSlowWrapper.sh<br />
for i in {1..100000};<br />
do<br />
myFunc.sh $i;<br />
done<br />
<br />
The only drawback is that this will take quite a while since all 100000 iterations will be done on a single processor. With job arrays, the computations will be split among many processors and can finish much more quickly. You would instead write a wrapper script called '''myFuncFastWrapper.sh''' as<br />
#!/bin/bash<br />
# myFuncFastWrapper.sh<br />
echo $SGE_TASK_ID<br />
myFunc.sh $SGE_TASK_ID<br />
<br />
And submit it with<br />
qsub -cwd -V -N PJ -l mem=1024M,express,time=01:00:00 -M eplau -m bea -t 1-100000:1 myFuncFastWrapper.sh<br />
<br />
<br />
====Lots of files====<br />
Let's say you have a script, '''myFunc2.sh''', that takes the name of a file as input and opens that file and runs a bunch of computations on its contents. But you have 100000 such files to process. One solution would be to write a wrapper script '''myFunc2SlowWrapper.sh''' as<br />
#!/bin/bash<br />
# myFunc2SlowWrapper.sh<br />
for FILE in `ls dir/of/files`;<br />
do<br />
myFunc2.sh $FILE<br />
done<br />
<br />
But this will take quite a while since all 100000 iterations will be done on a single processor. With job arrays, the computations will be split among many processors since they are submitted as their own jobs and can finish much more quickly. You could instead create a file that contains a list of all 100000 files that need to be processed and call it '''filesToProcess'''. Then write a wrapper script called '''myFunc2FastWrapper.sh''' as<br />
#!/bin/bash<br />
# myFunc2FastWrapper.sh<br />
echo $SGE_TASK_ID<br />
myFunc2.sh `sed -n ${SGE_TASK_ID}p /path/to/list/of/files`<br />
<br />
where you replace ''/path/to/list/of/files'' with the path to '''filesToProcess'''. The code<br />
`sed -n ${SGE_TASK_ID}p /path/to/list/of/files`<br />
uses <code>sed</code> to grab the ${SGE_TASK_ID}'th line from the file '''/path/to/list/of/files''' and returns it, thanks to the backticks (<code>`</code>, the unshifted <code>~</code> key), which substitute the command's output into place.<br />
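You can try the sed one-liner by hand, outside of any job. The file names below are made up, and SGE_TASK_ID is set manually here; inside a real job array, SGE sets it for you:<br />

```shell
#!/bin/bash
# Build a small stand-in list file and pull out its second line, exactly as
# the wrapper above does with filesToProcess.
LIST=$(mktemp)
printf 'scan-01.nii\nscan-02.nii\nscan-03.nii\n' > "$LIST"
SGE_TASK_ID=2                     # set by SGE inside a real job array
sed -n "${SGE_TASK_ID}p" "$LIST"  # prints the second file name
```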
<br />
Then you'd submit it with<br />
qsub -cwd -V -N PJ -l mem=1024M,express,time=01:00:00 -M eplau -m bea -t 1-100000:1 myFunc2FastWrapper.sh<br />
<br />
If your files were named regularly with a '-number' at the end (e.g. 'file-1', 'file-2', 'file-3', ... 'file-n'), you could just make '''myFunc2FastWrapperB.sh''' as<br />
#!/bin/bash<br />
# myFunc2FastWrapperB.sh<br />
echo $SGE_TASK_ID<br />
myFunc2.sh file-${SGE_TASK_ID}<br />
and submit it the same way.<br />
<br />
<br />
<br />
==External Links==<br />
*[http://www.ats.ucla.edu/clusters/common/computing/batch/man_submit.htm qsub Man page]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm Hoffman2 Types of Queues]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Submitting_Jobs&diff=970Hoffman2:Submitting Jobs2013-09-05T23:01:34Z<p>Elau: </p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
If you remember from [[Hoffman2:Introduction#Sun Grid Engine|Anatomy of the Computing Cluster]], the Sun Grid Engine on Hoffman2 is the scheduler for all computing jobs. It takes your computing job request, considers what resources you are asking for and then puts your job in a line waiting for those resources to become available.<br />
<br />
Ask for a simple 1GB of memory and a single computing core with a short time window, and your job will likely get placed at the front of the line and start running soon if not immediately. And for the vast majority of people, this will be the case.<br />
<br />
Ask for a lot of memory or many computing cores, and your job will get put further back in the line because it will have to wait for more things to become available. If your job needs these types of resources, you are probably at a level where reading this tutorial isn't very helpful.<br />
<br />
So how does one submit a computing job request? You've got some options:<br />
# '''job.q'''<br />
#: Use a simple tool that ATS wrote. It has a menu and walks you through submitting things but has been known to possibly forget certain necessary flags.<br />
# '''qsub'''<br />
#: Get under the hood and do it yourself. It can get messy but it can also be faster and you have more flexibility with options.<br />
# '''command files'''<br />
#: You've graduated to a higher level of operations, but we can help you get there with examples of our own command files.<br />
# '''job arrays'''<br />
#: You've got a lot of repetitive tasks to run, these will be your friend.<br />
<br />
<br />
<br />
==Aggregating Output Files==<br />
By default, whenever you submit a job, the standard output and error files get created in whichever directory you submitted the job from unless you tell qsub differently with the "-o" and "-e" arguments. '''This can be very annoying when trying to reduce your file count as output files can be everywhere.'''<br />
<br />
This is how you can avoid running around looking for these files:<br />
# [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]]<br />
# Use your favorite [[Text Editors|text editor]] to edit the file <code>~/.sge_request</code><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <pre>$ vim ~/.sge_request</pre><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]<br />
#:* <pre>$ emacs ~/.sge_request</pre><br />
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]<br />
#:* <pre>$ nedit ~/.sge_request</pre><br />
# Insert this line into the file<br />
#:* <pre>-o $HOME/job-output-files/</pre><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* Type <code>A</code> - capital A - to go to the end of the line and enter insert mode<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Type or paste in the specified lines.<br />
# Save the file<br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <code>ESC + ":wq" + ENTER</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]<br />
#:* <code>CTRL+x, CTRL+c</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]<br />
#:* <code>CTRL+x, CTRL+c, y</code><br />
#:* or use the menu system<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the menu.<br />
# Now use the following command to create the special directory that will receive all of the output and error files for the jobs you run.<br />
#: <pre>mkdir ~/job-output-files</pre><br />
<br />
<br />
<br />
==job.q==<br />
Once you've identified or written a script you'd like to run, [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]] and enter <code>job.q</code>. Then it is just a matter of following its step-by-step instructions.<br />
<br />
From the tool's main menu, you can type ''Info'' to read up about how to use it and we highly encourage you to do so.<br />
<br />
But we know patience is a virtue that most of us aren't blessed with. So we'll walk you through submitting a basic job so you can hit the ground running.<br />
<br />
===Example===<br />
# Once on Hoffman2, you'll need to edit one file so pull out your favorite [[Text Editors|text editor]] and edit the file<br />
#: <pre>~/.queuerc</pre><br />
# Add the line<br />
#: <pre>set qqodir = ~/job-output</pre><br />
# You've just set the default directory where your job command files will be created. Save the configuration file and close your text editor.<br />
# Make that directory using the command<br />
#: <pre>$ mkdir ~/job-output</pre><br />
# Now execute<br />
#:<pre>$ job.q</pre><br />
# Press enter to acknowledge the message about some files that get created (READ IT FIRST THOUGH).<br />
# Type ''Build <ENTER>'' to begin creating an SGE command file.<br />
# The program now asks you which script you'd like to run, enter the following text to use our example script<br />
#: <pre>/u/home/FMRI/apps/examples/qsub/gather.sh</pre><br />
# The program now asks how much memory the job will need (in [http://en.wikipedia.org/wiki/Megabyte Megabytes]). This script is really simple, so let's go with the minimum and enter ''64''.<br />
# The program now asks how long will the job take (in hours). Go with the minimum 1 hour; it will complete in much less than this.<br />
# The program now asks if your job should be limited to only your resource group's cores. Answer ''n'' because you do not need to be limiting yourself here and the job is not going to be running for more than 24 hours.<br />
# Soon, the program will tell you that ''gather.sh.cmd'' has been built and saved.<br />
# When it asks you if you would like to submit your job, say no. Then type ''Quit <ENTER>'' to leave the program.<br />
# Now you should be able to run<br />
#: <pre>ls ~/job-output</pre><br />
#: and see ''gather.sh.cmd''. This file will stay there until you delete it and can be run over and over again. Making a command file like this is especially useful if there is a task you'll be running repeatedly on Hoffman2. But if this is something you only need to run once, you should delete the file so you don't needlessly approach your [[Hoffman2:Quotas|quota]].<br />
# The time has come to actually run the program (thought we'd never get to that, didn't you?). Type<br />
#: <pre>qsub job-output/gather.sh.cmd</pre><br />
#: and after hitting enter, a message similar to this will pop up:<br />
#: <pre>Your job 1882940 ("gather.sh.cmd") has been submitted</pre><br />
#: where the number is your JobID, a unique numerical identifier for the computer job you have submitted to the queue.<br />
# Now you can check if the job has finished running by doing<br />
#: <pre>ls ~/job-output</pre><br />
# When two files named ''gather.sh.output.[JOBID]'' and ''gather.sh.joblog.[JOBID]'' (where JOBID is your job's unique identifier) appear, your job has run.<br />
#: ''gather.sh.output.[JOBID]''<br />
#:: This file has all the standard output generated by your script. In this case it will just have the line<br />
#::: ''Standard output would appear here.''<br />
#: ''gather.sh.joblog.[JOBID]''<br />
#:: This file has all the details about when, where, and how your job was processed. Useful information if you are going to be running this job over and over and need to fine tune the resources it uses.<br />
# Better ways of checking on your job can be found [[Hoffman2:Monitoring Jobs|here]].<br />
# The script you ran is an aggregator. It looks in a list of directories, each assumed to contain a specifically named file, and gathers the contents of each of those files into one central file in your home directory. This file is named ''gather-[TIMESTAMP].txt'' where TIMESTAMP is when the script was run and follows [http://en.wikipedia.org/wiki/ISO_8601 ISO 8601] style encoding. You are encouraged to type<br />
#: <pre>/u/home/FMRI/apps/examples/qsub/gather.sh -h</pre><br />
#: or<br />
#: <pre>/u/home/FMRI/apps/examples/qsub/gather.sh --help</pre><br />
#: to see how this script works.<br />
# Finally, go check the inbox of the email you used to sign up for your Hoffman2 account. There will be two emails from "root@mail.hoffman2.idre.ucla.edu" that indicate when the job was started and when the job was completed. This is one of the neat features of the queue so that you can be alerted about the progress of your job without having to stay logged into Hoffman2 and checking on it constantly.<br />
<br />
<br />
<br />
==qsub==<br />
Everything that job.q did can be done on the command line. And it can be done better.<br />
<br />
===Example===<br />
Run the command:<br />
$ qsub -cwd -V -N J1 -l mem=64M,express,time=00:05:00 -M eplau -m bea /u/home/FMRI/apps/examples/qsub/gather.sh<br />
<br />
And something like the following will be printed out:<br />
Your job 1875395 ("J1") has been submitted<br />
<br />
Where the number is your JOBID, a unique numerical identifier for your job.<br />
<br />
Let's break down the arguments in that command.<br />
<br />
;<code>-cwd</code><br />
: Change working directory<br />
: When your script runs, change the working directory to where you currently are in the filesystem.<br />
:: e.g. If you were in the director /u/home/mscohen/data/ when you ran the command, the queue will change directories to that location and then execute the script you gave it. This means output and error directories will be placed here for that job.<br />
<br />
;<code>-V</code><br />
: Export environment variables<br />
: Exports all the environment variables to the context of the job. Useful if you have extra environment variables that are needed in your script.<br />
:: e.g. If you had defined the variable SUBJECT_ID in your session on Hoffman2 (<code>export SUBJECT_ID=42</code>) before submitting a job and that variable was called on by your script, then you would need to use this flag. Tools like FreeSurfer look for certain environment variables to be set.<br />
<br />
;<code>-N J1</code><br />
: Name my job<br />
: Names your job "J1." When you [[Hoffman2:Monitoring Jobs#qstat|look at the queue]], this will be the text that shows up in the "name" column. This will also be the beginning of the output (<code>J1.o[JOBID]</code>) and error (<code>J1.e[JOBID]</code>) files for your job.<br />
<br />
;<code>-l mem=64M,express,time=00:05:00</code><br />
: Resource allocation (that's a lower case "elle")<br />
: This is the resources flag meaning that the text immediately after it will ask for things like:<br />
:* certain amount of memory, in [http://en.wikipedia.org/wiki/Megabyte Megabytes]<br />
:** mem=64M or<br />
:** h_data=64M<br />
: In this case, our demands for RAM are really low, so we are requesting only 64MB.<br />
: '''Edit (2013.09)''' - If your job uses more RAM than it requested, your job WILL be killed in order to avoid it hurting other jobs running on the same node. It is imperative that you set this RAM request properly.<br />
:* certain length of computing time, in the form HH:MM:SS<br />
:** time=00:05:00 or<br />
:** h_rt=00:05:00<br />
: In this case the script will complete its task rapidly, hence we are only asking for 5 minutes of computing time.<br />
:* queue type, only a few options here<br />
:** [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#express express]<br />
:**: Time limit of 2 hours, and it tends to be overloaded so it isn't recommended<br />
:** [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#highp highp]<br />
:**: Job length maximum of 14 days but can only be run on nodes belonging to your resource group (type <code>mygroup</code> to see what type of resources you have available). If you are in the mscohen or sbook usergroups on Hoffman2, you have access to some of these highp nodes.<br />
:** [blank] (nothing, nada, zilch)<br />
:**: [http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm#day Standard queue], which has a maximum job length of 24 hours<br />
: In this case, we are asking to be put on the express queue since this is such a short job, but the standard queue would have worked just as well if not better.<br />
<br />
;<code>-M eplau</code><br />
: Define mailing list<br />
: This defines the list of users that will be mailed if email updates are requested. The default address is that of the job-owner, but multiple emails can be specified using a comma separated list.<br />
:: e.g. In this case, the email will be sent to the address on file for the user "eplau"<br />
<br />
;<code>-m bea</code><br />
: Define mailing rules<br />
: This defines when Hoffman2 should email you about your job. There are five options here<br />
:* b - when the job begins<br />
:* e - when the job ends<br />
:* a - when the job is aborted<br />
:* s - when the job is suspended<br />
:* n - never<br />
: The first four can be used in any combination, but the last obviously nullifies the others.<br />
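Taken together, the flags above reconstruct the example submission line. The snippet below just assembles and prints the command rather than running it, since <code>qsub</code> only exists on the cluster (the user name and script path are the ones used in this example):<br />

```shell
# Assemble the example submission command from the flags discussed
# above. This is a dry run that only prints the command, because
# qsub is available only on the cluster itself.
JOB_NAME="J1"
RESOURCES="mem=64M,express,time=00:05:00"
SCRIPT="/u/home/FMRI/apps/examples/qsub/gather.sh"
CMD="qsub -cwd -V -N $JOB_NAME -l $RESOURCES -M eplau -m bea $SCRIPT"
echo "$CMD"
```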
<br />
There are many other flags that you could use, but these are the basics that will get you through most of your computing. Feel free to explore the others in the [http://www.ats.ucla.edu/clusters/common/computing/batch/man_submit.htm qsub Man page].<br />
<br />
<br />
<br />
==Command Files==<br />
Typing accurately can be difficult at times, so why put yourself through the trouble of having to retype the same arguments over and over if you will always be using about the same values? Enter command files.<br />
<br />
You already have experience making a command file (~/job-output/gather.sh.cmd) from when you used the tool <code>job.q</code>. But did you know that you can edit that command file to make changes to how it runs, or write your own?<br />
<br />
The command files generated by <code>job.q</code> are fairly well commented, so if you take a look at them with your favorite [[Text Editors|text editor]] you should be able to change their behavior. For instance, if you go into the command file from the job.q example, find the lines that say<br />
# Notify at beginning and end of job<br />
#$ -m bea<br />
You recognize that this is the flag about when to send email messages. Go ahead and change this to<br />
# Notify at the end and on abort<br />
#$ -m ae<br />
And you should only receive one email when your job finishes.<br />
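The same directive syntax lets you write a command file from scratch: any line beginning with <code>#$</code> is read by the queue as a qsub flag, while bash treats it as a comment. A minimal sketch (the resource values and file name are illustrative):<br />

```shell
# Write a minimal SGE command file. Lines starting with "#$" are
# qsub directives; bash sees them as comments, so the same file also
# runs as an ordinary script. Resource values are illustrative.
cat > myjob.cmd <<'EOF'
#!/bin/bash
#$ -cwd
#$ -N myjob
#$ -l h_data=64M,h_rt=00:05:00
#$ -m ae
echo "job body runs here"
EOF
# On the cluster you would submit it with: qsub myjob.cmd
# Locally, bash simply skips the directives:
bash myjob.cmd
```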
<br />
<br />
<br />
===q.sh===<br />
You could make a generic command file that contains all the basic flags that you care about. We've even got an example ready and available for you at<br />
/u/home/FMRI/apps/examples/qsub/q.sh<br />
The script contents are shown below:<br />
qsub <<CMD<br />
#!/bin/bash<br />
# Use current working directory<br />
#$ -cwd<br />
# Error stream is merged with the standard output<br />
#$ -j y<br />
# Use the bash shell for job execution<br />
#$ -S /bin/bash<br />
# Use your normal environment variables in the job<br />
#$ -V<br />
# Use 1GB of RAM and the main queue, with a maximum of 2 hours computing time<br />
#$ -l h_data=1024M,h_rt=2:00:00<br />
$@<br />
CMD<br />
To use this command file to submit the ''gather.sh'' example script, you would execute the command:<br />
$ q.sh gather.sh<br />
You can do this because if you have [[Hoffman2:Software Tools#Setting Up Your Account to Access the Tools|set up your Bash profile correctly]], they are in your [[Hoffman2:UNIX Tutorial#PATH|Unix PATH variable]]. You can replace ''gather.sh'' with any script you want executed and it will be submitted as a job on the cluster. We recommend that you make your own copy of ''q.sh'' and keep it in your local ''bin'' directory (~/bin) so that you can edit it to suit your needs.<br />
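The mechanism that makes ''q.sh'' tick is the here-document: everything between <code>&lt;&lt;CMD</code> and the closing <code>CMD</code> is fed to <code>qsub</code> on standard input, and <code>$@</code> expands to whatever arguments you passed to ''q.sh''. You can see the same mechanism at work with plain <code>bash</code> standing in for <code>qsub</code>:<br />

```shell
# q.sh feeds an inline script to qsub through a here-document.
# Here bash stands in for qsub so the mechanism can be run anywhere:
# the text between <<CMD and CMD arrives on standard input, with $1
# already expanded by the calling shell.
run_inline() {
bash <<CMD
echo "script received on stdin, first arg: $1"
CMD
}
run_inline gather.sh   # prints: script received on stdin, first arg: gather.sh
```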
<br />
<br />
<br />
==Job Arrays==<br />
There is an SGE qsub argument that allows you to submit multiple jobs in parallel that use the same script. It is<br />
-t lower-upper:interval<br />
where<br />
;<code>lower</code><br />
: is replaced with the starting number<br />
;<code>upper</code><br />
: is replaced with the ending number<br />
;<code>interval</code><br />
: is replaced with the step interval<br />
So adding the argument<br />
-t 10-100:5<br />
will step through the numbers 10, 15, 20, 25, ..., 100 submitting a job for each one.<br />
<br />
In jobs that are called with this flag, there will be an [[Hoffman2:UNIX Tutorial#Environment Variables|environment variable]] called <code>SGE_TASK_ID</code> whose value will be incremented over the range you specified. Each possible value of SGE_TASK_ID will be submitted as its own job, so your work will be parallelized.<br />
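You can preview the task IDs a given <code>-t</code> range will generate before submitting anything; <code>seq start step end</code> produces the same arithmetic sequence locally (assuming a standard GNU/Linux userland, as on Hoffman2):<br />

```shell
# Preview the task IDs that "-t 10-100:5" would generate.
# seq START STEP END prints 10, 15, 20, ..., 100, one per line.
seq 10 5 100 | head -n 4    # the first few IDs
seq 10 5 100 | wc -l        # total number of array tasks
```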
<br />
<br />
===Examples===<br />
Why would anyone use this? Here are some examples.<br />
<br />
====Lots of numbers====<br />
Let's say you have a script, '''myFunc.sh''', that takes one numerical input and computes a bunch of values based on that input. But you need to run <code>myFunc.sh</code> for input values 1 to 100000. One solution would be to write a wrapper script '''myFuncSlowWrapper.sh''' as<br />
#!/bin/bash<br />
# myFuncSlowWrapper.sh<br />
for i in {1..100000};<br />
do<br />
myFunc.sh $i;<br />
done<br />
<br />
The only drawback is that this will take quite a while since all 100000 iterations will be done on a single processor. With job arrays, the computations will be split among many processors and can finish much more quickly. You would instead write a wrapper script called '''myFuncFastWrapper.sh''' as<br />
#!/bin/bash<br />
# myFuncFastWrapper.sh<br />
echo $SGE_TASK_ID<br />
myFunc.sh $SGE_TASK_ID<br />
<br />
And submit it with<br />
qsub -cwd -V -N PJ -l mem=1024M,express,time=01:00:00 -M eplau -m bea -t 1-100000:1 myFuncFastWrapper.sh<br />
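Before submitting 100000 copies of a wrapper, it is worth simulating a few tasks locally by setting <code>SGE_TASK_ID</code> by hand, exactly as SGE would for each task (<code>myFunc</code> is stubbed out below, since ''myFunc.sh'' is hypothetical):<br />

```shell
# Simulate a few tasks of the job array locally. On the cluster SGE
# sets SGE_TASK_ID for every task; here it is set by hand.
# myFunc is a stub for the hypothetical myFunc.sh from the text.
myFunc() { echo "computing with input $1"; }
for SGE_TASK_ID in 1 2 3; do
myFunc "$SGE_TASK_ID"
done
```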
<br />
<br />
====Lots of files====<br />
Let's say you have a script, '''myFunc2.sh''', that takes the name of a file as input and opens that file and runs a bunch of computations on its contents. But you have 100000 such files to process. One solution would be to write a wrapper script '''myFunc2SlowWrapper.sh''' as<br />
#!/bin/bash<br />
# myFunc2SlowWrapper.sh<br />
for FILE in `ls dir/of/files`;<br />
do<br />
myFunc2.sh $FILE<br />
done<br />
<br />
But this will take quite a while since all 100000 iterations will be done on a single processor. With job arrays, the computations will be split among many processors since they are submitted as their own jobs and can finish much more quickly. You could instead create a file that contains a list of all 100000 files that need to be processed and call it '''filesToProcess'''. Then write a wrapper script called '''myFunc2FastWrapper.sh''' as<br />
#!/bin/bash<br />
# myFunc2FastWrapper.sh<br />
echo $SGE_TASK_ID<br />
myFunc2.sh `sed -n ${SGE_TASK_ID}p /path/to/list/of/files`<br />
<br />
where you replace ''/path/to/list/of/files'' with the path to '''filesToProcess'''. The code<br />
`sed -n ${SGE_TASK_ID}p /path/to/list/of/files`<br />
uses <code>sed</code> to grab the ${SGE_TASK_ID}'th line from the file '''/path/to/list/of/files''' and returns it (thanks to the backticks, which substitute the command's output into place).<br />
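The <code>sed</code> line-picking trick is easy to verify on its own, with a throwaway list standing in for '''filesToProcess''':<br />

```shell
# Verify the sed line-picking trick with a throwaway file list.
# SGE_TASK_ID is set by hand here; on the cluster SGE sets it.
printf '%s\n' sub-01.nii sub-02.nii sub-03.nii > filelist.txt
SGE_TASK_ID=2
FILE=$(sed -n "${SGE_TASK_ID}p" filelist.txt)
echo "$FILE"    # prints sub-02.nii
```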
<br />
Then you'd submit it with<br />
qsub -cwd -V -N PJ -l mem=1024M,express,time=01:00:00 -M eplau -m bea -t 1-100000:1 myFunc2FastWrapper.sh<br />
<br />
If your files were named regularly with a '-number' at the end (e.g. 'file-1', 'file-2', 'file-3', ... 'file-n'), you could just make '''myFunc2FastWrapperB.sh''' as<br />
#!/bin/bash<br />
# myFunc2FastWrapperB.sh<br />
echo $SGE_TASK_ID<br />
myFunc2.sh file-${SGE_TASK_ID}<br />
and submit it the same way.<br />
<br />
<br />
<br />
==External Links==<br />
*[http://www.ats.ucla.edu/clusters/common/computing/batch/man_submit.htm qsub Man page]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm Hoffman2 Types of Queues]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Sharing_Filesystems&diff=928Hoffman2:Sharing Filesystems2013-09-05T02:15:46Z<p>Elau: </p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
There are apps for linking filesystems so that you can access data across machines. It's like mounting a shared drive. Here we present a [[Hoffman2:Sharing Filesystems#MacFusion|GUI]] and a [[Hoffman2:Sharing Filesystems#sshfs|command line]] way of accomplishing this.<br />
<br />
<br />
<br />
==MacFusion==<br />
GUIs are nice, and things just work here.<br />
<br />
<br />
===Installation===<br />
# Go to [http://osxfuse.github.com/ http://osxfuse.github.com/] and download and install OSXFuse.<br />
# Go to [https://github.com/ElDeveloper/macfusion2 MacFusion] and download and install MacFusion (Download a build of the development version).<br />
<br />
===Usage===<br />
Let's walk through how to connect to Hoffman2.<br />
<br />
# Start up the MacFusion program.<br />
# Let it start the ''macfusion agent process''.<br />
# Click the plus button and select ''"sshfs"'' from the drop-down menu<br />
# In the first blank, fill in a name for this link, like ''"Hoffman2."''<br />
# Fill in the other blanks in the ''SSH'' tab:<br />
#* '''Host''' - address of the server<br />
#*: <pre>hoffman2.idre.ucla.edu</pre><br />
#* '''User Name''' - your username on the server<br />
#*: <pre>joebruin</pre><br />
#* '''Password''' - leave this blank if you want to type your password (for the server) every time you connect (more secure). Or just plug in your password (to the server) for convenience.<br />
#* '''Path''' - by default it will connect to your home folder on the server, but you can specify any part of the filesystem you have access to.<br />
# In the "Advanced" tab:<br />
#* '''Options''' - to stop the program from giving you a hard time about what you do and don't have permission to read, add the text<br />
#*: <pre>-o defer_permissions</pre><br />
# In the ''Macfusion'' tab:<br />
#* '''Mount Point''' - where on your local filesystem you want this drive mounted.<br />
#* '''Volume Name''' - what name do you want the drive to have? This is the name that shows up under the icon that will appear on your desktop.<br />
# Click ''"OK"'' when you are done.<br />
# Click ''"Mount"'' on the new connection that appears and type in your password if prompted.<br />
<br />
<br />
==sshfs==<br />
This tool is pre-installed on all lab computers (type <code>which sshfs</code> on the command line to check). But if this is your own computer, follow the install instructions.<br />
<br />
<br />
===Installation===<br />
Command-line tools can be more extensible:<br />
# [http://www.macports.org/install.php Install MacPorts]. The instructions they have over there are great. This is a package distribution tool where you can easily find and install other tools like [https://trac.macports.org/browser/trunk/dports/sysutils/watch/Portfile watch], different versions of python, and [https://trac.macports.org/browser/trunk/dports/sysutils/htop/Portfile htop]. The best part is that it installs all the dependencies for you.<br />
# Execute<br />
#: <pre>$ sudo port install sshfs</pre><br />
# Type your password when asked<br />
# Installation should complete smoothly.<br />
<br />
<br />
===Usage===<br />
Let's say you want to mount Hoffman2 locally. On the command line, execute<br />
$ id<br />
uid=1010(joebruinuser) gid=20(bruingroup1),23(bruingroup2),...<br />
$ mkdir /Volumes/MOUNTPOINT<br />
$ sshfs -o idmap=user -o uid=1010 -o gid=20 USERNAME@hoffman2.idre.ucla.edu:/path/to/mount /Volumes/MOUNTPOINT<br />
Where<br />
; <code>id</code><br />
: Gets information about your local user, including your numerical ID and group ID(s)<br />
; <code>-o idmap=user -o uid=1010 -o gid=20</code><br />
: Translates your local user and group IDs to that of the remote user so you can read and write files as if you were on the remote machine. Make sure to put the correct user and group IDs that were returned by the <code>id</code> command.<br />
; <code>USERNAME</code><br />
: Is your username at the remote computer<br />
; <code>hoffman2.idre.ucla.edu</code><br />
: Is the address of the remote computer you are connecting to.<br />
; <code>/path/to/mount</code><br />
: Could be left blank to mount your home directory from the remote computer, or it could specify any point in the remote filesystem<br />
; <code>MOUNTPOINT</code><br />
: Is the name of the directory where the remote filesystem will be mounted.<br />
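Rather than reading the numbers out of the <code>id</code> output by eye, <code>id -u</code> and <code>id -g</code> print the numeric user and primary group IDs directly, so they can be spliced straight into the mount command. The command is echoed rather than executed here, since <code>sshfs</code> and the remote host are only available on a machine set up for them:<br />

```shell
# id -u and id -g print the numeric user ID and primary group ID,
# so they can be spliced into the sshfs invocation automatically.
# Echoed rather than executed: sshfs and the remote host are only
# available on a properly configured machine.
uid=$(id -u)
gid=$(id -g)
echo "sshfs -o idmap=user -o uid=$uid -o gid=$gid USERNAME@hoffman2.idre.ucla.edu: /Volumes/MOUNTPOINT"
```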
<br />
<br />
To unmount:<br />
* Use the command<br />
umount /Volumes/MOUNTPOINT<br />
or<br />
* Right click on the desktop icon that appears and select ''"Eject."''<br />
<br />
<br />
<br />
==External Links==<br />
*[http://osxfuse.github.com/ OSXFuse]<br />
*[http://macfusionapp.org/ MacFusion]<br />
*[http://www.macports.org/ MacPorts]<br />
*[http://fuse.sourceforge.net/sshfs.html SSHFS]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Submitting_Jobs&diff=969Hoffman2:Submitting Jobs2013-09-04T02:07:31Z<p>Elau: </p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
If you remember from [[Hoffman2:Introduction#Sun Grid Engine|Anatomy of the Computing Cluster]], the Sun Grid Engine on Hoffman2 is the scheduler for all computing jobs. It takes your computing job request, considers what resources you are asking for and then puts your job in a line waiting for those resources to become available.<br />
<br />
Ask for a simple 1GB of memory and a single computing core with a short time window, and your job will likely get placed at the front of the line and start running soon if not immediately. And for the vast majority of people, this will be the case.<br />
<br />
Ask for a lot of memory or many computing cores, and your job will get put further back in the line because it will have to wait for more things to become available. If your job needs these types of resources, you are probably at a level where reading this tutorial isn't very helpful.<br />
<br />
So how does one submit a computing job request? You've got some options:<br />
# '''job.q'''<br />
#: Use a simple yet effective tool that ATS wrote. It has a great menu and walks you through submitting things.<br />
# '''qsub'''<br />
#: Get under the hood and do it yourself. It can get messy but it can also be faster and you have more flexibility with options.<br />
# '''command files'''<br />
#: You've graduated to a higher level of operations.<br />
# '''job arrays'''<br />
#: You've got a lot of repetitive tasks to run, these will be your friend.<br />
<br />
<br />
<br />
==job.q==<br />
Once you've identified or written a script you'd like to run, [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]] and enter <code>job.q</code>. Then it is just a matter of following its step-by-step instructions.<br />
<br />
From the tool's main menu, you can type ''Info'' to read up about how to use it and we highly encourage you to do so.<br />
<br />
But we know patience is a virtue that most of us aren't blessed with. So we'll walk you through submitting a basic job so you can hit the ground running.<br />
<br />
===Example===<br />
# Once on Hoffman2, you'll need to edit one file so pull out your favorite [[Text Editors|text editor]] and edit the file<br />
#: <pre>~/.queuerc</pre><br />
# Add the line<br />
#: <pre>set qqodir = ~/job-output</pre><br />
# You've just set the default directory where your job command files will be created. Save the configuration file and close your text editor.<br />
# Make that directory using the command<br />
#: <pre>$ mkdir ~/job-output</pre><br />
# Now execute<br />
#:<pre>$ job.q</pre><br />
# Press enter to acknowledge the message about some files that get created (READ IT FIRST THOUGH).<br />
# Type ''Build <ENTER>'' to begin creating an SGE command file.<br />
# The program now asks you which script you'd like to run, enter the following text to use our example script<br />
#: <pre>/u/home/FMRI/apps/examples/qsub/gather.sh</pre><br />
# The program now asks how much memory the job will need (in [http://en.wikipedia.org/wiki/Megabyte Megabytes]). This script is really simple, so let's go with the minimum and enter ''64''.<br />
# The program now asks how long will the job take (in hours). Go with the minimum 1 hour; it will complete in much less than this.<br />
# The program now asks if your job should be limited to only your resource group's cores. Answer ''n'' because you do not need to be limiting yourself here and the job is not going to be running for more than 24 hours.<br />
# Soon, the program will tell you that ''gather.sh.cmd'' has been built and saved.<br />
# When it asks you if you would like to submit your job, say no. Then type ''Quit <ENTER>'' to leave the program.<br />
# Now you should be able to run<br />
#: <pre>ls ~/job-output</pre><br />
#: and see ''gather.sh.cmd''. This file will stay there until you delete it and can be run over and over again. Making a command file like this is especially useful if there is a task you'll be running repeatedly on Hoffman2. But if this is something you only need to run once, you should delete the file so you don't needlessly approach your [[Hoffman2:Quotas|quota]].<br />
# The time has come to actually run the program (thought we'd never get to that, didn't you?). Type<br />
#: <pre>qsub job-output/gather.sh.cmd</pre><br />
#: and after hitting enter, a message similar to this will pop up:<br />
#: <pre>Your job 1882940 ("gather.sh.cmd") has been submitted</pre><br />
#: where the number is your JobID, a unique numerical identifier for the computer job you have submitted to the queue.<br />
# Now you can check if the job has finished running by doing<br />
#: <pre>ls ~/job-output</pre><br />
# When two files named ''gather.sh.output.[JOBID]'' and ''gather.sh.joblog.[JOBID]'' (where JOBID is your job's unique identifier) appear, your job has run.<br />
#: ''gather.sh.output.[JOBID]''<br />
#:: This file has all the standard output generated by your script. In this case it will just have the line<br />
#::: ''Standard output would appear here.''<br />
#: ''gather.sh.joblog.[JOBID]''<br />
#:: This file has all the details about when, where, and how your job was processed. Useful information if you are going to be running this job over and over and need to fine tune the resources it uses.<br />
# Better ways of checking on your job can be found [[Hoffman2:Monitoring Jobs|here]].<br />
# The script you ran is an aggregator. It looks in a list of directories, each assumed to contain a specifically named file, and gathers the contents of each of those files into one central file in your home directory. This file is named ''gather-[TIMESTAMP].txt'' where TIMESTAMP is when the script was run and follows [http://en.wikipedia.org/wiki/ISO_8601 ISO 8601] style encoding. You are encouraged to type<br />
#: <pre>/u/home/FMRI/apps/examples/qsub/gather.sh -h</pre><br />
#: or<br />
#: <pre>/u/home/FMRI/apps/examples/qsub/gather.sh --help</pre><br />
#: to see how this script works.<br />
# Finally, go check the inbox of the email you used to sign up for your Hoffman2 account. There will be two emails from "root@mail.hoffman2.idre.ucla.edu" that indicate when the job was started and when the job was completed. This is one of the neat features of the queue so that you can be alerted about the progress of your job without having to stay logged into Hoffman2 and checking on it constantly.<br />
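An ISO 8601-style timestamp like the one in the ''gather-[TIMESTAMP].txt'' filename can be produced with <code>date</code>; the exact format string ''gather.sh'' uses may differ, but the basic form looks like this:<br />

```shell
# Build an ISO 8601-style timestamp like the one in the
# gather-[TIMESTAMP].txt filename. The exact format gather.sh uses
# may differ; this shows the compact ISO 8601 "basic" form.
TS=$(date +%Y%m%dT%H%M%S)
echo "gather-$TS.txt"
```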
<br />
<br />
<br />
==qsub==<br />
Everything that job.q did can be done on the command line. And it can be done better.<br />
<br />
===Example===<br />
Run the command:<br />
$ qsub -cwd -V -N J1 -l mem=64M,express,time=00:05:00 -M eplau -m bea /u/home/FMRI/apps/examples/qsub/gather.sh<br />
<br />
And something like the following will be printed out:<br />
Your job 1875395 ("J1") has been submitted<br />
<br />
Where the number is your JOBID, a unique numerical identifier for your job.<br />
<br />
==External Links==<br />
*[http://www.ats.ucla.edu/clusters/common/computing/batch/man_submit.htm qsub Man page]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm Hoffman2 Types of Queues]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Software_Tools&diff=945Hoffman2:Software Tools2013-07-17T19:22:30Z<p>Elau: </p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
There is an FMRI usergroup on Hoffman2 which is maintained for groups doing Neuroimaging work at UCLA. Tools like FSL, FreeSurfer, AFNI and Nibabel are maintained for this group separate from normal Hoffman2 programs. In order to take advantage of these tools, you need to setup your bash profile [[Hoffman2:Profile|properly]].<br />
<br />
Below is a list of the available software tools. We will do our best to update it in real-time.<br />
<br />
<br />
==List of Tools==<br />
Under Construction...<br />
===FSL===<br />
[http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/ Official Website]<br />
*v5.0.4<br />
**Install Date: 2013.06.18<br />
*v5.0.2<br />
**Install Date: 2013.02.19<br />
*v5.0.1<br />
**Install Date: 2012.10.01<br />
*v5.0.0<br />
**Install Date: 2012.09.14<br />
*v4.1.9<br />
**Install Date: 2011.12.01<br />
*v4.1.8<br />
**Install Date: circa 2011.06<br />
*v4.1.7<br />
**Install Date: circa 2011.11<br />
*v4.1.4<br />
**Install Date: circa 2009<br />
*v4.1.3<br />
**Install Date: circa 2009<br />
*v4.1.1<br />
**Install Date: circa 2008<br />
*v4.1.0<br />
**Install Date: circa 2008<br />
*v4.0.4<br />
**Install Date: circa 2008<br />
<br />
===FreeSurfer===<br />
[http://surfer.nmr.mgh.harvard.edu/ Official Website]<br />
*v5.3.0<br />
**Install Date: 2013.06.18<br />
*v5.2.0<br />
**Install Date: 2013.03.27<br />
*v5.1.0<br />
**Install Date: 2011.11.14<br />
*v5.0.0<br />
**Install Date: circa 2010<br />
*v4.4.0<br />
**Install Date: circa 2009<br />
*v4.0.5<br />
**Install Date: circa 2008<br />
<br />
===AFNI===<br />
[http://afni.nimh.nih.gov/afni/ Official Website]<br />
*vAFNI_2011_12_21_1014<br />
**Install Date: 2012.03<br />
<br />
===Chronux===<br />
[http://www.chronux.org Official Website]<br />
*v2.10<br />
**Install Date: 2013.02.26<br />
<br />
===EEGLAB===<br />
[http://sccn.ucsd.edu/eeglab/ Official Website]<br />
*v12.0.0.0b<br />
**Install Date: 2012.12.10<br />
*v11.0.0.0b<br />
**Install Date: 2012.02.21<br />
*v10.2.5.8b<br />
**Install Date: 2012.02.21<br />
<br />
===Caret===<br />
[http://brainvis.wustl.edu/wiki/index.php/Caret:About Official Website]<br />
*v5.65 2012.01.27<br />
**Install Date: 2013.07.15<br />
***Not folded into main profile yet.<br />
<br />
===GCC===<br />
===LAPACK===<br />
===BLAS===<br />
===GLIB===<br />
===C++===<br />
===CMake===<br />
===CPACK===<br />
===MPI Kmeans===<br />
See [http://mloss.org/software/view/48/ this page] for how to cite the MPI Kmeans tool.<br />
<br />
===Python2.7===<br />
====Packages====<br />
=====CVXOPT=====<br />
=====Cython=====<br />
=====Gnuplot=====<br />
=====IPython=====<br />
=====matplotlib=====<br />
=====nibabel=====<br />
=====nifti=====<br />
=====nimfa=====<br />
:Non-negative Matrix Factorization<br />
:<br />
:[http://nimfa.biolab.si/ http://nimfa.biolab.si/]<br />
=====nipype=====<br />
=====nose=====<br />
=====numpy=====<br />
=====(p)lsa=====<br />
:(probabilistic) Latent Semantic Analysis. Failed its tests.py though.<br />
<br />
:[http://www.mblondel.org/journal/2010/06/13/lsa-and-plsa-in-python/ http://www.mblondel.org/journal/2010/06/13/lsa-and-plsa-in-python/]<br />
=====pydicom=====<br />
=====pygments=====<br />
=====PyMF=====<br />
:Python Matrix Factorization Module. Failed its tests though.<br />
<br />
:[http://pymf.googlecode.com http://pymf.googlecode.com]<br />
=====pypr=====<br />
=====PyQt4=====<br />
=====pytz=====<br />
=====pywt=====<br />
=====pyximport=====<br />
=====scikits=====<br />
=====scipy=====<br />
=====sklearn=====<br />
=====sparsesvd=====<br />
:Singular Value Decomposition. Passed both tests.<br />
<br />
:[http://pypi.python.org/pypi/sparsesvd http://pypi.python.org/pypi/sparsesvd]<br />
=====sphinx=====<br />
=====sympy=====<br />
=====traits=====<br />
=====virtualenv=====<br />
=====xcbgen=====</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:FSL&diff=782Hoffman2:FSL2013-07-11T02:32:17Z<p>Elau: </p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
FSL is a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data. FSL is written mainly by members of the Analysis Group, FMRIB, Oxford, UK. <br />
<br />
<br />
Multiple versions are maintained on the Hoffman2 cluster to allow researchers to be consistent in using the same version for data analysis within a single study. You can either:<br />
* do nothing, and always use the "current" version of FSL on the cluster<br />
* [[Hoffman2:FSL#switch_fsl|actively choose]] which version of FSL you would like to run<br />
We recommend the latter for data integrity and reproducibility.<br />
<br />
<br />
<br />
==FSL GUI==<br />
Make sure you source the FMRI Path in your [[Hoffman2:Profile | Profile]] before doing anything, or else you won't be able to access FSL.<br />
<br />
<br />
To run FSL using a GUI on hoffman2, use the following command:<br />
$ fsl &<br />
<br />
If you receive this message when opening FSL:<br />
DISPLAY is not set. Please set your DISPLAY environment variable!<br />
<br />
It means you did not enable X11 forwarding for your SSH connection. Log back in, making sure you use the <code>-X</code> option.<br />
$ ssh -X [USERNAME]@hoffman2.idre.ucla.edu<br />
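A quick way to confirm that X11 forwarding is active before launching the GUI is to test the <code>DISPLAY</code> variable yourself (a minimal sketch, not part of FSL or the cluster tooling):<br />
<br />
```shell
# Check whether X11 forwarding is available before starting the FSL GUI.
if [ -z "$DISPLAY" ]; then
    echo "DISPLAY is not set -- log in again with ssh -X"
else
    echo "X11 forwarding looks OK: DISPLAY=$DISPLAY"
fi
```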
<br />
<br />
<br />
==FSL TOOLS==<br />
A complete list of tools can be found [http://www.fmrib.ox.ac.uk/fsl/fsl/list.html here]<br />
<br />
Functional MRI (command line only)<br />
{| class="wikitable"<br />
|-<br />
! Tool<br />
! Explanation<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/feat5/index.html feat]<br />
| Model-based FMRI analysis: data preprocessing (including MCFLIRT motion correction); first-level FILM GLM timeseries analysis; higher-level FLAME Bayesian mixed effects analysis.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/melodic/index.html melodic]<br />
| Model-free FMRI analysis using Probabilistic Independent Component Analysis (PICA). MELODIC automatically estimates the number of interesting noise and signal sources in the data and because of the associated "noise model", is able to assign significance ("p-values") to the output spatial maps. MELODIC can also analyse multiple subjects or sessions simultaneously using Tensor-ICA.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/fabber/index.html fabber]<br />
| Fast ASL & BOLD Bayesian Estimation Routine. Efficient nonlinear modelling and estimation of BOLD and CBF from dual-echo ASL data, using Variational Bayes.<br />
|}<br />
<br />
Structural MRI (command line only)<br />
{| class="wikitable"<br />
|-<br />
! Tool<br />
! Explanation<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/bet2/index.html bet]<br />
| Brain Extraction Tool - segments brain from non-brain in structural and functional data, and models skull and scalp surfaces.<br />
|- <br />
| [http://www.fmrib.ox.ac.uk/fsl/fast4/index.html fast]<br />
| FMRIB's Automated Segmentation Tool - brain segmentation (into different tissue types) and bias field correction.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/first/index.html first]<br />
| FMRIB's Integrated Registration and Segmentation Tool. FIRST uses mesh models trained with a large amount of rich hand-segmented training data to segment subcortical brain structures.<br />
|}<br />
<br />
GUI Commands/Tools [Make sure to have X11 forwarding on]<br />
{| class="wikitable"<br />
|-<br />
! Tool<br />
! Explanation<br />
|-<br />
| fsl<br />
| Brings you to the FSL menu, where you can choose the type of analysis to run.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/fdt/index.html Fdt]<br />
| FMRIB's Diffusion Toolbox - tools for low-level diffusion parameter reconstruction and probabilistic tractography, including crossing-fibre modelling.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/flirt/index.html Flirt]<br />
| FMRIB's Linear Image Registration Tool - linear inter- and intra-modal registration.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/feat5/index.html Feat]<br />
| Model-based FMRI analysis: data preprocessing (including MCFLIRT motion correction); first-level FILM GLM timeseries analysis; higher-level FLAME Bayesian mixed effects analysis.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/feat5/featquery.html Featquery]<br />
| A program which allows you to interrogate FEAT results by defining a mask or set of co-ordinates (in standard-space, highres-space or lowres-space) and get mean stats values and time-series. <br />
|-<br />
| Glm<br />
| A GUI for setting up just the design matrix and contrasts, in the same way as in FEAT, for use with other modelling/inference programs such as randomise. <br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/melodic/index.html Melodic]<br />
| Model-free FMRI analysis using Probabilistic Independent Component Analysis (PICA). MELODIC automatically estimates the number of interesting noise and signal sources in the data and because of the associated "noise model", is able to assign significance ("p-values") to the output spatial maps. MELODIC can also analyse multiple subjects or sessions simultaneously using Tensor-ICA.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/possum/index.html Possum]<br />
| Physics-Oriented Simulated Scanner for Understanding MRI. An FMRI data simulator that produces realistic simulated images and FMRI time series given a gradient echo pulse sequence, a segmented object with known tissue parameters, and a motion sequence.<br />
|-<br />
| Renderhighres<br />
| Transforms all thresholded stats images in a FEAT directory into high resolution or standard space and overlays these onto the high resolution or standard space images. This then produces PNG format pictures of the overlays and, by default, deletes the 3D AVW colour overlay images. <br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/miscvis/index.html Renderstats]<br />
| This tool allows you to combine a background image (raw FMRI or high resolution MRI) image with one or two statistics images. The statistics image(s) must be in registration with the background image.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/susan/index.html Susan]<br />
| Nonlinear noise reduction.<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/fslview/index.html fslview]<br />
| Interactive display tool for 3D and 4D data.<br />
|}<br />
<br />
<br />
<br />
==Cluster==<br />
{| class="wikitable"<br />
|-<br />
! Scripts that self-submit<br />
|-<br />
| [http://www.fmrib.ox.ac.uk/fsl/fdt/index.html fdt]<br />
| [http://www.fmrib.ox.ac.uk/fsl/feat5/index.html feat]<br />
| [http://www.fmrib.ox.ac.uk/fsl/first/index.html first]<br />
| [http://www.fmrib.ox.ac.uk/fsl/fslvbm/index.html fslvbm]<br />
| [http://www.fmrib.ox.ac.uk/fsl/possum/index.html possum]<br />
| [http://www.fmrib.ox.ac.uk/fsl/randomise/index.html randomise]<br />
| [http://www.fmrib.ox.ac.uk/fsl/tbss/index.html tbss]<br />
|-<br />
! GUIs that self-submit<br />
| [http://www.fmrib.ox.ac.uk/fsl/fdt/index.html Fdt]<br />
| [http://www.fmrib.ox.ac.uk/fsl/feat5/index.html Feat]<br />
| [http://www.fmrib.ox.ac.uk/fsl/flirt/index.html Flirt]<br />
| [http://www.fmrib.ox.ac.uk/fsl/possum/index.html Possum]<br />
|-<br />
|}<br />
<br />
FSL_SUB<br />
<br />
<br />
<br />
==switch_fsl==<br />
After you have [[Hoffman2:Profile | properly configured your profile]] so you have access to FSL and the other FMRI tools on Hoffman2, you also have access to the handy <code>switch_fsl</code> tool. It allows you to actively choose which version of FSL you use for analyses so you can stay locked into one version throughout a project before switching for a new project.<br />
<br />
See its documentation [[Hoffman2:Scripts:switch_fsl | here]].<br />
<br />
<br />
<br />
<br />
==NO_FSL_JOBS==<br />
Sometimes FSL doesn't know how to allocate enough resources for its jobs. In particular, we have found that the FEAT tool is often unable to do this for group analyses and other complex tasks. So we did some tinkering with FSL to let you override its job submission on Hoffman2 and run it as if it were just on your laptop. '''The trick is to set <code>NO_FSL_JOBS=true</code> in your environment and FSL will not submit jobs.'''<br />
<br />
<br />
===Interactive Session===<br />
If you want to watch FEAT run (kinda like paint drying, but to each their own), you can do the following<br />
#[[Hoffman2:Accessing_the_Cluster#SSH_-_Command_Line|SSH]] into the cluster<br />
#Check out an [[Hoffman2:Interactive_Sessions|interactive node]] with the necessary time and memory<br />
#*<code>qrsh -l i,time=3:00:00,mem=3G</code><br />
#Set the environment variable<br />
#*<code>export NO_FSL_JOBS=true</code><br />
#Run your FSL commands. '''This means not using qsub, or command files, but simply executing the FSL command'''<br />
#The commands will just run and not submit any jobs.<br />
<br />
<br />
===Submitting a Job===<br />
If you don't want to watch FEAT run (why would you?), you can do the following<br />
<br />
Create a shell script (e.g. myshellscript.sh) with the following contents<br />
#!/bin/bash<br />
export NO_FSL_JOBS=true<br />
feat design.fsf<br />
# any other FSL commands you want<br />
<br />
And make sure to run <code>chmod 755</code> to make the script executable<br />
chmod 755 myshellscript.sh<br />
<br />
Submit the shell script [[Hoffman2:Submitting_Jobs|as a job]] but with the adequate time and memory allocations<br />
qsub -l time=23:00:00,mem=4G -V -m bea -cwd /path/to/myshellscript.sh<br />
<br />
And the FSL commands will be sent into the queue to run with your time and memory constraints rather than FSL's. This may take some playing with to get the time and memory allocations correct, but at least you have the ability to tweak them.<br />
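If you do this often, the wrapper script can be generated on the fly. This sketch just recreates the script shown above with a heredoc and makes it executable (the <code>/tmp</code> path is illustrative; <code>feat design.fsf</code> is the FSL command from the example):<br />
<br />
```shell
# Generate the NO_FSL_JOBS wrapper script and make it executable.
cat > /tmp/myshellscript.sh <<'EOF'
#!/bin/bash
export NO_FSL_JOBS=true
feat design.fsf
EOF
chmod 755 /tmp/myshellscript.sh
ls -l /tmp/myshellscript.sh
```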
<br />
<br />
<br />
<br />
==External Links==<br />
* Official FSL website http://www.fmrib.ox.ac.uk/fsl/</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Scripts:matlab_license_check.sh&diff=914Hoffman2:Scripts:matlab license check.sh2013-07-02T18:49:32Z<p>Elau: </p>
<hr />
<div>[[Hoffman2:Scripts|Back to Hoffman2 Scripts]]<br />
<br />
==Use Cases==<br />
* You need to run a MATLAB script and want to check that there are sufficient licenses. If not, you are grabbing a coffee or going for a run until they free up.<br />
* You tried running a MATLAB script and found that you couldn't check out sufficient licenses to do your work. Now you want to see if the users taking up precious licenses are part of your group so you can ask/yell for them to release the license and step away from the computer for a bit.<br />
<br />
<br />
<br />
<br />
==A bit about licenses==<br />
Every time you run the MATLAB program, you need a MATLAB license. If you also make use of the utilities in a specific toolbox, say some statistical plotting commands line <code>boxplot</code> from the Statistics Toolbox, you need a license for that specific toolbox in addition to the one just for MATLAB.<br />
<br />
If you were trying to run multiple MATLAB instances at once and each used a specific toolbox, you can see how the number of licenses you'd need could get out of hand.<br />
<br />
[[Hoffman2:Compiling_MATLAB|Compiled MATLAB]] only needs the requisite licenses to create the compiled code. After that it can run without taking up a license making compiled code very attractive for massively parallel processing.<br />
<br />
<br />
<br />
==Help/Usage==<br />
<pre><br />
matlab_license_check.sh<br />
2013.07.02<br />
<br />
Wrapper script for the command<br />
/u/local/licenses/lic_manager.sh users matlab<br />
which is used to check on the status of MATLAB licenses on Hoffman2. The<br />
output is slightly modified and managed to make it more readable for users.<br />
<br />
<br />
USAGE:<br />
$ matlab_license_check.sh [-h --help -r -f]<br />
<br />
<br />
ARGUMENTS:<br />
With no argument, a simple digest version is displayed listing the<br />
number of used and available licenses of each MATLAB tool.<br />
<br />
Optional arguments:<br />
-h, --help Show this usage message<br />
-r Show the digest version plus reservation information,<br />
because reserved licenses are counted as in-use, so even if<br />
it says all licenses are used, it is important to check to<br />
see if there are any reserved licenses that you are able to<br />
make use of.<br />
-u Show the digest version plus reservations plus in-use licenses.<br />
The layout of this is more readable than the "-f" full output.<br />
-f Show the full output, including details about which users<br />
are using licenses and on which nodes<br />
</pre><br />
<br />
<br />
<br />
==Example Output==<br />
===Digest===<br />
$ matlab_license_check.sh<br />
20130702T114020-0700<br />
MATLAB<br />
---- (Total of 37 licenses issued; Total of 35 licenses in use)<br />
Compiler<br />
---- (Total of 7 licenses issued; Total of 4 licenses in use)<br />
Curve_Fitting_Toolbox<br />
---- (Total of 1 license issued; Total of 1 license in use)<br />
Image_Toolbox<br />
---- (Total of 8 licenses issued; Total of 7 licenses in use)<br />
Optimization_Toolbox<br />
---- (Total of 4 licenses issued; Total of 2 licenses in use)<br />
Signal_Toolbox<br />
---- (Total of 5 licenses issued; Total of 4 licenses in use)<br />
Statistics_Toolbox<br />
---- (Total of 15 licenses issued; Total of 12 licenses in use)<br />
Wavelet_Toolbox<br />
---- (Total of 2 licenses issued; Total of 2 licenses in use)<br />
Bioinformatics_Toolbox<br />
---- (Total of 8 licenses issued; Total of 8 licenses in use)<br />
Distrib_Computing_Toolbox<br />
---- (Total of 3 licenses issued; Total of 3 licenses in use)<br />
MATLAB_Distrib_Comp_Engine<br />
---- (Total of 16 licenses issued; Total of 16 licenses in use)<br />
SIMULINK<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Control_Toolbox<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Control_Design<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Design_Optim<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Symbolic_Toolbox<br />
---- (Total of 1 license issued; Total of 0 licenses in use)<br />
<br />
This shows that all but two MATLAB licenses are either in use or reserved. It also shows that things like SIMULINK and Symbolic Toolbox are not in use by anyone right now but things like the Wavelet and Bioinformatics Toolboxes are in use or reserved.<br />
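If you want to script against the digest output, the issued and in-use counts can be pulled out of a line with <code>sed</code>. A hypothetical sketch (the sample line is copied from the output above; the parsing is our own, and only handles the plural "licenses" wording):<br />
<br />
```shell
# Parse one digest line and compute the number of free MATLAB licenses.
# The line below is taken from the example output; the sed extraction is a sketch.
line='---- (Total of 37 licenses issued; Total of 35 licenses in use)'
issued=$(echo "$line" | sed 's/.*Total of \([0-9]*\) licenses issued.*/\1/')
inuse=$(echo "$line"  | sed 's/.*Total of \([0-9]*\) licenses in use.*/\1/')
echo "free: $((issued - inuse))"   # prints: free: 2
```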
<br />
<br />
===Reservations===<br />
$ matlab_license_check.sh -r<br />
20130702T113736-0700<br />
MATLAB<br />
---- (Total of 37 licenses issued; Total of 35 licenses in use)<br />
------ 1 RESERVATION for GROUP bueragroup (lm/27010)<br />
------ 4 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
------ 6 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
------ 4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
------ 1 RESERVATION for GROUP ghoniemgroup (lm/27010)<br />
------ 3 RESERVATIONs for GROUP kluggroup (lm/27010)<br />
------ 2 RESERVATIONs for GROUP miaogroup (lm/27010)<br />
------ 1 RESERVATION for GROUP miaogroup (lm/27010)<br />
------ 1 RESERVATION for GROUP moshegroup (lm/27010)<br />
------ 4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
------ 1 RESERVATION for GROUP staff1group (lm/27010)<br />
Compiler<br />
---- (Total of 7 licenses issued; Total of 4 licenses in use)<br />
------ 1 RESERVATION for GROUP cohenlab (lm/27010)<br />
------ 2 RESERVATIONs for GROUP miaogroup (lm/27010)<br />
Curve_Fitting_Toolbox<br />
---- (Total of 1 license issued; Total of 1 license in use)<br />
------ 1 RESERVATION for GROUP bueragroup (lm/27010)<br />
Image_Toolbox<br />
---- (Total of 8 licenses issued; Total of 7 licenses in use)<br />
------ 5 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
------ 2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
Optimization_Toolbox<br />
---- (Total of 4 licenses issued; Total of 2 licenses in use)<br />
------ 2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
Signal_Toolbox<br />
---- (Total of 5 licenses issued; Total of 4 licenses in use)<br />
------ 1 RESERVATION for GROUP cohenlab (lm/27010)<br />
------ 2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
Statistics_Toolbox<br />
---- (Total of 15 licenses issued; Total of 12 licenses in use)<br />
------ 3 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
------ 4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
------ 4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
Wavelet_Toolbox<br />
---- (Total of 2 licenses issued; Total of 2 licenses in use)<br />
------ 1 RESERVATION for GROUP cohenlab (lm/27010)<br />
------ 1 RESERVATION for GROUP fmrigroup (lm/27010)<br />
Bioinformatics_Toolbox<br />
---- (Total of 8 licenses issued; Total of 8 licenses in use)<br />
------ 4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
------ 4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
Distrib_Computing_Toolbox<br />
---- (Total of 3 licenses issued; Total of 3 licenses in use)<br />
MATLAB_Distrib_Comp_Engine<br />
---- (Total of 16 licenses issued; Total of 16 licenses in use)<br />
SIMULINK<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Control_Toolbox<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Control_Design<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Design_Optim<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Symbolic_Toolbox<br />
---- (Total of 1 license issued; Total of 0 licenses in use)<br />
<br />
This shows that there are still four (4) licenses reserved for cohenlab (mscohen) and six (6) reserved for fmrigroup (sbook), in addition to the two remaining free licenses. The Wavelet and Bioinformatics Toolboxes are all in use because they are all reserved by specific groups, but SIMULINK and the Symbolic Toolbox are not used and don't have any reservations on them.<br />
<br />
<br />
===In Use===<br />
$ matlab_license_check.sh -u<br />
20130702T113908-0700<br />
MATLAB<br />
---- (Total of 37 licenses issued; Total of 35 licenses in use)<br />
------ 1 RESERVATION for GROUP bueragroup (lm/27010)<br />
------ 4 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
------ 6 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
------ 4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
------ 1 RESERVATION for GROUP ghoniemgroup (lm/27010)<br />
-------- hiteshag n6280 /dev/tty (v27) (lm/27010 2001), start Tue 7/2 10:21<br />
-------- huning n2180 /dev/tty (v27) (lm/27010 1801), start Tue 7/2 10:20<br />
-------- kerr b8169-harley.local /dev/ttys000 (v23) (lm/27010 2501), start Tue 7/2 10:22<br />
------ 3 RESERVATIONs for GROUP kluggroup (lm/27010)<br />
------ 2 RESERVATIONs for GROUP miaogroup (lm/27010)<br />
------ 1 RESERVATION for GROUP miaogroup (lm/27010)<br />
------ 1 RESERVATION for GROUP moshegroup (lm/27010)<br />
-------- navari n233 /dev/pts/4 (v27) (lm/27010 2201), start Tue 7/2 10:22<br />
------ 4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
-------- raobasav n6165 /dev/tty (v27) (lm/27010 1901), start Tue 7/2 10:20<br />
-------- raobasav n6154 /dev/tty (v27) (lm/27010 3001), start Tue 7/2 10:29<br />
-------- snt n2179 /dev/tty (v27) (lm/27010 2903), start Tue 7/2 10:39<br />
------ 1 RESERVATION for GROUP staff1group (lm/27010)<br />
Compiler<br />
---- (Total of 7 licenses issued; Total of 4 licenses in use)<br />
------ 1 RESERVATION for GROUP cohenlab (lm/27010)<br />
------ 2 RESERVATIONs for GROUP miaogroup (lm/27010)<br />
-------- tdevries n6263 /dev/pts/0 (v27) (lm/27010 2802), start Tue 7/2 11:33 (linger: 1800)<br />
Curve_Fitting_Toolbox<br />
---- (Total of 1 license issued; Total of 1 license in use)<br />
------ 1 RESERVATION for GROUP bueragroup (lm/27010)<br />
Image_Toolbox<br />
---- (Total of 8 licenses issued; Total of 7 licenses in use)<br />
------ 5 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
------ 2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
Optimization_Toolbox<br />
---- (Total of 4 licenses issued; Total of 2 licenses in use)<br />
------ 2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
Signal_Toolbox<br />
---- (Total of 5 licenses issued; Total of 4 licenses in use)<br />
------ 1 RESERVATION for GROUP cohenlab (lm/27010)<br />
------ 2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
-------- hiteshag n6280 /dev/tty (v27) (lm/27010 2101), start Tue 7/2 10:21<br />
Statistics_Toolbox<br />
---- (Total of 15 licenses issued; Total of 12 licenses in use)<br />
------ 3 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
------ 4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
-------- navari n233 /dev/pts/4 (v27) (lm/27010 2401), start Tue 7/2 10:22<br />
------ 4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
Wavelet_Toolbox<br />
---- (Total of 2 licenses issued; Total of 2 licenses in use)<br />
------ 1 RESERVATION for GROUP cohenlab (lm/27010)<br />
------ 1 RESERVATION for GROUP fmrigroup (lm/27010)<br />
Bioinformatics_Toolbox<br />
---- (Total of 8 licenses issued; Total of 8 licenses in use)<br />
------ 4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
------ 4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
Distrib_Computing_Toolbox<br />
---- (Total of 3 licenses issued; Total of 3 licenses in use)<br />
-------- navari n233 /dev/pts/4 (v27) (lm/27010 2301), start Tue 7/2 10:22<br />
-------- raobasav n6165 /dev/tty (v27) (lm/27010 102), start Tue 7/2 10:20<br />
-------- raobasav n6154 /dev/tty (v27) (lm/27010 3101), start Tue 7/2 10:29<br />
MATLAB_Distrib_Comp_Engine<br />
---- (Total of 16 licenses issued; Total of 16 licenses in use)<br />
-------- raobasav n6268 /dev/tty (v27) (lm/27010 1001), start Tue 7/2 10:19<br />
-------- raobasav n6268 /dev/tty (v27) (lm/27010 1101), start Tue 7/2 10:19<br />
-------- raobasav n7166 /dev/tty (v27) (lm/27010 1201), start Tue 7/2 10:19<br />
-------- raobasav n7166 /dev/tty (v27) (lm/27010 1301), start Tue 7/2 10:19<br />
-------- raobasav n7166 /dev/tty (v27) (lm/27010 1401), start Tue 7/2 10:19<br />
-------- raobasav n7256 /dev/tty (v27) (lm/27010 1501), start Tue 7/2 10:19<br />
-------- raobasav n7256 /dev/tty (v27) (lm/27010 1601), start Tue 7/2 10:19<br />
-------- raobasav n7166 /dev/tty (v27) (lm/27010 1701), start Tue 7/2 10:19<br />
-------- raobasav n7166 /dev/tty (v27) (lm/27010 201), start Tue 7/2 10:19<br />
-------- raobasav n7166 /dev/tty (v27) (lm/27010 301), start Tue 7/2 10:19<br />
-------- raobasav n7166 /dev/tty (v27) (lm/27010 401), start Tue 7/2 10:19<br />
-------- raobasav n7255 /dev/tty (v27) (lm/27010 501), start Tue 7/2 10:19<br />
-------- raobasav n7255 /dev/tty (v27) (lm/27010 601), start Tue 7/2 10:19<br />
-------- raobasav n7255 /dev/tty (v27) (lm/27010 701), start Tue 7/2 10:19<br />
-------- raobasav n7255 /dev/tty (v27) (lm/27010 801), start Tue 7/2 10:19<br />
-------- raobasav n6268 /dev/tty (v27) (lm/27010 901), start Tue 7/2 10:19<br />
SIMULINK<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Control_Toolbox<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Control_Design<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Design_Optim<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Symbolic_Toolbox<br />
---- (Total of 1 license issued; Total of 0 licenses in use)<br />
<br />
This shows the same information as the "-r" option, but adds in who is using each license and where they are using it from. All of the Distributed Computing licenses are being used by user <code>raobasav</code>. You can also see that only seven specific MATLAB licenses are in use by six different users.<br />
<br />
<br />
<br />
===Full===<br />
$ matlab_license_check.sh -f<br />
20130702T114032-0700<br />
<br />
You have requested: users matlab<br />
<br />
. /u/local/licenses/LM_CMDS.sh users matlab<br />
Arguments are "users" and "matlab"<br />
The license directory is /u/local/licenses<br />
The apps directory is /u/local/apps<br />
<br />
lmstat - Copyright (c) 1989-2010 Flexera Software, Inc. All Rights Reserved.<br />
Flexible License Manager status on Tue 7/2/2013 11:40<br />
<br />
License server status: 27010@lm<br />
License file(s) on lm: /u/local/licenses/license.matlab:<br />
<br />
lm: license server UP (MASTER) v11.9<br />
<br />
Vendor daemon status (on lm):<br />
<br />
MLM: UP v11.6<br />
Feature usage info:<br />
<br />
Users of MATLAB: (Total of 37 licenses issued; Total of 35 licenses in use)<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP bueragroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
4 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
6 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP ghoniemgroup (lm/27010)<br />
hiteshag n6280 /dev/tty (v27) (lm/27010 2001), start Tue 7/2 10:21<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
huning n2180 /dev/tty (v27) (lm/27010 1801), start Tue 7/2 10:20<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
kerr b8169-harley.local /dev/ttys000 (v23) (lm/27010 2501), start Tue 7/2 10:22<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
<br />
"MATLAB" v21, vendor: MLM<br />
nodelocked license, locked to "ID=301818"<br />
<br />
3 RESERVATIONs for GROUP kluggroup (lm/27010)<br />
<br />
"MATLAB" v21, vendor: MLM<br />
nodelocked license, locked to "ID=319868"<br />
<br />
2 RESERVATIONs for GROUP miaogroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP miaogroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP moshegroup (lm/27010)<br />
navari n233 /dev/pts/4 (v27) (lm/27010 2201), start Tue 7/2 10:22<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
raobasav n6165 /dev/tty (v27) (lm/27010 1901), start Tue 7/2 10:20<br />
raobasav n6154 /dev/tty (v27) (lm/27010 3001), start Tue 7/2 10:29<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
snt n2179 /dev/tty (v27) (lm/27010 2903), start Tue 7/2 10:39<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP staff1group (lm/27010)<br />
<br />
Users of Compiler: (Total of 7 licenses issued; Total of 4 licenses in use)<br />
<br />
"Compiler" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"Compiler" v21, vendor: MLM<br />
nodelocked license, locked to "ID=301818"<br />
<br />
1 RESERVATION for GROUP cohenlab (lm/27010)<br />
<br />
"Compiler" v27, vendor: MLM<br />
floating license<br />
<br />
2 RESERVATIONs for GROUP miaogroup (lm/27010)<br />
tdevries n6263 /dev/pts/0 (v27) (lm/27010 2802), start Tue 7/2 11:33 (linger: 1800)<br />
<br />
Users of Curve_Fitting_Toolbox: (Total of 1 license issued; Total of 1 license in use)<br />
<br />
"Curve_Fitting_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP bueragroup (lm/27010)<br />
<br />
Users of Image_Toolbox: (Total of 8 licenses issued; Total of 7 licenses in use)<br />
<br />
"Image_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"Image_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
5 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
<br />
Users of Optimization_Toolbox: (Total of 4 licenses issued; Total of 2 licenses in use)<br />
<br />
"Optimization_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
<br />
Users of Signal_Toolbox: (Total of 5 licenses issued; Total of 4 licenses in use)<br />
<br />
"Signal_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"Signal_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP cohenlab (lm/27010)<br />
<br />
"Signal_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
hiteshag n6280 /dev/tty (v27) (lm/27010 2101), start Tue 7/2 10:21<br />
<br />
Users of Statistics_Toolbox: (Total of 15 licenses issued; Total of 12 licenses in use)<br />
<br />
"Statistics_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
<br />
"Statistics_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
3 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
<br />
"Statistics_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
navari n233 /dev/pts/4 (v27) (lm/27010 2401), start Tue 7/2 10:22<br />
<br />
"Statistics_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
<br />
Users of Wavelet_Toolbox: (Total of 2 licenses issued; Total of 2 licenses in use)<br />
<br />
"Wavelet_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"Wavelet_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP cohenlab (lm/27010)<br />
1 RESERVATION for GROUP fmrigroup (lm/27010)<br />
<br />
Users of Bioinformatics_Toolbox: (Total of 8 licenses issued; Total of 8 licenses in use)<br />
<br />
"Bioinformatics_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
<br />
Users of Distrib_Computing_Toolbox: (Total of 3 licenses issued; Total of 3 licenses in use)<br />
<br />
"Distrib_Computing_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
navari n233 /dev/pts/4 (v27) (lm/27010 2301), start Tue 7/2 10:22<br />
<br />
"Distrib_Computing_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
raobasav n6165 /dev/tty (v27) (lm/27010 102), start Tue 7/2 10:20<br />
<br />
"Distrib_Computing_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
raobasav n6154 /dev/tty (v27) (lm/27010 3101), start Tue 7/2 10:29<br />
<br />
Users of MATLAB_Distrib_Comp_Engine: (Total of 16 licenses issued; Total of 16 licenses in use)<br />
<br />
"MATLAB_Distrib_Comp_Engine" v27, vendor: MLM<br />
floating license<br />
<br />
raobasav n6268 /dev/tty (v27) (lm/27010 1001), start Tue 7/2 10:19<br />
raobasav n6268 /dev/tty (v27) (lm/27010 1101), start Tue 7/2 10:19<br />
raobasav n7166 /dev/tty (v27) (lm/27010 1201), start Tue 7/2 10:19<br />
raobasav n7166 /dev/tty (v27) (lm/27010 1301), start Tue 7/2 10:19<br />
raobasav n7166 /dev/tty (v27) (lm/27010 1401), start Tue 7/2 10:19<br />
raobasav n7256 /dev/tty (v27) (lm/27010 1501), start Tue 7/2 10:19<br />
raobasav n7256 /dev/tty (v27) (lm/27010 1601), start Tue 7/2 10:19<br />
raobasav n7166 /dev/tty (v27) (lm/27010 1701), start Tue 7/2 10:19<br />
<br />
"MATLAB_Distrib_Comp_Engine" v27, vendor: MLM<br />
floating license<br />
<br />
raobasav n7166 /dev/tty (v27) (lm/27010 201), start Tue 7/2 10:19<br />
raobasav n7166 /dev/tty (v27) (lm/27010 301), start Tue 7/2 10:19<br />
raobasav n7166 /dev/tty (v27) (lm/27010 401), start Tue 7/2 10:19<br />
raobasav n7255 /dev/tty (v27) (lm/27010 501), start Tue 7/2 10:19<br />
raobasav n7255 /dev/tty (v27) (lm/27010 601), start Tue 7/2 10:19<br />
raobasav n7255 /dev/tty (v27) (lm/27010 701), start Tue 7/2 10:19<br />
raobasav n7255 /dev/tty (v27) (lm/27010 801), start Tue 7/2 10:19<br />
raobasav n6268 /dev/tty (v27) (lm/27010 901), start Tue 7/2 10:19<br />
<br />
Users of SIMULINK: (Total of 2 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of Control_Toolbox: (Total of 2 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of Simulink_Control_Design: (Total of 2 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of Simulink_Design_Optim: (Total of 2 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of Symbolic_Toolbox: (Total of 1 license issued; Total of 0 licenses in use)<br />
<br />
<br />
/u/local/apps/matlab/7.14/etc/glnxa64/lmstat -c /u/local/licenses/license.matlab -S MLM<br />
<br />
#############<br />
All done on 02/07/13 (European, I mean logical, date notation)!<br />
<br />
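For reference, the per-feature totals in lmstat output like the above can be pulled out with a short awk one-liner. This is a sketch run against a saved sample of the summary lines; the file name <code>lmstat_sample.txt</code> is hypothetical.<br />

```shell
# Sketch: extract per-feature license usage from saved lmstat output.
# The sample below mimics the "Users of ..." summary lines shown above.
cat > lmstat_sample.txt <<'EOF'
Users of Compiler:  (Total of 7 licenses issued;  Total of 4 licenses in use)
Users of Image_Toolbox:  (Total of 8 licenses issued;  Total of 7 licenses in use)
EOF

# Split on runs of spaces/colons/semicolons/parens and print:
# feature, licenses issued, licenses in use.
awk -F'[ :;()]+' '/^Users of/ {print $3, $6, $11}' lmstat_sample.txt
```

This prints one line per feature, e.g. <code>Compiler 7 4</code>.<br />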
This output contains the same information as the previous examples, plus additional detail about each license: whether it is floating or node-locked, and which group reservations are held against it.</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&diff=847Hoffman2:Introduction2013-06-27T07:08:42Z<p>Elau: Made the quota bit more explicit and added links</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
==What is Hoffman2?==<br />
The Hoffman2 Cluster is a campus computing resource at UCLA, named for Paul Hoffman (1947-2003). It is maintained by the Academic Technology Services (ATS) Department at UCLA, which hosts a webpage about it [http://www.ats.ucla.edu/clusters/hoffman2/ here]. With many high-end processors and data storage and backup technologies, it is a useful tool for executing research computations, especially when working with large datasets. More than 1000 users are currently registered and the cluster sees tremendous usage; in February 2012 alone, more than 4 million compute hours were logged. Click [[Hoffman2:Getting an Account|here]] to find out how to join that user group, and see more usage statistics [https://idre.ucla.edu/hoffman2/cluster-statistics here].<br />
<br />
<br />
<br />
==Anatomy of the Computing Cluster==<br />
What does Hoffman2 consist of?<br />
* Login Nodes<br />
* Computing Nodes<br />
* Storage Space<br />
* Sun Grid Engine (a brain of sorts)<br />
<br />
[[File:hoffman2layout.png]]<br />
<br/><br />
''**Image taken from a previous ATS "Using Hoffman2 Cluster" slide deck and modified for our point.**''<br />
<br />
<br />
===Login Nodes===<br />
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster. These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit). It is important to remember that these are four computers being shared by ALL the Hoffman2 users. Doing ANY type of heavy computing on these nodes is frowned upon. If you are:<br />
*moving lots of files<br />
*calculating the inverse solution to an EEG signal, or<br />
*running a bunch of python scripts to extract tractography of a brain<br />
you should NOT be doing this on a login node. If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.<br />
<br />
<br />
===Computing Nodes===<br />
As of November 2012, Hoffman2 is made up of more than 9000 processors across three data centers, and this number continues to grow as the cluster is expanded. The individual cores of these processors are where your programs get executed when you submit a job to the cluster. You can request a single core or many cores for a job.<br />
<br />
There is also a GPU cluster that has more than 300 nodes, but access to this must be requested separately from a normal Hoffman2 account. Look [http://fuji.ats.ucla.edu/for-transfer/hoffman2-cluster/computing/gpuq.htm here] for how to request access.<br />
<br />
The number of computing cores continues to grow because more resource groups (like individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster. Nodes contributed by a resource group are guaranteed to that resource group and can be used to run longer jobs ([http://fuji.ats.ucla.edu/for-transfer/hoffman2-cluster/computing/policies.htm#highp up to 14 days]). As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:<br />
* 6 nodes (installed pre 2010) each with<br />
** 8 cores<br />
** 8GB RAM<br />
* 3 nodes (installed Fall 2012) each with<br />
** 16 cores<br />
** 48GB RAM<br />
Use the command <code>mygroup</code> to see what resources you have available.<br />
<br />
<br />
===Storage Space===<br />
For official and up-to-date information about storage space, [https://idre.ucla.edu/hoffman2/data-storage click here]. If you want a quick overview, see below.<br />
<br />
====Long Term Storage====<br />
ATS maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space. These have built-in redundancies and are fault tolerant. On top of that, ATS performs regular tape backups.<br />
<br />
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and ATS takes great pains to make sure your data is safe.<br />
<br />
=====Home Directories=====<br />
:When you login to Hoffman2, you get dropped into your home directory immediately. Home directory locations follow the pattern<br />
::<code>/u/home/[u]/[username]</code><br />
:Where <code>[u]</code> is the first letter of the username, e.g.<br />
::<code>/u/home/j/jbruin</code><br />
::<code> /u/home/t/ttrojan</code><br />
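:The pattern is easy to construct in a script. A minimal bash sketch, using the hypothetical username <code>jbruin</code>:<br />

```shell
# Build a Hoffman2-style home directory path from a username:
# /u/home/[first letter of username]/[username]
# (uses bash substring expansion, so run under bash rather than plain sh)
username=jbruin
homedir="/u/home/${username:0:1}/${username}"
echo "$homedir"    # prints /u/home/j/jbruin
```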
<br />
:Your home directory is where you can keep your personal files (papers, correspondences, notes, etc.) and files you frequently change (source code, configuration files, job command files). '''It is not the place for your large datasets for computing.''' Data in your home directory is accessible from all login and computing nodes.<br />
<br />
:Every user is allowed to store up to 20GB of data files in their home directory. If you are part of a cluster-contributing group, you can also store data files in that group's common space, described in the next section.<br />
<br />
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]<br />
<br />
=====Group Directory=====<br />
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013). This is common space designed for collaboration and is where your datasets should mainly be stored. Individual users are given directories under the main group directory to help organize data ownership. For example:<br />
::<pre>/u/home/mscohen # Common group directory</pre><br />
::<pre>/u/home/mscohen/data # Common group "data" directory, create subdirectories within this for specific projects or uses</pre><br />
::<pre>/u/home/mscohen/aaronab # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory</pre><br />
::<pre>/u/home/mscohen/kerr # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory</pre><br />
::<pre>/u/home/mscohen/pamelita # mscohen group directory for the user pamelita, different from their /u/home/p/pamelita home directory</pre><br />
:and these directories are accessible from all login and computing nodes.<br />
<br />
:'''These directories have limits to how many files can be put in them and how large those files can be.'''<br />
:*When a group buys in for 1TB/1million files, their quota is considered met when they have EITHER<br />
:** 1TB worth of files, OR<br />
:** 1 million files<br />
:*When a group buys in for 4TB/4million files, their quota is considered met when they have EITHER<br />
:** 4TB worth of files, OR<br />
:** 4 million files<br />
:'''Once a group's quota has been reached, everyone in that group is automatically prevented from creating any more files in the group directory.''' This means any computing jobs you are running may fail due to an inability to write out their results. You may also have trouble starting GUI sessions due to an inability to create temporary files.<br />
:Read about how to monitor your disk quota [[Hoffman2:Quotas|here]].<br />
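:The "EITHER limit" rule can be sketched as a simple check. The numbers below are illustrative only, not read from the real quota system:<br />

```shell
# Illustrative quota rule: a group is over quota when EITHER the storage
# limit OR the file-count limit is reached (all values here are made up).
used_gb=850
used_files=1000000
limit_gb=1024          # a 1TB purchase, expressed in GB
limit_files=1000000    # the matching 1-million-file limit

if [ "$used_gb" -ge "$limit_gb" ] || [ "$used_files" -ge "$limit_files" ]; then
    status="over quota: no new files can be created"
else
    status="under quota"
fi
echo "$status"
```

Here the file count has hit its limit, so the group is over quota even though it is well under 1TB of storage.<br />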
<br />
=====Historical Notes=====<br />
======June 2013======<br />
: ''Before July 2013, for users that were part of groups that purchased storage, their home directories were the same as their personal group directories. e.g.''<br />
::<code> /u/home/j/jbruin</code><br />
:''did not exist, but''<br />
::<code> /u/home/mscohen/jbruin</code><br />
:''did exist and was the home directory (and personal group directory) for the user jbruin. IDRE changed this behavior after the Summer Maintenance restart in 2013 to better separate users from their groups. This separation more cleanly allows users to be part of multiple storage groups (e.g. belonging to sbook and mscohen groups), or switch between single groups over time, while retaining their own personal space on the cluster. A symlink named '''project''' was placed in the new home directories pointing to the old home directories. e.g.''<br />
::<code> /u/home/j/jbruin/project -> /u/home/mscohen/jbruin</code><br />
<br />
======June 2011======<br />
: ''Before July 2011, there was a symlink pointing from /u/home9 to /u/home as a legacy support mechanism. This symlink was finally removed after the Summer Maintenance of 2011 and some adjustments had to be made by anyone still using home9 references.''<br />
<br />
====Temporary Storage====<br />
When running a computing job on Hoffman2, reading and writing many files in your home directory can be slow, so faster temporary storage is available for ongoing jobs. Read the official description [https://idre.ucla.edu/hoffman2/data-storage#tempfs here].<br />
<br />
=====work=====<br />
:'''/work'''<br />
:Each computing node has its own unique "work" directory. This is only accessible by jobs on that specific node. Any data your job may put on it will be removed as soon as your job finishes. There is at least 100GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).<br />
<br />
:Every job is given a unique subdirectory on ''work'' where it can read and write files rapidly. The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] <code>$TMPDIR</code> points to this directory.<br />
<br />
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at completion so it is not deleted.<br />
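:A typical stage-in/stage-out pattern looks like the sketch below. The file names are hypothetical, and <code>$TMPDIR</code> is normally set by the scheduler; the fallback to <code>mktemp</code> is only so the sketch runs standalone.<br />

```shell
# Sketch: stage a frequently-accessed file into the job's fast local
# directory, work on it there, then copy the results home before the job
# ends and the node's work space is cleaned up.
TMPDIR="${TMPDIR:-$(mktemp -d)}"   # fallback for running outside a job

echo "input data" > "$TMPDIR/dataset.txt"                       # stand-in for copying from $HOME
tr 'a-z' 'A-Z' < "$TMPDIR/dataset.txt" > "$TMPDIR/result.txt"   # stand-in computation
cp "$TMPDIR/result.txt" ./result.txt                            # persist results before cleanup
```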
<br />
=====scratch=====<br />
:'''/u/scratch/[u]/[username]'''<br />
:Where ''[username]'' is replaced with your Hoffman2 username and ''[u]'' is replaced with the first letter of your username. Data here is accessible on all login and computing nodes. You can use up to 2TB of space here, but data is not kept here for more than 7 days and can be overwritten sooner if there is a high demand for scratch space. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] <code>$SCRATCH</code> to reliably access your personal scratch directory.<br />
<br />
<br />
===Sun Grid Engine===<br />
The Sun Grid Engine (SGE) is the brains behind how jobs get executed on the cluster. When you request that a script be run on Hoffman2, the SGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those not familiar with British English) based on your requirements. Less demanding jobs generally get front-loaded, while more demanding ones must wait for adequate resources to free up. The SGE tries to schedule jobs on computing nodes in order to make the most efficient use of the resources available.<br />
<br />
====Queues====<br />
There is more than one queue on Hoffman2. Each is for a slightly different purpose:<br />
; express<br />
: For jobs requesting at most 2 hours of computing time.<br />
; interactive<br />
: For jobs requesting at most 24 hours of computing time and requiring the ability for users to interact with the program running.<br />
; highp<br />
: For jobs requesting at most 14 days of computing time. These are required to run on nodes owned by your group.<br />
And there are others. Read about them [http://fuji.ats.ucla.edu/for-transfer/hoffman2-cluster/computing/policies.htm here].<br />
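A minimal job command file targeting the express queue might look like the sketch below. The flag names follow common SGE usage; the script contents and file name are hypothetical, and site-specific options (such as <code>-l highp</code> for group-owned nodes) are covered in the pages linked above.<br />

```shell
# Sketch: write out a minimal SGE job command file. Requesting
# h_rt=2:00:00 keeps the job within the express queue's 2-hour limit;
# h_data requests memory per core.
cat > myjob.cmd <<'EOF'
#!/bin/bash
#$ -cwd
#$ -l h_rt=2:00:00,h_data=1G
#$ -j y
#$ -o myjob.joblog
echo "running on $(hostname)"
EOF

# Submit it with: qsub myjob.cmd
```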
<br />
<br />
[[Hoffman2:Submitting Jobs|Find out how to submit computing jobs to the Hoffman2 Cluster.]]<br />
<br />
<br />
<br />
==External Links==<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/ Hoffman2 Webpage]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/h2stat/statistics.htm Hoffman2 Statistics]<br />
*[http://www.ats.ucla.edu/clusters/hosting/ Hoffman2 Cluster Hosting]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm Hoffman2 Queues]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/hardware/default.htm Hoffman2 Hardware]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/gpuq.htm Hoffman2 GPU Cluster]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/data_storage/default.htm Hoffman2 Data Storage]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/howtoscratch.htm Temporary File Storage for Fast I/O]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Computer_Resources&diff=403Computer Resources2013-06-24T20:59:20Z<p>Elau: </p>
<hr />
<div>Lab computing work is done through these primary channels:<br />
* [[Computer_Resources#Group Resource Lab Computers | Group resource lab computers]]<br />
* [[Computer_Resources#Hoffman2 | The Hoffman2 campus computing cluster]]<br />
* [[Computer_Resources#Personal Computers | Personal computers]]<br />
<br />
Let's cover each in more detail and explain how you can gain access to them.<br />
<br />
<br />
<br />
==Group Resource Lab Computers==<br />
These are computers available for reservation by anyone in the lab, though they are generally set aside for SRPs.<br />
<br />
===Access===<br />
In order to log into these computers, you need a lab account. Request one by sending an email to [mailto:support@ccn.ucla.edu support@ccn.ucla.edu] with the following information:<br />
:'''Subject'''<br />
::<code>Lab Account Request</code><br />
:'''Body'''<br />
::<code>Name: ''first and last''</code><br />
::<code>Email: ''your preferred email for correspondence''</code><br />
::<code>Phone: ''phone number you can be reached at''</code><br />
::<code>AIM: ''chat accounts you can be contacted through''</code><br />
::<code>Project: ''which project you will be working on, or which lab member will be guiding your work''</code><br />
::<code>Year: ''what year you are currently in school''</code><br />
::<code>Google Email: ''this is needed to add you to the Google Calendars for reserving computer time''</code><br />
<br />
When your account is created, you will be sent a confirmation email with instructions on how to log in.<br />
<br />
===Reservations===<br />
Computer seats are limited, so usage is regulated by a reservation system. When a member joins the lab, they will be added to a set of Google Calendars through which they can reserve time on a lab computer. Remember, a reservation is no good if you don't have a lab account to log in with.<br />
<br />
<br />
<br />
==Hoffman2==<br />
Read all about this fancy and powerful UCLA campus computing cluster [[Hoffman2 | here]]. Only the first section is required reading for new lab members; the rest is intended to be useful reference material.<br />
<br />
In the [[Hoffman2 | Hoffman2 section]], you will find details about how to get your own account on the cluster and what you need to do to login and use the tools on it.<br />
<br />
'''Please remember that Hoffman2 is a resource maintained by [http://www.ats.ucla.edu/ UCLA ATS].''' If you are experiencing trouble with the system:<br />
# first refer to their [http://www.ats.ucla.edu/clusters/hoffman2/faq.htm FAQ]<br />
# and if your problem or question persists, email their support team at [mailto:atshpc@ucla.edu atshpc@ucla.edu]<br />
<br />
The CCN Admins do maintain the FMRI software tools on Hoffman2, so if your problem is related to those tools you can email [mailto:support@ccn.ucla.edu support@ccn.ucla.edu] with questions.<br />
<br />
<br />
<br />
==Personal Computers==<br />
Nothing is preventing you from working on a personal computer for lab work. There are various software packages you may need to install to make your computer useful, such as FSL, MATLAB, Python, FreeSurfer, OsiriX, Diffusion Toolkit and TrackVis.<br />
<br />
The sysadmins can provide limited support for personal computers. It never hurts to ask, but personal computers are primarily the responsibility of their owners.<br />
<br />
'''It is also important to recognize requirements regarding patient data. You should never store patient data on a personal computer or an unencrypted drive as this breaks HIPAA regulations for security.'''</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&diff=845Hoffman2:Introduction2013-06-22T08:42:28Z<p>Elau: Adding in a section about Group Directories, historical notes about home directories, and fixing information about Scratch directory locations.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
==What is Hoffman2?==<br />
The Hoffman2 Cluster is a campus computing resource at UCLA, named for Paul Hoffman (1947-2003). It is maintained by UCLA's Academic Technology Services (ATS) department, which hosts a webpage about it [http://www.ats.ucla.edu/clusters/hoffman2/ here]. With many high-end processors and data storage and backup technologies, it is a useful tool for executing research computations, especially when working with large datasets. More than 1000 users are currently registered and the cluster sees tremendous usage. Click [[Hoffman2:Getting an Account|here]] to find out how to join that user group. In February 2012 alone, more than 4 million compute hours were logged. See more usage statistics [https://idre.ucla.edu/hoffman2/cluster-statistics here].<br />
<br />
<br />
<br />
==Anatomy of the Computing Cluster==<br />
What does Hoffman2 consist of?<br />
* Login Nodes<br />
* Computing Nodes<br />
* Storage Space<br />
* Sun Grid Engine (a brain of sorts)<br />
<br />
[[File:hoffman2layout.png]]<br />
<br/><br />
''**Image taken from a previous ATS "Using Hoffman2 Cluster" slide deck and modified for our point.**''<br />
<br />
<br />
===Login Nodes===<br />
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster. These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit). It is important to remember that these are four computers being shared by ALL the Hoffman2 users. Doing ANY type of heavy computing on these nodes is frowned upon. If you are:<br />
*moving lots of files<br />
*calculating the inverse solution to an EEG signal, or<br />
*running a bunch of python scripts to extract tractography of a brain<br />
you should NOT be doing this on a login node. If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.<br />
<br />
<br />
===Computing Nodes===<br />
As of November 2012, Hoffman2 is made up of more than 9000 processors across three data centers, and this number continues to grow as the cluster is expanded. The individual cores of the processors are where your programs get executed when you submit a job to the cluster. There are ways to request that you be given only one core to use, or many.<br />
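For example (flag syntax assumed from standard SGE usage on Hoffman2 — verify against the official docs), a parallel-environment request is how you ask for multiple cores:

```
# One core (the default): just submit the script
qsub myjob.sh

# Eight cores on a single node via a shared-memory parallel environment
qsub -pe shared 8 myjob.sh
```

Only request multiple cores if your program can actually use them; a single-threaded script on eight cores just wastes seven.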
<br />
There is also a GPU cluster that has more than 300 nodes, but access to this must be requested separately from a normal Hoffman2 account. Look [http://fuji.ats.ucla.edu/for-transfer/hoffman2-cluster/computing/gpuq.htm here] for how to request access.<br />
<br />
The reason the number of computing cores continues to grow is because more resource groups (like individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster. Nodes contributed by a resource group are guaranteed to that resource group and can be used to run longer jobs ([http://fuji.ats.ucla.edu/for-transfer/hoffman2-cluster/computing/policies.htm#highp up to 14 days]). As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:<br />
* 6 nodes (installed pre 2010) each with<br />
** 8 cores<br />
** 8GB RAM<br />
* 3 nodes (installed Fall 2012) each with<br />
** 16 cores<br />
** 48GB RAM<br />
Use the command <code>mygroup</code> to see what resources you have available.<br />
<br />
<br />
===Storage Space===<br />
For official and up-to-date information about storage space, [https://idre.ucla.edu/hoffman2/data-storage click here]. If you want a quick overview, see below.<br />
<br />
====Long Term Storage====<br />
ATS maintains high-end storage systems (BlueArc and Panasas) for Hoffman2 disk space. These have built-in redundancies and are fault tolerant. On top of that, ATS does tape backups regularly.<br />
<br />
If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and ATS takes great pains to make sure your data is safe.<br />
<br />
=====Home Directories=====<br />
:When you login to Hoffman2, you get dropped into your home directory immediately. Home directory locations follow the pattern<br />
::<code>/u/home/[u]/[username]</code><br />
:Where <code>[u]</code> is the first letter of the username, e.g.<br />
::<code>/u/home/j/jbruin</code><br />
::<code> /u/home/t/ttrojan</code><br />
<br />
:Your home directory is where you can keep your personal files, data and scripts you work with. Data in your home directory is accessible on all login and computing nodes.<br />
<br />
:Every user is allowed to store up to 20GB of data files in their home directory. If you are part of a cluster contributing group, you can also store data files in that group's common space<br />
::<code> /u/home/[GROUPNAME]</code><br />
:so long as that group is within its quota limits for file count and size.<br />
<br />
:[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]<br />
<br />
=====Group Directory=====<br />
:Group directories are given to groups that purchase extended storage space (in 1TB/1million file increments for three year periods, as of Summer 2013). This is common space designed for collaboration. Individual users are given directories under the main group directory to help organize data ownership. For example:<br />
::<pre>/u/home/mscohen # Common group directory</pre><br />
::<pre>/u/home/mscohen/data # Common group "data" directory, create subdirectories within this for specific projects or uses</pre><br />
::<pre>/u/home/mscohen/aaronab # mscohen group directory for the user aaronab, different from their /u/home/a/aaronab home directory</pre><br />
::<pre>/u/home/mscohen/kerr # mscohen group directory for the user kerr, different from their /u/home/k/kerr home directory</pre><br />
::<pre>/u/home/mscohen/pamelita # mscohen group directory for the user pamelita, different from their /u/home/p/pamelita home directory</pre><br />
<br />
=====Historical Note=====<br />
: ''Before July 2013, for users that were part of groups that purchased storage, their home directories were the same as their personal group directories. e.g.''<br />
::<code> /u/home/j/jbruin</code><br />
:''did not exist, but''<br />
::<code> /u/home/mscohen/jbruin</code><br />
:''did exist and was the home directory (and personal group directory) for the user jbruin. IDRE changed this behavior after the Summer Maintenance restart in 2013 to better separate users from their groups. This separation more cleanly allows users to be part of multiple storage groups (e.g. belonging to sbook and mscohen groups), or switch between single groups over time, while retaining their own personal space on the cluster.''<br />
<br />
====Temporary Storage====<br />
When running a computing job on Hoffman2, reading and writing a bunch of files in your home directory can be slow. So faster temporary storage is available to use for ongoing jobs. Read the official description [https://idre.ucla.edu/hoffman2/data-storage#tempfs here].<br />
<br />
=====work=====<br />
:'''/work'''<br />
:Each computing node has its own unique "work" directory. This is only accessible by jobs on that specific node. Any data your job may put on it will be removed as soon as your job finishes. There is at least 100GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).<br />
<br />
:Every job is given a unique subdirectory on ''work'' where it can read and write files rapidly. The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] <code>$TMPDIR</code> points to this directory.<br />
<br />
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at completion so it is not deleted.<br />
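A hypothetical job-script fragment using this pattern (file names are placeholders):

```
# Stage a frequently-read input file into the node-local work directory,
# do the heavy I/O there, then copy results home before the job exits
# (everything left in $TMPDIR is deleted when the job finishes).
cp "$HOME/data/input.nii.gz" "$TMPDIR/"
cd "$TMPDIR"
# ... run your analysis here, reading and writing within $TMPDIR ...
cp output.nii.gz "$HOME/data/"
```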
<br />
=====scratch=====<br />
:'''/u/scratch/[u]/[username]'''<br />
:Where ''[username]'' is replaced with your Hoffman2 username and ''[u]'' is replaced with the first letter of your username. Data here is accessible on all login and computing nodes. You can use up to 2TB of space here, but data is not kept here for more than 7 days and can be overwritten sooner if there is a high demand for scratch space. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] <code>$SCRATCH</code> to reliably access your personal scratch directory.<br />
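A hypothetical use of <code>$SCRATCH</code> for intermediate files too large for your home quota (directory and file names below are placeholders):

```
# Unpack a large dataset into scratch rather than your home directory
mkdir -p "$SCRATCH/myproject"
tar -xf "$HOME/bigdata.tar" -C "$SCRATCH/myproject"
# ... process the data ...
# Scratch is purged after 7 days, so copy anything you need back home
cp "$SCRATCH/myproject/results.csv" "$HOME/"
```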
<br />
<br />
===Sun Grid Engine===<br />
The Sun Grid Engine (SGE) is the brains behind how jobs get executed on the cluster. When you request that a script be run on Hoffman2, the SGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc.) and puts your job in a queue (a waiting line, for those unfamiliar with the British term) based on those requirements. Less demanding jobs generally start sooner, while more demanding ones must wait for adequate resources to free up. The SGE schedules jobs across computing nodes to make the most efficient use of the resources available.<br />
<br />
====Queues====<br />
There is more than one queue on Hoffman2. Each is for a slightly different purpose:<br />
; express<br />
: For jobs requesting at most 2 hours of computing time.<br />
; interactive<br />
: For jobs requesting at most 24 hours of computing time and requiring the ability for users to interact with the program running.<br />
; highp<br />
: For jobs requesting at most 14 days of computing time. These are required to run on nodes owned by your group.<br />
And there are others. Read about them [http://fuji.ats.ucla.edu/for-transfer/hoffman2-cluster/computing/policies.htm here].<br />
<br />
<br />
[[Hoffman2:Submitting Jobs|Find out how to submit computing jobs to the Hoffman2 Cluster.]]<br />
<br />
<br />
<br />
==External Links==<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/ Hoffman2 Webpage]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/h2stat/statistics.htm Hoffman2 Statistics]<br />
*[http://www.ats.ucla.edu/clusters/hosting/ Hoffman2 Cluster Hosting]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm Hoffman2 Queues]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/hardware/default.htm Hoffman2 Hardware]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/gpuq.htm Hoffman2 GPU Cluster]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/data_storage/default.htm Hoffman2 Data Storage]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/howtoscratch.htm Temporary File Storage for Fast I/O]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2&diff=2332Hoffman22013-06-22T07:49:49Z<p>Elau: Spacing</p>
<hr />
<div>A compilation of lab know-how regarding the Hoffman2 Computer Cluster.<br />
<br />
Anyone new to the lab and using Hoffman2 '''NEEDS''' to read the first section to have adequate basic working knowledge.<br />
<br />
<br />
<br />
==Getting Started==<br />
<br />
====Introduction====<br />
Hoffman2 is a Computing Cluster at UCLA, find out how it generally works so you know how to use it.<br />
:[[Hoffman2:Introduction]]<br />
<br />
====Getting an Account====<br />
You know what it is, now you want to use it. First you need an account.<br />
:[[Hoffman2:Getting an Account]]<br />
<br />
====Accessing the Cluster====<br />
Now how do you use that account to access the cluster?<br />
:[[Hoffman2:Accessing the Cluster]]<br />
<br />
====Working in a UNIX Environment====<br />
Never heard of a command line before today? Vaguely know what "permissions" are and have no idea how to navigate a filesystem? This page is meant to take the scary out of the words "command line" so you can actually use Hoffman2, because no matter how many GUIs there are, you will still need to use the command line sometimes.<br />
:[[Hoffman2:UNIX Tutorial]]<br />
<br />
====Quotas====<br />
Resources are not infinite, and disk space is a resource. Find out how to manage your disk space usage to stay under quota.<br />
:[[Hoffman2:Quotas]]<br />
<br />
====Profile====<br />
You have an account, know how to get there, and now you need to take one last step for your account to be fully usable.<br />
:[[Hoffman2:Profile]]<br />
<br />
<br />
==Computing==<br />
You can find your way through Hoffman2, now it is time to start making things happen.<br />
<br />
====Software Tools====<br />
You've got your account, you are logged on, now how do you get to using a real software tool?<br />
:[[Hoffman2:Software Tools]]<br />
<br />
====Submitting Jobs====<br />
Now you have the tools, but how does one ask Hoffman2 to run them for you as a job? Since you aren't supposed to be running them on a login node...<br />
:[[Hoffman2:Submitting Jobs]]<br />
<br />
====Monitoring Jobs====<br />
Right after they zap their monster to life, every mad scientist wishes they had the tools to check on or stop their creation. Now that you can submit jobs, you need to be able to check on them and stop them if they start terrorizing downtown Tokyo.<br />
:[[Hoffman2:Monitoring Jobs]] <br />
<br />
====Interactive Sessions====<br />
Some software tools need you to interact with them while they work. Other times you just need to be able to run your script over and over while you work to eradicate all of its bugs. Enter ''Interactive Sessions''.<br />
:[[Hoffman2:Interactive Sessions]]<br />
<br />
<br />
==Software==<br />
<br />
====MATLAB====<br />
How to use MATLAB on the cluster. It is easier than you think.<br />
:[[Hoffman2:MATLAB]]<br />
<br />
:'''Compiling MATLAB'''<br />
:So you have a MATLAB script, but you don't need to GUI open all night to have it process your data. How to submit MATLAB jobs to Hoffman2.<br />
::[[Hoffman2:Compiling MATLAB]]<br />
<br />
:'''EEGLAB'''<br />
:We try to maintain the three most recent versions of EEGLAB for your convenience. Make sure to add it to your MATLAB path.<br />
::[[Hoffman2:MATLAB:EEGLAB]]<br />
<br />
====R====<br />
You are probably a statistician, or you just prefer open source software. Here's how to run R on Hoffman2.<br />
:[[Hoffman2:R]]<br />
<br />
====WEKA====<br />
If machine learning is your thing, maybe you've heard of WEKA. If not, maybe it will be your new best friend.<br />
:[[Hoffman2:WEKA]]<br />
<br />
====LONI Pipeline====<br />
A Workflow application to make things easier.<br />
:[[Hoffman2:LONI]]<br />
<br />
====FSL====<br />
FSL is a comprehensive library of analysis tools for FMRI, MRI and DTI brain imaging data.<br />
:[[Hoffman2:FSL]]<br />
<br />
<br />
==Productivity==<br />
How about streamlining some of those tasks, or getting more things done.<br />
<br />
====Scripts====<br />
All of the difficulties you are experiencing now have probably been experienced before by someone else. And for that reason we already have scripts to simplify your life.<br />
:[[Hoffman2:Scripts]]<br />
<br />
====Data Transfer====<br />
All dressed up with nowhere to go? That's how Hoffman2 feels if you don't give it any data to work with. Find out how to avoid hurting the Cluster's feelings.<br />
:[[Hoffman2:Data Transfer]]<br />
<br />
====Sharing Filesystems====<br />
All you want to do is be able to look at your precious data. But it is locked up on Hoffman2 and you want to use tools on your computer to look at it. There's an app for that.<br />
:[[Hoffman2:Sharing Filesystems]]<br />
<br />
<br />
<br />
==FAQ==<br />
Wesley's Usage, so you can plan around it and ask him to stop beating the cluster up.<br />
:[[Hoffman2:WTK Usage]]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Software_Tools&diff=944Hoffman2:Software Tools2013-06-22T07:48:16Z<p>Elau: Updating with new FSL and FreeSurfer versions</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
There is an FMRI usergroup on Hoffman2 which is maintained for groups doing Neuroimaging work at UCLA. Tools like FSL, FreeSurfer, AFNI and Nibabel are maintained for this group separate from normal Hoffman2 programs. In order to take advantage of these tools, you need to set up your bash profile [[Hoffman2:Profile|properly]].<br />
<br />
Below is a list of the available software tools. We will do our best to update it in real-time.<br />
<br />
<br />
==List of Tools==<br />
Under Construction...<br />
===FSL===<br />
[http://fsl.fmrib.ox.ac.uk/fsl/fslwiki/ Official Website]<br />
*v5.0.4<br />
**Install Date: 2013.06.18<br />
*v5.0.2<br />
**Install Date: 2013.02.19<br />
*v5.0.1<br />
**Install Date: 2012.10.01<br />
*v5.0.0<br />
**Install Date: 2012.09.14<br />
*v4.1.9<br />
**Install Date: 2011.12.01<br />
*v4.1.8<br />
**Install Date: circa 2011.06<br />
*v4.1.7<br />
**Install Date: circa 2011.11<br />
*v4.1.4<br />
**Install Date: circa 2009<br />
*v4.1.3<br />
**Install Date: circa 2009<br />
*v4.1.1<br />
**Install Date: circa 2008<br />
*v4.1.0<br />
**Install Date: circa 2008<br />
*v4.0.4<br />
**Install Date: circa 2008<br />
<br />
===FreeSurfer===<br />
[http://surfer.nmr.mgh.harvard.edu/ Official Website]<br />
*v5.3.0<br />
**Install Date: 2013.06.18<br />
*v5.2.0<br />
**Install Date: 2013.03.27<br />
*v5.1.0<br />
**Install Date: 2011.11.14<br />
*v5.0.0<br />
**Install Date: circa 2010<br />
*v4.4.0<br />
**Install Date: circa 2009<br />
*v4.0.5<br />
**Install Date: circa 2008<br />
<br />
===AFNI===<br />
[http://afni.nimh.nih.gov/afni/ Official Website]<br />
*vAFNI_2011_12_21_1014<br />
**Install Date: 2012.03<br />
<br />
===Chronux===<br />
[http://www.chronux.org Official Website]<br />
*v2.10<br />
**Install Date: 2013.02.26<br />
<br />
===EEGLAB===<br />
[http://sccn.ucsd.edu/eeglab/ Official Website]<br />
*v12.0.0.0b<br />
**Install Date: 2012.12.10<br />
*v11.0.0.0b<br />
**Install Date: 2012.02.21<br />
*v10.2.5.8b<br />
**Install Date: 2012.02.21<br />
<br />
===GCC===<br />
===LAPACK===<br />
===BLAS===<br />
===GLIB===<br />
===C++===<br />
===CMake===<br />
===CPACK===<br />
===MPI Kmeans===<br />
See [http://mloss.org/software/view/48/ this page] for how to cite the MPI Kmeans tool.<br />
<br />
===Python2.7===<br />
====Packages====<br />
=====CVXOPT=====<br />
=====Cython=====<br />
=====Gnuplot=====<br />
=====IPython=====<br />
=====matplotlib=====<br />
=====nibabel=====<br />
=====nifti=====<br />
=====nimfa=====<br />
:Non-negative Matrix Factorization<br />
:<br />
:[http://nimfa.biolab.si/ http://nimfa.biolab.si/]<br />
=====nipype=====<br />
=====nose=====<br />
=====numpy=====<br />
=====(p)lsa=====<br />
:(probabilistic) Latent Semantic Analysis. Failed its tests.py though.<br />
<br />
:[http://www.mblondel.org/journal/2010/06/13/lsa-and-plsa-in-python/ http://www.mblondel.org/journal/2010/06/13/lsa-and-plsa-in-python/]<br />
=====pydicom=====<br />
=====pygments=====<br />
=====PyMF=====<br />
:Python Matrix Factorization Module. Failed its tests though.<br />
<br />
:[http://pymf.googlecode.com http://pymf.googlecode.com]<br />
=====pypr=====<br />
=====PyQt4=====<br />
=====pytz=====<br />
=====pywt=====<br />
=====pyximport=====<br />
=====scikits=====<br />
=====scipy=====<br />
=====sklearn=====<br />
=====sparsesvd=====<br />
:Singular Value Decomposition. Passed both tests.<br />
<br />
:[http://pypi.python.org/pypi/sparsesvd http://pypi.python.org/pypi/sparsesvd]<br />
=====sphinx=====<br />
=====sympy=====<br />
=====traits=====<br />
=====virtualenv=====<br />
=====xcbgen=====</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Profile&diff=883Hoffman2:Profile2013-06-22T07:36:10Z<p>Elau: Directory restructure requires rewording fix_perms.sh execution.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
In UNIX systems, there are certain configuration files that get executed every time you login. If you are using the Bash shell (default), you have a file called <code>.bash_profile</code> which is processed when you log in. In order to make the FMRI toolset available to you on Hoffman2 and so you can work well with others, we recommend that you follow the instructions in the [[Hoffman2:Profile#Basics|Basics section]]. Read [[Hoffman2:Profile#Extras|Extras]] for some bells and whistles.<br />
<br />
<br />
==Basics==<br />
Your account has one last thing that needs to be edited before being usable.<br />
<br />
# [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]]<br />
# Use your favorite [[Text Editors|text editor]] to edit the file <code>~/.bash_profile</code><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <pre>$ vim ~/.bash_profile</pre><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]<br />
#:* <pre>$ emacs ~/.bash_profile</pre><br />
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]<br />
#:* <pre>$ nedit ~/.bash_profile</pre><br />
# Insert these lines at the '''bottom''' of the file<br />
#:* <pre>source /u/home/FMRI/apps/etc/profile&#10;umask 007</pre><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* Type <code>G</code> - capital G - to go to the end of the file<br />
#:* Type <code>A</code> - capital A - to go to the end of the line and enter insert mode<br />
#:* Type <code>ENTER</code> - to insert a newline<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines.<br />
# Save the file<br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <code>ESC + ":wq" + ENTER</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]<br />
#:* <code>CTRL+x, CTRL+c</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]<br />
#:* <code>CTRL+x, CTRL+c, y</code><br />
#:* or use the menu system<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the menu.<br />
# Log out of Hoffman2 and the next time you log in, everything will be set for you to start working.<br />
<br />
<br />
===Curious?===<br />
For those that care, what you are doing is asking the computer to execute the file<br />
/u/home/FMRI/apps/etc/profile<br />
every time you login. This file modifies your PATH variable so you have access to the FMRI toolset.<br />
<br />
The last line<br />
umask 007<br />
makes it so that files and directories you create do not allow anyone outside your group to read, write, or execute them. Note that umask can only take permissions away from the defaults; it does not automatically grant read, write, and execute privileges to you and your group.<br />
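As a quick sketch of what <code>umask 007</code> does on a typical Linux system (the file names below are throwaway examples):

```shell
# umask 007 masks the "other" permission bits off newly created files:
# default file mode 666 & ~007 = 660 (rw-rw----),
# default directory mode 777 & ~007 = 770 (rwxrwx---).
umask 007
touch demo_file
mkdir demo_dir
stat -c '%a' demo_file   # prints 660
stat -c '%a' demo_dir    # prints 770
rm -r demo_file demo_dir
```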
<br />
<br />
<br />
==Extras==<br />
===Collaboration===<br />
By default, any files and directories you create will not necessarily have permissions that allow your group to write on them. This can be a problem if other people are supposed to build on data you processed. We have a script ([[Hoffman2:Scripts:fix_perms.sh |fix_perms.sh]]) that will kindly find any files you own in a specified directory that don't have read/write/execute permissions for the group and make it so they do.<br />
<br />
You can build this script into your bash profile so that every time you log into Hoffman2, it will run in the background. It is also recommended that you run this script at the end of jobs to make results immediately available to collaborators.<br />
<br />
Adding the line<br />
fix_perms.sh -q /u/home/[GROUP]/data &<br />
to the end of your bash profile will run the permission fixer on your group's common data directory in the background quietly each time you log in. '''Make sure to replace [GROUP] with the name of your Hoffman2 group (e.g. mscohen, sbook, cbearden, laltshul, jfeusner or mgreen).'''<br />
<br />
<br />
===Colors===<br />
You can change the content and color of your command prompt by editing your bash_profile. There is a great explanation of how to do this [http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html here].<br />
<br />
Some of the content you can include in the command prompt:<br />
;Current time<br />
: You can format this however you want. This helps when looking back through your Terminal to find when you made certain changes to files.<br />
;Current working directory<br />
: So you always know where you are in a filesystem and don't need to constantly retype <code>pwd</code>.<br />
;Username<br />
: Who you are. Helpful if you are logged into multiple servers under multiple accounts and need help keeping track.<br />
;Host<br />
: The name of the computer you are logged into. This also helps you know where you are at all times.<br />
<br />
Line to add to your bash profile<br />
export PS1="\[\e[0;31m\]\h\[\e[1;37m\]:\[\e[1;34m\]\w\n\[\e[1;37m\]\D{%Y-%m-%d-%H-%M-%S} \[\e[22;32m\]\u \$ "<br />
Resulting prompt (on a black background)<br/><br />
<code style="background:#000000; padding:5pt"><span style="color:#FF0000">HOST</span><span style="color:#000000">:</span><span style="color:#0000FF">CURRENT WORKING DIRECTORY</span><br/><br />
<span style="color:#FFFFFF"> DATETIME IN ISO8601 FORMAT</span> <span style="color:#00FF00">USERNAME $</span></code><br />
<br />
<br />
<br />
==Example Bash Profile==<br />
<nowiki>#.bash_profile<br />
<br />
# Get the aliases and functions<br />
if [ -f ~/.bashrc ]; then<br />
. ~/.bashrc<br />
fi<br />
<br />
# Source to use FMRI Apps<br />
source /u/home/FMRI/apps/etc/profile<br />
<br />
# Umask (Revoke Permissions)<br />
umask 007<br />
<br />
# Collaborative permissions<br />
fix_perms.sh -q /u/home/sbook/data/collabDirectory &<br />
<br />
# Happy Colors<br />
export PS1="\[\e[0;31m\]\h\[\e[1;37m\]:\[\e[1;34m\]\w\n\[\e[1;37m\]\D{%Y-%m-%d-%H-%M-%S} \[\e[22;32m\]\u \$ "<br />
</nowiki><br />
<br />
<br />
<br />
==External Links==<br />
*[http://ss64.com/bash/period.html Explanation of source]<br />
*[http://linux.die.net/man/2/umask Man for umask]<br />
*[http://www.cyberciti.biz/tips/understanding-linux-unix-umask-value-usage.html Better explanation of umask]<br />
*[http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html Coloration]<br />
*[http://en.wikipedia.org/wiki/ISO_8601 ISO 8601 Datetime format]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Quotas&diff=893Hoffman2:Quotas2013-06-22T07:34:23Z<p>Elau: Updating the groupquota command to myquota -g after directory restructuring</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
Users and groups of users on Hoffman2 only have access to a predefined amount of disk space and number of files.<br />
<br />
:'''After quota is reached, your account (or all the accounts of a usergroup) will have reduced capabilities since you won't be able to make any new files.'''<br />
<br />
==Stay In The Know==<br />
Keep yourself apprised of how much data you are using with these tools.<br />
<br />
===Personal Quotas===<br />
myquota<br />
:Returns information about how much disk space you are using and how many files you have. An example output is shown below.<br />
$ myquota<br />
User quotas for eplau (UID 8693) (in GBs):<br />
Filesystem Usage (in GB) Quota File Count File Quota<br />
/home/mscohen 141 5120 58575 8000000<br />
Filesystem /home/mscohen usage: 3857 of 5120 GBs (75.3%) and 7024000 of 8000000 files (87.8%)<br />
/home/sbook 1 2048 4 5000000<br />
Filesystem /home/sbook usage: 5901 of 6144 GBs (96.1%) and 5881211 of 6000000 files (98.0%)<br />
<br />
===Group Quotas===<br />
myquota -g [GROUPNAME]<br />
:Returns information about how much space you and everyone in your resource group are using on Hoffman2. An example output is shown below.<br />
$ myquota -g mscohen<br />
Group mscohen Report (/home/mscohen):<br />
Username UID Usage (in GB) Quota File Count File Quota<br />
aarontre 9307 164 6144 81477 8000000<br />
aburggre 8223 0 6144 10 8000000<br />
akshaan 10094 0 6144 120 8000000<br />
alenarto 8800 254 6144 97615 8000000<br />
alhead 9612 100 6144 1001 8000000<br />
ariana 8186 293 6144 420801 8000000<br />
ayc 8955 160 6144 1180551 8000000<br />
cdrodrig 9545 3 6144 52 8000000<br />
dcmoyer 9397 921 6144 173754 8000000<br />
diannaha 8134 0 6144 171 8000000<br />
eddieyan 10322 0 6144 16 8000000<br />
eplau 8693 523 6144 123964 8000000<br />
eshwang1 9811 0 6144 54 8000000<br />
fbiessma 10212 0 6144 67 8000000<br />
fmri 1901 0 6144 479 8000000<br />
jbramen 8369 632 6144 462428 8000000<br />
jbrown 8187 0 6144 148 8000000<br />
jianwen 9921 0 6144 19 8000000<br />
kaavyara 9295 0 6144 1064 8000000<br />
kerr 8555 487 6144 34095 8000000<br />
kesslers 9815 25 6144 60996 8000000<br />
mhussien 9922 0 6144 10 8000000<br />
mlschaef 9447 11 6144 15783 8000000<br />
mowyong 10329 0 6144 201 8000000<br />
mscohen 4004 18 6144 8458 8000000<br />
mundaeru 9542 0 6144 10 8000000<br />
mwollner 9696 35 6144 2721 8000000<br />
pamelita 8557 1280 6144 946730 8000000<br />
root 0 0 6144 1 8000000<br />
sanjayra 9846 0 6144 10 8000000<br />
xiahongj 9047 666 6144 289695 8000000<br />
Filesystem mscohen usage: 5656 of 6144 GBs (92.1%) and 3921309 of 8000000 files (49.0%)<br />
<br />
<br />
<br />
==Clean Up After Yourself==<br />
Every so often, some spring cleaning is useful. We have an app for that.<br />
===clean_me.py===<br />
Available to everyone in the FMRI usergroup on Hoffman2, this script was designed to find certain types of pesky files that have been known to build up over time but aren't actually necessary:<br />
* ''.DS_store'' files<br />
* Empty directories<br />
* tsplots<br />
* Empty files<br />
* Extended Attributes<br />
* mat files<br />
and it gives you the option of deleting them.<br />
<br />
To run it, just change into the directory you wish to clean and use the command:<br />
$ clean_me.py<br />
<br />
<br />
<br />
==When it Hits the Fan==<br />
You tried to remember to clean up your files, you even kept a large dataset on your computer last weekend instead of working with it on Hoffman2. But still your quota, or your group's quota was reached. How does one fix this?<br />
<br />
# First things first, run <code>clean_me.py</code><br />
# Start identifying what you can delete. This is a wonderful opportunity to audit your home directory to see what really is in the directory called ''temp-567'' and what you actually put in the file ''subjects-az-list''.<br />
# [[Tar Tutorial|Tar]] things up. That is to say, take that huge collection of DICOM files and turn them into a single file to cut down on overhead (remember it is about both file count and file size on Hoffman2). Unfamiliar with what tar is? Check out the [[Tar Tutorial]].</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Introduction&diff=844Hoffman2:Introduction2013-06-22T07:29:07Z<p>Elau: Edits to reflect new state after home directory locations were changed.</p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
==What is Hoffman2?==<br />
The Hoffman2 Cluster is a campus computing resource at UCLA and is named for Paul Hoffman (1947-2003). It is maintained by the Academic Technology Services Department at UCLA, which hosts a webpage about it [http://www.ats.ucla.edu/clusters/hoffman2/ here]. With many high-end processors and data storage and backup technologies, it is a useful tool for executing research computations, especially when working with large datasets. More than 1000 users are currently registered and the cluster sees tremendous usage. Click [[Hoffman2:Getting an Account|here]] to find out how to join that user group. In February 2012 alone, more than 4 million compute hours were logged. See more usage statistics [https://idre.ucla.edu/hoffman2/cluster-statistics here].<br />
<br />
<br />
<br />
==Anatomy of the Computing Cluster==<br />
What does Hoffman2 consist of?<br />
* Login Nodes<br />
* Computing Nodes<br />
* Storage Space<br />
* Sun Grid Engine (a brain of sorts)<br />
<br />
[[File:hoffman2layout.png]]<br />
<br/><br />
''**Image taken from a previous ATS "Using Hoffman2 Cluster" slide deck and modified for our point.**''<br />
<br />
<br />
===Login Nodes===<br />
There are four login nodes which allow you to access and interact with the Hoffman2 Cluster. These are essentially four dedicated computers that you can [[Hoffman2:Accessing the Cluster#SSH|SSH]] into and use to look at and edit your files or submit computing jobs to the queue (more on what the queue is in a bit). It is important to remember that these are four computers being shared by ALL the Hoffman2 users. Doing ANY type of heavy computing on these nodes is frowned upon. If you are:<br />
*moving lots of files<br />
*calculating the inverse solution to an EEG signal, or<br />
*running a bunch of python scripts to extract tractography of a brain<br />
you should NOT be doing this on a login node. If the sysadmins at ATS find any process that is taking up too many resources on the login nodes, they reserve the right to terminate the process immediately.<br />
<br />
<br />
===Computing Nodes===<br />
As of November 2012, Hoffman2 is made up of more than 9000 processors across three data centers, and this number continues to grow as the cluster is expanded. The individual cores of the processors are where your programs get executed when you submit a job to the cluster. There are ways to request that you be given only one core to use or that you be given many.<br />
<br />
There is also a GPU cluster that has more than 300 nodes, but access to this must be requested separately from a normal Hoffman2 account. Look [http://fuji.ats.ucla.edu/for-transfer/hoffman2-cluster/computing/gpuq.htm here] for how to request access.<br />
<br />
The number of computing cores continues to grow because more resource groups (like individual research labs) join Hoffman2 and buy nodes to be integrated into the cluster. Nodes contributed by a resource group are guaranteed to that resource group and can be used to run longer jobs ([http://fuji.ats.ucla.edu/for-transfer/hoffman2-cluster/computing/policies.htm#highp up to 14 days]). As of June 2013, the Cohen and Bookheimer groups on Hoffman2 have 96 cores:<br />
* 6 nodes (installed pre 2010) each with<br />
** 8 cores<br />
** 8GB RAM<br />
* 3 nodes (installed Fall 2012) each with<br />
** 16 cores<br />
** 48GB RAM<br />
Use the command <code>mygroup</code> to see what resources you have available.<br />
<br />
<br />
===Storage Space===<br />
For official and up-to-date information about storage space, [https://idre.ucla.edu/hoffman2/data-storage click here]. If you want a quick overview, see below.<br />
<br />
====Home Directory====<br />
When you login to Hoffman2, you get dropped into your home directory immediately. Home directory locations follow the pattern<br />
/u/home/[u]/[username]<br />
Where <code>[u]</code> is the first letter of the username, e.g.<br />
/u/home/j/jbruin<br />
/u/home/t/ttrojan<br />
<br />
Your home directory is where you can keep your personal files, data and scripts you work with. Data in your home directory is accessible on all login and computing nodes.<br />
<br />
ATS maintains high-end storage systems (BlueArc and Panasas) for your home directory. These have built-in redundancies and are fault tolerant. On top of that, ATS does tape backups regularly. '''If all of that sounded Greek to you, the important thing to understand is that there is a lot of disk space on Hoffman2 and they take great pains to make sure your data is safe.'''<br />
<br />
Every user is allowed to store up to 20GB of data files in their home directory. If you are part of a cluster contributing group, you can also store data files in that group's common space<br />
/u/home/[GROUPNAME]<br />
so long as that group is within its quota limits for file count and size.<br />
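To see where your space and file count are going, ordinary UNIX tools work fine. These are not Hoffman2-specific commands; run them from the directory you want to audit:<br />

```shell
du -sh .                            # total size of the current directory
du -sh ./* 2>/dev/null | sort -h    # per-item sizes, largest last
find . -type f | wc -l              # rough file count (quotas limit this too)
```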
<br />
[[Hoffman2:Quotas|Find out how much space your group is using on Hoffman2.]]<br />
<br />
====Temporary Storage====<br />
When running a computing job on Hoffman2, reading and writing a bunch of files in your home directory can be slow. So faster temporary storage is available to use for ongoing jobs. Read the official description [https://idre.ucla.edu/hoffman2/data-storage#tempfs here].<br />
<br />
:'''/work'''<br />
:Each computing node has its own unique "work" directory. This is only accessible by jobs on that specific node. Any data your job may put on it will be removed as soon as your job finishes. There is at least 100GB of this space on each node, but you may only use a portion proportional to the number of cores you are using on that node (you have to share).<br />
<br />
:Every job is given a unique subdirectory on ''work'' where it can read and write files rapidly. The [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] <code>$TMPDIR</code> points to this directory.<br />
<br />
:If your job reads from or writes to a file repeatedly, you may save time by keeping that file in this temporary directory and then moving it to your home directory at completion so it is not deleted.<br />
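:As a sketch of that staging pattern (all file names here are made up; on the cluster, <code>$TMPDIR</code> is set for you by the scheduler):<br />

```shell
#!/bin/bash
# Sketch of staging files through node-local temporary space in a job
# script. On Hoffman2 the scheduler sets $TMPDIR; here we fall back to
# mktemp so the sketch is runnable anywhere. All file names are made up.
: "${TMPDIR:=$(mktemp -d)}"
workdir=$(mktemp -d)                 # stand-in for your home directory
echo "raw data" > "$workdir/input.dat"

cp "$workdir/input.dat" "$TMPDIR/"   # stage input onto fast local disk
cd "$TMPDIR"
tr 'a-z' 'A-Z' < input.dat > output.dat   # placeholder for I/O-heavy work
cp output.dat "$workdir/"            # copy results home before the job ends
```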
<br />
:'''/u/scratch/[UserID]'''<br />
:Where ''[UserID]'' is replaced with your Hoffman2 username. Data here is accessible on all login and computing nodes. You can use up to 2TB of space here, but data is not kept here for more than 7 days and can be overwritten sooner if there is a high demand for scratch space. Use the [[Hoffman2:UNIX Tutorial#Environment Variables|UNIX environment variable]] <code>$SCRATCH</code> to access your personal scratch directory.<br />
<br />
<br />
===Sun Grid Engine===<br />
The Sun Grid Engine is the brains behind how jobs get executed on the cluster. When you request that a script be run on Hoffman2, the SGE looks at the resources you requested (how much memory, how many computing cores, how many computing hours, etc) and puts your job in a queue (a waiting line for those not familiar with British English) based on your requirements. Less demanding jobs generally get front loaded while more demanding ones must wait for adequate resources to free up. The SGE tries to schedule jobs on computing nodes in order to make the most efficient use of the resources available.<br />
<br />
====Queues====<br />
There is more than one queue on Hoffman2. Each is for a slightly different purpose:<br />
; express<br />
: For jobs requesting at most 2 hours of computing time.<br />
; interactive<br />
: For jobs requesting at most 24 hours of computing time and requiring the ability for users to interact with the program running.<br />
; highp<br />
: For jobs requesting at most 14 days of computing time. These are required to run on nodes owned by your group.<br />
And there are others. Read about them [http://fuji.ats.ucla.edu/for-transfer/hoffman2-cluster/computing/policies.htm here].<br />
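As a sketch, a job script's resource requests determine which queue it is eligible for. The directives below follow standard Sun Grid Engine syntax; <code>my_analysis.sh</code> is a hypothetical command:<br />

```shell
#!/bin/bash
#$ -cwd                          # run the job from the submission directory
#$ -l h_rt=2:00:00,h_data=4G     # <= 2 hours of runtime: fits the express queue
# For long jobs on nodes your group owns, request highp instead, e.g.:
# #$ -l h_rt=200:00:00,h_data=4G,highp
./my_analysis.sh                 # hypothetical command to run
```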
<br />
<br />
[[Hoffman2:Submitting Jobs|Find out how to submit computing jobs to the Hoffman2 Cluster.]]<br />
<br />
<br />
<br />
==External Links==<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/ Hoffman2 Webpage]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/h2stat/statistics.htm Hoffman2 Statistics]<br />
*[http://www.ats.ucla.edu/clusters/hosting/ Hoffman2 Cluster Hosting]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/batch/policies.htm Hoffman2 Queues]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/hardware/default.htm Hoffman2 Hardware]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/gpuq.htm Hoffman2 GPU Cluster]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/data_storage/default.htm Hoffman2 Data Storage]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/computing/howtoscratch.htm Temporary File Storage for Fast I/O]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Profile&diff=882Hoffman2:Profile2013-06-20T17:17:36Z<p>Elau: </p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
In UNIX systems, there are certain configuration files that get executed every time you login. If you are using the Bash shell (default), you have a file called <code>.bash_profile</code> which is processed when you log in. In order to make the FMRI toolset available to you on Hoffman2 and so you can work well with others, we recommend that you follow the instructions in the [[Hoffman2:Profile#Basics|Basics section]]. Read [[Hoffman2:Profile#Extras|Extras]] for some bells and whistles.<br />
<br />
<br />
==Basics==<br />
Your account needs one last edit before it is ready to use.<br />
<br />
# [[Hoffman2:Accessing the Cluster#SSH - Command Line|SSH into Hoffman2]]<br />
# Use your favorite [[Text Editors|text editor]] to edit the file <code>~/.bash_profile</code><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <pre>$ vim ~/.bash_profile</pre><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs]]<br />
#:* <pre>$ emacs ~/.bash_profile</pre><br />
#: [[Text Editors#NEdit (H2) (OSX)|NEdit]]<br />
#:* <pre>$ nedit ~/.bash_profile</pre><br />
# Insert these lines at the '''bottom''' of the file<br />
#:* <pre>source /u/home/FMRI/apps/etc/profile&#10;umask 007</pre><br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* Type <code>G</code> - capital G - to go to the end of the file<br />
#:* Type <code>A</code> - capital A - to go to the end of the line and enter insert mode<br />
#:* Type <code>ENTER</code> - to insert a newline<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#Emacs (H2)(OSX)|Emacs]]<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines.<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the arrow keys to scroll the cursor down to the bottom of the document and add a newline.<br />
#:* Type or paste in the specified lines.<br />
# Save the file<br />
#: [[Text Editors#Vim (H2) (OSX)|VIM]]<br />
#:* <code>ESC + ":wq" + ENTER</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs command line]]<br />
#:* <code>CTRL+x, CTRL+c</code><br />
#: [[Text Editors#Emacs (H2) (OSX)|Emacs GUI]]<br />
#:* <code>CTRL+x, CTRL+c, y</code><br />
#:* or use the menu system<br />
#: [[Text Editors#NEdit (H2)|NEdit]]<br />
#:* Use the menu.<br />
# Log out of Hoffman2 and the next time you log in, everything will be set for you to start working.<br />
<br />
<br />
===Curious?===<br />
For those that care, what you are doing is asking the computer to execute the file<br />
/u/home/FMRI/apps/etc/profile<br />
every time you login. This file modifies your PATH variable so you have access to the FMRI toolset.<br />
<br />
The last line<br />
umask 007<br />
makes it so that any files or directories you create cannot be read, written, or executed by anyone outside your group. Note that it does not automatically grant read, write, and execute privileges to you and your group.<br />
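A quick way to see the effect (safe to run anywhere):<br />

```shell
# Demonstration of umask 007: new files get read/write for the owner and
# group, and no permissions at all for anyone else (666 & ~007 = 660).
umask 007
tmp=$(mktemp -d)
touch "$tmp/example.txt"
ls -l "$tmp/example.txt"   # permissions column reads -rw-rw----
```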
<br />
<br />
<br />
==Extras==<br />
===Collaboration===<br />
By default, any files and directories you create will not necessarily have permissions that allow your group to write on them. This can be a problem if other people are supposed to build on data you processed. We have a script ([[Hoffman2:Scripts:fix_perms.sh |fix_perms.sh]]) that will kindly find any files you own in a specified directory that don't have read/write/execute permissions for the group and make it so they do.<br />
<br />
You can build this script into your bash profile so that every time you log into Hoffman2, it will run in the background. It is also recommended that you run this script at the end of jobs to make results immediately available to collaborators.<br />
<br />
Adding the line<br />
fix_perms.sh -q /u/home/[GROUP]/data &<br />
to the end of your bash profile will run the permission fixer on your group's common data directory in the background quietly each time you log in. '''Make sure to replace [GROUP] with the name of your Hoffman2 group (e.g. mscohen, sbook, cbearden, laltshul, jfeusner or mgreen).'''<br />
<br />
<br />
===Colors===<br />
You can change the content and color of your command prompt by editing your bash_profile. There is a great explanation of how to do this [http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html here].<br />
<br />
Some of the content you can include in the command prompt:<br />
;Current time<br />
: You can format this however you want. This helps when looking back through your Terminal to find when you made certain changes to files.<br />
;Current working directory<br />
: So you always know where you are in a filesystem and don't need to constantly retype <code>pwd</code>.<br />
;Username<br />
: Who you are. Helpful if you are logged into multiple servers under multiple accounts and need help keeping track.<br />
;Host<br />
: The name of the computer you are logged into. This also helps you know where you are at all times.<br />
<br />
Line to add to your bash profile<br />
export PS1="\[\e[0;31m\]\h\[\e[1;37m\]:\[\e[1;34m\]\w\n\[\e[1;37m\]\D{%Y-%m-%d-%H-%M-%S} \[\e[22;32m\]\u \$ "<br />
Resulting prompt (on a black background)<br/><br />
<code style="background:#000000; padding:5pt"><span style="color:#FF0000">HOST</span><span style="color:#000000">:</span><span style="color:#0000FF">CURRENT WORKING DIRECTORY</span><br/><br />
<span style="color:#FFFFFF"> DATETIME IN ISO8601 FORMAT</span> <span style="color:#00FF00">USERNAME $</span></code><br />
<br />
<br />
<br />
==Example Bash Profile==<br />
<nowiki>#.bash_profile<br />
<br />
# Get the aliases and functions<br />
if [ -f ~/.bashrc ]; then<br />
. ~/.bashrc<br />
fi<br />
<br />
# Source to use FMRI Apps<br />
source /u/home/FMRI/apps/etc/profile<br />
<br />
# Umask (Revoke Permissions)<br />
umask 007<br />
<br />
# Collaborative permissions<br />
fix_perms.sh -q ~/../data/collabDirectory<br />
<br />
# Happy Colors<br />
export PS1="\[\e[0;31m\]\h\[\e[1;37m\]:\[\e[1;34m\]\w\n\[\e[1;37m\]\D{%Y-%m-%d-%H-%M-%S} \[\e[22;32m\]\u \$ "<br />
</nowiki><br />
<br />
<br />
<br />
==External Links==<br />
*[http://ss64.com/bash/period.html Explanation of source]<br />
*[http://linux.die.net/man/2/umask Man for umask]<br />
*[http://www.cyberciti.biz/tips/understanding-linux-unix-umask-value-usage.html Better explanation of umask]<br />
*[http://www.cyberciti.biz/tips/howto-linux-unix-bash-shell-setup-prompt.html Coloration]<br />
*[http://en.wikipedia.org/wiki/ISO_8601 ISO 8601 Datetime format]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Scripts:matlab_license_check.sh&diff=913Hoffman2:Scripts:matlab license check.sh2013-06-18T00:52:30Z<p>Elau: </p>
<hr />
<div>[[Hoffman2:Scripts|Back to Hoffman2 Scripts]]<br />
<br />
==Use Cases==<br />
* You need to run a MATLAB script and want to check that there are sufficient licenses. If not, you are grabbing a coffee or going for a run until they free up.<br />
* You tried running a MATLAB script and found that you couldn't check out sufficient licenses to do your work. Now you want to see if the users taking up precious licenses are part of your group so you can ask/yell for them to release the license and step away from the computer for a bit.<br />
<br />
<br />
<br />
<br />
==A bit about licenses==<br />
Every time you run the MATLAB program, you need a MATLAB license. If you also make use of the utilities in a specific toolbox, say some statistical plotting commands like <code>boxplot</code> from the Statistics Toolbox, you need a license for that specific toolbox in addition to the one for MATLAB itself.<br />
<br />
If you were trying to run multiple MATLAB instances at once and each used a specific toolbox, you can see how the number of licenses you'd need could get out of hand.<br />
<br />
[[Hoffman2:Compiling_MATLAB|Compiled MATLAB]] only needs the requisite licenses to create the compiled code. After that, it can run without taking up a license, making compiled code very attractive for massively parallel processing.<br />
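For example, compiling is done with MATLAB's <code>mcc</code> command. In this sketch, <code>myscript.m</code> and the runtime path are hypothetical; see [[Hoffman2:Compiling_MATLAB|Compiling MATLAB]] for the Hoffman2-specific details:<br />

```shell
# Sketch: build a standalone executable from a hypothetical myscript.m.
# The Compiler (and any toolbox) licenses are needed only at build time.
mcc -m myscript.m -o myscript
# mcc also generates a run_myscript.sh wrapper that sets up the MATLAB
# runtime environment; its first argument is the runtime's install root
# (the path below is a placeholder).
./run_myscript.sh /path/to/matlab/runtime
```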
<br />
<br />
<br />
==Help/Usage==<br />
<pre><br />
matlab_license_check.sh<br />
2013.06.17<br />
<br />
Wrapper script for the command<br />
/u/local/licenses/lic_manager.sh users matlab<br />
which is used to check on the status of MATLAB licenses on Hoffman2. The<br />
output is slightly modified and managed to make it more readable for users.<br />
<br />
<br />
USAGE:<br />
$ matlab_license_check.sh [-h --help -r -f]<br />
<br />
<br />
ARGUMENTS:<br />
With no argument, a simple digest version is displayed listing the<br />
number of used and available licenses of each MATLAB tool.<br />
<br />
Optional arguments:<br />
-h, --help Show this usage message<br />
-r Show the digest version plus reservation information,<br />
because reserved licenses are counted as in-use, so even if<br />
it says all licenses are used, it is important to check to<br />
see if there are any reserved licenses that you are able to<br />
make use of.<br />
-f Show the full output, including details about which users<br />
are using licenses and on which nodes<br />
</pre><br />
<br />
<br />
<br />
==Example Output==<br />
===Digest===<br />
$ matlab_license_check.sh<br />
20130409T144846-0700<br />
MATLAB<br />
---- (Total of 37 licenses issued; Total of 37 licenses in use)<br />
Compiler<br />
---- (Total of 7 licenses issued; Total of 4 licenses in use)<br />
Curve_Fitting_Toolbox<br />
---- (Total of 1 license issued; Total of 1 license in use)<br />
Image_Toolbox<br />
---- (Total of 8 licenses issued; Total of 8 licenses in use)<br />
Optimization_Toolbox<br />
---- (Total of 4 licenses issued; Total of 2 licenses in use)<br />
Signal_Toolbox<br />
---- (Total of 5 licenses issued; Total of 4 licenses in use)<br />
Statistics_Toolbox<br />
---- (Total of 15 licenses issued; Total of 11 licenses in use)<br />
Wavelet_Toolbox<br />
---- (Total of 2 licenses issued; Total of 2 licenses in use)<br />
Bioinformatics_Toolbox<br />
---- (Total of 8 licenses issued; Total of 8 licenses in use)<br />
Distrib_Computing_Toolbox<br />
---- (Total of 3 licenses issued; Total of 1 license in use)<br />
MATLAB_Distrib_Comp_Engine<br />
---- (Total of 16 licenses issued; Total of 0 licenses in use)<br />
SIMULINK<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Control_Toolbox<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Control_Design<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Design_Optim<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Symbolic_Toolbox<br />
---- (Total of 1 license issued; Total of 0 licenses in use)<br />
This shows that all MATLAB licenses are either in use or reserved, so it is unclear whether I have one available to check out. It also shows that things like SIMULINK and the Symbolic Toolbox are not in use by anyone right now, while things like the Wavelet and Bioinformatics Toolboxes are in use or reserved.<br />
<br />
<br />
===Reservations===<br />
$ matlab_license_check.sh -r<br />
20130409T144846-0700<br />
MATLAB<br />
---- (Total of 37 licenses issued; Total of 37 licenses in use)<br />
-------- 1 RESERVATION for GROUP bueragroup (lm/27010)<br />
-------- 5 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
-------- 5 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
-------- 4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP ghoniemgroup (lm/27010)<br />
-------- 3 RESERVATIONs for GROUP kluggroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP miaogroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP miaogroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP moshegroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP moshegroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP moshegroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP staff1group (lm/27010)<br />
-------- 1 RESERVATION for GROUP staff1group (lm/27010)<br />
Compiler<br />
---- (Total of 7 licenses issued; Total of 4 licenses in use)<br />
-------- 1 RESERVATION for GROUP cohenlab (lm/27010)<br />
-------- 2 RESERVATIONs for GROUP miaogroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP staff1group (lm/27010)<br />
Curve_Fitting_Toolbox<br />
---- (Total of 1 license issued; Total of 1 license in use)<br />
-------- 1 RESERVATION for GROUP bueragroup (lm/27010)<br />
Image_Toolbox<br />
---- (Total of 8 licenses issued; Total of 8 licenses in use)<br />
-------- 5 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
-------- 2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
Optimization_Toolbox<br />
---- (Total of 4 licenses issued; Total of 2 licenses in use)<br />
-------- 2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
Signal_Toolbox<br />
---- (Total of 5 licenses issued; Total of 4 licenses in use)<br />
-------- 1 RESERVATION for GROUP cohenlab (lm/27010)<br />
-------- 2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
Statistics_Toolbox<br />
---- (Total of 15 licenses issued; Total of 11 licenses in use)<br />
-------- 3 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
-------- 4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
-------- 4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
Wavelet_Toolbox<br />
---- (Total of 2 licenses issued; Total of 2 licenses in use)<br />
-------- 1 RESERVATION for GROUP cohenlab (lm/27010)<br />
-------- 1 RESERVATION for GROUP fmrigroup (lm/27010)<br />
Bioinformatics_Toolbox<br />
---- (Total of 8 licenses issued; Total of 8 licenses in use)<br />
-------- 4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
-------- 4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
Distrib_Computing_Toolbox<br />
---- (Total of 3 licenses issued; Total of 1 license in use)<br />
MATLAB_Distrib_Comp_Engine<br />
---- (Total of 16 licenses issued; Total of 0 licenses in use)<br />
SIMULINK<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Control_Toolbox<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Control_Design<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Design_Optim<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Symbolic_Toolbox<br />
---- (Total of 1 license issued; Total of 0 licenses in use)<br />
This shows that there are still five (5) licenses reserved for cohenlab (mscohen) and fmrigroup (sbook), despite all other licenses being used. The Wavelet and Bioinformatics Toolboxes are all in use because they are all reserved by specific groups, but SIMULINK and the Symbolic Toolbox are not used and don't have any reservations on them.<br />
<br />
<br />
===Full===<br />
$ matlab_license_check.sh -f<br />
20130409T144846-0700<br />
<br />
You have requested: users matlab<br />
<br />
Arguments are "users" and "matlab"<br />
The license directory is /u/local/licenses<br />
The apps directory is /u/local/apps<br />
<br />
lmstat - Copyright (c) 1989-2010 Flexera Software, Inc. All Rights Reserved.<br />
Flexible License Manager status on Tue 4/9/2013 14:48<br />
<br />
License server status: 27010@lm<br />
License file(s) on lm: /u/local/licenses/license.matlab:<br />
<br />
lm: license server UP (MASTER) v11.9<br />
<br />
Vendor daemon status (on lm):<br />
<br />
MLM: UP v11.6<br />
Feature usage info:<br />
<br />
Users of MATLAB: (Total of 37 licenses issued; Total of 37 licenses in use)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
alhead n2173 /dev/tty (v27) (lm/27010 1805), start Tue 4/9 13:33<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
alnersh n47 /dev/tty (v27) (lm/27010 1208), start Tue 4/9 11:43<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP bueragroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
5 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
5 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP ghoniemgroup (lm/27010)<br />
hstroudd n2004 /dev/tty (v27) (lm/27010 301), start Tue 4/9 8:12<br />
hstroudd n39 /dev/tty (v27) (lm/27010 401), start Tue 4/9 8:13<br />
hstroudd n2002 /dev/tty (v27) (lm/27010 501), start Tue 4/9 8:14<br />
hstroudd n45 /dev/tty (v27) (lm/27010 601), start Tue 4/9 8:14<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
huangzhi n126 /dev/pts/0 (v20) (lm/27010 902), start Tue 4/9 10:03<br />
huning n2176 /dev/tty (v27) (lm/27010 802), start Tue 4/9 10:01<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
<br />
"MATLAB" v21, vendor: MLM<br />
nodelocked license, locked to "ID=301818"<br />
<br />
3 RESERVATIONs for GROUP kluggroup (lm/27010)<br />
<br />
"MATLAB" v21, vendor: MLM<br />
nodelocked license, locked to "ID=319868"<br />
<br />
1 RESERVATION for GROUP miaogroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP miaogroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP moshegroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP moshegroup (lm/27010)<br />
1 RESERVATION for GROUP moshegroup (lm/27010)<br />
navari n2177 /dev/tty (v27) (lm/27010 201), start Tue 4/9 8:11<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP staff1group (lm/27010)<br />
<br />
"MATLAB" v21, vendor: MLM<br />
nodelocked license, locked to "ID=301818"<br />
<br />
1 RESERVATION for GROUP staff1group (lm/27010)<br />
wuli n43 /dev/tty (v20) (lm/27010 703), start Tue 4/9 8:30<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
xiahongj n40 /dev/tty (v27) (lm/27010 101), start Tue 4/9 8:11<br />
<br />
Users of Compiler: (Total of 7 licenses issued; Total of 4 licenses in use)<br />
<br />
"Compiler" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"Compiler" v21, vendor: MLM<br />
nodelocked license, locked to "ID=301818"<br />
<br />
1 RESERVATION for GROUP cohenlab (lm/27010)<br />
<br />
"Compiler" v27, vendor: MLM<br />
floating license<br />
<br />
2 RESERVATIONs for GROUP miaogroup (lm/27010)<br />
1 RESERVATION for GROUP staff1group (lm/27010)<br />
<br />
Users of Curve_Fitting_Toolbox: (Total of 1 license issued; Total of 1 license in use)<br />
<br />
"Curve_Fitting_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP bueragroup (lm/27010)<br />
<br />
Users of Image_Toolbox: (Total of 8 licenses issued; Total of 8 licenses in use)<br />
<br />
"Image_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"Image_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
5 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
<br />
"Image_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
huangzhi n126 /dev/pts/0 (v20) (lm/27010 1001), start Tue 4/9 10:12<br />
<br />
Users of Optimization_Toolbox: (Total of 4 licenses issued; Total of 2 licenses in use)<br />
<br />
"Optimization_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
<br />
Users of Signal_Toolbox: (Total of 5 licenses issued; Total of 4 licenses in use)<br />
<br />
"Signal_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
alnersh n47 /dev/tty (v27) (lm/27010 1103), start Tue 4/9 12:11<br />
<br />
"Signal_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"Signal_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP cohenlab (lm/27010)<br />
2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
<br />
Users of Statistics_Toolbox: (Total of 15 licenses issued; Total of 11 licenses in use)<br />
<br />
"Statistics_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
<br />
"Statistics_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
3 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
<br />
Users of Wavelet_Toolbox: (Total of 2 licenses issued; Total of 2 licenses in use)<br />
<br />
"Wavelet_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"Wavelet_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP cohenlab (lm/27010)<br />
1 RESERVATION for GROUP fmrigroup (lm/27010)<br />
<br />
Users of Bioinformatics_Toolbox: (Total of 8 licenses issued; Total of 8 licenses in use)<br />
<br />
"Bioinformatics_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
<br />
Users of Distrib_Computing_Toolbox: (Total of 3 licenses issued; Total of 1 license in use)<br />
<br />
"Distrib_Computing_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
alnersh n47 /dev/tty (v27) (lm/27010 1301), start Tue 4/9 11:44<br />
<br />
Users of MATLAB_Distrib_Comp_Engine: (Total of 16 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of SIMULINK: (Total of 2 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of Control_Toolbox: (Total of 2 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of Simulink_Control_Design: (Total of 2 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of Simulink_Design_Optim: (Total of 2 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of Symbolic_Toolbox: (Total of 1 license issued; Total of 0 licenses in use)<br />
<br />
<br />
/u/local/apps/matlab/7.14/etc/glnxa64/lmstat -c /u/local/licenses/license.matlab -S MLM<br />
<br />
#############<br />
All done on 09/04/13 (European, I mean logical, date notation)!<br />
All the same information as in the previous examples, except you can also tell that alhead and alnersh are using MATLAB licenses on nodes n2173 and n47, respectively.</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:MATLAB&diff=855Hoffman2:MATLAB2013-06-18T00:35:31Z<p>Elau: </p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
MATLAB is not a small program and it can handle some fairly complex graphics. As such, it is not suitable to be run on a login node of Hoffman2. But that has already been thought of by the great people at ATS.<br />
<br />
<br />
==GUI==<br />
To run a full GUI session of MATLAB, execute<br />
$ matlab<br />
That's it, no flags, no frills, nothing else. Hoffman2 will automatically check out an appropriate interactive node for you to run MATLAB on. All you have to do is provide a time limit (in hours) when prompted:<br />
Enter a time limit for your session, in hours (default 2)<br />
<or quit>: <br />
<br />
<br />
<br />
==Command Line==<br />
If you don't need the fancy GUI and just want the command line, execute<br />
$ matlab -nodesktop<br />
and then supply a time limit when asked.<br />
<br />
'''Since this uses interactive nodes, the maximum time limit you can request is 24 hours.'''<br />
<br />
<br />
<br />
==License Check==<br />
With so many people using Hoffman2 and MATLAB, sometimes licenses run out. Using this helpful script will give you some insight into the license situation.<br />
<br />
[[Hoffman2:Scripts:matlab_license_check.sh|matlab_license_check.sh]]<br />
<br />
<br />
<br />
==External Links==<br />
*[http://www.ats.ucla.edu/clusters/common/software/engineering/matlab.htm MATLAB on Hoffman2]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Scripts:matlab_license_check.sh&diff=912Hoffman2:Scripts:matlab license check.sh2013-06-18T00:34:13Z<p>Elau: Created page with "Back to Hoffman2 Scripts ==Use Cases== * You need to run a MATLAB script and want to check that there are sufficient licenses. If not, you are grabbing a c..."</p>
<hr />
<div>[[Hoffman2:Scripts|Back to Hoffman2 Scripts]]<br />
<br />
==Use Cases==<br />
* You need to run a MATLAB script and want to check that there are sufficient licenses. If not, you can grab a coffee or go for a run until they free up.<br />
* You tried running a MATLAB script and found that you couldn't check out sufficient licenses to do your work. Now you want to see if the users taking up precious licenses are part of your group so you can ask/yell for them to release the license and step away from the computer for a bit.<br />
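<br />
Once the license counts are visible, the waiting itself is easy to script. Below is a minimal sketch (not part of the official tooling) of a helper that parses a digest line of the form shown in the examples further down and reports how many licenses are free; the exact output format of matlab_license_check.sh is an assumption taken from those examples, so verify it against your own output first.<br />

```shell
#!/bin/bash
# free_licenses: given a digest line such as
#   ---- (Total of 37 licenses issued; Total of 35 licenses in use)
# print how many licenses are still free. The digest format is an
# assumption based on the example output shown on this page.
free_licenses() {
    local line="$1" issued used
    issued=$(grep -oE 'of [0-9]+ licenses? issued' <<<"$line" | grep -oE '[0-9]+')
    used=$(grep -oE 'of [0-9]+ licenses? in use' <<<"$line" | grep -oE '[0-9]+')
    echo $(( issued - used ))
}

# Illustrative only: poll every five minutes until a MATLAB license
# frees up, then go. (The grep pattern assumes the digest layout.)
# while [ "$(free_licenses "$(matlab_license_check.sh | grep -A1 'MATLAB' | tail -n1)")" -le 0 ]; do
#     sleep 300
# done
```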
<br />
<br />
<br />
==Help/Usage==<br />
<pre><br />
matlab_license_check.sh<br />
2013.06.17<br />
<br />
Wrapper script for the command<br />
/u/local/licenses/lic_manager.sh users matlab<br />
which is used to check on the status of MATLAB licenses on Hoffman2. The<br />
output is slightly reformatted to make it more readable for users.<br />
<br />
<br />
USAGE:<br />
$ matlab_license_check.sh [-h --help -r -f]<br />
<br />
<br />
ARGUMENTS:<br />
With no argument, a simple digest version is displayed listing the<br />
    number of used and available licenses of each MATLAB tool.<br />
<br />
Optional arguments:<br />
-h, --help Show this usage message<br />
  -r          Show the digest version plus reservation information.<br />
              Reserved licenses are counted as in-use, so even if<br />
              all licenses appear to be used, it is worth checking<br />
              whether any reserved licenses are available for you<br />
              to use.<br />
-f Show the full output, including details about which users<br />
are using licenses and on which nodes<br />
</pre><br />
<br />
<br />
<br />
==Example Output==<br />
===Digest===<br />
$ matlab_license_check.sh<br />
20130409T144846-0700<br />
MATLAB<br />
---- (Total of 37 licenses issued; Total of 37 licenses in use)<br />
Compiler<br />
---- (Total of 7 licenses issued; Total of 4 licenses in use)<br />
Curve_Fitting_Toolbox<br />
---- (Total of 1 license issued; Total of 1 license in use)<br />
Image_Toolbox<br />
---- (Total of 8 licenses issued; Total of 8 licenses in use)<br />
Optimization_Toolbox<br />
---- (Total of 4 licenses issued; Total of 2 licenses in use)<br />
Signal_Toolbox<br />
---- (Total of 5 licenses issued; Total of 4 licenses in use)<br />
Statistics_Toolbox<br />
---- (Total of 15 licenses issued; Total of 11 licenses in use)<br />
Wavelet_Toolbox<br />
---- (Total of 2 licenses issued; Total of 2 licenses in use)<br />
Bioinformatics_Toolbox<br />
---- (Total of 8 licenses issued; Total of 8 licenses in use)<br />
Distrib_Computing_Toolbox<br />
---- (Total of 3 licenses issued; Total of 1 license in use)<br />
MATLAB_Distrib_Comp_Engine<br />
---- (Total of 16 licenses issued; Total of 0 licenses in use)<br />
SIMULINK<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Control_Toolbox<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Control_Design<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Design_Optim<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Symbolic_Toolbox<br />
---- (Total of 1 license issued; Total of 0 licenses in use)<br />
This shows that all MATLAB licenses are either in use or reserved, so it is unclear whether I have one available to check out. It also shows that SIMULINK and the Symbolic Toolbox are not in use by anyone right now, while the Wavelet and Bioinformatics Toolboxes are all in use or reserved.<br />
<br />
<br />
===Reservations===<br />
$ matlab_license_check.sh -r<br />
20130409T144846-0700<br />
MATLAB<br />
---- (Total of 37 licenses issued; Total of 37 licenses in use)<br />
-------- 1 RESERVATION for GROUP bueragroup (lm/27010)<br />
-------- 5 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
-------- 5 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
-------- 4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP ghoniemgroup (lm/27010)<br />
-------- 3 RESERVATIONs for GROUP kluggroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP miaogroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP miaogroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP moshegroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP moshegroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP moshegroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP staff1group (lm/27010)<br />
-------- 1 RESERVATION for GROUP staff1group (lm/27010)<br />
Compiler<br />
---- (Total of 7 licenses issued; Total of 4 licenses in use)<br />
-------- 1 RESERVATION for GROUP cohenlab (lm/27010)<br />
-------- 2 RESERVATIONs for GROUP miaogroup (lm/27010)<br />
-------- 1 RESERVATION for GROUP staff1group (lm/27010)<br />
Curve_Fitting_Toolbox<br />
---- (Total of 1 license issued; Total of 1 license in use)<br />
-------- 1 RESERVATION for GROUP bueragroup (lm/27010)<br />
Image_Toolbox<br />
---- (Total of 8 licenses issued; Total of 8 licenses in use)<br />
-------- 5 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
-------- 2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
Optimization_Toolbox<br />
---- (Total of 4 licenses issued; Total of 2 licenses in use)<br />
-------- 2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
Signal_Toolbox<br />
---- (Total of 5 licenses issued; Total of 4 licenses in use)<br />
-------- 1 RESERVATION for GROUP cohenlab (lm/27010)<br />
-------- 2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
Statistics_Toolbox<br />
---- (Total of 15 licenses issued; Total of 11 licenses in use)<br />
-------- 3 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
-------- 4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
-------- 4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
Wavelet_Toolbox<br />
---- (Total of 2 licenses issued; Total of 2 licenses in use)<br />
-------- 1 RESERVATION for GROUP cohenlab (lm/27010)<br />
-------- 1 RESERVATION for GROUP fmrigroup (lm/27010)<br />
Bioinformatics_Toolbox<br />
---- (Total of 8 licenses issued; Total of 8 licenses in use)<br />
-------- 4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
-------- 4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
Distrib_Computing_Toolbox<br />
---- (Total of 3 licenses issued; Total of 1 license in use)<br />
MATLAB_Distrib_Comp_Engine<br />
---- (Total of 16 licenses issued; Total of 0 licenses in use)<br />
SIMULINK<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Control_Toolbox<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Control_Design<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Simulink_Design_Optim<br />
---- (Total of 2 licenses issued; Total of 0 licenses in use)<br />
Symbolic_Toolbox<br />
---- (Total of 1 license issued; Total of 0 licenses in use)<br />
This shows that five (5) MATLAB licenses are still reserved for each of cohenlab (mscohen) and fmrigroup (sbook), even though all other licenses are in use. The Wavelet and Bioinformatics Toolbox licenses are all counted as in use because they are all reserved by specific groups, but SIMULINK and the Symbolic Toolbox are neither used nor reserved.<br />
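<br />
Because reservations decide whether an "all in use" tool is really unavailable to you, it can help to total them up for your own group. Here is a small sketch that sums reservations from <code>-r</code> style output read on stdin; the reservation-line format is an assumption taken from the example above, and fmrigroup is just an example group name.<br />

```shell
#!/bin/bash
# count_reservations GROUP: sum the licenses reserved for GROUP in
# matlab_license_check.sh -r style output read from stdin. The line
# format ("N RESERVATION(s) for GROUP name") is assumed from the
# example output above.
count_reservations() {
    local group="$1"
    grep -oE "[0-9]+ RESERVATIONs? for GROUP $group" \
        | grep -oE '^[0-9]+' \
        | awk '{ total += $1 } END { print total + 0 }'
}

# Usage (fmrigroup is an example group name):
#   matlab_license_check.sh -r | count_reservations fmrigroup
```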
<br />
<br />
===Full===<br />
$ matlab_license_check.sh -f<br />
20130409T144846-0700<br />
<br />
You have requested: users matlab<br />
<br />
Arguments are "users" and "matlab"<br />
The license directory is /u/local/licenses<br />
The apps directory is /u/local/apps<br />
<br />
lmstat - Copyright (c) 1989-2010 Flexera Software, Inc. All Rights Reserved.<br />
Flexible License Manager status on Tue 4/9/2013 14:48<br />
<br />
License server status: 27010@lm<br />
License file(s) on lm: /u/local/licenses/license.matlab:<br />
<br />
lm: license server UP (MASTER) v11.9<br />
<br />
Vendor daemon status (on lm):<br />
<br />
MLM: UP v11.6<br />
Feature usage info:<br />
<br />
Users of MATLAB: (Total of 37 licenses issued; Total of 37 licenses in use)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
alhead n2173 /dev/tty (v27) (lm/27010 1805), start Tue 4/9 13:33<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
alnersh n47 /dev/tty (v27) (lm/27010 1208), start Tue 4/9 11:43<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP bueragroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
5 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
5 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
<br />
"MATLAB" v28, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP ghoniemgroup (lm/27010)<br />
hstroudd n2004 /dev/tty (v27) (lm/27010 301), start Tue 4/9 8:12<br />
hstroudd n39 /dev/tty (v27) (lm/27010 401), start Tue 4/9 8:13<br />
hstroudd n2002 /dev/tty (v27) (lm/27010 501), start Tue 4/9 8:14<br />
hstroudd n45 /dev/tty (v27) (lm/27010 601), start Tue 4/9 8:14<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
huangzhi n126 /dev/pts/0 (v20) (lm/27010 902), start Tue 4/9 10:03<br />
huning n2176 /dev/tty (v27) (lm/27010 802), start Tue 4/9 10:01<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
<br />
"MATLAB" v21, vendor: MLM<br />
nodelocked license, locked to "ID=301818"<br />
<br />
3 RESERVATIONs for GROUP kluggroup (lm/27010)<br />
<br />
"MATLAB" v21, vendor: MLM<br />
nodelocked license, locked to "ID=319868"<br />
<br />
1 RESERVATION for GROUP miaogroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP miaogroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP moshegroup (lm/27010)<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP moshegroup (lm/27010)<br />
1 RESERVATION for GROUP moshegroup (lm/27010)<br />
navari n2177 /dev/tty (v27) (lm/27010 201), start Tue 4/9 8:11<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP staff1group (lm/27010)<br />
<br />
"MATLAB" v21, vendor: MLM<br />
nodelocked license, locked to "ID=301818"<br />
<br />
1 RESERVATION for GROUP staff1group (lm/27010)<br />
wuli n43 /dev/tty (v20) (lm/27010 703), start Tue 4/9 8:30<br />
<br />
"MATLAB" v27, vendor: MLM<br />
floating license<br />
<br />
xiahongj n40 /dev/tty (v27) (lm/27010 101), start Tue 4/9 8:11<br />
<br />
Users of Compiler: (Total of 7 licenses issued; Total of 4 licenses in use)<br />
<br />
"Compiler" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"Compiler" v21, vendor: MLM<br />
nodelocked license, locked to "ID=301818"<br />
<br />
1 RESERVATION for GROUP cohenlab (lm/27010)<br />
<br />
"Compiler" v27, vendor: MLM<br />
floating license<br />
<br />
2 RESERVATIONs for GROUP miaogroup (lm/27010)<br />
1 RESERVATION for GROUP staff1group (lm/27010)<br />
<br />
Users of Curve_Fitting_Toolbox: (Total of 1 license issued; Total of 1 license in use)<br />
<br />
"Curve_Fitting_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP bueragroup (lm/27010)<br />
<br />
Users of Image_Toolbox: (Total of 8 licenses issued; Total of 8 licenses in use)<br />
<br />
"Image_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"Image_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
5 RESERVATIONs for GROUP cohenlab (lm/27010)<br />
<br />
"Image_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
huangzhi n126 /dev/pts/0 (v20) (lm/27010 1001), start Tue 4/9 10:12<br />
<br />
Users of Optimization_Toolbox: (Total of 4 licenses issued; Total of 2 licenses in use)<br />
<br />
"Optimization_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
<br />
Users of Signal_Toolbox: (Total of 5 licenses issued; Total of 4 licenses in use)<br />
<br />
"Signal_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
alnersh n47 /dev/tty (v27) (lm/27010 1103), start Tue 4/9 12:11<br />
<br />
"Signal_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"Signal_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP cohenlab (lm/27010)<br />
2 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
<br />
Users of Statistics_Toolbox: (Total of 15 licenses issued; Total of 11 licenses in use)<br />
<br />
"Statistics_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
<br />
"Statistics_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
3 RESERVATIONs for GROUP fmrigroup (lm/27010)<br />
4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
<br />
Users of Wavelet_Toolbox: (Total of 2 licenses issued; Total of 2 licenses in use)<br />
<br />
"Wavelet_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
<br />
"Wavelet_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
1 RESERVATION for GROUP cohenlab (lm/27010)<br />
1 RESERVATION for GROUP fmrigroup (lm/27010)<br />
<br />
Users of Bioinformatics_Toolbox: (Total of 8 licenses issued; Total of 8 licenses in use)<br />
<br />
"Bioinformatics_Toolbox" v28, vendor: MLM<br />
floating license<br />
<br />
4 RESERVATIONs for GROUP galaxygroup (lm/27010)<br />
4 RESERVATIONs for GROUP pellegrinigroup (lm/27010)<br />
<br />
Users of Distrib_Computing_Toolbox: (Total of 3 licenses issued; Total of 1 license in use)<br />
<br />
"Distrib_Computing_Toolbox" v27, vendor: MLM<br />
floating license<br />
<br />
alnersh n47 /dev/tty (v27) (lm/27010 1301), start Tue 4/9 11:44<br />
<br />
Users of MATLAB_Distrib_Comp_Engine: (Total of 16 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of SIMULINK: (Total of 2 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of Control_Toolbox: (Total of 2 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of Simulink_Control_Design: (Total of 2 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of Simulink_Design_Optim: (Total of 2 licenses issued; Total of 0 licenses in use)<br />
<br />
Users of Symbolic_Toolbox: (Total of 1 license issued; Total of 0 licenses in use)<br />
<br />
<br />
/u/local/apps/matlab/7.14/etc/glnxa64/lmstat -c /u/local/licenses/license.matlab -S MLM<br />
<br />
#############<br />
All done on 09/04/13 (European, I mean logical, date notation)!<br />
All the same information as in the previous examples, except you can also tell that alhead and alnersh are using MATLAB licenses on nodes n2173 and n47, respectively.</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Scripts:fix_perms.sh&diff=910Hoffman2:Scripts:fix perms.sh2013-06-17T20:14:14Z<p>Elau: </p>
<hr />
<div>[[Hoffman2:Scripts|Back to Hoffman2 Scripts]]<br />
<br />
==Use Cases==<br />
* You are collaborating with another researcher on a data set and you want to make sure all files are editable by both of you. You need to make sure group read/write permissions are enabled and you both belong to a common group.<br />
* Your research group maintains a central data repository and different members are responsible for processing subjects at different stages. But everyone needs access to all the files created. Use this to make sure read/write permissions are enabled on that centrally located directory for the group. <br />
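<br />
The core of such a permissions fix can be expressed in a single find command. The snippet below is a hypothetical re-creation of what fix_perms.sh does under the hood; the real script may behave differently (for example around its <code>-q</code> flag).<br />

```shell
#!/bin/bash
# fix_perms DIR: grant group read/write/execute on every file owned
# by the executing user under DIR (default: current directory).
# Hypothetical re-creation of fix_perms.sh; the real script may differ.
fix_perms() {
    local dir="${1:-.}"
    # Match only files we own that are missing any of the g=rwx bits,
    # then add those bits in batched chmod calls.
    find "$dir" -user "$(whoami)" ! -perm -g=rwx -exec chmod g+rwx {} +
}
```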
<br />
<br />
==Help/Usage==<br />
<pre><br />
$ fix_perms.sh --help<br />
<br />
fix_perms.sh<br />
<br />
This tool changes the permissions of any file owned by the<br />
executing user to have group read/write/execute permissions.<br />
Given no arguments, this permissions change is done<br />
recursively on the current directory. Given an argument<br />
that points to a directory that exists in the filesystem,<br />
it will run recursively on that.<br />
<br />
USAGE:<br />
$ fix_perms.sh -h<br />
or<br />
$ fix_perms.sh --help<br />
To see this usage message.<br />
<br />
$ fix_perms.sh<br />
Will recursively search the current working directory for<br />
files owned by the executing user. Any files that do not<br />
have group read/write/execute permissions will be given such.<br />
<br />
$ fix_perms.sh /path/to/directory<br />
Will recursively search the directory given as an argument<br />
for files owned by the executing user. Any files that do<br />
not have group read/write/execute permission will be given<br />
such.<br />
<br />
$ fix_perms.sh -q /path/to/directory<br />
Will do the same changing of permissions, but suppress the<br />
output of the find and chmod commands so that the process<br />
happens quietly. Useful if you run this command on <br />
directories everytime you login as a background process<br />
and would like to not be bombarded by lines and lines<br />
of output.<br />
</pre></div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Scripts&diff=907Hoffman2:Scripts2013-06-17T20:07:04Z<p>Elau: </p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
When you encounter a computer problem, it is best to find a systematic solution in the form of a script or program. This lets you be prepared to handle the problem the next time you encounter it, because let's face it, history has a way of repeating itself.<br />
<br />
Herein lies a collection of the scripts we have built up in the Hoffman2 FMRI group for solving our most common problems. Each script's help output is provided for searching convenience, along with example use cases.<br />
<br />
<br />
==clean_me.py==<br />
How to clean up after yourself and maintain within your quota space.<br />
:[[Hoffman2:Scripts:clean_me.py|clean_me.py]]<br />
<br />
<br />
==clear_scratch.sh==<br />
Clear out all the temporary files you kept in your [[Hoffman2:Introduction#Temporary_Storage|scratch]] directory.<br />
:[[Hoffman2:Scripts:clear_scratch.sh|clear_scratch.sh]]<br />
<br />
<br />
==compUsage.py==<br />
Check out how much the FMRI group has been using the cluster. Useful to know if your job has a snowball's chance of running before next quarter, or if Wesley's been breaking Hoffman2 again.<br />
:[[Hoffman2:Scripts:compUsage.py|compUsage.py]]<br />
<br />
<br />
==fix_all_home9.sh==<br />
A long time ago, the Hoffman2 filesystem had a directory called home9 that many people made use of. Now that it's gone, some of your scripts point to places that no longer exist. This tries to fix those scripts.<br />
:[[Hoffman2:Scripts:fix_all_home9.sh|fix_all_home9.sh]]<br />
<br />
<br />
==fix_perms.sh==<br />
Collaboration is one of the best ways to further science. So if your lab mate can't work on the data you just processed for her, science has a hard time moving forward. Fix those permissions and collaborate!<br />
:[[Hoffman2:Scripts:fix_perms.sh|fix_perms.sh]]<br />
<br />
<br />
==getDicomDate.py==<br />
All you want to know is when the DICOM was created, and you don't have time to learn how to read DICOM files byte-wise. Just run this script.<br />
:[[Hoffman2:Scripts:getDicomDate.py|getDicomDate.py]]<br />
<br />
<br />
==home9_fix.sh==<br />
You have a very outdated bash profile that refers to the old home9 directory. Run this script once and all things will be fixed.<br />
:[[Hoffman2:Scripts:home9_fix.sh|home9_fix.sh]]<br />
<br />
<br />
==matlab_license_check.sh==<br />
MATLAB Licenses are as valuable as gold at times of heavy computing on Hoffman2. Use this script to find out if there is one with your name on it, or to find out who to go yell at for using too many licenses.<br />
:[[Hoffman2:Scripts:matlab_license_check.sh|matlab_license_check.sh]]<br />
<br />
<br />
==mkstruc.sh==<br />
Standards are important for uniform understanding. Use our standard data directory tree for your next study.<br />
:[[Hoffman2:Scripts:mkstruc.sh|mkstruc.sh]]<br />
<br />
<br />
==switch_freesurfer==<br />
Version consistency is important for software. Use this to stay consistent on the version of FreeSurfer you are using.<br />
:[[Hoffman2:Scripts:switch_freesurfer|switch_freesurfer]]<br />
<br />
<br />
==switch_fsl==<br />
Use this one to stay consistent on your FSL version.<br />
:[[Hoffman2:Scripts:switch_fsl|switch_fsl]]</div>Elauhttps://www.ccn.ucla.edu/wiki/index.php?title=Hoffman2:Accessing_the_Cluster&diff=744Hoffman2:Accessing the Cluster2013-05-06T19:33:59Z<p>Elau: </p>
<hr />
<div>[[Hoffman2|Back to all things Hoffman2]]<br />
<br />
Here are some of our favorite ways to access the Hoffman2 Cluster login nodes.<br />
<br />
==SSH - Command Line==<br />
: ''The official description of how to do this is found [http://www.ats.ucla.edu/clusters/common/head_node_access/access.htm here]''<br />
SSH stands for ''Secure Shell'' and is a method of remotely logging into a computer using an encrypted connection. It is a command line tool and is available on most *nix-based operating systems with ports available for Windows.<br />
<br />
<br />
===If you are on a Mac or a PC running Linux/Unix...===<br />
# Open up X11 or Terminal. Both are under ''Applications > Utilities'' on Macs.<br />
# Type the command<br />
#: <pre>$ ssh -X [USERNAME]@hoffman2.idre.ucla.edu</pre><br />
#: filling in your Hoffman2 username.<br />
#: The <code>-X</code> flag enables X11 forwarding so that any graphics rendered on Hoffman2 are displayed on the screen of your computer. The <code>-Y</code> flag also forwards X11, but as ''trusted'' forwarding, which skips the X11 SECURITY extension restrictions.<br />
# Press enter and type in your password when it asks for it. No characters or asterisks will show up while you type.<br />
# Provided your typing was good, you will be greeted by the Hoffman2 login message and have successfully SSHd into a login node.<br />
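<br />
If you connect often, an entry in your SSH client configuration file saves retyping the full command. A sketch (YOUR_USERNAME is a placeholder for your Hoffman2 username):<br />

```
# ~/.ssh/config
Host hoffman2
    HostName hoffman2.idre.ucla.edu
    User YOUR_USERNAME
    ForwardX11 yes
```

With this in place, <code>ssh hoffman2</code> behaves like the full <code>ssh -X</code> command above.<br />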
<br />
<br />
===If you are on a PC running Windows...===<br />
# Go [http://www.ats.ucla.edu/clusters/common/head_node_access/access.htm here] and follow the instructions under ''From a PC running Windows''. We recommend PuTTY or NX Client.<br />
# Once you have that setup, the process is the same as if you were on a Mac or Linux/Unix machine<br />
<br />
<br />
<br />
==NX Client - GUI==<br />
: ''The official description of how to do this is found [http://hpc.ucla.edu/hoffman2/access/nx.php here]''<br />
The NX Client program allows you to set up a Virtual Network Computing (VNC)-like session with Hoffman2. This session will keep running even if your Internet connection drops in and out (much like [[Using Screen|screen]] on the command line).<br />
<br />
<br />
===PCs or Mac OS X 10.6 and earlier===<br />
# Go to the [http://www.nomachine.com/ No Machine website] and navigate to the ''Download'' tab.<br />
# Find the section titled ''NX Client Products'' and click on the one for the operating system you are running.<br />
# Download the appropriate installation file and install it on your computer.<br />
# Once it is installed on your computer, start up NX Client.<br />
# The window that appears will ask for:<br />
#*Login -- your Hoffman2 username<br />
#*Password -- your Hoffman2 password<br />
#*Session -- type ''Hoffman2''<br />
# Then click the ''Configure...'' button and fill out the necessary information<br />
#*Under the ''General'' tab<br />
#**In the ''Server'' section<br />
#***Host -- ''hoffman2.idre.ucla.edu''<br />
#***Port -- 22<br />
#***Key -- Click this button and delete the contents of the window that appears. Open up an [[Accessing Hoffman2#SSH_-_Command_Line|SSH session]] to Hoffman2 and run the command<br />
#***:<pre>$ cat /etc/nxserver/client.id_dsa.key</pre><br />
#***:The output is the key. Copy everything that was printed out and paste it into the Key window in NX Client and click ''Save''.<br />
#**In the ''Desktop'' section -- Use the drop down menus to select ''Unix'' and ''GNOME''<br />
#*Click ''Save''<br />
# Now you can click ''Login'' on the main window and a GUI environment connection will be established with a Hoffman2 login node.<br />
<br />
<br />
===Mac OS X 10.7 and newer===<br />
# Go to the ''Download Preview'' section of the No Machine website ([http://www.nomachine.com/download-preview.php]) and download the NX Client 4 preview.<br />
# Open the DMG that you downloaded, copy "No Machine Player.app" into your "Applications" directory and open it up for the first time.<br />
# A window titled "New connection" will appear. Fill out the fields accordingly<br />
#* Name -- Something like "Hoffman2"<br />
#* Host -- "hoffman2.idre.ucla.edu" since this is the server you are connecting to<br />
#* Port -- 22<br />
#* Select "Use the NoMachine login" and click the button on the right labeled with "..."<br />
#** In the new window labeled "NoMachine login" check the box next to "Use an alternate server key"<br />
#** Open up a Terminal and run the following command (replacing USERNAME with your Hoffman2 username)<br />
#**:<code>$ scp USERNAME@login2.hoffman2.idre.ucla.edu:/etc/nxserver/client.id_dsa.key ~/Documents/</code><br />
#**and enter your Hoffman2 password when prompted.<br />
#** Back in NoMachine, click on the button labeled "..." and find the file you just downloaded (it is in your Documents folder labeled "client.id_dsa.key").<br />
#** Click on the X in the top right corner to return to the previous window.<br />
#* Click on the X in the top right corner to finish setting up the connection parameters.<br />
# Double click on the connection you just created (it should be the only one in the list).<br />
# A circular progress indicator will show up for a bit before giving way to an authentication screen asking for username and password.<br />
# Enter your Hoffman2 username and password and click "OK" (You may also check the box labeled "Save this setting in the configuration file" to avoid retyping this in the future)<br />
# A circular progress indicator will show up again until a menu appears. Select "Create a new session".<br />
# In the next menu, '''select GNOME'''.<br />
# After another circular progress indicator, a virtual desktop should appear.<br />
# Reconnections in this client are not currently supported for Hoffman2, so please make sure to log out and close your connections properly. [http://hpc.ucla.edu/hoffman2/access/nx.php#logout]<br />
<br />
<br />
<br />
===Troubleshooting===<br />
If your NX Client session freezes and you are unable to close it properly, open ''NX Session Administrator'' and disconnect your session from there. This freezing often occurs when your Internet connection is lost abruptly. Another possible cause for freezing is scrolling on certain Windows touchpads.<br />
<br />
<br />
<br />
==UCLA Grid Portal==<br />
: ''The official description of how to do this is found [http://www.ats.ucla.edu/clusters/grid_portal/ here]''<br />
<br />
We haven't used this one much yet, but we'll try it out and report back on our experiences.<br />
<br />
<br />
<br />
==External Links==<br />
*[http://www.ats.ucla.edu/clusters/common/head_node_access/access.htm Accessing Hoffman2 via Command Line]<br />
*[http://hpc.ucla.edu/hoffman2/access/nx.php Accessing Hoffman2 via NX Client]<br />
*[http://www.ats.ucla.edu/clusters/hoffman2/head_node_access/ Information about Hoffman2 Login Nodes] -- RSA Fingerprints, node addresses<br />
*[http://www.ats.ucla.edu/clusters/grid_portal/ Accessing Hoffman2 through UCLA Grid Portal]</div>Elau