UCLA Multimodal Connectivity Package

From Center for Cognitive Neuroscience

NOTE: This code is in beta; no assurances are made as to its quality. Use at your own risk.

The UCLA Multimodal Connectivity Package is a set of Python programs used to calculate connectivity metrics from a variety of neuroimaging modalities, including diffusion-weighted MRI (DTI/DSI), fMRI, and structural MRI. It also includes Python functions for brain network visualization. It is available for download here: http://www.ccn.ucla.edu/~jbrown/Downloads/umcp_13_dev.tar.gz . Currently, its strong suit is the simple, automated compilation of statistics from TrackVis .trk files and .nii ROI masks. All you need to do is make sure the NumPy and NiBabel Python modules are installed, and you can run these programs from the Unix command line.

It is currently optimized for Python 2.7.

Required Libraries

  1. Numpy: http://www.scipy.org/Download (required for all modules)
  2. NiBabel: http://nipy.sourceforge.net/nibabel/ (required for all modules)
  3. Scipy: http://www.scipy.org/ (only necessary for certain functions in timeseries.py)

For plotting:

  1. matplotlib: http://matplotlib.org/
  2. mayavi: http://docs.enthought.com/mayavi/mayavi/
  3. networkx: http://networkx.github.io/ (for 2D plotting functions)

If you use a Mac, I highly recommend using Macports for downloading and managing Python modules.

Contents

  • core.py: Core functions, including those which use the NiBabel library (http://nipy.sourceforge.net/nibabel/) in order to perform helper operations on .nii files.
  • tracks.py: Functions which operate on Diffusion Toolkit .trk files. Options include calculating connectivity matrices between a set of masks, calculating track density files for a set of masks, and calculating statistics for groups of tracks between masks.
  • run_tracks.py: A command-line callable program which takes a .trk file and a list of masks as input and can optionally output a connectivity matrix text file and/or a "density" file listing the number of tracks hitting each mask. Note that fiber counts from this program are very nearly identical to those found in TrackVis (99.99% similar), but not exactly identical. It is best to observe the number of fibers between two masks in TrackVis, compare that to the number counted by this program, and make sure the values are similar enough for your needs.
  • timeseries.py: Functions which operate on 4D fMRI .nii files in order to calculate various connectivity metrics. Options include correlating all voxels with a seed region, calculating the correlation-based connectivity matrix for a list of seed regions, and mutual information equivalents of these measures. The mutual information functions are especially untested.
  • run_timeseries.py: A command-line callable function which takes a 4D fMRI .nii file and a text file listing the paths to a set of regional masks in the same space as the fMRI data. Calculates the pairwise statistical similarity of the mean timeseries from each region (correlation, partial correlation, or covariance) and outputs a connectivity matrix.
  • plot_network.py: A set of functions that take a connectivity matrix file, along with (x,y,z) coordinates defining each region's center, and generate 3D or 2D network diagrams.
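
As an illustration of the kind of computation run_timeseries.py performs, the pairwise correlation of regional mean timeseries can be sketched in plain NumPy. The data and shapes below are hypothetical stand-ins, not the package's own API:

```python
import numpy as np

# Hypothetical data: 3 regions, each stored as a (voxels, timepoints) BOLD array.
rng = np.random.RandomState(0)
regions = [rng.randn(50, 120) for _ in range(3)]

# Mean timeseries per region (average over voxels) -> rows of a (3, 120) array.
mean_ts = np.vstack([r.mean(axis=0) for r in regions])

# Pairwise Pearson correlation -> 3x3 connectivity matrix.
connectmat = np.corrcoef(mean_ts)
```

The resulting matrix is symmetric with ones on the diagonal, matching the layout of the connectivity matrix text files the package writes.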

Setup

Before using these programs, the following should be done:

  1. For diffusion data (e.g. DTI): preprocessing (eddy correction, etc.), tensor calculation, and tractography with Diffusion Toolkit (http://www.trackvis.org/dtk/) to create a .trk file. To do standard-space voxelwise comparisons, the track file should be registered to standard space using the program track_transform (packaged with Diffusion Toolkit).
  2. For fMRI data: preprocessing (motion correction, skull stripping, high pass filtering, spatial smoothing, etc.) to prepare the 4D BOLD .nii file and optionally, registration to standard space.
  3. For structural data (e.g. T1, MPRAGE): preprocessing (skull stripping, etc.) and parcellation into separate regions of interest saved as .nii mask files (binary or not). Alternatively, regions of interest can be obtained from an atlas such as AAL or Harvard-Oxford.

Usage

run_tracks.py can be called from the Unix command line without the need to enter a Python session. In order to run the program, you only need the following:

  • A .trk file
  • A text file listing the path/filename of each mask for which you wish to find intersecting tracks. The text file should appear like so:
~/Fiber_networks/streamline/subject_1/HO_masks/mask1.nii.gz
~/Fiber_networks/streamline/subject_1/HO_masks/mask2.nii.gz
~/Fiber_networks/streamline/subject_1/HO_masks/mask3.nii.gz
~/Fiber_networks/streamline/subject_1/HO_masks/mask4.nii.gz
…
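A mask list like the one above can be generated programmatically rather than typed by hand. This is a small helper sketch, assuming the masks sit together in one directory; the function name and pattern are illustrative, not part of the package:

```python
import glob
import os

def write_mask_list(mask_dir, out_file, pattern='*.nii.gz'):
    """Write one mask path per line; sorting gives a stable region order."""
    masks = sorted(glob.glob(os.path.join(mask_dir, pattern)))
    with open(out_file, 'w') as f:
        for path in masks:
            f.write(path + '\n')
    return masks
```

For example, `write_mask_list('HO_masks', 'masklist.txt')` would produce a file in the format shown above. Sorting matters: the line order determines the row/column order of the output connectivity matrix.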

Once these items are ready, you can run run_tracks.py:

run_tracks.py -t dti.trk -m ../HO_masks/masklist.txt -o subject_1 -c

The program will write a log file showing the Python functions that are executed, so the user can reproduce these calls from within the Python interpreter as desired.

tracks_list_mm = tracks.get_floats('dti.trk')
header = tracks.get_header('dti.trk')
tracks_list_vox = tracks.mm_to_vox_convert(tracks_list_mm, header)
tracks_list_vox_filled = tracks.add_missing_vox(tracks_list_vox)
outmat, tracknums_mat = tracks.mask_connectivity_matrix(tracks_list_vox_filled, header, mask_list, 'subject_1', 20.0, 0, tracks_list_mm, 15.019)
volumemat, lengthmat, curvemat, statmat = tracks.track_stats_group(tracknums_mat, tracks_list_mm, header, 'subject_1', tracks_list_vox_filled, 'dti_fa.nii')

This will count all fibers that start and end in any pair of masks from the mask list and produce a connectivity matrix named subject_1_connectmat.txt.
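The output connectivity matrix is plain whitespace-delimited text, so it can be loaded and sanity-checked with NumPy. A minimal loader sketch (the helper name is ours, not the package's):

```python
import numpy as np

def load_connectmat(path):
    """Load a connectivity matrix text file and check that it is square."""
    mat = np.loadtxt(path)
    assert mat.ndim == 2 and mat.shape[0] == mat.shape[1], 'expected a square matrix'
    return mat
```

For example, `mat = load_connectmat('subject_1_connectmat.txt')`. Since the default endpoint-based counting treats mask pairs symmetrically, `np.allclose(mat, mat.T)` is a quick consistency check.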

In order to count all fibers that intersect with a mask and repeat for all masks in a list:

run_tracks.py -t dti.trk -m ../HO_masks/masklist.txt -o subject_1 -d

This will produce a text file reporting, one line per mask, the number of tracks that hit each mask from masklist.txt.

run_tracks.py -t dti.trk -m ../HO_masks/masklist.txt -o subject_1 -c -d -s --statimg=dti_fa.nii --lenthr=15.019 --maskthr=20 

This will calculate the connectivity matrix and the per-mask track hit counts, along with statistics for volume, average length, average curvature, and average FA, counting only tracks longer than 15.019 mm and ignoring probabilistic mask values below 20.

To see all the options for run_tracks.py, use the help:

run_tracks.py -h
Usage: run_tracks.py -t <input_tracks> -m <input_masks> -o <output_prefix> [options]
Options:
 -h, --help            show this help message and exit
 -t TRACKSFILE, --tracks=TRACKSFILE
                       read track data from FILENAME.trk
 -m MASKSFILE, --masks=MASKSFILE
                       read mask filenames stored on separate lines in
                       FILENAME.txt
 -o OUTPUT, --out=OUTPUT
                       output file prefix
 -c, --cmat            calculate connectivity matrix between all masks
 -d, --dens            calculate number (density) of tracks intersecting each
                       mask
 -s, --stats           calculate statistics for each track group
 --statimg=STATIMAGE   optional: calculate average track group value for
                       diffusion metric (FA, MD, ...) from .nii file
 --cthrough            connectmat: any part of track must hit any part of
                       mask
 --dend                density: either endpoint of track must hit any part of
                       mask
 --maskthr=MASKTHRESH  optional: threshold value for probabilistic masks
 --lenthr=LENTHRESH    optional: length threshold for tracks
 --densnii             for density calculation, output .nii density file
                       instead of mask hit counts in .txt file

Items to note:

  • Either a connectivity matrix or a density file must be specified – that’s the whole point of the program.
  • The statistics for track length are very close to those reported by TrackVis. The statistics for volume are quite different, and curvature is not reported by TrackVis. For the statistics based on a .nii image such as FA or MD, the average value for a track group is a weighted average that is close but not identical to the values reported by TrackVis.
  • The default behavior for a connectivity matrix is to count tracks where one track endpoint is in one mask and the other track endpoint is in the other mask. This can be changed to find tracks where any portion intersects one mask and any other portion intersects another mask using the --cthrough option. The defaults are reversed for a density calculation: any part of a track intersecting the mask counts by default, and the --dend option restricts counting to track endpoints.
  • Track counts calculated by this program are very close to, but not exactly, those reported by TrackVis. In practice, they appear accurate to within 99.995%. Check how the counts reported by run_tracks.py and TrackVis compare for a few random masks from your data, just to be confident.
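
The endpoint versus --cthrough distinction above can be illustrated with a toy check on voxelized tracks. This is a simplified sketch of the counting rule only, not the package's implementation; masks are represented here as sets of voxel coordinates:

```python
def endpoints_connect(track, mask_a, mask_b):
    """Default -c rule: one endpoint in each mask (in either order)."""
    start, end = track[0], track[-1]
    return ((start in mask_a and end in mask_b) or
            (start in mask_b and end in mask_a))

def through_connect(track, mask_a, mask_b):
    """--cthrough rule: any voxel of the track in each mask."""
    voxels = set(track)
    return bool(voxels & mask_a) and bool(voxels & mask_b)

# Toy track passing through mask A mid-stream but terminating only in mask B.
track = [(0, 0, 0), (1, 0, 0), (2, 0, 0), (3, 0, 0)]
mask_a = {(1, 0, 0)}   # interior voxel only
mask_b = {(3, 0, 0)}   # contains the final endpoint

print(endpoints_connect(track, mask_a, mask_b))  # False: no endpoint lies in A
print(through_connect(track, mask_a, mask_b))    # True: both masks are crossed
```

This is why the two modes can give noticeably different fiber counts for masks that sit along, rather than at the ends of, a fiber bundle.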

Changes from previous version (1.0):

  • run_tracks.py has changed significantly. It now functions like a standard unix program with flags for all required and optional inputs.
  • The module previously called nifti_functions has been renamed core.
  • The default behavior for the density calculation (implemented by tracks.mask_tracks) is now to write a text file listing the number of tracks hitting each mask; writing a 4D .nii density file is an option.
  • The average track statistics for diffusion metric images (FA, MD, etc.) are now based on a weighted average.
  • Connectivity matrices and density calculations both have options for a minimum length threshold and including fibers that a) start/terminate in the mask(s) or b) intersect the mask at any point.

The following is an example of how to use the tracks functions from a Python interactive session. First, load the required modules and set up variables.

In [1]: import os,sys
In [2]: import tracks,core
In [3]: import numpy as np
In [4]: tracks_file='dti_std.trk'
In [5]: stats_file='dti_FA.nii'
In [6]: mask_list=['fsmask3010.nii.gz','fsmask3025.nii.gz','fsmask3029.nii.gz','fsmask4025.nii.gz','fsmask4029.nii.gz']

Read the .trk file and header

In [7]: tracks_list_mm=tracks.get_floats(tracks_file)
In [8]: header=tracks.get_header(tracks_file)

Convert the track mm coordinates to voxel coordinates

In [9]: tracks_list_vox=tracks.mm_to_vox_convert(tracks_list_mm,header)

Fill in the missing voxels:

In [10]: tracks_list_vox_filled=tracks.add_missing_vox(tracks_list_vox)

Get the set of tracks that intersect each mask

In [11]: tracknums_list=tracks.mask_tracks(tracks_list_vox_filled,header,mask_list)

Get array of associated statistics for each set of tracks (total track bundle volume, avg track length, avg track curvature, and optionally an average track statistic from a statistical image like FA or MD)

In [12]: [vollist,lenlist,curvelist,statlist]=tracks.track_stats_list(tracknums_list,tracks_list_mm,header,tracks_list_vox_filled,stats_file)

Get the set of tracks that start/stop in each pair of masks; store as a connectivity matrix

In [13]: [outmat,tracknums_mat]=tracks.mask_connectivity_matrix(tracks_list_vox_filled,header,mask_list,'subject_1')

Get matrix of associated statistics for each set of tracks

In [14]: [volmat,lenmat,curvemat,statmat]=tracks.track_stats_group(tracknums_mat,tracks_list_mm,header,tracks_list_vox_filled,stats_file)

Save one of the statistic matrices as a text file

In [15]: np.savetxt('volume_matrix.txt',volmat)

Suggestions

For analyzing connectivity matrices and network characterization, the Brain Connectivity Toolbox (https://sites.google.com/site/bctnet/) is a good choice.
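
Before handing matrices to the Brain Connectivity Toolbox, a quick sanity check of basic node measures can be done in plain NumPy. The matrix below is a toy example standing in for a loaded connectivity matrix:

```python
import numpy as np

# Toy symmetric fiber-count matrix (e.g. as loaded with np.loadtxt).
mat = np.array([[0., 5., 0.],
                [5., 0., 2.],
                [0., 2., 0.]])

strength = mat.sum(axis=1)        # weighted degree ("node strength")
degree = (mat > 0).sum(axis=1)    # binary degree (number of connected masks)
```

Nodes with zero strength usually indicate masks that no track reached, which is worth confirming in TrackVis before any network analysis.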

For performing voxelwise image analysis on density/timeseries correlations .nii files created with these programs, FSL tools like randomise (http://www.fmrib.ox.ac.uk/fsl/randomise/index.html) are well suited.