CONN extensions:

EL (aka EvLab)

EvLab fMRI preprocessing and analysis pipeline (evlab.mit.edu): a complete pipeline combining SPM and CONN functionality for task activation analyses of fMRI data

EL data organization:

Subjects directory

The functional data and analysis results are organized in a subject-centric manner: all of the data associated with a given subject (or subject session) is organized under the same directory, even if different portions of these data may be used in different experiments.

The directory where all of the subjects are stored is called EL's root.subjects directory. This directory can be defined dynamically (e.g. when multiple user groups share the same software installation) using the syntax:


>> el root.subjects your_root_folder

Within this location, each directory contains the data from a different subject. Subject directory names are referred to in EL as subject-IDs, and they need to be unique for each subject but are otherwise arbitrarily defined (without whitespace or punctuation marks, e.g. 835_FED_20200305a_3T2_PL2017)

Within each subject directory, a file named data.cfg defines the location and source of the original/raw data files for this subject (e.g. functional and anatomical files). It is recommended that the raw data files are also located within the same subject directory but this is not necessary (e.g. in scenarios where the raw files may be read-only and shared among multiple groups/researchers). A typical data.cfg file may contain the following information, for example, if your original/raw data is in NIFTI format:

SUBJECTS/subject0001/data.cfg

#functionals

/Volumes/ext/SUBJECTS/subject0001/raw/func_01.nii

/Volumes/ext/SUBJECTS/subject0001/raw/func_02.nii

#structurals

/Volumes/ext/SUBJECTS/subject0001/raw/anat_01.nii

#RT

2

example data.cfg file (see data.cfg documentation for additional details of this file format)

or, if your original/raw data is in DICOM format, something like the following:

SUBJECTS/subject0002/data.cfg

#dicoms

/Volumes/ext/SUBJECTS/subject0002/dicoms/*-3-1.dcm

/Volumes/ext/SUBJECTS/subject0002/dicoms/*-7-1.dcm

/Volumes/ext/SUBJECTS/subject0002/dicoms/*-13-1.dcm

#functionals

7 13

#RT

2.53

#structurals

3

example data.cfg file (see data.cfg documentation for additional details of this file format)
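The data.cfg examples above (as well as the .para, .cat and pipeline .cfg files shown later) share a simple plain-text layout: a line beginning with # opens a named field (optionally followed by values on the same line, e.g. #units scans), subsequent non-empty lines list further values for that field, and % lines are comments. Purely as an illustration of that layout (EL/CONN has its own parser; parse_cfg below is a hypothetical helper, not part of the toolbox):

```python
# Minimal sketch of the field/value layout used by EL's configuration files
# (illustration only; EL/CONN parses these files itself).
def parse_cfg(text):
    fields = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith('%'):   # skip blanks and '%' comment lines
            continue
        if line.startswith('#'):               # '#name [values...]' opens a field
            parts = line[1:].split()
            current = parts[0]
            fields[current] = parts[1:]        # values may appear on the same line
        elif current is not None:
            fields[current].append(line)       # or on the following lines
    return fields

example = """\
#functionals
/Volumes/ext/SUBJECTS/subject0001/raw/func_01.nii
/Volumes/ext/SUBJECTS/subject0001/raw/func_02.nii
#structurals
/Volumes/ext/SUBJECTS/subject0001/raw/anat_01.nii
#RT
2
"""
print(parse_cfg(example))
```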

In addition, each subject directory may contain one or more .cat files indicating the details of the experimental design associated with the functional data (see the EXPERIMENTAL DESIGN section below for details about these .cat files)

example contents of SUBJECTS folder before starting EL preprocessing and analyses (the original/raw data may be organized differently or located in a different directory)

Experimental design directory

For task activation analyses, the directory where the information about all of the experimental designs is stored is called EL's root.tasks directory. This directory can also be defined dynamically using the syntax:


>> el root.tasks your_tasks_folder

Within this directory, a series of files named <design-ID>_*.para (e.g. tasksAB_random1.para) contain the onset/duration/event-type information defining all individual events in each experimental design. Just like subject-IDs, design-IDs are arbitrarily defined unique codes associated with each experimental design. The same design-ID may be associated with multiple .para files, for example for different variations or random orders of the same experimental task. A typical .para file may look like the following:

tasksAB_v1.para

% onset_time (in scans units) / task_type

#onsets

0.00 1

4.00 2

8.00 1

13.00 1

16.00 1

20.00 1

24.00 1

30.00 1

36.00 2

42.00 2

45.00 2

48.00 1

52.00 1

55.00 2

60.00 2

65.00 2

70.00 1

75.00 2

79.00 1

82.00 1

85.00 2

88.00 1

94.00 1

97.00 1

104.00 1

107.00 2

111.00 2

115.00 2

120.00 2

126.00 1

131.00 2

134.00 2

138.00 2

143.00 1

147.00 2

153.00 2

160.00 2

163.00 1

166.00 1

172.00 2


% task names

#names

Speech

NonSpeech


% task durations (in scans units)

#durations

3 3


% time units (scans/secs)

#units scans

example .para file (see design.para documentation for additional details)
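Because this .para file specifies #units scans, its onset and duration values are expressed in multiples of the repetition time; with the RT of 2 seconds declared in the subject's data.cfg above, converting an onset to seconds is a single multiplication. A small Python sketch (illustration only; EL/SPM handle this conversion internally):

```python
# Convert onsets from scan units to seconds (illustration only):
# the .para file above declares '#units scans', and the subject's
# data.cfg declares an RT of 2 seconds.
RT = 2.0                                    # repetition time in seconds (from '#RT')
onsets_scans = [0.00, 4.00, 8.00, 13.00]    # first few onsets from the .para file
onsets_secs = [t * RT for t in onsets_scans]
print(onsets_secs)                          # [0.0, 8.0, 16.0, 26.0]
```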

Also within this location, a series of files named contrasts_<design-ID>.txt define all of the contrasts to be evaluated for each experimental design. Each contrast is defined on a separate line as a linear combination of the modeled tasks. A typical contrast file may contain the following information, for example:

contrasts_tasksAB.txt

Speech-Baseline Speech 1

NonSpeech-Baseline NonSpeech 1

Speech-NonSpeech Speech NonSpeech 1 -1

NonSpeech-Speech Speech NonSpeech -1 1

AverageTasks Speech NonSpeech 0.5 0.5

example contrast definition file (see contrast.txt documentation for details)
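Each line above follows the pattern <contrast-name> <task-1> ... <task-k> <weight-1> ... <weight-k>, with any modeled task not listed presumably receiving a weight of zero. As an illustration of this format (not EL's actual parser; contrast_vector is a hypothetical helper), building the weight vector over the modeled tasks:

```python
# Build a contrast weight vector over the modeled tasks (illustration only).
def contrast_vector(line, task_names):
    tokens = line.split()
    name = tokens[0]
    rest = tokens[1:]
    k = len(rest) // 2                       # first half: task names, second half: weights
    tasks, weights = rest[:k], [float(w) for w in rest[k:]]
    vec = [0.0] * len(task_names)            # unlisted tasks get weight 0
    for t, w in zip(tasks, weights):
        vec[task_names.index(t)] = w
    return name, vec

name, vec = contrast_vector('Speech-NonSpeech Speech NonSpeech 1 -1',
                            ['Speech', 'NonSpeech'])
print(name, vec)   # Speech-NonSpeech [1.0, -1.0]
```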

Last, the information about which experimental design was used in each individual subject's functional runs is stored within each subject folder in a file named <design-ID>.cat (note: several design files may exist within a subject directory, for example when different functional runs are used in different experiments or when the same runs are analyzed differently). A typical .cat association file may look like the following:

tasksAB.cat

% functional runs included in tasksAB experiment

#runs

1

2


% experimental designs used in these runs

#files

/Volumes/ext/DESIGNS/tasksAB_v1.para

/Volumes/ext/DESIGNS/tasksAB_v2.para

example .cat file (see design.cat documentation for additional details)

EL how-to:

preprocessing functional and anatomical images

A typical sequence of commands to preprocess your functional/anatomical data would look like the following:

  • Step 1) initialize EL and point to the location of root folders

>> conn module el init;

>> el root.subjects /data/subjects;

>> el root.tasks /data/designs;

  • Step 2) run one of EL's predefined preprocessing pipelines on subject 'sub0001' data

>> el preprocessing sub0001 DefaultMNI;

  • Step 3) create a series of Quality Assurance plots evaluating the results of the previous preprocessing pipeline on subject 'sub0001'

>> el preprocessing.qa sub0001 DefaultMNI;

Repeat steps 2) and 3) for any additional subjects.

If necessary, additional preprocessing steps can be run a posteriori on an already preprocessed dataset. For example, after step 2) above, the following syntax would run an additional spatial smoothing step on a dataset that has already been preprocessed using the DefaultMNI pipeline:

>> el preprocessing.append sub0001 DefaultMNI OnlySmooth;

EL preprocessing pipelines are defined through .cfg files, and may use any of the preprocessing and denoising options available in CONN. EL includes several standard preprocessing pipelines already defined and tailored for task activation analyses (see conn/modules/el/pipeline_preproc_<pipelineID>.cfg files for details), and users may modify these files and/or create their own pipelines.
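A user-defined pipeline follows the same field/value format as the included .cfg files. For example, a hypothetical single-step pipeline applying only an additional 8mm spatial smoothing might look like the following sketch (the step and field names mirror those used in the included DefaultMNI pipelines; this is not one of the distributed files):

```
#steps
functional_smooth

#fwhm
8
```

If saved e.g. as conn/modules/el/pipeline_preproc_MySmooth.cfg (a hypothetical name), it could presumably be run with the same syntax as the OnlySmooth example above.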

When using EL to preprocess your data, EL will create additional subdirectories within each subject directory containing the results of all preprocessing steps (these directories are named after the preprocessing pipeline that was run).

(see "help el" and "help evlab17_run_preproc" for additional details)

example contents of SUBJECTS folder after preprocessing

conn/modules/el/pipeline_preproc_DefaultMNI_PlusStructural.cfg

#steps

structural_center

structural_segment&normalize

functional_label_as_original

functional_realign

functional_center

functional_art

functional_label_as_subjectspace

functional_segment&normalize_direct

functional_label_as_mnispace

functional_regression

functional_smooth

functional_label_as_minimallysmoothed

functional_smooth

functional_label_as_smoothed


#fwhm

4

6.9282


#reg_names

realignment

scrubbing

White Matter

CSF


#reg_dimensions

inf

inf

5

5


#reg_deriv

1

0

0

0


#reg_skip

1

one of the included preprocessing/denoising pipelines in EL (see preprocessing .cfg documentation for additional details about the format of these preprocessing .cfg files)
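Note the two #fwhm values in the pipeline above, one per functional_smooth step: successive Gaussian smoothing kernels combine in quadrature (total FWHM = sqrt(fwhm1^2 + fwhm2^2)), so the values 4 and 6.9282 appear chosen so that the minimallysmoothed data carries 4mm of smoothing while the fully smoothed data reaches an effective 8mm. A quick check:

```python
import math

# Successive Gaussian smoothing kernels combine in quadrature:
# total FWHM = sqrt(fwhm1^2 + fwhm2^2)
total_fwhm = math.hypot(4.0, 6.9282)   # the two '#fwhm' values in the pipeline above
print(round(total_fwhm, 3))            # ≈ 8.0
```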

model estimation (task activation analyses)

A typical sequence of commands to run first-level GLM task activation analyses of your functional data would look like the following:

  • Step 4) run model estimation on subject 'sub0001' preprocessed data using the 'tasksAB' experimental design

>> el model sub0001 DefaultMNI tasksAB;

  • Step 5) create a series of Quality Assurance plots of model design, sample effect-sizes, and contrast estimability measures of the analyses of 'sub0001' data

>> el model.qa sub0001 DefaultMNI tasksAB;

Repeat steps 4) and 5) for any additional subjects.

The results of the first-level analyses are stored in a directory named after the experimental design, within the results/firstlevel subdirectory of the preprocessed dataset. The contents of this directory are the standard SPM first-level analysis outputs: an SPM.mat file containing the details of the estimated General Linear Model (which can be loaded in SPM), beta_*.nii files containing maps of estimated effect sizes for each model regressor, and con_*.nii and spmT_*.nii files containing maps of estimated contrast values and T-statistics, respectively, for each specified first-level contrast.

(see "help el" and "help evlab17_run_model" for additional details)

example contents of SUBJECTS folder after first-level analyses

Additional first-level analysis details or options can be specified using a last argument to the "el model ..." command, pointing to a model configuration options file (either directly as a file path, or indirectly by name for any of the files located in conn/modules/el/pipeline_model_*.cfg), e.g.

>> el model sub0001 DefaultMNI tasksAB Default;

(see example and link below for additional details)

conn/modules/el/pipeline_model_Default.cfg

#functional_label

minimallysmoothed


#model_basis

hrf+deriv


#model_covariates

denoise


#model_serial

AR(1)


#hpf

128


one of the included sets of predefined model configuration options in EL (see model.cfg documentation for additional details of this model configuration file format)