EL data organization:
The functional data and analysis results are organized in a subject-centric manner, where all of the data associated with a given subject are stored under the same directory (even if different portions of these data may be used in different experiments).
The directory where all of the subjects are stored is called EL's root.subjects directory. This directory can be defined dynamically (e.g. when multiple user groups share the same software installation) using the syntax:
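The call below is only an illustrative sketch, assuming el accepts a root.subjects property/value pair; the path is hypothetical, and the exact calling convention is documented in "help el":

```matlab
% assumed calling convention and hypothetical path; see "help el"
el('root.subjects','/shared/mygroup/subjects');
```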
Within this location, each directory contains the data from a different subject. Subject directory names are referred to in EL as subject IDs; they need to be unique for each subject but are otherwise arbitrarily defined (without whitespace or punctuation marks, e.g. 835_FED_20200305a_3T2_PL2017).
Within each subject directory, a file named data.cfg defines the location and source of the original/raw data for this subject. It is recommended that this raw data also be located within the same subject directory, but this is not necessary (e.g. in scenarios where the raw data may be read-only and shared among multiple groups/researchers). A typical data.cfg file may contain the following information, for example, if your original/raw data is in NIFTI format:
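The fragment below is for illustration only: the #functionals/#structurals field names and file paths are assumptions, not the verified data.cfg specification (see "help el" for the actual format). It sketches a data.cfg listing NIFTI functional runs and an anatomical image:

```
#functionals
nii/func_run-01.nii
nii/func_run-02.nii
#structurals
nii/anat.nii
```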
or something like the following, if your original/raw data is in DICOM format:
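Again purely illustrative (field names and paths are assumptions; see "help el" for the actual data.cfg specification), this sketch points each entry at a folder of DICOM files instead of a single NIFTI volume:

```
#dicoms
dicom/func_run-01/*.dcm
dicom/func_run-02/*.dcm
dicom/anat/*.dcm
```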
In addition, each subject directory may contain one or more .cat files indicating the details of the experimental design associated with each individual functional run (see the EXPERIMENTAL DESIGN section below for details about these .cat files).
Experimental design directory
For task activation analyses, the directory where information about all of the experimental designs are stored is called EL's root.tasks directory. This directory can also be defined dynamically using the syntax:
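As with root.subjects, the call below is an illustrative sketch with a hypothetical path and an assumed calling convention; "help el" documents the exact syntax:

```matlab
% assumed calling convention and hypothetical path; see "help el"
el('root.tasks','/shared/mygroup/tasks');
```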
Within this directory, a series of files named <design-ID>_*.para (e.g. tasksAB_random1.para) contain the onset/duration/event-type information defining all individual events in each experimental design. Just like subject IDs, design IDs are arbitrarily defined unique codes associated with each experimental design. The same design ID may be associated with multiple .para files, for example for different variations or random orders of the same experimental task. A typical .para file may look like the following:
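The sketch below is hypothetical: the column order (onset, duration, event type) and the condition labels A/B are assumptions used only to illustrate the onset/duration/event-type information described above (see "help el" for the exact .para specification):

```
% onset(s)  duration(s)  event-type    (column layout is an assumption)
0           12           A
12          12           B
24          12           A
36          12           B
```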
In addition, also within this location, a series of files named contrasts_<design-ID>.txt define all of the contrasts that we would like to evaluate for each experimental design. Each contrast is defined on a separate line as a linear combination of modeled tasks. A typical contrast file may contain the following information, for example:
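A hypothetical contrasts_tasksAB.txt is sketched below, assuming each line lists a contrast name followed by condition/weight pairs; this format is an assumption for illustration only (see "help el" for the exact specification):

```
A-B    A 1 B -1
B-A    A -1 B 1
A+B    A 0.5 B 0.5
```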
Last, the information about which experimental design was used in each individual subject's functional runs is stored within each subject folder in a file named <design-ID>.cat (note: multiple design files may exist within a subject directory, for example when different functional runs are used in different experiments or when the same runs are analyzed differently). A typical .cat association file may look like the following:
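A hypothetical tasksAB.cat is sketched below, assuming each line pairs a functional run number with the .para file describing that run's design; the layout is an assumption, not the verified format (see "help el"):

```
% run   design file    (layout is an assumption)
1       tasksAB_random1.para
3       tasksAB_random2.para
```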
preprocessing functional and anatomical images
A typical sequence of commands to preprocess your functional/anatomical data would look like the following:
Step 1) initialize EL and point to the location of root folders
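A sketch of step 1, under the assumptions noted in the comments (the paths are hypothetical and the el calling convention is assumed; see "help el" for the exact syntax):

```matlab
% assumed calling convention and hypothetical paths; see "help el"
addpath /software/conn;                          % make CONN (and its EL module) available
el('root.subjects','/shared/mygroup/subjects');  % location of subject directories
el('root.tasks','/shared/mygroup/tasks');        % location of experimental design files
```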
Step 2) run one of EL's predefined preprocessing pipelines on subject 'sub0001' data
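A sketch of step 2, assuming a 'preprocessing' keyword taking a subject ID and a pipeline ID (the DefaultMNI pipeline is mentioned later in this document; the calling convention itself is an assumption):

```matlab
% assumed calling convention; see "help el" and "help evlab17_run_preproc"
el('preprocessing','sub0001','DefaultMNI');
```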
Step 3) create a series of Quality Assurance plots evaluating the results of the previous preprocessing pipeline on subject 'sub0001'
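A sketch of step 3, assuming a QA-plot keyword of the form shown below (an assumption; "help el" documents the exact syntax):

```matlab
% assumed calling convention; see "help el"
el('preprocessing.qa','sub0001');
```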
Repeat steps 2) and 3) for any additional subjects.
If necessary, additional preprocessing steps can be run a posteriori on an already preprocessed dataset. For example, after step 2) above, the following syntax would run an additional spatial smoothing step on a dataset that has already been preprocessed using the DefaultMNI pipeline:
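The keyword and pipeline names in the sketch below are assumptions used only to illustrate appending a smoothing step to an already preprocessed dataset; "help el" documents the actual syntax:

```matlab
% assumed keyword and pipeline names; see "help el"
el('preprocessing.append','sub0001','Smooth');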
EL preprocessing pipelines are defined through .cfg files, and may use any of the preprocessing and denoising options available in CONN. EL includes several standard preprocessing pipelines already defined and tailored for task activation analyses (see conn/modules/el/pipeline_preproc_<pipelineID>.cfg files for details), and users may modify these files and/or create their own pipelines.
When using EL to preprocess your data, EL will create additional subdirectories within each subject directory containing the results of all preprocessing steps (these directories are named after the preprocessing pipeline that was run).
(see "help el" and "help evlab17_run_preproc" for additional details)
model estimation (task activation analyses)
A typical sequence of commands to run first-level GLM task activation analyses of your functional data would look like the following:
Step 4) run model estimation on subject 'sub0001' preprocessed data using the 'tasksAB' experimental design
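A sketch of step 4, assuming a 'model' keyword taking a subject ID and a design ID (the calling convention is an assumption; the tasksAB design ID follows the earlier examples):

```matlab
% assumed calling convention; see "help el" and "help evlab17_run_model"
el('model','sub0001','tasksAB');
```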
Step 5) create a series of Quality Assurance plots of model design, sample effect-sizes, and contrast estimability measures of the analyses of 'sub0001' data
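A sketch of step 5, assuming a model-QA keyword of the form shown below (an assumption; see "help el" for the exact syntax):

```matlab
% assumed calling convention; see "help el"
el('model.qa','sub0001','tasksAB');
```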
Repeat steps 4) and 5) for any additional subjects.
The results of the first-level analyses are stored in a directory named after the experimental design, contained within the results/firstlevel subdirectory of the preprocessed dataset. The contents of this directory are the standard SPM first-level analysis outputs: an SPM.mat file, which contains the details of the estimated General Linear Model and can be loaded directly in SPM; beta_*.nii files, containing maps of estimated effect sizes for each model regressor; and con_*.nii and spmT_*.nii files, containing maps of estimated contrast values and T-statistics, respectively, for each specified first-level contrast.
(see "help el" and "help evlab17_run_model" for additional details)