
CHARGE_spatialWMH

Spatial WMH patterns

Description

This repository provides the code to extract 41 WMH volumes (36 bullseye regions + 5 corpus callosum regions) using the Bullseye white matter segmentation and to prepare the phenotype file to be shared.

How to download this repository

  • To download the repository, click the blue "Code" button in the upper right corner and choose, for example, .zip compression. Then select the folder where the scripts should be saved.

COMPONENTS

  • MAIN_script_batch.sh: batch script that launches the main shell script on a compute cluster.

  • MAIN_script.sh: main analysis script that launches the Python workflow and the extraction of total intracranial volume

  • my_sublist.txt: list of subjects to process

  • environment_spatialwmh.yml: specification file for installing the Python environment with conda.

  • python: folder containing all code necessary to run 1) T1-FLAIR/T2 registration, 2) Bullseye WM Segmentation and 3) spatial WMH volume extraction

    • run_bullseye_WMH_segmentation.py: combines steps 1)-3)
    • create_flairreg_pipeline.py: workflow for 1) T1-FLAIR/T2 registration
    • bullseye_pipeline.py: workflow for 2) Bullseye WM Segmentation
    • utils.py: helper functions including the function for 3) spatial WMH volume extraction
    • __init__.py, configoptions.py: config files

PREREQUISITES

  • A computer running Linux or Mac OS X. For Windows, a virtual machine is needed.
  • An installation of FreeSurfer (version >= 5.3.0)
  • An installation of a Python 3.9.18 environment including nipype==1.8.6, nibabel==5.2.0, numpy==1.26.3
    • you can set up this environment yourself using conda, or use the included specification file by typing conda env create -f environment_spatialwmh.yml (this may take a long time; a quick version check is sketched below)
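
If you want to confirm that the environment provides the expected package versions, a minimal check such as the following can be run inside the activated environment (a sketch only; the expected versions are those listed above):

    # Minimal sketch: confirm the activated environment matches the versions listed above.
    import sys
    import nibabel
    import nipype
    import numpy

    expected = {"nipype": "1.8.6", "nibabel": "5.2.0", "numpy": "1.26.3"}
    found = {"nipype": nipype.__version__,
             "nibabel": nibabel.__version__,
             "numpy": numpy.__version__}

    print("Python:", sys.version.split()[0])  # should report 3.9.18
    for pkg, want in expected.items():
        status = "OK" if found[pkg] == want else f"expected {want}"
        print(f"{pkg} {found[pkg]} ({status})")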

We have also created a containerized version of all the software needed to run the code. The only prerequisite for using it is an installation of Singularity on your system.

To use this option:

  1. download the Singularity image bullseye.sif. It is fairly large (4.5 GB), so store it in an appropriate location.
  2. download the FreeSurfer license file here and put it in /my/path/to/license.txt
  3. start the environment by typing singularity run --bind /my/path/to/license.txt:/opt/freesurfer-7.3.0/.license bullseye.sif
  4. run ./MAIN_script_singularity.sh to process individual subjects within the Singularity container.

INPUT

FreeSurfer

Required FreeSurfer input

Not all of the FreeSurfer output is required. The pipeline only needs the following FreeSurfer files:

  • scansdir
    • subject-id1
      • mri
        • aseg.mgz
      • label
        • lh.aparc.annot
        • rh.aparc.annot
      • surf
        • lh.white
        • rh.white
        • lh.pial
        • rh.pial
    • subject-id2
      ...

In addition to the FreeSurfer data, the pipeline requires the WMH maps and the images used for WMH segmentation (e.g. FLAIR or T2).
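
Before launching the pipeline it can be useful to verify that the required FreeSurfer files are present for every subject. The following is a minimal sketch (not part of the repository); the scansdir path is a placeholder and my_sublist.txt is assumed to contain one subject ID per line:

    # Minimal sketch (not part of the repository): check the required FreeSurfer files
    # for every subject listed in my_sublist.txt. Adapt "scansdir" to your own layout.
    import os

    scansdir = "/path/to/scansdir"  # placeholder: FreeSurfer results directory
    required = ["mri/aseg.mgz",
                "label/lh.aparc.annot", "label/rh.aparc.annot",
                "surf/lh.white", "surf/rh.white",
                "surf/lh.pial", "surf/rh.pial"]

    with open("my_sublist.txt") as f:  # assumed: one subject ID per line
        subjects = [line.strip() for line in f if line.strip()]

    for sj_id in subjects:
        missing = [r for r in required
                   if not os.path.exists(os.path.join(scansdir, sj_id, r))]
        if missing:
            print(f"{sj_id}: missing {', '.join(missing)}")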

USAGE

Change the following files in MAIN_script_batch.sh

  • sublist_name=${PATH_SCRIPT}/my_sublist.txt (should be a list of subjects similar to the one provided in this example; see the sketch after this list for one way to generate it)
  • PATH_OUT=/data/pt_02271/Data/wd/ (this is where all output files will be generated; the output for one subject is about 140 MB and the runtime is about 8 minutes per subject with the specified settings)
  • PATH_FREESURFER=/data/p_02271/NAKO-381/NAKO-381_MRT_Dateien/V2/FreeSurfer/ (path to FreeSurfer results)
  • PATH_LESION=/data/pt_02271/Data/predicted/ (path to where WMH maps/FLAIR images are located)
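
If you do not have a subject list yet, one simple way to create my_sublist.txt is to list the subject folders found in PATH_FREESURFER. The sketch below assumes that the sublist format is one subject ID per line and that the FreeSurfer folders are named by subject ID; adjust it if your setup differs:

    # Minimal sketch (assumptions: one subject ID per line in my_sublist.txt,
    # FreeSurfer subject folders named by subject ID).
    import os

    PATH_FREESURFER = "/data/p_02271/NAKO-381/NAKO-381_MRT_Dateien/V2/FreeSurfer"

    subjects = sorted(d for d in os.listdir(PATH_FREESURFER)
                      if os.path.isdir(os.path.join(PATH_FREESURFER, d)))

    with open("my_sublist.txt", "w") as f:
        f.write("\n".join(subjects) + "\n")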

Specific file patterns to identify subject-specific input files inside PATH_LESION can be specified in MAIN_script.sh. Here, ${PATH_LESION} is the PATH_LESION you specified in MAIN_script_batch.sh, and ${sj_id} is the subject ID from my_sublist.txt. If your file organization is different (e.g. LESION and FLAIR are not in the same folder), you can modify ${PATH_LESION} in MAIN_script_batch.sh and the definitions in MAIN_script.sh so that they match. If, for example, your FLAIRs and LESIONs are all in the same folder but the file names contain the ${sj_id}, you could change the definitions to:

  • FLAIR=${PATH_LESION}/FLAIR_intensity_normed_${sj_id}.nii.gz #exact name of FLAIR/T2 image used for registration
  • LESION=${PATH_LESION}/${sj_id}/predictedWMH_${sj_id}.nii.gz #exact file name of wmh probability map

Running the script

  1. activate FreeSurfer (e.g. module load freesurfer/7.4.1) and the conda/Python environment (e.g. conda activate spatialwmh), either locally or in MAIN_script.sh ll. 41-43, depending on your compute infrastructure
  2. change the subject list and file names in MAIN_script_batch.sh and MAIN_script.sh
  3. go to the folder containing the scripts
  4. run the main batch script by typing ./MAIN_script_batch.sh to run it on your compute cluster

OUTPUT

After execution of the pipeline, the directory PATH_OUT will contain one folder per subject (named by ${sj_id}) with the following structure:

  • subject-id1
    • bullseye_wmparc.nii.gz
    • LESION_warped.nii.gz
    • FLAIR_warped.nii.gz
    • res.txt
    • /bullseyelesion_bbreg
  • subject-id2
    ...

containing, respectively, the final bullseye parcellation (bullseye_wmparc.nii.gz), the WMH map (LESION_warped.nii.gz) and the FLAIR (FLAIR_warped.nii.gz) coregistered with the T1, the results file (res.txt), and a directory with intermediate files (/bullseyelesion_bbreg) that can be deleted.
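
As a quick sanity check after a run, you can load the bullseye parcellation of one subject and list the labels it contains; with valid output these should be the 41 labels described at the end of this README. A minimal sketch using nibabel/numpy (the subject ID is a placeholder):

    # Minimal sketch: list the labels present in one subject's bullseye parcellation.
    import os
    import nibabel as nib
    import numpy as np

    PATH_OUT = "/data/pt_02271/Data/wd"  # as set in MAIN_script_batch.sh
    sj_id = "subject-id1"                # placeholder subject ID

    parc = nib.load(os.path.join(PATH_OUT, sj_id, "bullseye_wmparc.nii.gz"))
    labels = np.unique(parc.get_fdata()).astype(int)
    print("bullseye labels:", [int(l) for l in labels if l > 0])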

More information on the Python workflow

The pipeline includes the following steps:

  1. the coregistration of the WMH probability map to FreeSurfer space (create_flairreg_pipeline.py). In the latest version (11/23), the WMH probability maps are coregistered to FreeSurfer space and the probability values are summed within the regions defined in the Bullseye segmentation. Because the WMH probability maps are resampled into FreeSurfer native space (1 mm isotropic voxels), the resulting values are in units of mm³; probabilities lower than 1 are treated like partial volumes.

  2. the bullseye WM parcellation (bullseye_pipeline.py, provided in this GitHub repository). It provides an (anatomy-independent) spatial localization based on a radial component (i.e., lobes) and a depth component. It can be used to obtain region-specific quantification of white matter parameters (e.g., a similar approach has been used to quantify regional white matter hyperintensity load in earlier papers).

The internals of the process are explained in this blog post.

  • the bullseye parcellation is the intersection of a lobar parcellation and a depth parcellation
  • the lobar parcellation consists of 4 lobes per hemisphere (frontal, parietal, temporal and occipital) plus the basal ganglia and thalamus as an additional region: (4*2) + 1 = 9 lobar regions
  • the depth parcellation consists of 4 equidistant parcels spanning from the surface of the ventricles to the internal surface of the cortex
  3. the extraction of bullseye-parcellated WMH volumes (in utils.py). The lesion map is masked by the bullseye parcellation and the sum is taken for each label of the Bullseye segmentation. 36 + 5 WMH volumes are extracted and saved in a text file (the labels are listed below; a sketch of this step follows the list):

    • 51, 52, 53, 54 : BG + depth
    • 111, 112, 113, 114: lh frontal + depth
    • 121, 122, 123, 124: lh occipital + depth
    • 131, 132, 133, 134: lh temporal + depth
    • 141, 142, 143, 144: lh parietal + depth
    • 211, 212, 213, 214: rh frontal + depth
    • 221, 222, 223, 224: rh occipital + depth
    • 231, 232, 233, 234: rh temporal + depth
    • 241, 242, 243, 244: rh parietal + depth
    • 251, 252, 253, 254, 255: Corpus Callosum (posterior - anterior following FreeSurfer's definition in mri_cc)