In the present work, we are interested in visual indexing, search, and classification of multimodal MRI brain images by content to assist in the diagnosis of Alzheimer’s disease. The main idea is to provide the clinician with information about images having similar visual characteristics. Three categories of subjects are to be distinguished: normal control subjects (NC), subjects with mild cognitive impairment (MCI), and subjects with Alzheimer’s disease (AD). We apply deep learning methods based on neural networks to multimodal MRI images (structural MRI and diffusion tensor MRI) to detect structural changes in the human brain, specifically in areas known to be affected by Alzheimer’s disease. We extract local patches specific to AD, focusing on the region most involved in AD, in which the disease causes particular changes in brain structure: the hippocampus. This brain structure is responsible for memorization, and damage to this area leads to memory dysfunction.
2D images extracted from this region are then used to feed neural networks for subject classification. This step consists of extracting 2D slices from three projections (sagittal, coronal, and axial) of the 3D image of the hippocampus region, thus transforming a 3D image of the brain ROI into 2D patches. The proposed method is automatic (without any intervention of the clinician) and does not require an expensive and time-consuming segmentation step, thanks to the use of a standardized atlas. We have proposed an approach called "2D+ε", and our methods include the combination of information from different sources (projections/modalities), as well as a new transfer learning scheme.
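A minimal sketch of the slice-extraction step is given below, assuming an atlas-registered volume and a hypothetical hippocampus bounding box; the file name, voxel coordinates, and the interpretation of "2D+ε" as the central slice plus ε neighbouring slices in each projection are illustrative assumptions, not details taken from the thesis.

```python
# Sketch: extract sagittal/coronal/axial 2D patches around the hippocampus ROI.
# The bounding box would come from a standardized atlas registration, so no
# subject-specific segmentation is needed. Coordinates below are placeholders.
import nibabel as nib
import numpy as np

def extract_hippocampus_patches(volume_path, bbox, epsilon=1):
    """Return 2D patches from the sagittal, coronal, and axial projections.

    bbox    -- ((x0, x1), (y0, y1), (z0, z1)) voxel bounds of the hippocampus ROI
    epsilon -- number of neighbouring slices kept on each side of the central one
               (one reading of the "2D+ε" idea; an assumption here)
    """
    vol = nib.load(volume_path).get_fdata()
    (x0, x1), (y0, y1), (z0, z1) = bbox
    roi = vol[x0:x1, y0:y1, z0:z1]

    # Central slice index of the ROI along each axis.
    cx, cy, cz = np.array(roi.shape) // 2
    return {
        "sagittal": roi[cx - epsilon:cx + epsilon + 1, :, :],  # slices along x
        "coronal":  roi[:, cy - epsilon:cy + epsilon + 1, :],  # slices along y
        "axial":    roi[:, :, cz - epsilon:cz + epsilon + 1],  # slices along z
    }

# Usage with placeholder file name and atlas-derived ROI bounds.
patches = extract_hippocampus_patches("subject_T1_registered.nii.gz",
                                      bbox=((30, 55), (60, 90), (25, 50)),
                                      epsilon=1)
```

The resulting small stacks of neighbouring slices per projection would then be fed to the classification networks (NC vs. MCI vs. AD); how the projections and modalities are actually fused is specific to the proposed method and is not reproduced here.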

LaBRI (videoconference or in person, depending on the health situation)