Model Description
A machine learning model for classifying brain MRI scans for Alzheimer's disease
- Developed by: rootstrap
- Model type: image-classifier
- License: mit
Alzheimer Classifier Model
The aim is to build an MRI classification model that distinguishes among three classes:
- Alzheimer's
- Mild Cognitive Impairment
- Control
This machine learning model will help medical staff get a second opinion on whether a patient's MRI indicates the presence of Alzheimer's disease.
The model was built using MONAI, a freely available, community-supported, PyTorch-based framework for deep learning in healthcare imaging.
It has two main design goals: to be approachable and rapidly productive, and to be configurable.
Model Sources
- Repository: https://github.com/rootstrap/MRI-classifier
Uses
This model was created in the spirit of combining the interesting worlds of neuroscience and machine learning. It can be used to quickly detect Alzheimer's disease or Mild Cognitive Impairment in a patient's MRI and hence help medical staff.
Direct Use
```python
import torch
from monai.networks import nets

model = nets.DenseNet121(spatial_dims=3, in_channels=1, out_channels=3)
checkpoint = torch.load("86_acc_model.pth")
model.load_state_dict(checkpoint)
model.eval()
prediction = model(mri_tensor).argmax(dim=1)  # mri_tensor: (batch, 1, D, H, W)
```
Bias, Risks, and Limitations
This model has an accuracy of 86%. Although that is a strong result, it means that 14% of the time the prediction is wrong. This is why it is important to understand that this tool is not a real medical opinion and cannot be used as a final diagnosis by any means. This project does not aim to replace medical staff in diagnosing Alzheimer's disease; instead, it is a tool to help them get a quick and accurate second opinion.
Training Details
Training Data
The MRI data was gathered from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Afterwards, the data was split into one folder per class.
The full dataset used consisted of 1614 NIfTI files and the model has been trained to classify MRI into 3 classes:
- 328 Alzheimer's Disease
- 799 Mild Cognitive Impairment
- 487 Control
Split into train/validation/test: the data was then split using 60% for training, 20% for validation, and 20% for testing.
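The 60/20/20 split above can be sketched as a shuffle-and-slice over the file paths. The helper below (`split_dataset` is an illustrative name, not from the repository) shows the idea on 1614 placeholder filenames; the actual split code and any stratification by class are not specified in this card.

```python
import random

def split_dataset(paths, seed=0):
    """Shuffle and split file paths 60/20/20 into train/val/test (illustrative helper)."""
    paths = list(paths)
    random.Random(seed).shuffle(paths)
    n = len(paths)
    n_train, n_val = int(n * 0.6), int(n * 0.2)
    return paths[:n_train], paths[n_train:n_train + n_val], paths[n_train + n_val:]

# Placeholder filenames standing in for the 1614 NIfTI files
train, val, test = split_dataset([f"scan_{i}.nii" for i in range(1614)])
# -> 968 train, 322 validation, 324 test
```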
Training Procedure
You can find the code for training at train.ipynb. The model was trained using a DenseNet121 architecture adapted for 3D images.
Evaluation and Results
After 100 epochs, the model reached an accuracy of 85.76% on the test set.
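The reported accuracy is the fraction of test scans whose predicted class matches the label. A minimal sketch of that computation is below; the `accuracy` helper name is mine, and the actual evaluation code is in the repository.

```python
import torch

def accuracy(model, loader, device="cpu"):
    """Fraction of correct predictions over a test loader (illustrative)."""
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for volumes, labels in loader:
            preds = model(volumes.to(device)).argmax(dim=1)
            correct += (preds == labels.to(device)).sum().item()
            total += labels.numel()
    return correct / total
```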