Development Overview
A deeper look into how NeuroView was built
The development of NeuroView was grounded in a practical research objective: to build a deep learning system capable of analyzing brain MRI scans for tumor detection. This system was designed with clinical use in mind, focusing on accuracy, interpretability, and speed.
Unlike many implementations that rely on pre-trained architectures, NeuroView's neural network was built entirely from scratch. Each layer and parameter was designed, tuned, and tested to suit the specific characteristics of brain imaging data.

Dataset & Preprocessing
The Brain Tumor MRI Dataset, curated by Masoud Nickparvar, is a large and diverse collection of 7,023 human brain MRI images designed to support the classification and detection of brain tumors using machine learning and deep learning techniques. The dataset is organized into four classes: glioma, meningioma, pituitary, and no tumor, providing a comprehensive foundation for multi-class tumor diagnosis and differentiation.
Compiled from Figshare, SARTAJ, and Br35H, this dataset addresses limitations of earlier collections by providing a more reliable resource for model training and testing. Its main goal is to improve medical imaging through early tumor detection, accurate classification, and precise localization to support clinical decision-making.
The dataset includes both training and testing subsets, with the training folder containing 5,712 MRI images used to train the model, and the testing folder containing 1,311 images used to evaluate its performance.
As part of the preprocessing step, all grayscale images were transformed into flattened pixel values and stored in a CSV file to simplify handling and integration with the neural network.
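A minimal sketch of this step, assuming the images are resized to 64×64 pixels (matching the 4,096-feature input described under Model Architecture); the folder names, file extension, and Pillow/NumPy tooling are assumptions rather than the exact pipeline:

```python
import csv
from pathlib import Path

import numpy as np
from PIL import Image

def images_to_csv(root_dir: str, out_csv: str, size: int = 64) -> None:
    """Flatten grayscale MRIs into rows of pixel values, one labeled row per image."""
    classes = ["glioma", "meningioma", "pituitary", "notumor"]  # assumed folder names
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["label"] + [f"px{i}" for i in range(size * size)])
        for label, cls in enumerate(classes):
            for path in sorted(Path(root_dir, cls).glob("*.jpg")):
                img = Image.open(path).convert("L").resize((size, size))
                pixels = np.asarray(img, dtype=np.uint8).flatten()
                writer.writerow([label, *pixels.tolist()])

images_to_csv("Training", "train_pixels.csv")  # assumed directory layout
```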
Figures: dataset class distribution, training vs. testing split, and dataset balance analysis.

Model Architecture
The model is a deep neural network designed for multi-class classification of brain tumor MRI images. The input layer receives 4,096 features, representing the flattened grayscale pixel values from preprocessed MRI scans. It consists of two hidden layers:
Hidden Layer 1: 128 neurons with Leaky ReLU activation and a dropout rate of 0.4
Hidden Layer 2: 64 neurons with Leaky ReLU activation and dropout at 0.4
To prevent overfitting and promote generalization, L2 regularization (λ = 0.01) is applied to all weight matrices. The output layer has 4 neurons corresponding to the four tumor classes, using Softmax activation to output class probabilities.
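A condensed NumPy sketch of the forward pass under these choices (layer sizes, dropout rate, and activations as stated above; the initialization scheme and Leaky ReLU slope are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# He-style initialization for the 4096 -> 128 -> 64 -> 4 architecture (illustrative)
W1 = rng.normal(0, np.sqrt(2 / 4096), (4096, 128)); b1 = np.zeros(128)
W2 = rng.normal(0, np.sqrt(2 / 128), (128, 64));    b2 = np.zeros(64)
W3 = rng.normal(0, np.sqrt(2 / 64), (64, 4));       b3 = np.zeros(4)

def leaky_relu(z, alpha=0.01):
    return np.where(z > 0, z, alpha * z)

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # subtract max for stability
    return e / e.sum(axis=1, keepdims=True)

def forward(X, train=True, p_drop=0.4):
    a1 = leaky_relu(X @ W1 + b1)
    if train:  # inverted dropout: zero 40% of units, rescale the rest
        a1 *= (rng.random(a1.shape) > p_drop) / (1 - p_drop)
    a2 = leaky_relu(a1 @ W2 + b2)
    if train:
        a2 *= (rng.random(a2.shape) > p_drop) / (1 - p_drop)
    return softmax(a2 @ W3 + b3)  # probabilities over the four tumor classes
```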

Training & Optimization
Several strategies were used to improve learning stability and model generalization.
Dropout was applied to both hidden layers to randomly deactivate neurons during training.
L2 regularization penalized large weight values to encourage simpler models.
A class-weighted cross-entropy loss was used to address class imbalance, with weights computed from class frequencies (sketched after this list).
Stratified splitting ensured that both training and validation sets maintained the original class distribution.
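For the class-weighted loss in particular, a minimal sketch (inverse-frequency weighting is one common scheme; the exact formula used in NeuroView is not specified):

```python
import numpy as np

def class_weights(y, n_classes=4):
    """Inverse-frequency weights: rarer classes contribute more to the loss."""
    counts = np.bincount(y, minlength=n_classes)
    return len(y) / (n_classes * counts)

def weighted_cross_entropy(probs, y, weights, eps=1e-12):
    """Mean negative log-likelihood, each sample scaled by its class weight."""
    nll = -np.log(probs[np.arange(len(y)), y] + eps)
    return np.mean(weights[y] * nll)
```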
The model was trained using the Adam optimizer with hyperparameters β₁ = 0.9, β₂ = 0.999, and ε = 1e-8. The learning rate decayed exponentially every 5 epochs, starting from 0.0005.
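In update form, with the stated hyperparameters (the per-step decay factor is an assumption, since only the schedule's shape and starting rate are given):

```python
import numpy as np

beta1, beta2, eps = 0.9, 0.999, 1e-8

def adam_step(w, grad, m, v, t, lr):
    """One Adam update with bias-corrected moment estimates (t starts at 1)."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad**2
    m_hat = m / (1 - beta1**t)
    v_hat = v / (1 - beta2**t)
    return w - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

def learning_rate(epoch, base_lr=5e-4, decay=0.9):  # decay factor assumed
    """Exponential step decay: the rate drops every 5 epochs."""
    return base_lr * decay ** (epoch // 5)
```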
Forward and backward propagation were implemented manually, without relying on a deep learning framework.
Dropout masks were used during the backward pass.
Gradients were adjusted with both L2 penalties and class/sample weights.
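Sketching how those pieces combine for one hidden layer's gradient (variable names are illustrative; the dropout mask saved from the forward pass gates the backward signal, and the L2 term is added directly to the weight gradient):

```python
import numpy as np

def hidden_layer_grads(delta_next, W_next, mask, z, a_prev, W, lam=0.01, alpha=0.01):
    """Backprop through one Leaky ReLU hidden layer with dropout and L2.

    delta_next: error signal from the layer above
    mask: the (already rescaled) dropout mask saved during the forward pass
    """
    delta = (delta_next @ W_next.T) * mask         # reuse the forward dropout mask
    delta *= np.where(z > 0, 1.0, alpha)           # Leaky ReLU derivative
    dW = a_prev.T @ delta / len(a_prev) + lam * W  # data gradient + L2 penalty
    db = delta.mean(axis=0)
    return delta, dW, db
```

The class/sample weights enter one layer up, scaling the output-layer error before it propagates back.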
The training loop ran for up to 200 epochs, with early stopping triggered after 15 epochs without improvement in validation loss.
The best model weights were restored after training.
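The stopping logic amounts to tracking the best validation loss with a patience counter; a schematic, where init_params, train_one_epoch, and validation_loss stand in for the real routines:

```python
import copy

params = init_params()                    # hypothetical initializer
best_loss, best_params, patience = float("inf"), None, 0

for epoch in range(200):                  # up to 200 epochs
    params = train_one_epoch(params)      # hypothetical training step
    val_loss = validation_loss(params)    # hypothetical evaluation
    if val_loss < best_loss:
        best_loss, best_params, patience = val_loss, copy.deepcopy(params), 0
    else:
        patience += 1
        if patience >= 15:                # 15 epochs without improvement
            break

params = best_params                      # restore the best weights
```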

Evaluation Metrics
The model demonstrated strong classification performance on unseen data. It achieved a test accuracy of 93.52%, closely matching the validation accuracy of 93.61%, a generalization gap of under 0.1 percentage points. This small gap indicates that the model generalizes well rather than overfitting to the training data.
In the classification report, the model performed consistently well across all four tumor classes. The pituitary tumor class (Class 2) stood out with the highest F1-score of 0.97, reflecting both strong precision and recall. Meningioma (Class 1), on the other hand, showed slightly lower recall at 0.86, suggesting the model was less confident on these samples and occasionally misclassified them.
The confusion matrix confirmed this trend, showing very few misclassifications and well-separated class predictions. Additionally, the prediction distribution remained proportionally aligned with the true class distribution, suggesting no bias toward over- or under-represented classes.
Confusion Matrix (true vs. predicted classes: Glioma, Meningioma, Pituitary, No Tumor)
Classification Report
| Class | Precision | Recall | F1-Score | Support |
|---|---|---|---|---|
| 0 - Glioma | 0.93 | 0.89 | 0.91 | 300 |
| 1 - Meningioma | 0.87 | 0.86 | 0.87 | 306 |
| 2 - Pituitary | 0.95 | 0.99 | 0.97 | 405 |
| 3 - No Tumor | 0.98 | 0.99 | 0.98 | 300 |
| Accuracy | - | - | 0.94 | 1311 |
| Macro Avg | 0.93 | 0.93 | 0.93 | 1311 |
| Weighted Avg | 0.93 | 0.94 | 0.93 | 1311 |
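Both tables can be reproduced from saved test predictions with scikit-learn (a convenience sketch; y_true and y_pred below are placeholders for the real labels and predictions of the 1,311 test images):

```python
import numpy as np
from sklearn.metrics import classification_report, confusion_matrix

# Placeholders standing in for the real test labels and model predictions
y_true = np.array([0, 1, 2, 3, 2, 1])
y_pred = np.array([0, 1, 2, 3, 2, 0])

print(confusion_matrix(y_true, y_pred, labels=[0, 1, 2, 3]))
print(classification_report(
    y_true, y_pred,
    labels=[0, 1, 2, 3],
    target_names=["Glioma", "Meningioma", "Pituitary", "No Tumor"],
    digits=2,
))
```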
Overall, the metrics indicate a well-regularized, accurate, and robust model suitable for real-world tumor classification tasks.

Future Improvements
While NeuroView has shown excellent performance in classifying brain tumors from MRI scans, several opportunities remain to enhance the neural network architecture and its real-world applicability.
One key area for enhancing the neural network is data augmentation. By applying techniques such as image rotation, zooming, flipping, and noise injection during training, the model can be exposed to a wider range of input variations. This helps improve generalization, allowing the network to better handle real-world clinical conditions where MRI images may differ in orientation, contrast, or clarity.
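A minimal sketch of such augmentations with Pillow and NumPy (parameter ranges are illustrative, not tuned values):

```python
import numpy as np
from PIL import Image

def augment(img: Image.Image, rng: np.random.Generator) -> Image.Image:
    """Random rotation, zoom (via center crop), horizontal flip, and noise."""
    img = img.rotate(rng.uniform(-15, 15))              # small random rotation
    w, h = img.size
    crop = int(min(w, h) * rng.uniform(0.85, 1.0))      # random zoom
    left, top = (w - crop) // 2, (h - crop) // 2
    img = img.crop((left, top, left + crop, top + crop)).resize((w, h))
    if rng.random() < 0.5:
        img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)
    arr = np.asarray(img, dtype=np.float32)
    arr += rng.normal(0, 5, arr.shape)                  # mild Gaussian noise
    return Image.fromarray(np.clip(arr, 0, 255).astype(np.uint8))
```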
Another promising direction is adopting convolutional neural networks (CNNs) that process entire MRI volumes rather than isolated slices. The current fully connected model works on individual flattened 2D images, which limits its ability to capture spatial relationships across neighboring slices. A network that analyzes volumetric data as a whole can learn deeper, more contextualized features reflecting the full structure of the brain, leading to more precise tumor detection and localization and aligning more closely with clinical diagnostic practice.
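As a rough illustration of the idea, a hypothetical 3D convolutional stem (sketched in PyTorch; shapes and depths are placeholders, not part of NeuroView):

```python
import torch
import torch.nn as nn

# A minimal 3D CNN over a full MRI volume rather than single slices
model = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1),   # features across slices
    nn.LeakyReLU(),
    nn.MaxPool3d(2),
    nn.Conv3d(16, 32, kernel_size=3, padding=1),
    nn.LeakyReLU(),
    nn.AdaptiveAvgPool3d(1),
    nn.Flatten(),
    nn.Linear(32, 4),                             # same four tumor classes
)

volume = torch.randn(1, 1, 32, 64, 64)            # (batch, channel, depth, H, W)
logits = model(volume)                            # -> shape (1, 4)
```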
Although NeuroView was intentionally built from scratch for interpretability and full control over architecture, transfer learning presents an opportunity for future enhancement. By incorporating select pre-trained components such as feature extractors from medical imaging models into the custom network, future versions of NeuroView could potentially benefit from both improved accuracy and faster convergence, especially when training on larger or more diverse datasets.
By pursuing these improvements, NeuroView can evolve into a more robust and clinically valuable system for brain tumor detection and analysis.