Face Mask Detection using VGG16 Fine-Tuning

Table of Contents

  • Introduction
  • Dataset
  • Model Architecture
  • Installation

Introduction

This project implements a face mask detection system by fine-tuning the VGG16 model. The aim is to detect whether a person is wearing a mask, which is particularly useful in the context of the COVID-19 pandemic.

Dataset

The dataset used for this project contains images of people with and without masks. It is divided into training and validation sets to ensure the model generalizes well to unseen data. (The specific dataset, or combination of datasets, used should be cited here.)
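
The training sketch later in this README assumes a directory-per-class layout along these lines (the folder names are illustrative placeholders, not fixed by the project):

data/
├── train/
│   ├── with_mask/
│   └── without_mask/
└── val/
    ├── with_mask/
    └── without_mask/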

Model Architecture

The model is based on VGG16, a convolutional neural network that is 16 layers deep. The VGG16 model is pre-trained on the ImageNet dataset, and then fine-tuned on our face mask dataset.

Key steps in the model training process (a code sketch follows the list):

  • Load the pre-trained VGG16 model, excluding the top fully connected layers.
  • Add custom fully connected layers for the face mask classification task.
  • Compile the model with an appropriate optimizer and loss function.
  • Train the model on the face mask dataset.
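
Below is a minimal sketch of these steps, assuming a TensorFlow/Keras setup and the directory layout shown above; the paths, layer sizes, and hyperparameters are illustrative, not the project's exact configuration.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16
from tensorflow.keras.applications.vgg16 import preprocess_input

# Load the pre-trained VGG16 base, excluding the top fully connected layers.
base_model = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
base_model.trainable = False  # freeze the convolutional base while training the new head

# Add custom fully connected layers for the binary mask / no-mask classification task.
model = models.Sequential([
    base_model,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(1, activation="sigmoid"),
])

# Compile with an optimizer and loss function suited to binary classification.
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
              loss="binary_crossentropy",
              metrics=["accuracy"])

# Load the face mask dataset and apply VGG16's expected preprocessing
# (directory paths are placeholders).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=(224, 224), batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "data/val", image_size=(224, 224), batch_size=32)
train_ds = train_ds.map(lambda x, y: (preprocess_input(x), y))
val_ds = val_ds.map(lambda x, y: (preprocess_input(x), y))

# Train the new classification head on the face mask dataset.
model.fit(train_ds, validation_data=val_ds, epochs=10)

Once the new head has converged, the last convolutional block of VGG16 can optionally be unfrozen and training continued with a lower learning rate, which is the usual second phase of fine-tuning.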

Installation

To get started with this project, clone the repository and install the required dependencies.

git clone https://github.com/yourusername/facemask-detection.git
cd facemask-detection
pip install -r requirements.txt