This repository contains the implementation of a real-time drone data analysis framework that processes video feeds from multiple drones, stitches their frames together, and performs object detection. The project is optimized for real-time processing through efficient local inference and keypoint matching techniques.
The project demonstrates the following key features:
- **Local Inference on Intersecting Drones:** Detect intersections between drones' fields of view based on their positions. This is performed locally on the drones for efficiency.
- **Keypoint Extraction:** Extract and match keypoints from overlapping video frames using feature-matching algorithms.
- **Image Stitching:** Seamlessly combine frames from multiple drones into a composite image at the central server.
- **Object Detection:** Apply object detection algorithms to stitched images to identify and annotate objects.
- **Drone Positions and Local Inference:** Identify drones with overlapping fields of view. This is determined by finding drones within a ground range of h·tan(θ), where:
  - h: height of the drone above the ground.
  - θ: angle of the camera relative to the ground.
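The overlap test above can be sketched as follows. This is a minimal illustration, not the repository's actual code; the function names and the assumption that two circular footprints of radius h·tan(θ) overlap when the ground distance between the drones is less than the sum of the radii are mine:

```python
import math

def fov_radius(h, theta_deg):
    # Ground-coverage radius of the camera footprint: h * tan(theta)
    return h * math.tan(math.radians(theta_deg))

def fovs_overlap(pos_a, pos_b, h_a, h_b, theta_deg):
    # Two footprints intersect when the distance between the drones'
    # ground projections is below the sum of their coverage radii.
    dist = math.hypot(pos_a[0] - pos_b[0], pos_a[1] - pos_b[1])
    return dist < fov_radius(h_a, theta_deg) + fov_radius(h_b, theta_deg)
```

For example, two drones at 50 m altitude with a 30° camera angle each cover a radius of about 28.9 m, so they are treated as overlapping when less than roughly 57.7 m apart.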
- **Keypoint Extraction:** Extract features using techniques such as SIFT (Scale-Invariant Feature Transform).
- **Image Stitching:** Perform feature matching and alignment using homography estimation for seamless transitions between frames.
- **Object Detection:** Apply object detection models to stitched panoramas and annotate detected objects.
- **Drone Position Simulation:** Simulate drone positions and movements to visualize the network of drones and their overlapping fields of view.
- **Keypoint Matching Visualization:** Debug and validate matched features across frames to ensure stitching accuracy.
- **Real-Time Image Stitching:** Align and blend frames efficiently using feature matching and homography estimation.
- **Object Detection:** Detect and annotate objects in stitched images using pre-trained object detection models such as YOLO, SSD, or R-CNN.
- Identify overlapping fields of view locally using drone positional data.
- Only overlapping frames are sent for stitching, reducing computational overhead.
- Extract visual features using SIFT.
- Perform matching with a FLANN-based matcher for accurate alignment.
- Use matched features to compute homography for aligning frames.
- Blend frames smoothly with alpha blending for high-quality panoramas.
- Annotate stitched images with bounding boxes and labels for detected objects.
- Models like YOLO, SSD, and R-CNN are supported for object detection.
- Clone the repository:

  ```bash
  git clone https://github.com/Dhruv-Sharma01/CS399-Real-Time-Drone-Data-Analysis.git
  cd CS399-Real-Time-Drone-Data-Analysis
  ```

- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```
- The pipeline produces high-quality stitched panoramas with accurately detected and annotated objects.
- Extend the pipeline with:
- Distributed processing on drones for low-latency analysis.
- Advanced drone navigation and control systems.
- Real-time optimizations for object detection in resource-constrained environments.
- OpenCV Documentation: https://docs.opencv.org/
- Ultralytics YOLOv8: https://ultralytics.com/
- Microsoft COCO Dataset: https://cocodataset.org/
- Multiple Image Stitching: https://www.kaggle.com/code/deepzsenu/multiple-image-stitching
- Dataset for Calculation of Image Stitching Metrics: https://www.kaggle.com/datasets/mikhailma/house-rooms-streets-image-dataset
- Lowe, D.G., "Distinctive Image Features from Scale-Invariant Keypoints," IJCV 2004.
Contributions are welcome! Fork the repository and submit a pull request to suggest improvements or add new features.
Special thanks to Professor Anirban Dasgupta for his guidance and support throughout the project.
For questions or suggestions, please reach out to Dhruv Sharma at [email protected].