## Warning: Work in Progress / Demo

- This is a small side project/demonstration, not a forensic tool.
- Results are probabilistic and based on a general-purpose model (`dima806/ai_vs_real_image_detection`).
- Thresholds are not fully optimized. Compression, resizing, or editing can affect results.
- Do not rely on this tool for critical verification.
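Because thresholds are not fully optimized, scores near the decision boundary are unreliable. A minimal sketch of mapping a model probability to a label with an adjustable threshold and an "inconclusive" band (function name, threshold, and band width are illustrative assumptions, not the tool's actual API):

```python
def classify(ai_probability: float, threshold: float = 0.5) -> str:
    """Map a model probability to a coarse label.

    Scores near the threshold are treated as inconclusive; the
    band width (0.15) is an arbitrary illustration, not a tuned value.
    """
    if abs(ai_probability - threshold) < 0.15:
        return "inconclusive"
    return "likely AI-generated" if ai_probability > threshold else "likely real"

print(classify(0.92))  # well above threshold
print(classify(0.55))  # too close to the boundary to call
```

Widening the band trades coverage for fewer confident mistakes, which matters when compression or editing shifts scores.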
A privacy-focused tool to estimate if an image is AI-generated, running entirely locally on your machine.
## Project Structure

- `backend/`: FastAPI Python application.
  - `app/detectors/`: Detector implementations (Dummy, ONNX, Hugging Face).
  - `app/api/`: API route definitions.
- `frontend/`: Vite React TypeScript application.
  - `src/components/`: UI components (ResultCard, StatusBadge, Dropzone).
  - `src/api/`: Typed API client.
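The three detector implementations in `app/detectors/` presumably share a common interface so the API layer can swap them freely. A hypothetical sketch of that abstraction (class and method names are assumptions, not the repo's actual code):

```python
from abc import ABC, abstractmethod

class Detector(ABC):
    """Interface a detector implementation might expose to the API layer."""

    @abstractmethod
    def predict(self, image_bytes: bytes) -> float:
        """Return the estimated probability that the image is AI-generated."""

class DummyDetector(Detector):
    """Stand-in detector returning a fixed score, useful for UI development."""

    def predict(self, image_bytes: bytes) -> float:
        return 0.5

detector = DummyDetector()
print(detector.predict(b"\x89PNG..."))  # always 0.5, regardless of input
```

Keeping ONNX and Hugging Face backends behind one interface lets the API routes stay unchanged when the `USE_HF_DETECTOR` flag switches implementations.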
## Prerequisites

- Python 3.10+
- Node.js & npm (v18+ recommended)
## Backend Setup

Windows (PowerShell):

```powershell
cd backend
python -m venv .venv
.\.venv\Scripts\Activate
pip install -r requirements.txt

# Run Tests (Optional)
python -m pytest

# Start Server
python -m uvicorn app.main:app --reload --port 8000
```

macOS / Linux:

```bash
cd backend
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Run Tests (Optional)
python3 -m pytest

# Start Server
python3 -m uvicorn app.main:app --reload --port 8000
```

The API will be available at http://localhost:8000.
## Frontend Setup

Open a new terminal window.

```bash
cd frontend
npm install
npm run dev
```

The app will open at http://localhost:5173. Ensure the backend is running.
## Using the Hugging Face Detector

By default, the app may use the "Dummy" or "ONNX" detector. To use the Hugging Face model (`dima806/ai_vs_real_image_detection`), follow these steps.
Windows (PowerShell):

```powershell
$env:USE_HF_DETECTOR="1"
$env:HF_HOME=".\.hf_cache"  # Caches model in backend/.hf_cache
python -m uvicorn app.main:app --reload --port 8000
```

macOS / Linux:

```bash
export USE_HF_DETECTOR=1
export HF_HOME="./.hf_cache"
python3 -m uvicorn app.main:app --reload --port 8000
```

Note: The first run will download model weights (~300 MB) to `.hf_cache`.
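The `USE_HF_DETECTOR` flag suggests the backend selects a detector at startup based on the environment. A hypothetical selection function showing that pattern (the function name and the `USE_ONNX_DETECTOR` flag are assumptions; the real factory in `app/detectors/` may differ):

```python
import os

def select_detector_name() -> str:
    """Pick a detector from environment flags; the HF flag takes priority."""
    if os.environ.get("USE_HF_DETECTOR") == "1":
        return "huggingface"
    if os.environ.get("USE_ONNX_DETECTOR") == "1":  # assumed flag, for illustration
        return "onnx"
    return "dummy"

os.environ["USE_HF_DETECTOR"] = "1"
print(select_detector_name())  # huggingface
```

Reading flags once at startup keeps per-request code free of environment checks, which is why the server must be restarted after changing them.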
## Offline Mode

Once downloaded, you can run entirely offline:

Windows (PowerShell):

```powershell
$env:HF_HUB_OFFLINE="1"
python -m uvicorn app.main:app --reload --port 8000
```

macOS / Linux:

```bash
export HF_HUB_OFFLINE=1
python3 -m uvicorn app.main:app --reload --port 8000
```

## Limitations

- Accuracy: The model identifies patterns common in AI generation but can false-positive on heavily edited or compressed "real" photos.
- Performance: Inference runs on CPU by default. Latency depends on your machine specs.
- Scope: Intended to distinguish standard photography from generative AI output (Midjourney, Flux, DALL·E). May not detect subtle edits such as Photoshop AI fill.