A fast API for chat toxicity scoring and moderation.
- Fine-tuned DistilBERT model covering six toxicity categories plus a non-toxic class
- Fast inference via FastAPI + Uvicorn
- Containerized with Docker, deployable on Hugging Face Spaces & RapidAPI
- Configurable thresholds and batch support (see the client sketch below)
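For a quick feel of the API, here is a minimal Python client sketch against the `/predict` endpoint demonstrated with curl later in this README. The port 7860 comes from the `.env` example below; the response field names are not documented here, so inspect the live JSON yourself:

```python
import requests

API_URL = "http://localhost:7860"  # default PORT from the .env example

def score_comment(text: str) -> dict:
    """POST one comment to /predict and return the parsed JSON response."""
    resp = requests.post(f"{API_URL}/predict", json={"text": text}, timeout=10)
    resp.raise_for_status()
    return resp.json()  # response schema is not documented in this README

if __name__ == "__main__":
    print(score_comment("Your sample comment"))
```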
Clone the repository:

```bash
git clone https://github.com/Kwaseda/toxicity-score-api.git
cd toxicity-score-api
```

Create or update the `.env` file with your settings:
```env
MODEL_DIR=./enhanced_toxic_comment_model
MAX_LENGTH=256
BATCH_LIMIT=100
PORT=7860
TOXICITY_THRESHOLD=0.5
THRESHOLDS_PATH=./enhanced_toxic_comment_model/thresholds.json
```
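As a rough sketch of what these settings control, the variables might be read at startup like this. This is purely illustrative; the real loading code lives in `core_api_refactored.py` and is not shown in this README:

```python
import os

# Illustrative only: the actual app (core_api_refactored.py) may read its
# configuration differently. Note that os.getenv sees exported variables;
# the .env file itself is loaded by Docker (or a tool like python-dotenv).
MODEL_DIR = os.getenv("MODEL_DIR", "./enhanced_toxic_comment_model")
MAX_LENGTH = int(os.getenv("MAX_LENGTH", "256"))    # likely tokenizer truncation length
BATCH_LIMIT = int(os.getenv("BATCH_LIMIT", "100"))  # max texts per batch request
PORT = int(os.getenv("PORT", "7860"))               # Uvicorn listen port
TOXICITY_THRESHOLD = float(os.getenv("TOXICITY_THRESHOLD", "0.5"))
THRESHOLDS_PATH = os.getenv(
    "THRESHOLDS_PATH", "./enhanced_toxic_comment_model/thresholds.json"
)
```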
Build the Docker image and run the container:

```bash
docker build -t toxicity-api .
docker run -d -p 7860:7860 --name toxicity-api-container toxicity-api
```

Test the API:

- Health check:
```bash
curl http://localhost:7860/health
```

- Predict toxicity:
```bash
curl -X POST http://localhost:7860/predict \
  -H "Content-Type: application/json" \
  -d '{"text": "Your sample comment"}'
```
Project structure:

```
toxicity-score-api
├── core_api_refactored.py    # Main FastAPI application
├── Dockerfile                # Container definition
├── requirements.txt          # Python dependencies
├── .env                      # Environment variables for configuration
├── .gitignore                # Ignored files
└── enhanced_toxic_comment_model/
    ├── config.json
    ├── model.safetensors
    ├── thresholds.json
    ├── tokenizer_config.json
    ├── tokenizer.json
    ├── special_tokens_map.json
    └── vocab.txt
```
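The `thresholds.json` file above backs the `THRESHOLDS_PATH` setting; its schema is not documented in this README. Assuming a common per-label layout (label name mapped to a probability cutoff), moderation logic on top of the model's scores could look like this sketch:

```python
import json

# Assumption: thresholds.json maps label names to probability cutoffs,
# e.g. {"toxic": 0.5, "insult": 0.45, ...}; the real schema may differ.
with open("enhanced_toxic_comment_model/thresholds.json") as f:
    thresholds: dict[str, float] = json.load(f)

def is_flagged(scores: dict[str, float]) -> bool:
    """Flag a comment when any per-label score meets its threshold."""
    return any(scores.get(label, 0.0) >= cutoff for label, cutoff in thresholds.items())
```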
This project is licensed under the Apache License 2.0. See LICENSE for details.
Built in public by Dominic Jesse Kwame Addo (Kwame Aseda)