Project Asylum is a self-adapting infrastructure management framework that uses AI/ML, Terraform, and monitoring tools to build secure, evolving honeypot environments. The system automatically analyzes attacker behavior, detects anomalies, and adapts its infrastructure in real time to maximize deception and security research value.
- AI/ML Adaptation Engine: TensorFlow-based anomaly detection that learns from attacker behavior
- CVE/CVSS Enrichment: Automatic detection and enrichment of CVE identifiers with NIST NVD data for context-aware threat intelligence
- Infrastructure as Code: Modular Terraform configurations for Docker, Proxmox, AWS, GCP, and Azure
- Comprehensive Monitoring: Prometheus, Grafana, and the ELK Stack for metrics and log analysis
- Honeypot System: Cowrie SSH/Telnet honeypot with automatic configuration rotation
- Orchestration Layer: Event-driven automation with a Node.js API and message queue
- Feedback Loop: Continuous analysis and infrastructure adaptation based on detected threats
- Security First: Built-in secret management, least-privilege access, and TLS encryption
- Docker & Docker Compose (20.10+)
- Terraform (1.0+)
- Git
Optional for cloud deployment:
- AWS CLI / GCP SDK / Azure CLI (configured with credentials)
# Clone the repository
git clone https://github.com/folkvarlabs/project-asylum.git
cd project-asylum
# Copy environment configuration
cp .env.example .env
# Start all services
docker-compose up -d

That's it! The system will start with:
- Cowrie honeypot on ports 2222-2223
- Grafana dashboard at http://localhost:3000 (admin/asylum_admin_2024)
- Prometheus at http://localhost:9090
- Kibana at http://localhost:5601
- AI API at http://localhost:8000
- Orchestration API at http://localhost:3001
# Check all services are running
docker-compose ps
# View logs
docker-compose logs -f
# Test honeypot
ssh -p 2222 root@localhost
# Check AI API
curl http://localhost:8000/health
# View metrics
curl http://localhost:9090/metrics

project-asylum/
├── terraform/                    # Infrastructure as Code
│   ├── modules/
│   │   ├── docker/               # Local Docker deployment
│   │   ├── aws/                  # AWS cloud deployment
│   │   ├── gcp/                  # Google Cloud deployment
│   │   └── azure/                # Azure cloud deployment
│   ├── envs/
│   │   ├── dev/                  # Development environment
│   │   └── prod/                 # Production environment
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
│
├── ai/                           # AI/ML Adaptation Engine
│   ├── model/
│   │   └── anomaly_detector.py   # TensorFlow anomaly detection
│   ├── api/
│   │   └── main.py               # FastAPI REST interface
│   ├── train.py                  # Model training script
│   ├── requirements.txt
│   └── Dockerfile
│
├── monitoring/                   # Monitoring Stack
│   ├── prometheus/
│   │   ├── prometheus.yml
│   │   ├── rules/
│   │   └── Dockerfile
│   ├── grafana/
│   │   ├── provisioning/
│   │   └── Dockerfile
│   └── elk/
│       ├── elasticsearch/
│       ├── logstash/
│       └── kibana/
│
├── honeypot/                     # Honeypot Subsystem
│   ├── cowrie/
│   │   ├── cowrie.cfg
│   │   ├── userdb.txt
│   │   ├── rotate_config.sh
│   │   └── Dockerfile
│   └── honeyd/
│
├── orchestration/                # Integration Layer
│   ├── api/
│   │   └── server.js             # Express API server
│   ├── scheduler/
│   │   └── index.js              # Feedback loop scheduler
│   ├── package.json
│   └── Dockerfile
│
├── docs/                         # Documentation
│   ├── feedback-loop.md
│   ├── roadmap.md
│   ├── target-milestones.md
│   └── build.md
│
├── .github/
│   └── workflows/
│       └── deploy.yml            # CI/CD pipeline
│
├── docker-compose.yml            # Single-command deployment
├── .env.example                  # Environment template
├── .gitignore
├── CONTRIBUTING.md
└── README.md
cd terraform
terraform init
terraform plan -var-file=envs/dev/terraform.tfvars
terraform apply -var-file=envs/dev/terraform.tfvars

# Configure AWS credentials
export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"
# Deploy to AWS
cd terraform
terraform init
terraform workspace new prod # or select prod
terraform apply -var-file=envs/prod/terraform.tfvars

Edit terraform/envs/dev/terraform.tfvars or prod/terraform.tfvars:
environment = "dev"
provider_type = "docker" # or "aws", "gcp", "azure"
node_count = 2
instance_type = "t3.micro"
network_cidr = "172.20.0.0/16"
storage_size_gb = 10
enable_monitoring = true
enable_honeypot = true

# Using synthetic data (for testing)
docker-compose exec ai-api python train.py --synthetic --epochs 50
# Using real data
docker-compose exec ai-api python train.py --data /app/data/logs.json --epochs 50
# View model info
curl http://localhost:8000/model/info

curl -X POST http://localhost:8000/predict \
-H "Content-Type: application/json" \
-d '{
"features": [[0.1, 0.2, 0.3, ...]] # 20 features
}'

curl -X POST http://localhost:8000/analyze \
-H "Content-Type: application/json" \
-d '{
"features": [[...]]
}'
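Either endpoint can also be called from Python. A minimal client sketch for /predict (the all-zeros feature vector is a dummy placeholder, and the shape of the printed response depends on the API's schema):

```python
import requests

# Dummy feature vector -- replace with real features extracted from honeypot logs
features = [[0.0] * 20]

# POST to the /predict endpoint shown above
resp = requests.post(
    "http://localhost:8000/predict",
    json={"features": features},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # anomaly scores/labels, depending on the API's response schema
```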
The system automatically detects CVE identifiers in honeypot logs and enriches them with data from NIST's National Vulnerability Database (NVD).

# Get detailed CVE information
curl "http://localhost:8000/cveinfo?cve=CVE-2021-44228"
# Detect CVEs in text
curl -X POST http://localhost:8000/cve/detect \
-H "Content-Type: application/json" \
-d '{"text": "Exploit attempt using CVE-2021-44228"}'The orchestration layer automatically adapts infrastructure based on CVE severity:
- CVSS >= 9.0 (Critical): Deploy specialized honeypots, redirect to isolated decoy, maximum monitoring
- CVSS >= 7.0 (High): Scale honeypots, elevated monitoring
- CVSS >= 4.0 (Medium): Enhanced logging
- CVSS < 4.0 (Low): Log for analysis
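A minimal Python sketch of this threshold logic. It is illustrative only: the real mapping lives in the orchestration layer, and the action names here are hypothetical.

```python
def cve_response_action(cvss_score: float) -> str:
    """Map a CVSS score to an adaptive response, mirroring the thresholds above."""
    if cvss_score >= 9.0:
        return "deploy_specialized_honeypots"  # critical: isolated decoy, maximum monitoring
    if cvss_score >= 7.0:
        return "scale_honeypots"               # high: scale out, elevated monitoring
    if cvss_score >= 4.0:
        return "enhanced_logging"              # medium
    return "log_for_analysis"                  # low

print(cve_response_action(10.0))  # CVE-2021-44228 (Log4Shell) scores 10.0
```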
# Run the CVE enrichment demo
docker-compose exec ai-api python demo_cve.py

See docs/cve-integration.md for complete documentation.
Access Grafana at http://localhost:3000
- Username: admin
- Password: asylum_admin_2024
Pre-configured dashboards:
- Honeypot Activity Overview
- Anomaly Detection Metrics
- Infrastructure Health
- Attacker Behavior Analysis
# Connection rate to honeypots
rate(honeypot_connections_total[5m])
# Anomaly detection rate
rate(honeypot_anomalies_total[5m]) / rate(honeypot_events_total[5m])
# AI inference latency
histogram_quantile(0.95, rate(ai_inference_duration_seconds_bucket[5m]))
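The same expressions can be evaluated outside Grafana through Prometheus' HTTP query API; a minimal sketch, assuming Prometheus is reachable at localhost:9090 as above:

```python
import requests

# Evaluate one of the PromQL expressions above via Prometheus' /api/v1/query endpoint
resp = requests.get(
    "http://localhost:9090/api/v1/query",
    params={"query": "rate(honeypot_connections_total[5m])"},
    timeout=10,
)
for result in resp.json()["data"]["result"]:
    print(result["metric"], result["value"])
```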
Access Kibana at http://localhost:5601
Search for high-anomaly events:
# High anomaly scores
event_category:command_execution AND anomaly_score:>15
# CVE-related events
cve_detected:true
# Critical CVEs only
cve_severity:CRITICAL
# High CVSS scores
max_cvss_score:>=9.0
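These searches can also be run programmatically against Elasticsearch using the same query_string syntax; a sketch, assuming Elasticsearch is reachable on localhost:9200 (the index pattern honeypot-logs-* is an assumption -- check your Logstash output configuration for the actual index name):

```python
import requests

# query_string mirrors the Kibana search syntax shown above;
# the index pattern "honeypot-logs-*" is hypothetical
resp = requests.post(
    "http://localhost:9200/honeypot-logs-*/_search",
    json={
        "query": {
            "query_string": {
                "query": "event_category:command_execution AND anomaly_score:>15"
            }
        },
        "size": 10,
    },
    timeout=10,
)
for hit in resp.json()["hits"]["hits"]:
    print(hit["_source"])
```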
The feedback loop continuously monitors, analyzes, and adapts:
graph LR
A[Honeypot Logs] --> B[Elasticsearch]
B --> C[AI Analysis]
C --> D{Severity?}
D -->|Critical| E[Scale Infrastructure]
D -->|High| F[Rotate Honeypots]
D -->|Medium| G[Increase Monitoring]
E --> H[Terraform Apply]
F --> H
H --> I[Updated Infrastructure]
I --> A
See docs/feedback-loop.md for detailed documentation.
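One iteration of the loop might look like the following Python sketch. It reuses the documented /analyze and /events endpoints, but the anomaly_score response field, the threshold, and the feature extraction step are assumptions:

```python
import requests

AI = "http://localhost:8000"
ORCH = "http://localhost:3001"

def feedback_iteration(features):
    """One pass of the loop: score recent activity, then raise an event if anomalous.
    `features` would come from recent honeypot logs in Elasticsearch;
    here it is assumed to already be an (n, 20) list."""
    # 1. Score recent behavior with the AI API's /analyze endpoint
    analysis = requests.post(f"{AI}/analyze", json={"features": features}, timeout=10).json()

    # 2. Forward anomalies to the orchestration API
    #    (same event shape as the testing example later in this README)
    score = analysis.get("anomaly_score", 0)  # field name is an assumption
    if score > 15:
        requests.post(
            f"{ORCH}/events",
            json={"type": "anomaly_detected", "source": "scheduler",
                  "data": {"anomaly_score": score}},
            timeout=10,
        )
    return analysis
```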
Scheduler intervals (in .env):
ANALYSIS_INTERVAL="*/15 * * * *" # Every 15 minutes
DRIFT_CHECK_INTERVAL="0 */6 * * *" # Every 6 hours
MODEL_RETRAIN_INTERVAL="0 2 * * *"   # Daily at 2 AM

- Never commit secrets to Git
- Use .env files (already in .gitignore)
- For production, use:
  - AWS Secrets Manager
  - HashiCorp Vault
  - Environment variables in CI/CD
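For the AWS option, a minimal boto3 sketch of fetching a secret at runtime (the secret name asylum/credentials and the region are hypothetical):

```python
import boto3

# Fetch a secret at runtime instead of baking it into .env;
# the secret name "asylum/credentials" is hypothetical
client = boto3.client("secretsmanager", region_name="us-east-1")
secret = client.get_secret_value(SecretId="asylum/credentials")
print(secret["SecretString"])
```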
# Copy and edit .env
cp .env.example .env
# Set your secrets
echo "AWS_ACCESS_KEY_ID=your-key" >> .env
echo "AWS_SECRET_ACCESS_KEY=your-secret" >> .env- All services communicate via internal Docker network
- Expose only necessary ports
- Use TLS/SSL for external access
- Configure firewall rules for cloud deployments
# Python tests
cd ai
pip install pytest
pytest
# Node.js tests
cd orchestration
npm test
# Integration tests
docker-compose -f docker-compose.test.yml up --abort-on-container-exit

# Test honeypot
ssh -p 2222 root@localhost
# Try: whoami, ls, cat /etc/passwd
# Generate test events
curl -X POST http://localhost:3001/events \
-H "Content-Type: application/json" \
-d '{
"type": "anomaly_detected",
"source": "test",
"data": {"anomaly_score": 85}
}'

- CVE Integration: CVE/CVSS enrichment and adaptive response guide
- Feedback Loop: Detailed feedback loop documentation
- Roadmap: Project development roadmap
- Target Milestones: Planned milestones
- Build Guide: Architecture and build information
- Contributing: Contribution guidelines
We welcome contributions! Please see CONTRIBUTING.md for:
- Branch and commit standards
- Code style guidelines
- Pull request process
- Testing requirements
| Variable | Type | Default | Description |
|---|---|---|---|
| `environment` | string | `"dev"` | Environment name (`dev`, `prod`) |
| `provider_type` | string | `"docker"` | Infrastructure provider |
| `node_count` | number | `3` | Number of honeypot nodes |
| `instance_type` | string | `"t3.micro"` | Cloud instance type |
| `network_cidr` | string | `"10.0.0.0/16"` | Network CIDR block |
| `storage_size_gb` | number | `20` | Storage size per node (GB) |
| `region` | string | `"us-east-1"` | Cloud provider region |
| `enable_monitoring` | bool | `true` | Enable monitoring stack |
| `enable_honeypot` | bool | `true` | Enable honeypot deployment |
# Check Docker resources
docker system df
docker system prune # If low on space
# View service logs
docker-compose logs [service-name]
# Restart specific service
docker-compose restart [service-name]

Edit docker-compose.yml to change port mappings:
ports:
  - "3001:3000"  # Change 3001 to an available port

# Retrain model
docker-compose exec ai-api python train.py --synthetic --epochs 50
# Check model status
curl http://localhost:8000/model/info

# Refresh state
terraform refresh
# Unlock state (if locked)
terraform force-unlock <lock-id>
# Validate configuration
terraform validate

Edit docker-compose.yml for resource limits:
services:
  elasticsearch:
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: '2'

# Scale honeypots
docker-compose up -d --scale cowrie=5
# Scale via Terraform
# Edit terraform/envs/prod/terraform.tfvars
node_count = 10
terraform apply

- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Documentation: See the docs/ directory
MIT
- Pawel M (pmaksymiak)
- Kylo P (cywf)
- Cowrie honeypot project
- Terraform community
- ELK Stack team
- TensorFlow team