
Project Asylum

Project Asylum is a self-adapting infrastructure management framework that uses AI/ML, Terraform, and monitoring tools to build secure, evolving honeypot environments. The system automatically analyzes attacker behavior, detects anomalies, and adapts its infrastructure in real-time to maximize deception and security research value.

🎯 Core Features

  • 🤖 AI/ML Adaptation Engine: TensorFlow-based anomaly detection that learns from attacker behavior
  • 🛡️ CVE/CVSS Enrichment: Automatic detection and enrichment of CVE identifiers with NIST NVD data for context-aware threat intelligence
  • 🏗️ Infrastructure as Code: Modular Terraform configurations for Docker, Proxmox, AWS, GCP, and Azure
  • 📊 Comprehensive Monitoring: Prometheus, Grafana, and ELK Stack for metrics and log analysis
  • 🍯 Honeypot System: Cowrie SSH/Telnet honeypot with automatic configuration rotation
  • 🔄 Orchestration Layer: Event-driven automation with Node.js API and message queue
  • 🔍 Feedback Loop: Continuous analysis and infrastructure adaptation based on detected threats
  • 🔒 Security First: Built-in secret management, least-privilege access, and TLS encryption

🚀 Quick Start

Prerequisites

  • Docker & Docker Compose (20.10+)
  • Terraform (1.0+)
  • Git

Optional for cloud deployment:

  • AWS CLI / GCP SDK / Azure CLI (configured with credentials)

One-Command Deployment

# Clone the repository
git clone https://github.com/folkvarlabs/project-asylum.git
cd project-asylum

# Copy environment configuration
cp .env.example .env

# Start all services
docker-compose up -d

That's it! The system will start with:

  • Cowrie honeypot (SSH on port 2222)
  • AI/ML API (port 8000)
  • Orchestration API (port 3001)
  • Prometheus (port 9090), Grafana (port 3000), and Kibana (port 5601)

Verify Deployment

# Check all services are running
docker-compose ps

# View logs
docker-compose logs -f

# Test honeypot
ssh -p 2222 root@localhost

# Check AI API
curl http://localhost:8000/health

# View metrics
curl http://localhost:9090/metrics

📁 Project Structure

project-asylum/
├── terraform/              # Infrastructure as Code
│   ├── modules/
│   │   ├── docker/        # Local Docker deployment
│   │   ├── aws/           # AWS cloud deployment
│   │   ├── gcp/           # Google Cloud deployment
│   │   └── azure/         # Azure cloud deployment
│   ├── envs/
│   │   ├── dev/           # Development environment
│   │   └── prod/          # Production environment
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
│
├── ai/                     # AI/ML Adaptation Engine
│   ├── model/
│   │   └── anomaly_detector.py  # TensorFlow anomaly detection
│   ├── api/
│   │   └── main.py        # FastAPI REST interface
│   ├── train.py           # Model training script
│   ├── requirements.txt
│   └── Dockerfile
│
├── monitoring/             # Monitoring Stack
│   ├── prometheus/
│   │   ├── prometheus.yml
│   │   ├── rules/
│   │   └── Dockerfile
│   ├── grafana/
│   │   ├── provisioning/
│   │   └── Dockerfile
│   └── elk/
│       ├── elasticsearch/
│       ├── logstash/
│       └── kibana/
│
├── honeypot/              # Honeypot Subsystem
│   ├── cowrie/
│   │   ├── cowrie.cfg
│   │   ├── userdb.txt
│   │   ├── rotate_config.sh
│   │   └── Dockerfile
│   └── honeyd/
│
├── orchestration/         # Integration Layer
│   ├── api/
│   │   └── server.js      # Express API server
│   ├── scheduler/
│   │   └── index.js       # Feedback loop scheduler
│   ├── package.json
│   └── Dockerfile
│
├── docs/                  # Documentation
│   ├── feedback-loop.md
│   ├── roadmap.md
│   ├── target-milestones.md
│   └── build.md
│
├── .github/
│   └── workflows/
│       └── deploy.yml     # CI/CD pipeline
│
├── docker-compose.yml     # Single-command deployment
├── .env.example           # Environment template
├── .gitignore
├── CONTRIBUTING.md
└── README.md

🏗️ Infrastructure Deployment

Local Development (Docker)

cd terraform
terraform init
terraform plan -var-file=envs/dev/terraform.tfvars
terraform apply -var-file=envs/dev/terraform.tfvars

Cloud Deployment (AWS)

# Configure AWS credentials
export AWS_ACCESS_KEY_ID="your-key"
export AWS_SECRET_ACCESS_KEY="your-secret"

# Deploy to AWS
cd terraform
terraform init
terraform workspace new prod  # or select prod
terraform apply -var-file=envs/prod/terraform.tfvars

Configuration Variables

Edit terraform/envs/dev/terraform.tfvars or terraform/envs/prod/terraform.tfvars:

environment       = "dev"
provider_type     = "docker"  # or "aws", "gcp", "azure"
node_count        = 2
instance_type     = "t3.micro"
network_cidr      = "172.20.0.0/16"
storage_size_gb   = 10
enable_monitoring = true
enable_honeypot   = true

🤖 AI/ML Model

Training the Model

# Using synthetic data (for testing)
docker-compose exec ai-api python train.py --synthetic --epochs 50

# Using real data
docker-compose exec ai-api python train.py --data /app/data/logs.json --epochs 50

# View model info
curl http://localhost:8000/model/info
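
For orientation, here is a minimal sketch of the kind of reconstruction-error detector that ai/model/anomaly_detector.py implements. The layer sizes, threshold logic, and function names below are illustrative assumptions, not the repository's actual code; only the 20-feature input shape is taken from the API documentation further down.

# Illustrative sketch only -- architecture and names are assumptions,
# not the repository's anomaly_detector.py.
import numpy as np
import tensorflow as tf

N_FEATURES = 20  # the /predict API expects 20 features per sample

def build_autoencoder() -> tf.keras.Model:
    """Small dense autoencoder; anomalous samples reconstruct poorly."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(N_FEATURES,)),
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(8, activation="relu"),   # bottleneck
        tf.keras.layers.Dense(16, activation="relu"),
        tf.keras.layers.Dense(N_FEATURES, activation="linear"),
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def anomaly_scores(model: tf.keras.Model, x: np.ndarray) -> np.ndarray:
    """Per-sample reconstruction error; higher means more anomalous."""
    recon = model.predict(x, verbose=0)
    return np.mean(np.square(x - recon), axis=1)

if __name__ == "__main__":
    model = build_autoencoder()
    benign = np.random.normal(0.0, 1.0, size=(1000, N_FEATURES)).astype("float32")
    model.fit(benign, benign, epochs=5, batch_size=32, verbose=0)
    suspect = np.random.normal(4.0, 1.0, size=(5, N_FEATURES)).astype("float32")
    print(anomaly_scores(model, suspect))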

Making Predictions

# The model expects 20 features per sample
curl -X POST http://localhost:8000/predict \
  -H "Content-Type: application/json" \
  -d '{"features": [[0.1, 0.2, 0.3, ...]]}'
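
The same call from Python, useful for scripting against the AI API. This is a minimal client sketch; only the request shape (lists of 20-element feature vectors) is documented above, and the exact response schema is defined by ai/api/main.py.

# Minimal /predict client sketch (assumes the AI API is reachable on localhost:8000).
import requests

features = [[0.1] * 20]  # one sample, 20 features
resp = requests.post(
    "http://localhost:8000/predict",
    json={"features": features},
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # response fields are defined by ai/api/main.py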

Getting Recommendations

curl -X POST http://localhost:8000/analyze \
  -H "Content-Type: application/json" \
  -d '{
    "features": [[...]]
  }'

🛡️ CVE/CVSS Enrichment

The system automatically detects CVE identifiers in honeypot logs and enriches them with data from NIST's National Vulnerability Database (NVD).

Query CVE Information

# Get detailed CVE information
curl "http://localhost:8000/cveinfo?cve=CVE-2021-44228"

# Detect CVEs in text
curl -X POST http://localhost:8000/cve/detect \
  -H "Content-Type: application/json" \
  -d '{"text": "Exploit attempt using CVE-2021-44228"}'
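
Conceptually, CVE enrichment is a pattern match over log text followed by an NVD lookup. The sketch below illustrates that idea; it is not the repository's actual endpoint logic, although the NVD REST endpoint and its cveId parameter are the public API's documented interface.

# Sketch of CVE detection + NVD enrichment; illustrative, not the project's implementation.
import re
import requests

CVE_PATTERN = re.compile(r"CVE-\d{4}-\d{4,}", re.IGNORECASE)
NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def detect_cves(text: str) -> list[str]:
    """Return unique CVE identifiers found in free-form log text."""
    return sorted({m.upper() for m in CVE_PATTERN.findall(text)})

def enrich_cve(cve_id: str) -> dict:
    """Fetch the NVD record for one CVE (rate limits apply without an API key)."""
    resp = requests.get(NVD_API, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    return resp.json()

print(detect_cves("Exploit attempt using CVE-2021-44228 via JNDI"))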

Adaptive Response Based on CVSS

The orchestration layer automatically adapts infrastructure based on CVE severity (a sketch of this mapping follows the list):

  • CVSS >= 9.0 (Critical): Deploy specialized honeypots, redirect to isolated decoy, maximum monitoring
  • CVSS >= 7.0 (High): Scale honeypots, elevated monitoring
  • CVSS >= 4.0 (Medium): Enhanced logging
  • CVSS < 4.0 (Low): Log for analysis
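
A compact sketch of that severity-to-action mapping. The thresholds mirror the list above; the function and action names are illustrative assumptions, since the real decision logic lives in the orchestration layer.

# Maps a CVSS base score to a response tier, mirroring the thresholds listed above.
# Action names are illustrative; the orchestration layer defines the real actions.
def response_tier(cvss_score: float) -> str:
    if cvss_score >= 9.0:
        return "deploy_specialized_honeypot"   # Critical: isolate and maximize monitoring
    if cvss_score >= 7.0:
        return "scale_honeypots"               # High: scale out, elevate monitoring
    if cvss_score >= 4.0:
        return "enhanced_logging"              # Medium
    return "log_only"                          # Low

assert response_tier(10.0) == "deploy_specialized_honeypot"  # e.g. CVE-2021-44228
assert response_tier(5.3) == "enhanced_logging"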

Demo

# Run the CVE enrichment demo
docker-compose exec ai-api python demo_cve.py

See docs/cve-integration.md for complete documentation.

📊 Monitoring & Visualization

Grafana Dashboards

Access Grafana at http://localhost:3000

  • Username: admin
  • Password: asylum_admin_2024

Pre-configured dashboards:

  • Honeypot Activity Overview
  • Anomaly Detection Metrics
  • Infrastructure Health
  • Attacker Behavior Analysis

Prometheus Queries

# Connection rate to honeypots
rate(honeypot_connections_total[5m])

# Anomaly detection rate
rate(honeypot_anomalies_total[5m]) / rate(honeypot_events_total[5m])

# AI inference latency
histogram_quantile(0.95, rate(ai_inference_duration_seconds_bucket[5m]))

Kibana Log Analysis

Access Kibana at http://localhost:5601

Search for high-anomaly events:

# High anomaly scores
event_category:command_execution AND anomaly_score:>15

# CVE-related events
cve_detected:true

# Critical CVEs only
cve_severity:CRITICAL

# High CVSS scores
max_cvss_score:>=9.0

🔄 Feedback Loop

The feedback loop continuously monitors, analyzes, and adapts:

graph LR
    A[Honeypot Logs] --> B[Elasticsearch]
    B --> C[AI Analysis]
    C --> D{Severity?}
    D -->|Critical| E[Scale Infrastructure]
    D -->|High| F[Rotate Honeypots]
    D -->|Medium| G[Increase Monitoring]
    E --> H[Terraform Apply]
    F --> H
    H --> I[Updated Infrastructure]
    I --> A

See docs/feedback-loop.md for detailed documentation.
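
One pass of that loop, sketched in Python for illustration. The real scheduler is orchestration/scheduler/index.js; the Elasticsearch index name, field names, default port, and the batch size below are assumptions, not the project's configuration.

# Hedged sketch of one feedback-loop pass: pull recent honeypot events from
# Elasticsearch, score them via the AI API, and act on the recommendation.
# Index name, field names, and the ES port (standard 9200) are assumptions.
import requests

ES_URL = "http://localhost:9200"
AI_URL = "http://localhost:8000"

def feedback_pass() -> None:
    # 1. Fetch events from the last 15 minutes (matches ANALYSIS_INTERVAL below).
    hits = requests.post(
        f"{ES_URL}/honeypot-*/_search",
        json={"size": 100, "query": {"range": {"@timestamp": {"gte": "now-15m"}}}},
        timeout=30,
    ).json()["hits"]["hits"]
    if not hits:
        return

    # 2. Ask the AI API for recommendations on the extracted feature vectors.
    features = [h["_source"].get("features", [0.0] * 20) for h in hits]
    analysis = requests.post(f"{AI_URL}/analyze", json={"features": features}, timeout=30).json()

    # 3. Act on the result, e.g. trigger a honeypot rotation or a Terraform run.
    print("analysis result:", analysis)

feedback_pass()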

Configuration

Scheduler intervals (in .env):

ANALYSIS_INTERVAL="*/15 * * * *"      # Every 15 minutes
DRIFT_CHECK_INTERVAL="0 */6 * * *"    # Every 6 hours
MODEL_RETRAIN_INTERVAL="0 2 * * *"    # Daily at 2 AM

🔒 Security

Secret Management

  1. Never commit secrets to Git
  2. Use .env files (already in .gitignore)
  3. For production, use:
    • AWS Secrets Manager
    • HashiCorp Vault
    • Environment variables in CI/CD

Credentials

# Copy and edit .env
cp .env.example .env

# Set your secrets
echo "AWS_ACCESS_KEY_ID=your-key" >> .env
echo "AWS_SECRET_ACCESS_KEY=your-secret" >> .env

Network Security

  • All services communicate via internal Docker network
  • Expose only necessary ports
  • Use TLS/SSL for external access
  • Configure firewall rules for cloud deployments

🧪 Testing

Run Tests

# Python tests
cd ai
pip install pytest
pytest

# Node.js tests
cd orchestration
npm test

# Integration tests
docker-compose -f docker-compose.test.yml up --abort-on-container-exit

Manual Testing

# Test honeypot
ssh -p 2222 root@localhost
# Try: whoami, ls, cat /etc/passwd

# Generate test events
curl -X POST http://localhost:3001/events \
  -H "Content-Type: application/json" \
  -d '{
    "type": "anomaly_detected",
    "source": "test",
    "data": {"anomaly_score": 85}
  }'

📚 Documentation

  • docs/feedback-loop.md
  • docs/cve-integration.md
  • docs/roadmap.md
  • docs/target-milestones.md
  • docs/build.md

🀝 Contributing

We welcome contributions! Please see CONTRIBUTING.md for:

  • Branch and commit standards
  • Code style guidelines
  • Pull request process
  • Testing requirements

📋 Terraform Variables Reference

Variable            Type    Default        Description
environment         string  "dev"          Environment name (dev, prod)
provider_type       string  "docker"       Infrastructure provider (docker, aws, gcp, azure)
node_count          number  3              Number of honeypot nodes
instance_type       string  "t3.micro"     Cloud instance type
network_cidr        string  "10.0.0.0/16"  Network CIDR block
storage_size_gb     number  20             Storage size per node (GB)
region              string  "us-east-1"    Cloud provider region
enable_monitoring   bool    true           Enable monitoring stack
enable_honeypot     bool    true           Enable honeypot deployment

🛠️ Troubleshooting

Services Won't Start

# Check Docker resources
docker system df
docker system prune  # If low on space

# View service logs
docker-compose logs [service-name]

# Restart specific service
docker-compose restart [service-name]

Port Conflicts

Edit docker-compose.yml to change port mappings:

ports:
  - "3001:3000"  # Change 3001 to available port

AI Model Issues

# Retrain model
docker-compose exec ai-api python train.py --synthetic --epochs 50

# Check model status
curl http://localhost:8000/model/info

Terraform Errors

# Refresh state
terraform refresh

# Unlock state (if locked)
terraform force-unlock <lock-id>

# Validate configuration
terraform validate

📊 Performance Tuning

Resource Allocation

Edit docker-compose.yml for resource limits:

services:
  elasticsearch:
    deploy:
      resources:
        limits:
          memory: 2G
          cpus: '2'

Scaling

# Scale honeypots
docker-compose up -d --scale cowrie=5

# Scale via Terraform: edit terraform/envs/prod/terraform.tfvars, set node_count = 10,
# then re-apply
terraform apply -var-file=envs/prod/terraform.tfvars

📞 Support

📜 License

MIT

👥 Authors

  • Pawel M (pmaksymiak)
  • Kylo P (cywf)

🙏 Acknowledgments

  • Cowrie honeypot project
  • Terraform community
  • ELK Stack team
  • TensorFlow team

⚠️ Warning: This system is designed for security research and authorized honeypot deployments only. Ensure you have proper authorization and follow applicable laws and regulations when deploying honeypots.
