
Releases: dhruvkshah75/TaskFlow

v2.1.0 Taskflow-CLI

25 Jan 10:41


TaskFlow v2.1.0: The CLI Revolution

TaskFlow v2.1.0 marks a major architectural shift, transforming the project from a backend-heavy system into a CLI-first distributed platform. This release introduces a unified .deb installation workflow that bootstraps a production-grade environment on any Debian-based Linux host with a single command.


📦 Unified Distribution: The .deb Experience

We have moved away from manual repository cloning and multi-step configurations. TaskFlow is now distributed as a standard Debian package for a seamless "Thin Client" experience:

  • One-Command Setup: Installing the .deb automatically pulls the latest Docker images from GHCR, configures Minikube, and stands up the entire Kubernetes stack.
  • Persistent Orchestration: A background systemd service manages persistent port-forwarding, ensuring the CLI can always reach the cluster at localhost:8080 without manual intervention.
  • Global Access: The taskflow command is registered system-wide, allowing you to manage tasks from any directory.

The Power of the TaskFlow CLI

The new CLI acts as the "brain" for your distributed workers. It abstracts complex kubectl and docker commands into a high-level developer interface:

  • Auth Flow: Built-in register and login commands manage JWT-based sessions for secure API communication (the token exchange is sketched below).
  • Cluster Health: Use taskflow status to view real-time pod health and scaling metrics.
  • Interactive Monitoring: Color-coded log streaming for API, Worker, and Manager components.
  • Seamless Scaling: Integrated with KEDA to automatically scale your worker fleet from 2 to 20 pods based on Redis queue depth.
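
Under the hood, the register/login flow is a simple token exchange. The sketch below is illustrative only: the endpoint paths and the response field are assumptions, not the actual TaskFlow API.

# Hypothetical sketch of the JWT session flow the CLI wraps.
# Endpoint paths and the "access_token" field are assumptions.
import requests

BASE = "http://localhost:8080"  # the CLI's persistent port-forward target

# Register once, then log in to obtain a token.
requests.post(f"{BASE}/auth/register", json={"username": "dev", "password": "s3cret"})
resp = requests.post(f"{BASE}/auth/login", json={"username": "dev", "password": "s3cret"})
token = resp.json()["access_token"]  # assumed response field

# Subsequent API calls carry the JWT.
headers = {"Authorization": f"Bearer {token}"}
print(requests.get(f"{BASE}/tasks", headers=headers).status_code)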

The Modular Worker: Dynamic Execution

The Modular Worker continues to provide the core flexibility of the TaskFlow engine, allowing for dynamic orchestration instead of static code:

  • Dynamic Import: The worker identifies the script name from task metadata at runtime.
  • Cache Refresh: It automatically removes the script's entry from sys.modules to ensure you are always running the latest version of your code.
  • Developer Rule - Title-to-File: Your task title must exactly match the filename (e.g., process_data.py requires a title of process_data).

Standard Task Structure

# Example: my_custom_task.py

async def handler(payload: str):
    """
    The worker dynamically calls this 'handler' function.
    Support for both 'async def' and standard 'def' is native.
    """
    # Custom logic here...

How to Install

Download the release asset and run:

cd Downloads/
sudo dpkg -i taskflow-cli.deb
taskflow register

Full Changelog: v2.0.0...v2.1.0

v2.0.0 Modular Worker Update

18 Jan 11:59



The Modular Worker: How it Works

The Modular Worker represents a shift from static execution to dynamic orchestration. Instead of hardcoding task logic into the worker's source code, the worker acts as a "host" that loads and runs uploaded task scripts on demand.

When a task is picked up from the Redis queue:

  • Dynamic Import: The worker identifies the script name from the task metadata.
  • Cache Refresh: It clears existing versions of that module from memory (sys.modules) to ensure the latest upload is always used.
  • Execution: It dynamically loads the script and calls the handler function, passing the task's JSON data as a dictionary.
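
A minimal sketch of that flow, assuming the uploaded scripts live in a tasks package (function and module names here are illustrative, not the worker's actual code):

# Illustrative sketch of the dynamic-import flow described above.
import asyncio
import importlib
import inspect
import sys

def run_task(script_name: str, payload: dict):
    # Dynamic Import: the module name comes from the task metadata.
    module_name = f"tasks.{script_name}"  # assumed upload location

    # Cache Refresh: drop any previously loaded version of this script.
    sys.modules.pop(module_name, None)
    module = importlib.import_module(module_name)

    # Execution: call handler(), supporting both async and sync definitions.
    handler = getattr(module, "handler")
    if inspect.iscoroutinefunction(handler):
        return asyncio.run(handler(payload))
    return handler(payload)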

Developer Guide: Creating & Defining Your Task

To ensure your task executes correctly, follow these two mapping rules when creating a task via the API:

  1. The Title-to-File Link: The title field of your task must exactly match the filename (without the .py extension) of your uploaded script. For example, if your file is process_data.py, your task title must be process_data.
  2. Flexible Payload: You can include any string in the payload field.
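
For example, rule 1 means a task that targets process_data.py could be created like this (the API host and route are assumptions; only the title and payload fields come from the rules above):

# Hypothetical task submission illustrating the Title-to-File rule.
import json
import requests

task = {
    "title": "process_data",  # must match process_data.py, without the .py extension
    "payload": json.dumps({"test_metadata": "nightly run", "data": [1, 2, 3]}),
}

resp = requests.post("http://localhost:8000/tasks", json=task)  # assumed host and route
print(resp.status_code, resp.json())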

Task Script Structure

Every uploaded script must define an entry point as shown below:

# Example: process_data.py

async def handler(payload: dict):
    """
    Entry point for TaskFlow v2.0.0. 
    The worker will automatically pass the 'payload' dictionary here.
    """
    # Use the payload to identify your specific test run
    run_note = payload.get("test_metadata", "No note provided")
    
    data = payload.get("data")
    # Your custom logic here...
    
    return {"status": "success", "note": run_note, "result": data}

# Note: Standard synchronous functions are also supported
# def handler(payload):
#     return {"message": "Sync handler executed"}

Full Changelog: v1.3.0...v2.0.0

v1.3.0: Event-Driven Autoscaling of worker pods

09 Jan 20:02


v1.3.0: Event-Driven Autoscaling & Architecture Overhaul

This release introduces Event-Driven Autoscaling using KEDA, allowing the worker pool to dynamically scale from 0 to 20 replicas based on real-time Redis queue depth. It also includes significant optimizations for the CI pipeline and system architecture visibility.

New Features

  • feat(autoscaling): Implemented KEDA ScaledObject to monitor Redis List length.
    • Trigger: Scales up when tasks > 0 in redis-high.
    • Target: Adds 1 worker for every 10 tasks in the queue (illustrated after this list).
    • Scale-to-Zero: Automatically removes all worker pods when the queue is empty to save resources.
  • feat(infra): Connected API Gateway to redis-low for dedicated caching and rate-limiting, isolating it from the task queue.
  • feat(architecture): Added Queue Manager to the system architecture diagram and updated logic flow (Queue Manager now handles pushing tasks to Redis High).
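
The scaling rule above amounts to one worker per ten queued tasks, capped at 20 replicas and dropping to zero on an empty queue; roughly:

# Illustration of the stated scaling rule (not KEDA's internal logic).
import math

def desired_workers(queue_length: int, tasks_per_worker: int = 10, max_replicas: int = 20) -> int:
    if queue_length <= 0:
        return 0  # scale-to-zero when the queue is empty
    return min(max_replicas, math.ceil(queue_length / tasks_per_worker))

print(desired_workers(0), desired_workers(35), desired_workers(500))  # 0, 4, 20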

Performance & CI/CD

  • perf: Optimized worker resource requests (memory: 64Mi) to run efficiently on standard GitHub Actions runners.
  • ci: Migrated CI pipeline to a Matrix Strategy, building API, Worker, and Manager images in parallel (reduced build time by ~60%).
  • ci: Added Build Artifact Caching to share Docker images between build and test jobs.

Bug Fixes & Improvements

  • fix(diagrams): Updated System Architecture diagrams to use Mermaid.js with official logos (Redis, Postgres, K8s) and corrected data flow directions.
  • chore: Updated requirements.txt and Dockerfile layers for faster caching.

Deployment

To deploy this version with autoscaling enabled:

# Apply the new KEDA configurations
kubectl apply -f k8s/autoscaling/

v1.2.0 Auto-Scaling & Kubernetes Support

26 Dec 16:43


Major Features: Cloud-Native Scaling

  • Horizontal Pod Autoscaling (HPA): Implemented worker-hpa to automatically scale worker pods based on real-time load.
    • Workers now scale from 1 to 10 replicas dynamically.
    • Scaling triggers on 50% CPU utilization.
  • Full Kubernetes Support: Added production-grade manifests for Minikube, EKS, and GKE in k8s/apps and k8s/infrastructure.

Infrastructure Improvements

  • Service Discovery Fix: Resolved DNS resolution issues (redis-high vs redis_high) to ensure reliable internal communication between API and Workers.
  • Secrets Management: Added secure secrets.yaml handling for injecting credentials into K8s pods.
  • Resilience: Implemented Liveness and Readiness probes to ensure zero-downtime deployments; pods now restart automatically if they freeze.

Documentation

  • Deployment Guide: Added a complete "Kubernetes Deployment" section to README.md covering Secrets, Minikube Tunneling, and HPA.
  • Architecture: Updated documentation to reflect the distributed, auto-scaling nature of the cluster.

v1.1.0 - Kubernetes Support Added

16 Dec 20:06


New Features

  • Kubernetes Support: Added full support for deploying TaskFlow on Kubernetes (Minikube, EKS, GKE).
  • K8s Manifests: Included production-ready YAML configurations for API, Workers, Queue Manager, Redis, and Postgres in k8s/.
  • Secrets Management: Implemented secure secrets handling for Kubernetes using secrets.yaml and environment variable injection.

Bug Fixes

  • Service Discovery: Resolved DNS resolution issues for internal services (Redis/Postgres) when running in container orchestration environments.
  • Environment Configuration: Fixed environment variable overrides to ensure correct service naming (redis-high vs redis_high) across different platforms.

Documentation

  • Deployment Guide: Added a comprehensive guide to README.md for deploying to Kubernetes, including secrets generation and Minikube tunneling.

v1.0.1 - Production Ready

12 Dec 19:02


First Production Release

This marks the first stable release of TaskFlow! We have transitioned from pre-release to a production-ready infrastructure using Docker, Nginx, and PgBouncer.

What's New

  • Nginx Integration: Added a custom Nginx reverse proxy mapping Port 80 to the API.
  • Database Optimization: Integrated PgBouncer for efficient connection pooling, significantly reducing database load and improving concurrency handling.
  • Production Hardening: The API now sits behind Nginx and is no longer directly exposed on port 8000.
  • Automated Maintenance: The "Janitor" script is now chained to run automatically after database migrations and before the server starts.
  • Reliability: Fixed startup race conditions; the app now waits for Alembic migrations to complete before launching.

How to Update/Run

If you are already running an older version, simply pull the changes and restart:

git pull origin main
docker compose down
docker compose up -d --build

Dockerized Distributed Task Queue

10 Dec 09:04


Pre-release

Initial Release: Distributed Task Queue

This is the first release of TaskFlow running fully on Docker. The system now supports distributed task processing with multiple concurrent workers, Redis-based locking, and PostgreSQL persistence.

Key Features

  • Full Docker Support: The entire stack (API, Workers, Redis, Postgres) spins up with a single command (docker compose up).
  • Distributed Workers: Supports multiple worker containers running in parallel to process tasks faster.
  • Concurrency Control: Implemented the "Competing Consumers" pattern using Redis to prevent race conditions (no two workers process the same task; see the sketch after this list).
  • Persistent Storage: All tasks and results are saved to a PostgreSQL database.
  • REST API: Fully functional FastAPI endpoints to submit tasks and check status.
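
The consumer loop behind the "Competing Consumers" pattern looks roughly like the sketch below (using redis-py; the queue and lock key names are assumptions):

# Sketch of the "Competing Consumers" pattern; key names are illustrative.
import json
import redis

r = redis.Redis(host="redis", port=6379, decode_responses=True)

def process(task: dict) -> None:
    print("processing", task["id"])  # placeholder for real task logic

def worker_loop():
    while True:
        # BLPOP atomically hands each queued task to exactly one worker.
        _, raw = r.blpop("task_queue")
        task = json.loads(raw)

        # A short-lived NX lock guards against duplicate processing.
        if not r.set(f"lock:{task['id']}", "1", nx=True, ex=60):
            continue  # another worker already claimed this task

        process(task)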

How to Run

  1. Clone the repository.
  2. Create a .env file (see README.md).
  3. Run the system:
    docker compose up --build
  4. Access the API at http://localhost:8000.