This repository contains the solution for the Blueprint full-stack coding exercise.
The goal is to build a system that processes patient answers to a diagnostic screener, scores them based on predefined domains (depression, mania, anxiety, substance use), and recommends appropriate Level-2 clinical assessments based on those scores. The exercise involves creating a backend API (Part I) and a frontend UI (Part II).
- Backend (Part I): A Python Flask API provides an endpoint (`/score`) that accepts patient answers, uses a domain mapping (stored in `backend/domain_mapping.json`), calculates scores for each domain, and returns a list of recommended assessments based on score thresholds (a sketch of this flow follows this list). It runs inside a Docker container.
- Frontend (Part II): A React web application presents the screener questions one by one, collects answers, and submits them to the backend API. It also runs inside a Docker container, served by Vite's development server.
- Orchestration: Docker Compose is used to build and run both the backend and frontend services with a single command.
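The core of Part I is the scoring step: sum the answer values per domain, then recommend the Level-2 assessment for every domain whose total meets its threshold. The sketch below illustrates that flow; the question-to-domain mapping and threshold values shown are simplified placeholders, not the actual contents of `backend/domain_mapping.json`.

```python
from collections import defaultdict

# Placeholder mapping and rules for illustration only; the real values live in
# backend/domain_mapping.json and the backend's scoring code.
QUESTION_TO_DOMAIN = {"question_a": "depression", "question_b": "mania"}
DOMAIN_RULES = {"depression": ("PHQ-9", 2), "mania": ("ASRM", 2)}

def recommend_assessments(answers):
    """Sum answer values per domain, then recommend the Level-2 assessment
    for each domain whose total meets its threshold."""
    totals = defaultdict(int)
    for answer in answers:
        domain = QUESTION_TO_DOMAIN.get(answer["question_id"])
        if domain is not None:
            totals[domain] += answer["value"]

    recommended = {
        assessment
        for domain, (assessment, threshold) in DOMAIN_RULES.items()
        if totals[domain] >= threshold
    }
    return sorted(recommended)  # unique, sorted list, as returned by /score
```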
This project uses Docker and Docker Compose to simplify setup and ensure consistency across different operating systems (Windows, macOS, Linux).
- Docker Desktop (for Windows or macOS) or Docker Engine + Docker Compose (for Linux) installed. Download from https://www.docker.com/products/docker-desktop/.
- Clone the repository:

  ```bash
  git clone <repository-url>  # Replace <repository-url> with the actual URL
  cd blueprint-exercise
  ```
- Build and start the services: Open a terminal in the project's root directory (`blueprint-exercise/`) and run:

  ```bash
  docker-compose up --build
  ```

  - `--build` tells Docker Compose to build the images if they don't exist or if the Dockerfiles/code dependencies have changed.
  - This command will build both the backend and frontend images (if needed) and then start containers for both services. You will see logs from both services interleaved in your terminal.
- Access the application:
  - Frontend UI: Open your web browser and navigate to http://localhost:5173
  - Backend API: The API is running on http://localhost:5001. The frontend UI will automatically communicate with it. You can also test the `/score` endpoint directly using tools like `curl` or Postman against http://localhost:5001/score.
- Stopping the application:
  - Press `Ctrl+C` in the terminal where `docker-compose up` is running.
  - To remove the containers (optional cleanup), run:

    ```bash
    docker-compose down
    ```
- Because we've used `volumes` in `docker-compose.yml`, changes you make to the code in `./backend` or `./frontend` on your host machine will be reflected inside the running containers.
- The Flask development server (backend) and Vite development server (frontend) are configured to automatically reload when they detect code changes, making development iterative without needing to constantly rebuild the Docker images (unless you change dependencies in `requirements.txt` or `package.json`).
- If you do change dependencies, you'll need to stop (`Ctrl+C`) and rebuild/restart using `docker-compose up --build`.
- Language/Framework: Python 3 / Flask
- Reasoning: Python is well-suited for data handling and rapid API development. Flask is a lightweight microframework, ideal for a single-endpoint API as required by the exercise, avoiding unnecessary complexity.
- Persistence (Domain Mapping): JSON file (`backend/domain_mapping.json`)
- Reasoning: The exercise allows flexibility. For the static, small domain mapping provided, a JSON file is the simplest approach, requiring no external database setup and making the project easy to run via Docker. In a production scenario, this would likely be stored in a database (e.g., PostgreSQL, MongoDB) or a configuration management system for easier updates and scalability.
- Dependencies: `Flask`, `Flask-CORS` (to allow requests from the frontend during development). A minimal sketch of how these fit together follows this list.
- Containerization: Docker
- Reasoning: Provides a consistent, isolated environment for running the service, simplifying setup and deployment across different machines and operating systems.
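To show how these choices fit together, here is a rough skeleton of the service. It is a sketch rather than the exact backend module: `score_answers` stands in for the real scoring logic, and the port is assumed to match the docker-compose mapping.

```python
import json

from flask import Flask, jsonify, request
from flask_cors import CORS

app = Flask(__name__)
CORS(app)  # development-only: allow the Vite dev server to call the API

# The domain mapping is loaded once at startup from the bundled JSON file.
with open("domain_mapping.json") as f:
    DOMAIN_MAPPING = json.load(f)

def score_answers(answers, mapping):
    # Stand-in for the real scoring logic (see the overview sketch above).
    return []

@app.route("/score", methods=["POST"])
def score():
    payload = request.get_json(silent=True)
    if not payload or "answers" not in payload:
        return jsonify({"error": "Request body must be JSON with an 'answers' list"}), 400
    return jsonify({"results": score_answers(payload["answers"], DOMAIN_MAPPING)})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)  # assumed to match the docker-compose port mapping
```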
POST /score
- Description: Accepts patient screener answers and returns recommended assessments.
- Request Body (JSON):

  ```json
  {
    "answers": [
      { "question_id": "question_a", "value": 1 },
      { "question_id": "question_b", "value": 0 }
      // ... more answers
    ]
  }
  ```

  `value` must be an integer between 0 and 4.
- Success Response (200 OK):

  ```json
  { "results": ["ASRM", "PHQ-9"] }
  ```

  Returns a sorted list of unique assessment names (example shown).
- Error Responses:
  - `400 Bad Request`: If the request is not JSON, is missing `answers`, or if the answers format/values are invalid. Includes an `error` message in the JSON response.
  - `500 Internal Server Error`: If there's an issue loading the domain mapping or during processing. Includes an `error` message.
```bash
# Ensure docker-compose up is running first
curl -X POST -H "Content-Type: application/json" \
  -d '{
    "answers": [
      { "value": 1, "question_id": "question_a" },
      { "value": 0, "question_id": "question_b" },
      { "value": 2, "question_id": "question_c" },
      { "value": 3, "question_id": "question_d" },
      { "value": 1, "question_id": "question_e" },
      { "value": 0, "question_id": "question_f" },
      { "value": 1, "question_id": "question_g" },
      { "value": 0, "question_id": "question_h" }
    ]
  }' \
  http://localhost:5001/score
```
Expected Output for Example:

```json
{
  "results": [
    "ASRM",
    "PHQ-9"
  ]
}
```
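The same request can also be issued from Python with the `requests` library (assuming the stack is already running via `docker-compose up`):

```python
import requests

payload = {
    "answers": [
        {"value": 1, "question_id": "question_a"},
        {"value": 0, "question_id": "question_b"},
        {"value": 2, "question_id": "question_c"},
        {"value": 3, "question_id": "question_d"},
        {"value": 1, "question_id": "question_e"},
        {"value": 0, "question_id": "question_f"},
        {"value": 1, "question_id": "question_g"},
        {"value": 0, "question_id": "question_h"},
    ]
}

response = requests.post("http://localhost:5001/score", json=payload, timeout=5)
response.raise_for_status()
print(response.json())  # e.g. {'results': ['ASRM', 'PHQ-9']}
```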
- Framework: React with TypeScript (using Vite)
- Reasoning: React is a widely-used, component-based library suitable for building interactive UIs. TypeScript adds static typing for better code quality and maintainability. Vite provides a fast development server and efficient build process.
- State Management: React's built-in `useState` hook.
- Reasoning: Sufficient for managing the current question index and collected answers in this single-view application. More complex state management libraries (like Redux or Zustand) are overkill for this scope.
- Styling: Basic CSS (`frontend/src/App.css`)
- Reasoning: Focuses on achieving the required layout and functionality without introducing complex styling libraries, keeping the solution simple and focused.
- Data Loading: Screener data is loaded directly from `frontend/src/screenerData.ts`.
- Reasoning: As allowed by the prompt ("load the screener from memory"), this simplifies the setup by avoiding the need for an additional backend endpoint just to serve static data for the exercise. In production, this would likely be fetched from an API.
- Containerization: Docker
- Reasoning: Provides a consistent, isolated environment for running the service, simplifying setup and deployment across different machines and operating systems. Ensures Node.js version and dependencies are managed correctly.
- Displays questions one by one from the loaded screener data.
- Shows the overall prompt (`section.title`).
- Displays the assessment name (`display_name`).
- Shows the current question number and total count (e.g., "Question 3 of 8").
- Presents answer options as clickable buttons.
- Clicking an answer automatically advances to the next question.
- A progress bar visually indicates completion status.
- Upon answering the last question:
- Displays a completion message.
- Logs the collected answers (in the format required by Part I) to the browser's developer console.
- Attempts to POST the collected answers to the backend API (reached as `http://backend:5001/score` from within the Docker network, via the app served at http://localhost:5173 in the browser) and displays the recommended assessments received from the backend, or any error encountered.
This section outlines how this application might be deployed and managed in a true production environment.
- Cloud Platform: Deploy containers to a managed container orchestration service like AWS ECS (with Fargate for serverless compute) or Google Kubernetes Engine (GKE).
- Load Balancing: Place an Application Load Balancer (ALB/Cloud Load Balancer) in front of the containers. This distributes traffic across multiple instances and can handle SSL termination.
- Auto Scaling: Configure auto-scaling rules based on CPU/memory utilization or request count to automatically adjust the number of running frontend and backend containers based on traffic load.
- Database: Replace the `domain_mapping.json` file with a managed, scalable database (e.g., AWS RDS PostgreSQL, MongoDB Atlas). The API would query this database instead of reading a local file. Consider read replicas for read-heavy scenarios.
- Caching: Implement caching where appropriate. The domain mapping could be cached in memory (e.g., using Flask-Caching or an external cache like Redis) in the backend service to reduce database lookups; API gateway caching could also be used. A sketch of this idea follows this list.
- CDN: Serve frontend static assets (JS, CSS, images) via a Content Delivery Network (CDN) like AWS CloudFront or Cloudflare to reduce latency for users globally. The `vite build` command would generate optimized static assets for this purpose.
- Stateless Services: Ensure both frontend and backend services are stateless. User session data or temporary state should be stored externally (e.g., in Redis, a database) if needed, allowing any container instance to handle any request.
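As a rough illustration of the caching idea (not in the current codebase): the domain mapping changes rarely, so it can be memoized in process memory instead of being re-fetched on every request. `fetch_domain_mapping_from_db` below is a hypothetical database call, and `functools.lru_cache` stands in for Flask-Caching or Redis.

```python
from functools import lru_cache

def fetch_domain_mapping_from_db():
    # Hypothetical query against the production database (e.g., RDS PostgreSQL).
    return {"question_a": "depression"}  # placeholder result

@lru_cache(maxsize=1)
def get_domain_mapping():
    # The first call hits the database; subsequent calls are served from memory.
    return fetch_domain_mapping_from_db()

# When the mapping is updated, invalidate with get_domain_mapping.cache_clear(),
# or use a TTL-based cache (Flask-Caching, Redis) instead of lru_cache.
```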
- HTTPS: Enforce HTTPS for all traffic using the load balancer (SSL termination). Redirect HTTP to HTTPS.
- Secrets Management: Use a dedicated secrets management service (e.g., AWS Secrets Manager, HashiCorp Vault) to store sensitive information like database credentials or API keys, rather than hardcoding or putting them in environment variables directly in the container definition.
- Input Validation: Rigorous input validation on the backend (`/score` endpoint) is crucial to prevent injection attacks or unexpected behavior. The current validation is basic; production would require more thorough checks (e.g., using a library like Marshmallow or Pydantic in Python; see the sketch after this list).
- CORS Configuration: In production, configure CORS (`Flask-CORS`) on the backend to only allow requests from the specific domain(s) where the frontend is hosted, instead of the wildcard (`*`) used for development.
- Rate Limiting: Implement rate limiting on the API endpoint (using the load balancer, API gateway, or middleware like `Flask-Limiter`) to prevent abuse.
- Network Security: Configure firewall rules (e.g., AWS Security Groups) to restrict traffic between services and from the internet. Only expose necessary ports (e.g., 443 on the load balancer).
- Container Security: Regularly scan container images for vulnerabilities using tools like Docker Scout, AWS ECR Scan, or Snyk. Use minimal base images (like `-slim` or `-alpine`). Run containers as non-root users.
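For example, the request schema could be tightened with Pydantic, as in the sketch below (not part of the current backend; the model names are illustrative):

```python
from typing import List

from pydantic import BaseModel, Field, ValidationError

class Answer(BaseModel):
    question_id: str
    value: int = Field(ge=0, le=4)  # enforce the documented 0-4 range

class ScoreRequest(BaseModel):
    answers: List[Answer]

def parse_score_request(raw: dict) -> ScoreRequest:
    """Raises ValidationError (which the endpoint would map to a 400 response)
    when the payload is malformed."""
    return ScoreRequest(**raw)

# parse_score_request({"answers": [{"question_id": "question_a", "value": 7}]})
# -> ValidationError: value must be less than or equal to 4
```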
- Logging: Implement structured logging (e.g., JSON format) in both frontend and backend applications. Forward logs to a centralized logging system (e.g., AWS CloudWatch Logs, Datadog, ELK stack) for analysis and alerting. Include request IDs to trace requests across services.
- Metrics: Collect key application and system metrics (request latency, error rates, CPU/memory usage, container count). Use monitoring tools (e.g., CloudWatch Metrics, Prometheus/Grafana, Datadog) to visualize dashboards and set up alerts for anomalies or threshold breaches.
- Tracing: Implement distributed tracing (e.g., using OpenTelemetry with AWS X-Ray, Jaeger, or Datadog APM) to understand request flows across the frontend and backend, identify bottlenecks, and diagnose errors.
- Health Checks: Configure load balancer health checks for both frontend and backend containers to ensure traffic is only sent to healthy instances. Implement specific `/health` endpoints in the backend/frontend services (see the sketch after this list).
- Error Reporting: Use an error reporting service (e.g., Sentry, Rollbar) to capture and aggregate frontend and backend exceptions in real time, providing stack traces and context for faster debugging.
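On the backend side, a health endpoint could be as small as the sketch below (not currently implemented):

```python
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/health")
def health():
    # A fuller readiness check might also verify that the domain mapping
    # (or database, in production) is reachable.
    return jsonify({"status": "ok"}), 200
```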
- Domain Mapping Persistence: Using a JSON file is simple for the exercise but not suitable for production. A database is needed for scalability and easier updates.
- Screener Data Loading (Frontend): Loading the screener from a local file is convenient but unrealistic. In a real app, the frontend would fetch the specific screener structure from a backend API endpoint (e.g., `/api/screeners/{screener_id}`).
- Error Handling: Error handling is basic. Production code would need more robust error handling, potentially providing more specific error codes/messages and handling edge cases more gracefully (e.g., what if a question ID exists in the input but not in the mapping?).
- Testing: No automated tests (unit, integration, end-to-end) were included. A production application would require a comprehensive test suite to ensure correctness and prevent regressions. Pytest could be used for the backend, and Jest/React Testing Library for the frontend (a backend example follows this list).
- State Management (Frontend): `useState` is sufficient now, but if the application grew (e.g., multiple screeners, user accounts, complex UI interactions), a more robust state management solution (Context API with reducers, Zustand, Redux) might be needed.
- UI/UX: The UI is functional but basic. More attention could be paid to visual design, accessibility (ARIA attributes, keyboard navigation), responsiveness across different screen sizes, and user feedback (e.g., loading indicators, transition animations).
- Authentication/Authorization: The API is unsecured. A real application would require user authentication (e.g., identifying the patient) and authorization (ensuring the user is allowed to submit/view data).
- Scalability of Scoring Logic: The current scoring logic is simple. If rules became significantly more complex, a dedicated rules engine or a more abstracted calculation service might be considered.
- Docker Image Size: Images could be optimized further (e.g., using multi-stage builds, especially for the frontend to serve static files via a lightweight server like Nginx instead of the Node dev server).
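As a starting point for backend tests, a pytest module using Flask's built-in test client might look like the sketch below (hypothetical; the import path for the Flask `app` object would need to match the actual backend layout):

```python
import pytest

from app import app  # hypothetical import path for the backend's Flask instance

@pytest.fixture
def client():
    app.config["TESTING"] = True
    with app.test_client() as client:
        yield client

def test_score_rejects_missing_answers(client):
    response = client.post("/score", json={})
    assert response.status_code == 400

def test_score_returns_sorted_unique_results(client):
    payload = {"answers": [{"question_id": "question_a", "value": 1}]}
    response = client.post("/score", json=payload)
    assert response.status_code == 200
    results = response.get_json()["results"]
    assert results == sorted(set(results))
```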