refactor: modularize router.rs into domain-specific route modules #282
jmanhype wants to merge 20 commits into automagik-dev:dev from
Conversation
Summary of Changes

Hello @jmanhype, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request undertakes a significant architectural refactoring by modularizing the application's routing layer. The primary goal is to break down a large, single routing file into smaller, more manageable, domain-focused modules. This change is expected to enhance code clarity, reduce compilation times for isolated changes, and streamline onboarding for new contributors by providing clearer ownership and separation of concerns within the API endpoints.

Highlights
Code Review
This is an excellent refactoring that significantly improves the structure and maintainability of the routing logic by breaking down the monolithic router.rs into domain-specific modules. The new structure is much cleaner and easier to navigate.
My review focuses on a few areas for further improvement, primarily related to performance and code duplication that became apparent during the refactoring. I've identified a few instances of N+1 query patterns, particularly in stream handlers, which could impact performance. I've also noted some duplicated code for loading .genie profiles that could be centralized.
Overall, this is a great step forward for the codebase. The comments below are suggestions to polish this refactoring even further.
```rust
async fn handle_forge_tasks_ws(
    socket: WebSocket,
    deployment: DeploymentImpl,
    project_id: Uuid,
) -> anyhow::Result<()> {
    let db_pool = deployment.db().pool.clone();
    let stream = deployment
        .events()
        .stream_tasks_raw(project_id)
        .await?
        .filter_map(move |msg_result| {
            let db_pool = db_pool.clone();
            async move {
                match msg_result {
                    Ok(LogMsg::JsonPatch(patch)) => {
                        if let Some(patch_op) = patch.0.first() {
                            if patch_op.path().starts_with("/tasks/") {
                                match patch_op {
                                    json_patch::PatchOperation::Add(op) => {
                                        if let Ok(task_with_status) =
                                            serde_json::from_value::<TaskWithAttemptStatus>(op.value.clone())
                                        {
                                            // Per-event EXISTS query: the N+1 pattern flagged below.
                                            let is_agent: bool = sqlx::query_scalar(
                                                "SELECT EXISTS(SELECT 1 FROM forge_agents WHERE task_id = ?)"
                                            )
                                            .bind(task_with_status.task.id)
                                            .fetch_one(&db_pool)
                                            .await
                                            .unwrap_or(false);

                                            if !is_agent {
                                                return Some(Ok(LogMsg::JsonPatch(patch)));
                                            }
                                            return None;
                                        }
                                    }
                                    json_patch::PatchOperation::Replace(op) => {
                                        if let Ok(task_with_status) =
                                            serde_json::from_value::<TaskWithAttemptStatus>(op.value.clone())
                                        {
                                            let is_agent: bool = sqlx::query_scalar(
                                                "SELECT EXISTS(SELECT 1 FROM forge_agents WHERE task_id = ?)"
                                            )
                                            .bind(task_with_status.task.id)
                                            .fetch_one(&db_pool)
                                            .await
                                            .unwrap_or(false);

                                            if !is_agent {
                                                return Some(Ok(LogMsg::JsonPatch(patch)));
                                            }
                                            return None;
                                        }
                                    }
                                    json_patch::PatchOperation::Remove(_) => {
                                        return Some(Ok(LogMsg::JsonPatch(patch)));
                                    }
                                    _ => {}
                                }
                            } else if patch_op.path() == "/tasks" {
                                // Initial snapshot: one query per task in the map.
                                if let json_patch::PatchOperation::Replace(op) = patch_op {
                                    if let Some(tasks_obj) = op.value.as_object() {
                                        let mut filtered_tasks = serde_json::Map::new();
                                        for (task_id_str, task_value) in tasks_obj {
                                            if let Ok(task_with_status) =
                                                serde_json::from_value::<TaskWithAttemptStatus>(task_value.clone())
                                            {
                                                let is_agent: bool = sqlx::query_scalar(
                                                    "SELECT EXISTS(SELECT 1 FROM forge_agents WHERE task_id = ?)"
                                                )
                                                .bind(task_with_status.task.id)
                                                .fetch_one(&db_pool)
                                                .await
                                                .unwrap_or(false);

                                                if !is_agent {
                                                    filtered_tasks.insert(task_id_str.to_string(), task_value.clone());
                                                }
                                            }
                                        }

                                        let filtered_patch = json!([{
                                            "op": "replace",
                                            "path": "/tasks",
                                            "value": filtered_tasks
                                        }]);
                                        return Some(Ok(LogMsg::JsonPatch(
                                            serde_json::from_value(filtered_patch).unwrap()
                                        )));
                                    }
                                }
                            }
                        }
                        Some(Ok(LogMsg::JsonPatch(patch)))
                    }
                    Ok(other) => Some(Ok(other)),
                    Err(e) => Some(Err(e)),
                }
            }
        })
        .map_ok(|msg| msg.to_ws_message_unchecked());

    futures_util::pin_mut!(stream);

    let (mut sender, mut receiver) = socket.split();

    // Drain incoming client messages so the socket stays healthy.
    tokio::spawn(async move { while let Some(Ok(_)) = receiver.next().await {} });

    while let Some(item) = stream.next().await {
        match item {
            Ok(msg) => {
                if sender.send(msg).await.is_err() {
                    break;
                }
            }
            Err(e) => {
                tracing::error!("stream error: {}", e);
                break;
            }
        }
    }
    Ok(())
}
```
This function performs a database query for each task event inside the filter_map closure to check if it's an agent task. This is highly inefficient and will cause an N+1 query problem, especially for the initial snapshot. Additionally, using unwrap_or(false) on the database query can hide potential database errors. If the query fails, it will silently treat the task as non-agent, which could be incorrect.
A much better approach is to fetch all agent task IDs for the project into a HashSet once at the beginning of the WebSocket connection, and then check against this in-memory set for each task event. This resolves both the performance and the error-hiding issues.
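A minimal sketch of that approach, assuming a SQLite pool (as the `?` placeholders above suggest) and `uuid`-typed task IDs; the helper name is illustrative, not existing API:

```rust
use std::collections::HashSet;

use uuid::Uuid;

// Illustrative helper: load every agent-owned task id once, up front
// (optionally scoped to the project). Errors propagate via `?` instead
// of being silently mapped to `false` as with unwrap_or(false).
async fn load_agent_task_ids(db_pool: &sqlx::SqlitePool) -> anyhow::Result<HashSet<Uuid>> {
    let ids: Vec<Uuid> = sqlx::query_scalar("SELECT task_id FROM forge_agents")
        .fetch_all(db_pool)
        .await?;
    Ok(ids.into_iter().collect())
}
```

Inside the `filter_map`, each per-event database round trip then becomes a cheap in-memory lookup, e.g. `agent_task_ids.contains(&task_with_status.task.id)`.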
```rust
for task in neuron_tasks {
    // One query per task: the N+1 pattern flagged below.
    if let Ok(attempts) = TaskAttempt::fetch_all(pool, Some(task.id)).await {
        if let Some(attempt) = attempts.into_iter().next() {
            // Executor strings look like "base:variant"; fall back to "unknown".
            let neuron_type = if let Some((_base, variant)) = attempt.executor.split_once(':') {
                variant.to_string()
            } else {
                "unknown".to_string()
            };

            neurons.push(Neuron {
                neuron_type,
                task,
                attempt,
            });
        }
    }
}
```
This loop executes a database query for each neuron_task, which can lead to performance issues (the N+1 problem), especially if there are many neuron tasks. It would be more efficient to fetch all the required TaskAttempts in a single query outside the loop.
You could fetch all neuron_tasks, collect their IDs, and then use a single SELECT ... WHERE task_id IN (...) query to get all associated attempts. Then, you can group the attempts by task_id in a HashMap and iterate through the tasks to build your Neuron structs.
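A sketch of the batched version, assuming `TaskAttempt` derives `sqlx::FromRow` and exposes a `task_id` field; the raw query is illustrative:

```rust
use std::collections::HashMap;

let task_ids: Vec<Uuid> = neuron_tasks.iter().map(|t| t.id).collect();
// NB: guard against an empty task_ids list before building the IN clause,
// since "IN ()" is invalid SQL.

// One IN (...) query instead of one query per task.
let placeholders = vec!["?"; task_ids.len()].join(", ");
let sql = format!("SELECT * FROM task_attempts WHERE task_id IN ({placeholders})");
let mut query = sqlx::query_as::<_, TaskAttempt>(&sql);
for id in &task_ids {
    query = query.bind(id);
}
let attempts = query.fetch_all(pool).await?;

// Group attempts by task so each Neuron can be built without further queries.
let mut attempts_by_task: HashMap<Uuid, Vec<TaskAttempt>> = HashMap::new();
for attempt in attempts {
    attempts_by_task.entry(attempt.task_id).or_default().push(attempt);
}
```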
```rust
if let Ok(workspace_profiles) = forge_services.load_profiles_for_workspace(&project.git_repo_path).await {
    let variant_count = workspace_profiles.executors.values()
        .map(|config| config.configurations.len())
        .sum::<usize>();

    let variant_list: Vec<String> = workspace_profiles.executors.iter()
        .flat_map(|(executor, config)| {
            config.configurations.iter().map(move |(variant, coding_agent)| {
                let prompt_preview = match coding_agent {
                    executors::executors::CodingAgent::ClaudeCode(cfg) => cfg.append_prompt.get(),
                    executors::executors::CodingAgent::Codex(cfg) => cfg.append_prompt.get(),
                    executors::executors::CodingAgent::Amp(cfg) => cfg.append_prompt.get(),
                    executors::executors::CodingAgent::Gemini(cfg) => cfg.append_prompt.get(),
                    executors::executors::CodingAgent::Opencode(cfg) => cfg.append_prompt.get(),
                    executors::executors::CodingAgent::CursorAgent(cfg) => cfg.append_prompt.get(),
                    executors::executors::CodingAgent::QwenCode(cfg) => cfg.append_prompt.get(),
                    executors::executors::CodingAgent::Copilot(cfg) => cfg.append_prompt.get(),
                }.map(|p| {
                    let trimmed = p.trim();
                    if trimmed.len() > 60 {
                        format!("{}...", &trimmed[..60])
                    } else {
                        trimmed.to_string()
                    }
                }).unwrap_or_else(|| "<none>".to_string());

                format!("{}:{} ({})", executor, variant, prompt_preview)
            })
        })
        .collect();

    tracing::info!(
        "Injected {} .genie profile variant(s) for workspace: {} | Profiles: [{}]",
        variant_count,
        project.git_repo_path.display(),
        variant_list.join(", ")
    );

    executors::profile::ExecutorConfigs::set_cached(workspace_profiles);
} else {
    tracing::warn!(
        "Failed to load .genie profiles for workspace: {}, using defaults",
        project.git_repo_path.display()
    );
}
```
This block of code for loading and injecting workspace-specific .genie profiles is duplicated in forge_follow_up in this file (lines 254-298), and also in forge_create_task_and_start in routes/tasks.rs (lines 407-451). This duplication should be refactored into a single helper function to improve maintainability. For example, you could create a method on ForgeServices like load_and_inject_workspace_profiles(&project).await.
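A rough shape for that helper, with the method name taken from the review suggestion; the body just consolidates the logic shown above and is a sketch, not the actual implementation:

```rust
impl ForgeServices {
    /// Load workspace-level .genie profiles and inject them into the
    /// executor cache, falling back to defaults on failure.
    pub async fn load_and_inject_workspace_profiles(&self, project: &Project) {
        match self.load_profiles_for_workspace(&project.git_repo_path).await {
            Ok(workspace_profiles) => {
                executors::profile::ExecutorConfigs::set_cached(workspace_profiles);
            }
            Err(_) => {
                tracing::warn!(
                    "Failed to load .genie profiles for workspace: {}, using defaults",
                    project.git_repo_path.display()
                );
            }
        }
    }
}
```

Each of the three call sites would then shrink to a single `forge_services.load_and_inject_workspace_profiles(&project).await;` line.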
```rust
for row in rows {
    let metadata = match row.try_get::<Option<String>, _>("metadata") {
        Ok(Some(raw)) => serde_json::from_str::<Value>(&raw).ok(),
        _ => None,
    };

    let record = json!({
        "id": row.try_get::<String, _>("id").unwrap_or_default(),
        "task_id": row.try_get::<Option<String>, _>("task_id").unwrap_or(None),
        "notification_type": row
            .try_get::<String, _>("notification_type")
            .unwrap_or_else(|_| "unknown".to_string()),
        "status": row
            .try_get::<String, _>("status")
            .unwrap_or_else(|_| "pending".to_string()),
        "message": row.try_get::<Option<String>, _>("message").unwrap_or(None),
        "error_message": row
            .try_get::<Option<String>, _>("error_message")
            .unwrap_or(None),
        "sent_at": row.try_get::<Option<String>, _>("sent_at").unwrap_or(None),
        "created_at": row
            .try_get::<String, _>("created_at")
            .unwrap_or_else(|_| chrono::Utc::now().to_rfc3339()),
        "metadata": metadata,
    });

    notifications.push(record);
}
```
This manual row processing is verbose and error-prone. For example, falling back to chrono::Utc::now() for created_at can hide database issues and present misleading data. Consider using sqlx::query_as! to map the rows directly to a struct. This would make the code cleaner, safer, and more idiomatic.
You could define a struct like this:
```rust
#[derive(sqlx::FromRow, serde::Serialize)]
struct OmniNotification {
    id: String,
    task_id: Option<String>,
    notification_type: String,
    status: String,
    message: Option<String>,
    error_message: Option<String>,
    sent_at: Option<String>,
    created_at: String,
    metadata: Option<String>,
}
```

Then use sqlx::query_as!, iterate over the results, parse the metadata JSON, and build the final response. This would also allow you to handle database errors properly instead of using unwrap_or_default.
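A sketch of that mapping step, using the runtime `query_as` function rather than the `query_as!` macro (the macro needs a compile-time database connection, so the function form keeps the example self-contained):

```rust
let rows: Vec<OmniNotification> = sqlx::query_as::<_, OmniNotification>(
    r#"SELECT id, task_id, notification_type, status, message,
              error_message, sent_at, created_at, metadata
       FROM forge_omni_notifications
       ORDER BY created_at DESC
       LIMIT 50"#,
)
.fetch_all(&pool)
.await?; // database errors now surface instead of becoming defaults

let notifications: Vec<serde_json::Value> = rows
    .into_iter()
    .map(|row| {
        // Parse the stored metadata string into JSON after the fetch.
        let metadata = row
            .metadata
            .as_deref()
            .and_then(|raw| serde_json::from_str::<serde_json::Value>(raw).ok());
        let mut value = serde_json::to_value(&row).expect("struct serializes to an object");
        value["metadata"] = metadata.unwrap_or(serde_json::Value::Null);
        value
    })
    .collect();
```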
```rust
let dist_path = Path::new("../frontend/dist");
if !dist_path.exists() {
    println!("cargo:warning=Creating dummy frontend/dist directory for compilation");
    fs::create_dir_all(dist_path).unwrap();
```
```rust
<html><head><title>Build frontend first</title></head>
<body><h1>Please build the frontend</h1></body></html>"#;

    fs::write(dist_path.join("index.html"), dummy_html).unwrap();
```
Using unwrap() can cause cryptic build failures. It's better to use expect() with a descriptive message to make debugging easier.
```diff
- fs::write(dist_path.join("index.html"), dummy_html).unwrap();
+ fs::write(dist_path.join("index.html"), dummy_html).expect("build.rs: failed to write dummy index.html");
```
```rust
router.merge(containers::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router =
    router.merge(projects::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router.merge(upstream_projects::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
```
The alias upstream_projects is a good way to avoid naming conflicts with the new local projects module. However, there's an opportunity to make this even clearer throughout the upstream_api_router function. Instead of cloning deployment and using with_state on each upstream router, you could pass the ForgeAppState directly if the upstream routers can be adapted to accept it. If not, consistently using the upstream_projects alias where projects was used before is key.
| .get("https://api.github.com/repos/namastexlabs/automagik-forge/releases") | ||
| .header("User-Agent", "automagik-forge") |
The GitHub repository URL and User-Agent are hardcoded. It would be better to define these as constants at the top of the module to improve readability and maintainability, and avoid magic strings.
```rust
const GITHUB_RELEASES_URL: &str = "https://api.github.com/repos/namastexlabs/automagik-forge/releases";
const FORGE_USER_AGENT: &str = "automagik-forge";
```

```diff
- .get("https://api.github.com/repos/namastexlabs/automagik-forge/releases")
- .header("User-Agent", "automagik-forge")
+ .get(GITHUB_RELEASES_URL)
+ .header("User-Agent", FORGE_USER_AGENT)
```
```rust
    metadata
FROM forge_omni_notifications
ORDER BY created_at DESC
LIMIT 50"#,
```
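The review-fix commit below mentions naming this limit; a minimal sketch of that change, with the constant name assumed (chosen to match the commit note about omni.rs) and the column list abbreviated:

```rust
// Assumed constant name; replaces the magic "LIMIT 50" literal.
const OMNI_NOTIFICATIONS_LIMIT: u32 = 50;

let sql = format!(
    "SELECT id, metadata FROM forge_omni_notifications \
     ORDER BY created_at DESC LIMIT {OMNI_NOTIFICATIONS_LIMIT}"
);
```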
- Fix N+1 query in tasks.rs WebSocket stream handler by batch loading agent task IDs into HashSet
- Fix N+1 query in agents.rs by batch loading TaskAttempts with IN query
- Extract duplicated .genie profile loading logic into ForgeServices::load_and_inject_workspace_profiles helper
- Replace unwrap() with expect() and descriptive messages in build.rs
- Add constants for GitHub API URL and User-Agent strings in frontend.rs
- Add constant for Omni notifications limit in omni.rs

Performance improvements:
- Eliminated N+1 queries in stream handlers (critical for WebSocket performance)
- Eliminated N+1 queries in neuron loading (batch query more efficient)

Code quality improvements:
- Reduced code duplication by ~120 lines across 3 files
- Better error messages for build failures
- Improved maintainability with named constants

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Force-pushed: e39c29b to 5fd4f40
Split the monolithic 1680-line router.rs into modular route files:

- routes/state.rs: ForgeAppState and related implementations
- routes/tasks.rs: Task creation, listing, and WebSocket streaming
- routes/attempts.rs: Task attempt handlers with forge overrides
- routes/config.rs: Global forge configuration endpoints
- routes/projects.rs: Project-specific settings and profiles
- routes/omni.rs: Omni service status and notifications
- routes/agents.rs: Forge agents and neuron management
- routes/frontend.rs: Static file serving and documentation

The main router.rs is now a thin ~145-line composition layer that imports and composes these modules. Also added build.rs to create a dummy frontend/dist directory for compilation.

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Force-pushed: 5fd4f40 to 4c6fb80
- Enhanced ThemeProvider with localStorage persistence for immediate theme application on page load
- Added resolvedTheme property to handle system preference resolution
- Created ThemeToggle dropdown component with Light/Dark/System options
- Integrated ThemeToggle into navbar for easy access
- Added listener for system theme changes when in SYSTEM mode

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Implements a CLI command to clean up orphaned worktrees that are no longer referenced by active task attempts in the database.

Features:
- Scans temp directory for worktree directories
- Queries database for active task_attempts with worktree paths
- Identifies orphans (on disk but not in DB)
- Shows orphan list with directory sizes
- --force flag to delete orphans and show space recovered

Usage:
  forge cleanup          # Dry run - shows what would be deleted
  forge cleanup --force  # Actually delete orphan worktrees

Changes:
- forge-app/src/bin/cleanup.rs: New cleanup binary
- forge-app/Cargo.toml: Register new binary
- npx-cli/bin/cli.js: Route cleanup subcommand
- scripts/build/build.sh: Build and package cleanup binary

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
- Replace flatten() with explicit error handling for directory reads
- Refactor size_human() to static format_size() method
- Use to_string_lossy() instead of unwrap_or("unknown") for names
- Add TODO comment about duplicated database URL logic
- Make CLI argument parsing more robust by searching args array
Addresses high and medium severity feedback from code review.
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Adds github_issue_id field to tasks to enforce the "No Wish Without Issue" rule from AGENTS.md. This links tasks to their originating GitHub issues for traceability.

Changes:
- Add runtime migration to add github_issue_id column to tasks table
- Update ForgeCreateTask struct with github_issue_id field
- Update forge_create_task to persist github_issue_id
- Update forge_create_task_and_start with ForgeCreateAndStartTaskRequest
- Update OpenAPI spec with github_issue_id documentation

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Wrap frequently re-rendered list item components with React.memo:

- TaskCard (renders 10-50+ times in Kanban board)
- DisplayConversationEntry (renders for every message in conversation)
- DiffCard (renders once per changed file)
- ProjectCard (renders in project grid)
- TaskRelationshipCard (renders in relationship lists)

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Replace lodash imports (isEqual, cloneDeep, merge) with native JavaScript alternatives in ProjectSettings.tsx and GeneralSettings.tsx:

- Add frontend/src/utils/object.ts with native implementations:
  - isEqual: deep equality comparison
  - deepClone: uses structuredClone API
  - deepMerge: custom deep merge implementation
- Remove lodash from root package.json
- Remove @types/lodash from frontend/package.json

This reduces the bundle size by ~40KB.

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
- Add WeakMap tracking to prevent stack overflow on circular objects
- Fix Set deep equality to use deep comparison instead of shallow
- Addresses gemini-code-assist critical safety issues

gemini-code-assist review identified these critical bugs:
1. Missing circular reference handling could crash the app
2. Set comparison used shallow equality instead of deep

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Fixes gemini-code-assist correctness issue where forge_create_task_and_start returned stale task data after updating the github_issue_id field.

- Change task to mut and re-fetch after UPDATE
- Ensures returned task object has the updated github_issue_id

gemini-code-assist review identified: API endpoint returns stale data after update

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Adds typeof window check before accessing window.matchMedia to prevent crashes during server-side rendering.

Fixes gemini-code-assist critical SSR safety issue that could crash the application during SSR when theme is SYSTEM mode.

- Check if window exists before matchMedia call
- Default to 'light' theme during SSR

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Fixed two compilation issues discovered during dogfood testing:

1. Wrong API method (lines 111, 387):
   - Changed Task::get_one() to Task::find_by_id()
   - Added proper Option handling with ok_or()

2. Missing struct field (lines 210, 526):
   - Added attempt_count field to TaskWithAttemptStatus initializers
   - Line 210: attempt_count = 0 (list tasks)
   - Line 526: attempt_count = 1 (just created first attempt)

These fixes ensure the code compiles and runs correctly with the merged PR automagik-dev#280 (GitHub Issue Tracking) changes.

Related: Issue automagik-dev#289 (Task serialization follow-up)
Tested: Dogfood testing on testing/all-gemini-fixes-merged branch

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
@codex review this pr
💡 Codex Review
Here are some automated review suggestions for this pull request.
```rust
// Forge override: config router with increased body limit for /profiles
router = router.merge(forge_config_router().with_state::<ForgeAppState>(dep_clone.clone()));
router = router.merge(config::forge_config_router().with_state::<ForgeAppState>(dep_clone.clone()));
router =
    router.merge(containers::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
```
Pass ForgeAppState to upstream routers
The refactored upstream_api_router now calls with_state::<ForgeAppState>(dep_clone.clone()) when merging the upstream routers. dep_clone is a DeploymentImpl, not a ForgeAppState, so these calls fail to type-check and the crate won’t compile (with_state requires the new state type as the argument). This prevents the server from building after the router split; the nested routers need a ForgeAppState value (or the parent state) instead of the deployment clone.
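If Codex's diagnosis holds, the fix would look roughly like the following; `ForgeAppState::new` is a hypothetical constructor used only to illustrate passing the parent state instead of the deployment clone:

```rust
// Hypothetical: build the parent state once rather than cloning the
// DeploymentImpl into each with_state call.
let app_state = ForgeAppState::new(dep_clone.clone());

router = router
    .merge(containers::router(deployment).with_state::<ForgeAppState>(app_state.clone()));
router = router
    .merge(projects::router(deployment).with_state::<ForgeAppState>(app_state.clone()));
```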
Codex Review Feedback - Minor Code Smell (Non-blocking)

Finding: Semantic confusion with state types in upstream_api_router
Location:
Issue:

```rust
router.merge(containers::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()))
```

Current Status:
Why it works:
Optional Improvement:

Verdict: Not blocking merge - this is a common Axum pattern and the refactoring is already a major improvement. Just noting for team awareness.

Note: This PR successfully modularizes a 62KB monolith into clean domain modules - excellent work! 🎉
@codex do you agree that this is a code smell?
To use Codex here, create an environment for this repo.
Splits the 62KB monolithic router.rs file into modular domain-specific route files for better maintainability and faster compilation.

Changes

- routes/ directory with mod.rs

Benefits

Testing

```
cargo build -p forge-app
```