feat: add forge cleanup command for stale worktrees #277
jmanhype wants to merge 20 commits into automagik-dev:dev from
Conversation
Summary of Changes — Hello @jmanhype, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed: this pull request adds a new forge cleanup command for removing stale worktrees.
Code Review
This pull request introduces a forge cleanup command to remove stale worktrees, which is a valuable addition for managing disk space. The implementation is well-structured, but there are several areas for improvement regarding robustness, error handling, and code duplication. I've identified some high-severity issues related to silently ignoring file system errors and duplicating database connection logic, which could lead to incorrect behavior or maintenance challenges. I've also included some medium-severity suggestions to improve code clarity, design, and the robustness of argument parsing.
fn get_database_url() -> String {
    if let Ok(db_url) = std::env::var("DATABASE_URL") {
        if db_url.starts_with("sqlite://") {
            let path_part = db_url.strip_prefix("sqlite://").unwrap();
            if PathBuf::from(path_part).is_absolute() {
                db_url
            } else {
                let abs_path = std::env::current_dir()
                    .unwrap_or_else(|_| PathBuf::from("."))
                    .join(path_part);
                format!("sqlite://{}", abs_path.to_string_lossy())
            }
        } else {
            db_url
        }
    } else {
        format!(
            "sqlite://{}",
            utils::assets::asset_dir().join("db.sqlite").to_string_lossy()
        )
    }
}
The comment on line 17 indicates that this function's logic is duplicated from the db crate. This violates the DRY (Don't Repeat Yourself) principle and can lead to maintenance issues if the logic needs to be updated in the future, as it would have to be changed in multiple places. It would be better to expose this functionality from a shared utility crate or from the db crate itself so it can be reused here.
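One way to make that logic shareable, sketched below under the assumption that a shared utility crate exists (the function name and signature here are illustrative, not the actual `db` crate API): extract the path resolution into a pure function that takes the env value and current directory as parameters, so both `db::DBService` and this binary call the same code and it becomes unit-testable.

```rust
use std::path::Path;

/// Resolve a DATABASE_URL value into an absolute sqlite URL.
/// Pure function: callers pass in the env value, the current directory,
/// and the default database path, so there is no hidden global state.
/// (Hypothetical home: a shared `utils` crate, re-used by db::DBService.)
pub fn resolve_database_url(env_url: Option<&str>, cwd: &Path, default_db: &Path) -> String {
    match env_url {
        Some(db_url) => match db_url.strip_prefix("sqlite://") {
            // Relative sqlite paths are anchored to the current directory.
            Some(path_part) if !Path::new(path_part).is_absolute() => {
                format!("sqlite://{}", cwd.join(path_part).to_string_lossy())
            }
            // Absolute sqlite paths and non-sqlite URLs pass through unchanged.
            _ => db_url.to_string(),
        },
        None => format!("sqlite://{}", default_db.to_string_lossy()),
    }
}
```

The existing `get_database_url()` in both crates would then shrink to reading `DATABASE_URL` and `current_dir()` and delegating to this function.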
forge-app/src/bin/cleanup.rs (outdated)
for entry in entries.flatten() {
    let path = entry.path();
    if path.is_dir() {
        size += dir_size(&path);
    } else if let Ok(metadata) = path.metadata() {
        size += metadata.len();
    }
}
Using flatten() on the iterator from read_dir will silently ignore any I/O errors that occur while iterating over directory entries. This could lead to dir_size returning an incorrect (smaller) size, which would be misleading to the user. It would be more robust to handle the Result inside the loop and report any errors.
for entry_result in entries {
    match entry_result {
        Ok(entry) => {
            let entry_path = entry.path();
            if entry_path.is_dir() {
                size += dir_size(&entry_path);
            } else if let Ok(metadata) = entry_path.metadata() {
                size += metadata.len();
            }
        }
        Err(e) => {
            eprintln!("Warning: Failed to read an entry in directory {}: {}", path.display(), e);
        }
    }
}
forge-app/src/bin/cleanup.rs (outdated)
for entry in entries.flatten() {
    let path = entry.path();
    if path.is_dir() {
        let name = path
            .file_name()
            .and_then(|n| n.to_str())
            .unwrap_or("unknown")
            .to_string();
        let size_bytes = dir_size(&path);
        worktrees.push(WorktreeInfo {
            path,
            name,
            size_bytes,
        });
    }
}
Using flatten() here will silently ignore errors when reading directory entries. This could cause the cleanup tool to miss some worktree directories entirely, leaving them on disk without the user's knowledge. It's better to explicitly handle the Result from the iterator to ensure all potential worktrees are scanned or that errors are reported.
for entry_result in entries {
    match entry_result {
        Ok(entry) => {
            let path = entry.path();
            if path.is_dir() {
                let name = path
                    .file_name()
                    .map(|n| n.to_string_lossy().into_owned())
                    .unwrap_or_else(|| "unknown".to_string());
                let size_bytes = dir_size(&path);
                worktrees.push(WorktreeInfo {
                    path,
                    name,
                    size_bytes,
                });
            }
        }
        Err(e) => {
            eprintln!(
                "Warning: Failed to read a worktree directory entry in {}: {}",
                base_dir.display(),
                e
            );
        }
    }
}
forge-app/src/bin/cleanup.rs (outdated)
fn size_human(&self) -> String {
    const KB: u64 = 1024;
    const MB: u64 = 1024 * KB;
    const GB: u64 = 1024 * MB;

    if self.size_bytes >= GB {
        format!("{:.2} GB", self.size_bytes as f64 / GB as f64)
    } else if self.size_bytes >= MB {
        format!("{:.2} MB", self.size_bytes as f64 / MB as f64)
    } else if self.size_bytes >= KB {
        format!("{:.2} KB", self.size_bytes as f64 / KB as f64)
    } else {
        format!("{} bytes", self.size_bytes)
    }
}
The size_human method is useful for formatting sizes not just for WorktreeInfo instances, but also for total sizes as seen in main. Creating dummy WorktreeInfo instances just to call this method is awkward. Consider refactoring this into an associated function plus a thin instance method. This will allow you to call WorktreeInfo::format_size(total_size) in main and improve code clarity.
fn format_size(size_bytes: u64) -> String {
const KB: u64 = 1024;
const MB: u64 = 1024 * KB;
const GB: u64 = 1024 * MB;
if size_bytes >= GB {
format!("{:.2} GB", size_bytes as f64 / GB as f64)
} else if size_bytes >= MB {
format!("{:.2} MB", size_bytes as f64 / MB as f64)
} else if size_bytes >= KB {
format!("{:.2} KB", size_bytes as f64 / KB as f64)
} else {
format!("{} bytes", size_bytes)
}
}
fn size_human(&self) -> String {
Self::format_size(self.size_bytes)
}
forge-app/src/bin/cleanup.rs (outdated)
let name = path
    .file_name()
    .and_then(|n| n.to_str())
    .unwrap_or("unknown")
    .to_string();
Using unwrap_or("unknown") can be problematic if multiple directories have non-UTF-8 names, as they would all be indistinguishable in the output. Using to_string_lossy() would provide a more informative, albeit potentially garbled, name by replacing invalid UTF-8 sequences with the replacement character (U+FFFD, �).
let name = path
    .file_name()
    .map(|n| n.to_string_lossy().into_owned())
    .unwrap_or_else(|| "unknown".to_string());

let args: Vec<String> = std::env::args().collect();
let force = args.iter().any(|a| a == "--force" || a == "-f");
let help = args.iter().any(|a| a == "--help" || a == "-h");
npx-cli/bin/cli.js (outdated)
const platformDir = getPlatformDir();
const extractDir = path.join(__dirname, "..", "dist", platformDir);
const isMcpMode = process.argv.includes("--mcp");
const isCleanupMode = process.argv[2] === "cleanup";
Hardcoding the argument index with process.argv[2] is fragile. If the way the script is called changes (e.g., with additional node options), this will break. A more robust approach would be to use a simple argument parsing library (like minimist or yargs-parser) or at least search for the cleanup command in the arguments array instead of assuming its position.
/gemini review
Code Review
This pull request introduces a forge cleanup command to remove stale worktrees, which is a valuable addition for managing disk space. The implementation is well-contained in a new Rust binary. The core logic for identifying and removing orphaned worktrees is sound. My review focuses on improving the robustness, maintainability, and performance of the new cleanup.rs binary. Specifically, I've suggested refactoring duplicated code, using a dedicated library for argument parsing, optimizing directory size calculation, and improving error reporting for a cleaner user experience. The changes to the build scripts and CLI wrapper are correct and integrate the new command effectively.
/// TODO: This logic is duplicated from the db crate (db::DBService::get_database_url).
/// Consider exposing it from a shared utility crate to avoid maintenance issues.
The TODO comment correctly identifies that this logic is duplicated from the db crate. To improve maintainability and ensure consistency, this function should be moved to a shared utility crate (e.g., utils) and reused here and in db::DBService. This will prevent the implementations from diverging in the future and centralizes database connection logic.
fn dir_size(path: &PathBuf) -> u64 {
    let mut size = 0u64;
    if let Ok(entries) = std::fs::read_dir(path) {
        for entry_result in entries {
            match entry_result {
                Ok(entry) => {
                    let entry_path = entry.path();
                    if entry_path.is_dir() {
                        size += dir_size(&entry_path);
                    } else if let Ok(metadata) = entry_path.metadata() {
                        size += metadata.len();
                    }
                }
                Err(e) => {
                    eprintln!(
                        "Warning: Failed to read entry in {}: {}",
                        path.display(),
                        e
                    );
                }
            }
        }
    }
    size
}
The recursive implementation of dir_size is simple, but it can be slow for large directories as it's single-threaded and performs many separate filesystem syscalls. For better performance, consider using a crate like jwalk, which can traverse the directory tree in parallel and is often more optimized. This would make the scanning phase significantly faster, especially for worktrees with many files.
Err(e) => {
    eprintln!(
        "Warning: Failed to read entry in {}: {}",
        path.display(),
        e
    );
}
Printing warnings directly to stderr within dir_size can interleave error messages with other program output, making it harder to parse, especially if this tool is used in scripts. A more robust approach would be to have scan_worktree_directory return both the list of worktrees and a list of any non-critical errors encountered during the scan. The main function could then present these warnings to the user in a dedicated section, providing a cleaner user experience and better separation of concerns.
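A minimal sketch of that separation, using illustrative names (`ScanOutcome`, `scan_dir_sizes` are hypothetical, not from the PR): the scan accumulates warnings instead of printing them, and `main` decides how and when to present them.

```rust
use std::fs;
use std::path::Path;

/// What a scan produces: a total plus the non-fatal errors hit along the way.
/// (Names are illustrative; the real scan_worktree_directory would return its
/// Vec<WorktreeInfo> alongside the warnings in the same fashion.)
struct ScanOutcome {
    total_bytes: u64,
    warnings: Vec<String>,
}

fn scan_dir_sizes(path: &Path, out: &mut ScanOutcome) {
    let entries = match fs::read_dir(path) {
        Ok(entries) => entries,
        Err(e) => {
            out.warnings.push(format!("failed to read {}: {}", path.display(), e));
            return;
        }
    };
    for entry_result in entries {
        match entry_result {
            Ok(entry) => {
                let entry_path = entry.path();
                if entry_path.is_dir() {
                    scan_dir_sizes(&entry_path, out);
                } else if let Ok(metadata) = entry_path.metadata() {
                    out.total_bytes += metadata.len();
                }
            }
            // Collect instead of eprintln!: the caller prints these in one block.
            Err(e) => out
                .warnings
                .push(format!("failed to read an entry in {}: {}", path.display(), e)),
        }
    }
}
```

With this shape, `main` can print all warnings in a dedicated "Warnings" section after the worktree listing, keeping machine-parseable output on stdout untouched.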
let args: Vec<String> = std::env::args().collect();
let force = args.iter().any(|a| a == "--force" || a == "-f");
let help = args.iter().any(|a| a == "--help" || a == "-h");
The current manual argument parsing is basic and can be brittle. It won't handle argument variations like --force=true or combined short flags (e.g., -fh). Using a dedicated argument parsing library like clap would make the CLI more robust, user-friendly, and easier to extend. clap can also auto-generate a more detailed help message from your argument definitions.
@gemini-code-assist give me a code review
@gemini-code-assist did we address all of the high-severity issues you pointed out from earlier?
Code Review
This pull request introduces a forge cleanup command to remove stale worktrees, which is a great addition for managing disk space. The implementation includes a new Rust binary for the cleanup logic, updates to the npx CLI to route the new subcommand, and modifications to the build script to include the new binary.
My review focuses on improving the robustness and maintainability of the new functionality. Key suggestions include:
- In forge-app/src/bin/cleanup.rs:
  - Refactoring duplicated code for getting the database URL.
  - Improving the efficiency of directory size calculation.
  - Using a dedicated CLI parsing library for more robust argument handling.
- In npx-cli/bin/cli.js:
  - Strengthening the subcommand detection logic to avoid incorrect command execution.
Overall, the feature is well-implemented, and these changes will help make it more robust and easier to maintain in the long run.
const extractDir = path.join(__dirname, "..", "dist", platformDir);
const isMcpMode = process.argv.includes("--mcp");
// Search for "cleanup" subcommand in args (more robust than hardcoding index)
const isCleanupMode = process.argv.slice(2).some(arg => arg === "cleanup");
The current logic to detect cleanup mode is a bit too broad as it checks for the cleanup string anywhere in the arguments. This could lead to incorrectly triggering cleanup mode if another command takes cleanup as an argument value (e.g., forge some-command --name cleanup). A more robust approach is to treat cleanup as a subcommand, which should be the first argument after the script name.
- const isCleanupMode = process.argv.slice(2).some(arg => arg === "cleanup");
+ const isCleanupMode = process.argv[2] === "cleanup";
if (isCleanupMode) {
    // Cleanup mode - run forge-cleanup binary with remaining args
    // Filter out "cleanup" command and pass remaining args to the binary
    const cleanupArgs = process.argv.slice(2).filter(arg => arg !== "cleanup");
To complement the change of treating cleanup as a subcommand, the arguments passed to the forge-cleanup binary should be everything that follows the cleanup argument. The current filtering approach is fragile for the same reasons as the detection logic.
- const cleanupArgs = process.argv.slice(2).filter(arg => arg !== "cleanup");
+ const cleanupArgs = process.argv.slice(3);
/// TODO: This logic is duplicated from the db crate (db::DBService::get_database_url).
/// Consider exposing it from a shared utility crate to avoid maintenance issues.
fn get_database_url() -> String {
fn dir_size(path: &PathBuf) -> u64 {
    let mut size = 0u64;
    if let Ok(entries) = std::fs::read_dir(path) {
        for entry_result in entries {
            match entry_result {
                Ok(entry) => {
                    let entry_path = entry.path();
                    if entry_path.is_dir() {
                        size += dir_size(&entry_path);
                    } else if let Ok(metadata) = entry_path.metadata() {
                        size += metadata.len();
                    }
                }
                Err(e) => {
                    eprintln!(
                        "Warning: Failed to read entry in {}: {}",
                        path.display(),
                        e
                    );
                }
            }
        }
    }
    size
}
This recursive implementation for calculating directory size can be inefficient for directories with many files and deep nesting. It may also lead to a stack overflow on very deep directory structures. Consider using the walkdir crate, which is already an indirect dependency, for an iterative and more robust traversal. This can also be combined with rayon for parallel processing if performance becomes a concern.
For example, using walkdir would simplify the implementation to something like this:
use walkdir::WalkDir;

fn dir_size(path: &PathBuf) -> u64 {
    WalkDir::new(path)
        .into_iter()
        .filter_map(|entry| entry.ok())
        .filter_map(|entry| entry.metadata().ok())
        .filter(|metadata| metadata.is_file())
        .map(|metadata| metadata.len())
        .sum()
}

You would need to add walkdir to the [dependencies] in forge-app/Cargo.toml.
// Parse arguments - search through all args for flags
let args: Vec<String> = std::env::args().collect();
let force = args.iter().any(|a| a == "--force" || a == "-f");
let help = args.iter().any(|a| a == "--help" || a == "-h");
The manual argument parsing is simple but not very robust or extensible. Using a dedicated CLI argument parsing crate like clap would be more idiomatic for a Rust application. clap can automatically handle things like --help generation, subcommands, and validation, making the CLI more user-friendly and easier to maintain.
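Until clap is wired in, even a hand-rolled parser can close the two specific gaps mentioned above. The sketch below (the function name and tuple return are illustrative, not the PR's code) handles combined short flags like `-fh` and stops scanning at a `--` terminator; clap's derive API would replace all of it with two annotated struct fields.

```rust
/// Hand-rolled flag scan covering combined short flags (`-fh`) and a `--`
/// terminator. A sketch only; a clap derive struct with `force: bool` and
/// `help: bool` fields would subsume this and generate --help output too.
fn parse_flags(args: &[String]) -> (bool, bool) {
    let mut force = false;
    let mut help = false;
    for arg in args {
        match arg.as_str() {
            "--" => break, // everything after `--` is positional, not a flag
            "--force" => force = true,
            "--help" => help = true,
            // Combined short flags: each letter after '-' is one flag.
            s if s.starts_with('-') && !s.starts_with("--") => {
                for c in s.chars().skip(1) {
                    match c {
                        'f' => force = true,
                        'h' => help = true,
                        _ => {}
                    }
                }
            }
            _ => {}
        }
    }
    (force, help)
}
```

The same two booleans then feed the existing dry-run/force branch in main unchanged.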
Hello @jmanhype, I haven't provided a detailed code review with severity issues yet. My previous comment was a summary of the changes in this pull request. If you'd like me to perform a full code review, please use the command /gemini review.
Split the monolithic 1680-line router.rs into modular route files:
- routes/state.rs: ForgeAppState and related implementations
- routes/tasks.rs: Task creation, listing, and WebSocket streaming
- routes/attempts.rs: Task attempt handlers with forge overrides
- routes/config.rs: Global forge configuration endpoints
- routes/projects.rs: Project-specific settings and profiles
- routes/omni.rs: Omni service status and notifications
- routes/agents.rs: Forge agents and neuron management
- routes/frontend.rs: Static file serving and documentation

The main router.rs is now a thin ~145-line composition layer that imports and composes these modules. Also added build.rs to create a dummy frontend/dist directory for compilation.

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
- Enhanced ThemeProvider with localStorage persistence for immediate theme application on page load
- Added resolvedTheme property to handle system preference resolution
- Created ThemeToggle dropdown component with Light/Dark/System options
- Integrated ThemeToggle into navbar for easy access
- Added listener for system theme changes when in SYSTEM mode

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Implements a CLI command to clean up orphaned worktrees that are no longer referenced by active task attempts in the database.

Features:
- Scans temp directory for worktree directories
- Queries database for active task_attempts with worktree paths
- Identifies orphans (on disk but not in DB)
- Shows orphan list with directory sizes
- --force flag to delete orphans and show space recovered

Usage:
forge cleanup          # Dry run - shows what would be deleted
forge cleanup --force  # Actually delete orphan worktrees

Changes:
- forge-app/src/bin/cleanup.rs: New cleanup binary
- forge-app/Cargo.toml: Register new binary
- npx-cli/bin/cli.js: Route cleanup subcommand
- scripts/build/build.sh: Build and package cleanup binary

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
- Replace flatten() with explicit error handling for directory reads
- Refactor size_human() to static format_size() method
- Use to_string_lossy() instead of unwrap_or("unknown") for names
- Add TODO comment about duplicated database URL logic
- Make CLI argument parsing more robust by searching args array
Addresses high and medium severity feedback from code review.
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Force-pushed from 4aed819 to 17b4532 (Compare)
Adds github_issue_id field to tasks to enforce the "No Wish Without Issue" rule from AGENTS.md. This links tasks to their originating GitHub issues for traceability.

Changes:
- Add runtime migration to add github_issue_id column to tasks table
- Update ForgeCreateTask struct with github_issue_id field
- Update forge_create_task to persist github_issue_id
- Update forge_create_task_and_start with ForgeCreateAndStartTaskRequest
- Update OpenAPI spec with github_issue_id documentation

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Wrap frequently re-rendered list item components with React.memo:
- TaskCard (renders 10-50+ times in Kanban board)
- DisplayConversationEntry (renders for every message in conversation)
- DiffCard (renders once per changed file)
- ProjectCard (renders in project grid)
- TaskRelationshipCard (renders in relationship lists)

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Replace lodash imports (isEqual, cloneDeep, merge) with native JavaScript alternatives in ProjectSettings.tsx and GeneralSettings.tsx:
- Add frontend/src/utils/object.ts with native implementations:
  - isEqual: deep equality comparison
  - deepClone: uses structuredClone API
  - deepMerge: custom deep merge implementation
- Remove lodash from root package.json
- Remove @types/lodash from frontend/package.json

This reduces the bundle size by ~40KB.

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
- Add WeakMap tracking to prevent stack overflow on circular objects
- Fix Set deep equality to use deep comparison instead of shallow
- Addresses gemini-code-assist critical safety issues

gemini-code-assist review identified these critical bugs:
1. Missing circular reference handling could crash the app
2. Set comparison used shallow equality instead of deep

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Fixes gemini-code-assist correctness issue where forge_create_task_and_start returned stale task data after updating the github_issue_id field.
- Change task to mut and re-fetch after UPDATE
- Ensures returned task object has the updated github_issue_id

gemini-code-assist review identified: API endpoint returns stale data after update

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Adds a typeof window check before accessing window.matchMedia to prevent crashes during server-side rendering.

Fixes gemini-code-assist critical SSR safety issue that could crash the application during SSR when theme is SYSTEM mode.
- Check if window exists before matchMedia call
- Default to 'light' theme during SSR

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Fixed two compilation issues discovered during dogfood testing:

1. Wrong API method (lines 111, 387):
   - Changed Task::get_one() to Task::find_by_id()
   - Added proper Option handling with ok_or()

2. Missing struct field (lines 210, 526):
   - Added attempt_count field to TaskWithAttemptStatus initializers
   - Line 210: attempt_count = 0 (list tasks)
   - Line 526: attempt_count = 1 (just created first attempt)

These fixes ensure the code compiles and runs correctly with the merged PR automagik-dev#280 (GitHub Issue Tracking) changes.

Related: Issue automagik-dev#289 (Task serialization follow-up)
Tested: Dogfood testing on testing/all-gemini-fixes-merged branch

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
@codex review this pr
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
sqlx::query(
    "INSERT INTO forge_task_attempt_config (task_attempt_id, use_worktree) VALUES (?, ?)"
)
.bind(task_attempt_id)
.bind(true)
Restore non-worktree create-and-start handling
The create-and-start endpoint now hardcodes use_worktree to true, and the request struct no longer accepts a flag to disable it. Frontend flows such as useFollowUpSend still call /api/tasks/create-and-start with use_worktree: false for Master Genie chat, but that flag is now ignored: every task attempt is created in a worktree instead of on the base branch, and the agent-task bookkeeping (marking tasks as agent / forging forge_agents) is never triggered. This regresses the Genie chat flow, making those tasks show up on the kanban board and run in isolated worktrees when they previously ran in place.
items.push(TaskWithAttemptStatus {
    task,
    has_in_progress_attempt,
    has_merged_attempt: false,
    last_attempt_failed,
    executor,
    attempt_count: 0, // TODO: Query actual attempt count if needed
Return real attempt_count in task list responses
Task listings now populate attempt_count with a constant zero instead of the actual number of attempts. Hooks such as useTaskAttemptsWithLiveStatus rely on attempt_count changes to invalidate the cached attempts list; with this always zero, creating new attempts will not trigger a refresh, leaving UIs with stale attempt data until a manual reload. The previous router queried the count from the database, so this is a regression.
// Forge override: config router with increased body limit for /profiles
router = router.merge(forge_config_router().with_state::<ForgeAppState>(dep_clone.clone()));
router = router.merge(config::forge_config_router().with_state::<ForgeAppState>(dep_clone.clone()));
router =
    router.merge(containers::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router =
    router.merge(projects::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router.merge(upstream_projects::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router =
    router.merge(drafts::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));

// Build custom tasks router with forge override (already typed as ForgeAppState)
let tasks_router_with_override = build_tasks_router_with_forge_override(deployment);
// Build custom tasks router with forge override
let tasks_router_with_override = tasks::build_tasks_router_with_forge_override(deployment);
router = router.merge(tasks_router_with_override);

// Build custom task_attempts router with forge override (already typed as ForgeAppState)
// Build custom task_attempts router with forge override
let task_attempts_router_with_override =
    build_task_attempts_router_with_forge_override(deployment);
    attempts::build_task_attempts_router_with_forge_override(deployment);
router = router.merge(task_attempts_router_with_override);
router = router.merge(
    execution_processes::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()),
);
router = router.merge(auth::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router = router.merge(tags::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router = router.merge(filesystem::router().with_state::<ForgeAppState>(dep_clone.clone()));
router =
    router.merge(events::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
Re-add /api/tags routing to upstream API
The upstream API router no longer merges the tags::router, so the /api/tags endpoints are no longer exposed even though the frontend still calls them (e.g., frontend/src/lib/api.ts uses /api/tags). As a result tag CRUD operations will fail after this refactor. The previous router included the tags router in this merge list.
@codex review this pr
💡 Codex Review
Here are some automated review suggestions for this pull request.
DeploymentImpl,
routes::{
    self as upstream, approvals, auth, containers, drafts, events,
    execution_processes, filesystem, images, projects as upstream_projects,
},
The upstream API router no longer imports or merges the server::routes::tags router, so all /api/tags endpoints are dropped. Frontend flows such as tagsApi in TagManager, TagEditDialog, and the file-search textarea still call /api/tags for listing and editing tags, which will now return 404s because the router is never registered. Please add the tags router back into upstream_api_router so tag management remains functional.