
feat: add forge cleanup command for stale worktrees #277

Open
jmanhype wants to merge 20 commits into automagik-dev:dev from jmanhype:claude/forge-cleanup-command-01PQxPGPZEvxqP5E8TeEPVtd

Conversation

@jmanhype

Implements a CLI command to clean up orphaned worktrees that are no longer referenced by active task attempts in the database.

Features:

  • Scans temp directory for worktree directories
  • Queries database for active task_attempts with worktree paths
  • Identifies orphans (on disk but not in DB)
  • Shows orphan list with directory sizes
  • --force flag to delete orphans and show space recovered

Usage:
forge cleanup         # Dry run - shows what would be deleted
forge cleanup --force # Actually delete orphan worktrees

Changes:

  • forge-app/src/bin/cleanup.rs: New cleanup binary
  • forge-app/Cargo.toml: Register new binary
  • npx-cli/bin/cli.js: Route cleanup subcommand
  • scripts/build/build.sh: Build and package cleanup binary

@gemini-code-assist
Contributor

Summary of Changes

Hello @jmanhype, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request adds a new cleanup command to the forge CLI, providing a mechanism to automatically identify and remove orphaned worktree directories. This helps in maintaining a clean development environment by reclaiming disk space occupied by temporary worktrees that are no longer associated with active tasks, improving system hygiene and resource management.

Highlights

  • New forge cleanup command: Introduces a new command-line utility forge cleanup designed to manage and remove stale worktrees.
  • Orphaned Worktree Identification: The command scans the temporary directory for worktree folders and compares them against active task attempts recorded in the database to identify worktrees that are no longer referenced.
  • Dry Run and Forced Deletion: By default, forge cleanup performs a dry run, listing the orphaned worktrees and their sizes without deleting them. Users can use the --force flag to proceed with the actual deletion and recover disk space.
  • Rust-based Implementation: The core cleanup logic is implemented in a new Rust binary, forge-cleanup, ensuring efficient and robust file system and database operations.
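The orphan-detection step described in the highlights above amounts to a set difference between worktree directories found on disk and the paths referenced by active task attempts. A minimal std-only sketch (function and variable names here are illustrative, not the PR's actual code):

```rust
use std::collections::HashSet;
use std::path::PathBuf;

/// Illustrative sketch: worktrees present on disk but absent from the set of
/// paths referenced by active task_attempts are orphans.
fn find_orphans(on_disk: Vec<PathBuf>, in_db: &HashSet<PathBuf>) -> Vec<PathBuf> {
    on_disk.into_iter().filter(|p| !in_db.contains(p)).collect()
}

fn main() {
    let in_db: HashSet<PathBuf> = [PathBuf::from("/tmp/worktrees/a")].into_iter().collect();
    let on_disk = vec![
        PathBuf::from("/tmp/worktrees/a"),
        PathBuf::from("/tmp/worktrees/b"),
    ];
    // Only /tmp/worktrees/b is unreferenced, so it is the lone orphan.
    assert_eq!(find_orphans(on_disk, &in_db), vec![PathBuf::from("/tmp/worktrees/b")]);
}
```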
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

  • Code Review — /gemini review: Performs a code review for the current pull request in its current state.
  • Pull Request Summary — /gemini summary: Provides a summary of the current pull request in its current state.
  • Comment — @gemini-code-assist: Responds in comments when explicitly tagged, both in pull request comments and review comments.
  • Help — /gemini help: Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a forge cleanup command to remove stale worktrees, which is a valuable addition for managing disk space. The implementation is well-structured, but there are several areas for improvement regarding robustness, error handling, and code duplication. I've identified some high-severity issues related to silently ignoring file system errors and duplicating database connection logic, which could lead to incorrect behavior or maintenance challenges. I've also included some medium-severity suggestions to improve code clarity, design, and the robustness of argument parsing.

Comment on lines +18 to +39
fn get_database_url() -> String {
    if let Ok(db_url) = std::env::var("DATABASE_URL") {
        if db_url.starts_with("sqlite://") {
            let path_part = db_url.strip_prefix("sqlite://").unwrap();
            if PathBuf::from(path_part).is_absolute() {
                db_url
            } else {
                let abs_path = std::env::current_dir()
                    .unwrap_or_else(|_| PathBuf::from("."))
                    .join(path_part);
                format!("sqlite://{}", abs_path.to_string_lossy())
            }
        } else {
            db_url
        }
    } else {
        format!(
            "sqlite://{}",
            utils::assets::asset_dir().join("db.sqlite").to_string_lossy()
        )
    }
}

high

The comment on line 17 indicates that this function's logic is duplicated from the db crate. This violates the DRY (Don't Repeat Yourself) principle and can lead to maintenance issues if the logic needs to be updated in the future, as it would have to be changed in multiple places. It would be better to expose this functionality from a shared utility crate or from the db crate itself so it can be reused here.
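One way to de-duplicate this (a sketch under the assumption that a shared crate such as utils is an acceptable home; the function name and signature are hypothetical) is to extract the resolution logic into a pure function that both the db crate and this binary can call, with the environment and filesystem inputs passed in so it stays unit-testable:

```rust
use std::path::Path;

/// Hypothetical shared helper: resolve a sqlite database URL, absolutizing
/// relative paths against `cwd` and falling back to `default_db` when no
/// DATABASE_URL is set. Both db::DBService and the cleanup binary could reuse it.
fn resolve_database_url(env_url: Option<String>, cwd: &Path, default_db: &Path) -> String {
    match env_url {
        Some(url) if url.starts_with("sqlite://") => {
            let path_part = url.strip_prefix("sqlite://").unwrap();
            if Path::new(path_part).is_absolute() {
                url
            } else {
                format!("sqlite://{}", cwd.join(path_part).to_string_lossy())
            }
        }
        // Non-sqlite URLs pass through untouched.
        Some(url) => url,
        None => format!("sqlite://{}", default_db.to_string_lossy()),
    }
}

fn main() {
    assert_eq!(
        resolve_database_url(Some("sqlite://rel/db.sqlite".into()), Path::new("/work"), Path::new("/d")),
        "sqlite:///work/rel/db.sqlite"
    );
}
```

The callers would then only differ in how they obtain the inputs (std::env::var, current_dir, asset_dir).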

Comment on lines 71 to 78
for entry in entries.flatten() {
    let path = entry.path();
    if path.is_dir() {
        size += dir_size(&path);
    } else if let Ok(metadata) = path.metadata() {
        size += metadata.len();
    }
}

high

Using flatten() on the iterator from read_dir will silently ignore any I/O errors that occur while iterating over directory entries. This could lead to dir_size returning an incorrect (smaller) size, which would be misleading to the user. It would be more robust to handle the Result inside the loop and report any errors.

        for entry_result in entries {
            match entry_result {
                Ok(entry) => {
                    let path = entry.path();
                    if path.is_dir() {
                        size += dir_size(&path);
                    } else if let Ok(metadata) = path.metadata() {
                        size += metadata.len();
                    }
                }
                Err(e) => {
                    eprintln!("Warning: Failed to read an entry in directory {}: {}", path.display(), e);
                }
            }
        }

Comment on lines 94 to 109
for entry in entries.flatten() {
    let path = entry.path();
    if path.is_dir() {
        let name = path
            .file_name()
            .and_then(|n| n.to_str())
            .unwrap_or("unknown")
            .to_string();
        let size_bytes = dir_size(&path);
        worktrees.push(WorktreeInfo {
            path,
            name,
            size_bytes,
        });
    }
}

high

Using flatten() here will silently ignore errors when reading directory entries. This could cause the cleanup tool to miss some worktree directories entirely, leaving them on disk without the user's knowledge. It's better to explicitly handle the Result from the iterator to ensure all potential worktrees are scanned or that errors are reported.

    for entry_result in entries {
        match entry_result {
            Ok(entry) => {
                let path = entry.path();
                if path.is_dir() {
                    let name = path
                        .file_name()
                        .map(|n| n.to_string_lossy().into_owned())
                        .unwrap_or_else(|| "unknown".to_string());
                    let size_bytes = dir_size(&path);
                    worktrees.push(WorktreeInfo {
                        path,
                        name,
                        size_bytes,
                    });
                }
            }
            Err(e) => {
                eprintln!(
                    "Warning: Failed to read a worktree directory entry in {}: {}",
                    base_dir.display(),
                    e
                );
            }
        }
    }

Comment on lines 50 to 64
fn size_human(&self) -> String {
    const KB: u64 = 1024;
    const MB: u64 = 1024 * KB;
    const GB: u64 = 1024 * MB;

    if self.size_bytes >= GB {
        format!("{:.2} GB", self.size_bytes as f64 / GB as f64)
    } else if self.size_bytes >= MB {
        format!("{:.2} MB", self.size_bytes as f64 / MB as f64)
    } else if self.size_bytes >= KB {
        format!("{:.2} KB", self.size_bytes as f64 / KB as f64)
    } else {
        format!("{} bytes", self.size_bytes)
    }
}

medium

The size_human method is useful for formatting sizes not just for WorktreeInfo instances, but also for total sizes as seen in main. Creating dummy WorktreeInfo instances just to call this method is awkward. Consider refactoring this into a static method and a separate instance method. This will allow you to call WorktreeInfo::format_size(total_size) in main and improve code clarity.

    fn format_size(size_bytes: u64) -> String {
        const KB: u64 = 1024;
        const MB: u64 = 1024 * KB;
        const GB: u64 = 1024 * MB;

        if size_bytes >= GB {
            format!("{:.2} GB", size_bytes as f64 / GB as f64)
        } else if size_bytes >= MB {
            format!("{:.2} MB", size_bytes as f64 / MB as f64)
        } else if size_bytes >= KB {
            format!("{:.2} KB", size_bytes as f64 / KB as f64)
        } else {
            format!("{} bytes", size_bytes)
        }
    }

    fn size_human(&self) -> String {
        Self::format_size(self.size_bytes)
    }

Comment on lines 97 to 101
let name = path
.file_name()
.and_then(|n| n.to_str())
.unwrap_or("unknown")
.to_string();

medium

Using unwrap_or("unknown") can be problematic if multiple directories have non-UTF-8 names, as they would all be indistinguishable in the output. Using to_string_lossy() would provide a more informative, albeit potentially garbled, name by replacing invalid UTF-8 sequences with the replacement character (U+FFFD, �).

            let name = path
                .file_name()
                .map(|n| n.to_string_lossy().into_owned())
                .unwrap_or_else(|| "unknown".to_string());

Comment on lines +163 to +165
let args: Vec<String> = std::env::args().collect();
let force = args.iter().any(|a| a == "--force" || a == "-f");
let help = args.iter().any(|a| a == "--help" || a == "-h");

medium

The manual argument parsing is simple but can be fragile and hard to extend. Using a dedicated CLI argument parsing crate like clap is idiomatic in Rust and would make the code more robust. clap can also automatically generate the help message, removing the need for the manual print_usage function.

const platformDir = getPlatformDir();
const extractDir = path.join(__dirname, "..", "dist", platformDir);
const isMcpMode = process.argv.includes("--mcp");
const isCleanupMode = process.argv[2] === "cleanup";

medium

Hardcoding the argument index with process.argv[2] is fragile. If the way the script is called changes (e.g., with additional node options), this will break. A more robust approach would be to use a simple argument parsing library (like minimist or yargs-parser) or at least search for the cleanup command in the arguments array instead of assuming its position.

@jmanhype
Author

/gemini review


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a forge cleanup command to remove stale worktrees, which is a valuable addition for managing disk space. The implementation is well-contained in a new Rust binary. The core logic for identifying and removing orphaned worktrees is sound. My review focuses on improving the robustness, maintainability, and performance of the new cleanup.rs binary. Specifically, I've suggested refactoring duplicated code, using a dedicated library for argument parsing, optimizing directory size calculation, and improving error reporting for a cleaner user experience. The changes to the build scripts and CLI wrapper are correct and integrate the new command effectively.

Comment on lines +19 to +20
/// TODO: This logic is duplicated from the db crate (db::DBService::get_database_url).
/// Consider exposing it from a shared utility crate to avoid maintenance issues.

medium

The TODO comment correctly identifies that this logic is duplicated from the db crate. To improve maintainability and ensure consistency, this function should be moved to a shared utility crate (e.g., utils) and reused here and in db::DBService. This will prevent the implementations from diverging in the future and centralizes database connection logic.

Comment on lines +77 to +101
fn dir_size(path: &PathBuf) -> u64 {
    let mut size = 0u64;
    if let Ok(entries) = std::fs::read_dir(path) {
        for entry_result in entries {
            match entry_result {
                Ok(entry) => {
                    let entry_path = entry.path();
                    if entry_path.is_dir() {
                        size += dir_size(&entry_path);
                    } else if let Ok(metadata) = entry_path.metadata() {
                        size += metadata.len();
                    }
                }
                Err(e) => {
                    eprintln!(
                        "Warning: Failed to read entry in {}: {}",
                        path.display(),
                        e
                    );
                }
            }
        }
    }
    size
}

medium

The recursive implementation of dir_size is simple, but it can be slow for large directories as it's single-threaded and performs many separate filesystem syscalls. For better performance, consider using a crate like jwalk, which can traverse the directory tree in parallel and is often more optimized. This would make the scanning phase significantly faster, especially for worktrees with many files.

Comment on lines +90 to +96
Err(e) => {
eprintln!(
"Warning: Failed to read entry in {}: {}",
path.display(),
e
);
}

medium

Printing warnings directly to stderr within dir_size can interleave error messages with other program output, making it harder to parse, especially if this tool is used in scripts. A more robust approach would be to have scan_worktree_directory return both the list of worktrees and a list of any non-critical errors encountered during the scan. The main function could then present these warnings to the user in a dedicated section, providing a cleaner user experience and better separation of concerns.
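One shape this could take (a sketch with hypothetical names, not the reviewed code) is to thread a warnings vector through the traversal and let the caller decide how to present it:

```rust
use std::path::Path;

/// Hypothetical variant of dir_size that accumulates non-critical errors
/// instead of printing them, so main can report all warnings in one place.
fn dir_size_collect(path: &Path, warnings: &mut Vec<String>) -> u64 {
    let mut size = 0u64;
    match std::fs::read_dir(path) {
        Ok(entries) => {
            for entry_result in entries {
                match entry_result {
                    Ok(entry) => {
                        let entry_path = entry.path();
                        if entry_path.is_dir() {
                            size += dir_size_collect(&entry_path, warnings);
                        } else if let Ok(metadata) = entry_path.metadata() {
                            size += metadata.len();
                        }
                    }
                    Err(e) => warnings.push(format!("failed to read entry in {}: {}", path.display(), e)),
                }
            }
        }
        Err(e) => warnings.push(format!("failed to read {}: {}", path.display(), e)),
    }
    size
}

fn main() {
    // Demo against a scratch directory under the system temp dir.
    let dir = std::env::temp_dir().join("forge_cleanup_demo");
    std::fs::create_dir_all(&dir).unwrap();
    std::fs::write(dir.join("f.txt"), b"hello").unwrap();
    let mut warnings = Vec::new();
    let size = dir_size_collect(&dir, &mut warnings);
    assert!(size >= 5);
    assert!(warnings.is_empty());
}
```

The caller could then print a dedicated "Warnings" section after the normal output, keeping stdout machine-parseable.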

Comment on lines +193 to +195
let args: Vec<String> = std::env::args().collect();
let force = args.iter().any(|a| a == "--force" || a == "-f");
let help = args.iter().any(|a| a == "--help" || a == "-h");

medium

The current manual argument parsing is basic and can be brittle. It won't handle argument variations like --force=true or combined short flags (e.g., -fh). Using a dedicated argument parsing library like clap would make the CLI more robust, user-friendly, and easier to extend. clap can also auto-generate a more detailed help message from your argument definitions.
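For illustration only (std-only, with hypothetical names; clap remains the idiomatic fix), a hand-rolled parser that also accepts combined short flags like -fh might look like:

```rust
/// Hypothetical std-only parser showing what the current any()-based check
/// misses: combined short flags such as `-fh`. clap handles all of this
/// automatically and is the better long-term choice.
#[derive(Debug, Default, PartialEq)]
struct Opts {
    force: bool,
    help: bool,
}

fn parse_args<I: IntoIterator<Item = String>>(args: I) -> Opts {
    let mut opts = Opts::default();
    for arg in args {
        match arg.as_str() {
            "--force" => opts.force = true,
            "--help" => opts.help = true,
            // Combined short flags: walk each character after the leading '-'.
            s if s.starts_with('-') && !s.starts_with("--") => {
                for c in s.chars().skip(1) {
                    match c {
                        'f' => opts.force = true,
                        'h' => opts.help = true,
                        _ => {}
                    }
                }
            }
            _ => {}
        }
    }
    opts
}

fn main() {
    assert_eq!(parse_args(["-fh".to_string()]), Opts { force: true, help: true });
    assert_eq!(parse_args(["--force".to_string()]), Opts { force: true, help: false });
}
```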

@jmanhype
Author

@gemini-code-assist give me a code review

@jmanhype
Author

@gemini-code-assist did we address all of the high-severity issues you pointed out from earlier?


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a forge cleanup command to remove stale worktrees, which is a great addition for managing disk space. The implementation includes a new Rust binary for the cleanup logic, updates to the npx CLI to route the new subcommand, and modifications to the build script to include the new binary.

My review focuses on improving the robustness and maintainability of the new functionality. Key suggestions include:

  • In forge-app/src/bin/cleanup.rs:
    • Refactoring duplicated code for getting the database URL.
    • Improving the efficiency of directory size calculation.
    • Using a dedicated CLI parsing library for more robust argument handling.
  • In npx-cli/bin/cli.js:
    • Strengthening the subcommand detection logic to avoid incorrect command execution.

Overall, the feature is well-implemented, and these changes will help make it more robust and easier to maintain in the long run.

const extractDir = path.join(__dirname, "..", "dist", platformDir);
const isMcpMode = process.argv.includes("--mcp");
// Search for "cleanup" subcommand in args (more robust than hardcoding index)
const isCleanupMode = process.argv.slice(2).some(arg => arg === "cleanup");

high

The current logic to detect cleanup mode is a bit too broad as it checks for the cleanup string anywhere in the arguments. This could lead to incorrectly triggering cleanup mode if another command takes cleanup as an argument value (e.g., forge some-command --name cleanup). A more robust approach is to treat cleanup as a subcommand, which should be the first argument after the script name.

Suggested change
const isCleanupMode = process.argv.slice(2).some(arg => arg === "cleanup");
const isCleanupMode = process.argv[2] === "cleanup";

if (isCleanupMode) {
// Cleanup mode - run forge-cleanup binary with remaining args
// Filter out "cleanup" command and pass remaining args to the binary
const cleanupArgs = process.argv.slice(2).filter(arg => arg !== "cleanup");

high

To complement the change of treating cleanup as a subcommand, the arguments passed to the forge-cleanup binary should be everything that follows the cleanup argument. The current filtering approach is fragile for the same reasons as the detection logic.

Suggested change
const cleanupArgs = process.argv.slice(2).filter(arg => arg !== "cleanup");
const cleanupArgs = process.argv.slice(3);

Comment on lines +19 to +21
/// TODO: This logic is duplicated from the db crate (db::DBService::get_database_url).
/// Consider exposing it from a shared utility crate to avoid maintenance issues.
fn get_database_url() -> String {

medium

As noted in the TODO, this logic for determining the database URL is duplicated. To improve maintainability and avoid potential inconsistencies, this function should be moved to a shared utility crate and reused here and in the db crate.

Comment on lines +77 to +101
fn dir_size(path: &PathBuf) -> u64 {
    let mut size = 0u64;
    if let Ok(entries) = std::fs::read_dir(path) {
        for entry_result in entries {
            match entry_result {
                Ok(entry) => {
                    let entry_path = entry.path();
                    if entry_path.is_dir() {
                        size += dir_size(&entry_path);
                    } else if let Ok(metadata) = entry_path.metadata() {
                        size += metadata.len();
                    }
                }
                Err(e) => {
                    eprintln!(
                        "Warning: Failed to read entry in {}: {}",
                        path.display(),
                        e
                    );
                }
            }
        }
    }
    size
}

medium

This recursive implementation for calculating directory size can be inefficient for directories with many files and deep nesting. It may also lead to a stack overflow on very deep directory structures. Consider using the walkdir crate, which is already an indirect dependency, for an iterative and more robust traversal. This can also be combined with rayon for parallel processing if performance becomes a concern.

For example, using walkdir would simplify the implementation to something like this:

use walkdir::WalkDir;

fn dir_size(path: &PathBuf) -> u64 {
    WalkDir::new(path)
        .into_iter()
        .filter_map(|entry| entry.ok())
        .filter_map(|entry| entry.metadata().ok())
        .filter(|metadata| metadata.is_file())
        .map(|metadata| metadata.len())
        .sum()
}

You would need to add walkdir to the [dependencies] in forge-app/Cargo.toml.

Comment on lines +192 to +195
// Parse arguments - search through all args for flags
let args: Vec<String> = std::env::args().collect();
let force = args.iter().any(|a| a == "--force" || a == "-f");
let help = args.iter().any(|a| a == "--help" || a == "-h");

medium

The manual argument parsing is simple but not very robust or extensible. Using a dedicated CLI argument parsing crate like clap would be more idiomatic for a Rust application. clap can automatically handle things like --help generation, subcommands, and validation, making the CLI more user-friendly and easier to maintain.

@gemini-code-assist
Contributor

Hello @jmanhype, I haven't provided a detailed code review with severity issues yet. My previous comment was a summary of the changes in this pull request.

If you'd like me to perform a full code review, please use the command /gemini review in a new comment. I'll then analyze the changes and provide feedback, including any high-severity issues I might find.

claude and others added 5 commits November 28, 2025 12:29
Split the monolithic 1680-line router.rs into modular route files:
- routes/state.rs: ForgeAppState and related implementations
- routes/tasks.rs: Task creation, listing, and WebSocket streaming
- routes/attempts.rs: Task attempt handlers with forge overrides
- routes/config.rs: Global forge configuration endpoints
- routes/projects.rs: Project-specific settings and profiles
- routes/omni.rs: Omni service status and notifications
- routes/agents.rs: Forge agents and neuron management
- routes/frontend.rs: Static file serving and documentation

The main router.rs is now a thin ~145-line composition layer that
imports and composes these modules. Also added build.rs to create
dummy frontend/dist directory for compilation.

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
- Enhanced ThemeProvider with localStorage persistence for immediate theme
  application on page load
- Added resolvedTheme property to handle system preference resolution
- Created ThemeToggle dropdown component with Light/Dark/System options
- Integrated ThemeToggle into navbar for easy access
- Added listener for system theme changes when in SYSTEM mode

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Implements a CLI command to clean up orphaned worktrees that are no
longer referenced by active task attempts in the database.

Features:
- Scans temp directory for worktree directories
- Queries database for active task_attempts with worktree paths
- Identifies orphans (on disk but not in DB)
- Shows orphan list with directory sizes
- --force flag to delete orphans and show space recovered

Usage:
  forge cleanup         # Dry run - shows what would be deleted
  forge cleanup --force # Actually delete orphan worktrees

Changes:
- forge-app/src/bin/cleanup.rs: New cleanup binary
- forge-app/Cargo.toml: Register new binary
- npx-cli/bin/cli.js: Route cleanup subcommand
- scripts/build/build.sh: Build and package cleanup binary

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
- Replace flatten() with explicit error handling for directory reads
- Refactor size_human() to static format_size() method
- Use to_string_lossy() instead of unwrap_or("unknown") for names
- Add TODO comment about duplicated database URL logic
- Make CLI argument parsing more robust by searching args array

Addresses high and medium severity feedback from code review.

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
@jmanhype jmanhype force-pushed the claude/forge-cleanup-command-01PQxPGPZEvxqP5E8TeEPVtd branch from 4aed819 to 17b4532 Compare November 28, 2025 18:41
claude and others added 15 commits November 28, 2025 12:54
Adds github_issue_id field to tasks to enforce the "No Wish Without Issue"
rule from AGENTS.md. This links tasks to their originating GitHub issues
for traceability.

Changes:
- Add runtime migration to add github_issue_id column to tasks table
- Update ForgeCreateTask struct with github_issue_id field
- Update forge_create_task to persist github_issue_id
- Update forge_create_task_and_start with ForgeCreateAndStartTaskRequest
- Update OpenAPI spec with github_issue_id documentation

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Wrap frequently re-rendered list item components with React.memo:
- TaskCard (renders 10-50+ times in Kanban board)
- DisplayConversationEntry (renders for every message in conversation)
- DiffCard (renders once per changed file)
- ProjectCard (renders in project grid)
- TaskRelationshipCard (renders in relationship lists)

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Replace lodash imports (isEqual, cloneDeep, merge) with native JavaScript
alternatives in ProjectSettings.tsx and GeneralSettings.tsx:

- Add frontend/src/utils/object.ts with native implementations:
  - isEqual: deep equality comparison
  - deepClone: uses structuredClone API
  - deepMerge: custom deep merge implementation
- Remove lodash from root package.json
- Remove @types/lodash from frontend/package.json

This reduces the bundle size by ~40KB.

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
- Add WeakMap tracking to prevent stack overflow on circular objects
- Fix Set deep equality to use deep comparison instead of shallow
- Addresses gemini-code-assist critical safety issues

gemini-code-assist review identified these critical bugs:
1. Missing circular reference handling could crash app
2. Set comparison used shallow equality instead of deep

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Fixes gemini-code-assist correctness issue where forge_create_task_and_start
returned stale task data after updating github_issue_id field.

- Change task to mut and re-fetch after UPDATE
- Ensures returned task object has the updated github_issue_id

gemini-code-assist review identified: API endpoint returns stale data after update

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Adds typeof window check before accessing window.matchMedia to prevent
crashes during server-side rendering.

Fixes gemini-code-assist critical SSR safety issue that could crash
the application during SSR when theme is SYSTEM mode.

- Check if window exists before matchMedia call
- Default to 'light' theme during SSR

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
Fixed two compilation issues discovered during dogfood testing:

1. Wrong API method (lines 111, 387):
   - Changed Task::get_one() to Task::find_by_id()
   - Added proper Option handling with ok_or()

2. Missing struct field (lines 210, 526):
   - Added attempt_count field to TaskWithAttemptStatus initializers
   - Line 210: attempt_count = 0 (list tasks)
   - Line 526: attempt_count = 1 (just created first attempt)

These fixes ensure the code compiles and runs correctly with the
merged PR automagik-dev#280 (GitHub Issue Tracking) changes.

Related: Issue automagik-dev#289 (Task serialization follow-up)
Tested: Dogfood testing on testing/all-gemini-fixes-merged branch

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>

Co-authored-by: Automagik Genie 🧞 <genie@namastex.ai>
@namastex888
Contributor

@codex review this pr


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".

Comment on lines +430 to +434
sqlx::query(
    "INSERT INTO forge_task_attempt_config (task_attempt_id, use_worktree) VALUES (?, ?)"
)
.bind(task_attempt_id)
.bind(true)


P1: Restore non-worktree create-and-start handling

The create-and-start endpoint now hardcodes use_worktree to true, and the request struct no longer accepts a flag to disable it. Frontend flows such as useFollowUpSend still call /api/tasks/create-and-start with use_worktree: false for Master Genie chat, but those requests will be ignored: every task attempt will be created in a worktree instead of on the base branch, and the agent-task bookkeeping (marking tasks as agent tasks / updating forge_agents) is never triggered. This regresses the Genie chat flow and makes those tasks show up on the kanban board and run in isolated worktrees when they previously ran in place.

Useful? React with 👍 / 👎.

Comment on lines +210 to +216
items.push(TaskWithAttemptStatus {
    task,
    has_in_progress_attempt,
    has_merged_attempt: false,
    last_attempt_failed,
    executor,
    attempt_count: 0, // TODO: Query actual attempt count if needed


P2: Return real attempt_count in task list responses

Task listings now populate attempt_count with a constant zero instead of the actual number of attempts. Hooks such as useTaskAttemptsWithLiveStatus rely on attempt_count changes to invalidate the cached attempts list; with this always zero, creating new attempts will not trigger a refresh, leaving UIs with stale attempt data until a manual reload. The previous router queried the count from the database, so this is a regression.
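One way to restore a real count without issuing a query per task is a single grouped query (`SELECT task_id, COUNT(*) ... GROUP BY task_id`) whose rows are folded into a map keyed by task id. A stdlib-only sketch of that fold, with illustrative names rather than the project's actual types:

```rust
use std::collections::HashMap;

// Fold attempt rows (represented here by their task ids) into a per-task
// count, mirroring what a GROUP BY query would return.
fn attempt_counts(attempt_task_ids: &[u32]) -> HashMap<u32, i64> {
    let mut counts: HashMap<u32, i64> = HashMap::new();
    for id in attempt_task_ids {
        *counts.entry(*id).or_insert(0) += 1;
    }
    counts
}

fn main() {
    // Task 7 has two attempts, task 9 has one; unseen tasks default to 0.
    let counts = attempt_counts(&[7, 9, 7]);
    let task_7 = counts.get(&7).copied().unwrap_or(0);
    let task_42 = counts.get(&42).copied().unwrap_or(0);
    println!("{} {}", task_7, task_42);
}
```

Populating `attempt_count` from such a map keeps the listing to two queries total while giving hooks like useTaskAttemptsWithLiveStatus a value that actually changes when attempts are created.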


Comment on lines 106 to 129
// Forge override: config router with increased body limit for /profiles
router = router.merge(forge_config_router().with_state::<ForgeAppState>(dep_clone.clone()));
router = router.merge(config::forge_config_router().with_state::<ForgeAppState>(dep_clone.clone()));
router = router.merge(containers::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router = router.merge(projects::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router.merge(upstream_projects::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router = router.merge(drafts::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));

// Build custom tasks router with forge override (already typed as ForgeAppState)
let tasks_router_with_override = build_tasks_router_with_forge_override(deployment);
// Build custom tasks router with forge override
let tasks_router_with_override = tasks::build_tasks_router_with_forge_override(deployment);
router = router.merge(tasks_router_with_override);

// Build custom task_attempts router with forge override (already typed as ForgeAppState)
// Build custom task_attempts router with forge override
let task_attempts_router_with_override =
    build_task_attempts_router_with_forge_override(deployment);
    attempts::build_task_attempts_router_with_forge_override(deployment);
router = router.merge(task_attempts_router_with_override);
router = router.merge(execution_processes::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router = router.merge(auth::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router = router.merge(tags::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));
router = router.merge(filesystem::router().with_state::<ForgeAppState>(dep_clone.clone()));
router = router.merge(events::router(deployment).with_state::<ForgeAppState>(dep_clone.clone()));


P1: Re-add /api/tags routing to upstream API

The upstream API router no longer merges the tags::router, so the /api/tags endpoints are no longer exposed even though the frontend still calls them (e.g., frontend/src/lib/api.ts uses /api/tags). As a result tag CRUD operations will fail after this refactor. The previous router included the tags router in this merge list.
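To illustrate why the missing merge produces 404s, here is a toy model of router composition: plain vectors of paths stand in for the real axum routers, and nothing below is the project's actual API.

```rust
// Toy stand-in for router composition: each sub-router contributes its paths,
// and a request only succeeds if some merged sub-router registered its path.
fn routes(paths: &[&str]) -> Vec<String> {
    paths.iter().map(|p| p.to_string()).collect()
}

fn merged_paths(routers: &[Vec<String>]) -> Vec<String> {
    routers.iter().flatten().cloned().collect()
}

fn main() {
    let tasks = routes(&["/api/tasks", "/api/tasks/create-and-start"]);
    let tags = routes(&["/api/tags", "/api/tags/:id"]);
    // Forgetting to merge `tags` means /api/tags has no handler -> 404.
    let without_tags = merged_paths(&[tasks.clone()]);
    let with_tags = merged_paths(&[tasks, tags]);
    println!(
        "{} {}",
        without_tags.iter().any(|p| p == "/api/tags"),
        with_tags.iter().any(|p| p == "/api/tags")
    );
}
```

The fix the review asks for is the one-line equivalent of adding `tags` back to the merge list, as the previous router did.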


@jmanhype
Author

@codex review this pr


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines +22 to +26
DeploymentImpl,
routes::{
    self as upstream, approvals, auth, containers, drafts, events,
    execution_processes, filesystem, images, projects as upstream_projects,
},


P1: Re-expose /api/tags routes

The upstream API router no longer imports or merges the server::routes::tags router, so all /api/tags endpoints are dropped. Frontend flows such as tagsApi in TagManager, TagEditDialog, and the file-search textarea still call /api/tags for listing and editing tags, which will now return 404s because the router is never registered. Please add the tags router back into upstream_api_router so tag management remains functional.

