
Condenser for Browser Output Observations #6578

Open — wants to merge 28 commits into main

Conversation

adityasoni9998
Contributor

End-user friendly description of the problem this fixes or functionality that this introduces

  • Include this change in the Release Notes. If checked, you must provide an end-user friendly description for your change below
    Developed a condenser that lets the user keep only the most recent attention_window browser outputs in the LLM's context.

Give a summary of what the PR does, explaining any non-trivial design decisions
This PR adds the BrowserOutputCondenser class. It is useful for long trajectories involving (possibly screenshot-based) web navigation, helping avoid context-window-exceeded errors and control inference cost. Previously implemented condensers do not allow masking a specific type of observation; since browser observations are generally very large, this capability can be helpful.
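To illustrate the idea, here is a minimal, self-contained sketch of such a condenser. The `Observation` and `BrowserOutputObservation` classes below are hypothetical stand-ins for the project's real event types, and the method name `condense` is an assumption; the actual PR integrates with the framework's own event hierarchy and condenser interface.

```python
from dataclasses import dataclass


# Hypothetical stand-ins for the framework's real observation classes.
@dataclass
class Observation:
    content: str


@dataclass
class BrowserOutputObservation(Observation):
    pass


class BrowserOutputCondenser:
    """Keep only the most recent `attention_window` browser outputs;
    replace older ones with a short placeholder so the rest of the
    trajectory stays intact."""

    def __init__(self, attention_window: int = 1):
        self.attention_window = attention_window

    def condense(self, events: list[Observation]) -> list[Observation]:
        results: list[Observation] = []
        seen = 0  # browser outputs encountered, scanning newest to oldest
        for event in reversed(events):
            if isinstance(event, BrowserOutputObservation):
                if seen < self.attention_window:
                    results.append(event)  # within the attention window
                else:
                    # Stale browser output: mask its (large) content.
                    results.append(Observation(content="<masked browser output>"))
                seen += 1
            else:
                results.append(event)  # non-browser events pass through
        return list(reversed(results))
```

With `attention_window=1`, only the latest browser observation survives; all earlier ones are masked while non-browser events are untouched.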


Link of any specific issues this addresses

@adityasoni9998 adityasoni9998 marked this pull request as ready for review February 2, 2025 02:51
@xingyaoww xingyaoww requested a review from csmith49 February 2, 2025 05:24