Description
When context usage exceeds a certain proportion of the model's maximum window, the quality of AI-generated responses drops rapidly. Since there is currently no automated mechanism to clean or reset the context, after several interactions the model can easily drift — ignoring constraints and producing "polluted" or inconsistent outputs.
This issue compounds over time: once the context becomes contaminated, the proportion of polluted content grows with each response, eventually leading to unusable results.
Proposal:
Introduce an automatic mechanism that triggers a compact or new operation once context usage surpasses a specified threshold. This would condense or truncate older context before degradation occurs, preventing the model from losing coherence or control.
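The trigger could be as simple as the sketch below. Everything here is illustrative and not tied to this project's actual API: the `estimate_tokens` heuristic, the `maybe_compact` helper, the 0.8 threshold, and the "keep the last 4 turns" policy are all assumptions.

```python
# Sketch of a threshold-triggered auto-compaction step.
# All names and constants are illustrative assumptions, not this project's API.

COMPACT_THRESHOLD = 0.8  # compact once 80% of the context window is used (assumption)

def estimate_tokens(messages):
    """Very rough token estimate: ~4 characters per token (assumption)."""
    return sum(len(m["content"]) for m in messages) // 4

def maybe_compact(messages, max_tokens, summarize):
    """If usage exceeds the threshold, replace older turns with a summary.

    `summarize` is a caller-supplied function that condenses a list of
    messages into a single summary message.
    """
    used = estimate_tokens(messages)
    if used < COMPACT_THRESHOLD * max_tokens:
        return messages  # still under the threshold; leave context untouched
    keep = 4  # always keep the most recent turns verbatim (assumption)
    older, recent = messages[:-keep], messages[-keep:]
    return [summarize(older)] + recent
```

The key point is that compaction happens before quality degrades, rather than the user noticing drift and intervening manually.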
Typical signs of such degradation include frequent repetition of phrases like “Perfect!”, premature celebration before tasks are completed, and excessive use of emojis — all indicators that pollution is growing. The model may also start violating explicit constraints (e.g., adding unwanted features).
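These signals could also feed the trigger directly. A minimal heuristic detector might look like the sketch below; the phrase list and the emoji threshold are illustrative assumptions, not validated indicators.

```python
import re

# Heuristic check for the degradation signals described above.
# The phrase list and emoji limit are illustrative assumptions.

CELEBRATION_PHRASES = ("perfect!", "great job", "all done")
EMOJI_RE = re.compile(r"[\U0001F300-\U0001FAFF]")  # common emoji range

def looks_degraded(response: str, emoji_limit: int = 3) -> bool:
    """Return True if a response shows signs of context degradation."""
    text = response.lower()
    if any(phrase in text for phrase in CELEBRATION_PHRASES):
        return True  # premature-celebration phrasing
    return len(EMOJI_RE.findall(response)) > emoji_limit  # emoji overuse
```

Such a detector would be noisy on its own, but combined with the usage threshold it could catch degradation that token counting alone misses.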
Benefit:
An automatic compaction or restart mechanism would help maintain output quality, prevent runaway behavior, and keep the model aligned with user instructions during longer sessions.