Right now, if we encounter more recent changes, we fail completely and nuke the repo:

llm4papers/llm4papers/models.py, lines 134 to 148 in 15d5a7f:

```python
    # TODO: this just means there was nothing to pop, but
    # we should handle this more gracefully.
    logger.debug(f"Nothing to pop: {e}")
    pass
except Exception as e:
    logger.error(
        f"Error pulling from repo {self._reposlug}: {e}. "
        "Falling back on DESTRUCTION!!!"
    )
```
This costs us an extra round-trip to the LLM or, at worst, an endless loop of human edits overruling the AI's opportunistic editing. I think this is just because I don't know how to use the `git` Python library that well; surely there must be a way to stash the local edits, pull, and 3-way merge them back in (see the sketch below).
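Something along these lines might work. This is only a minimal sketch, assuming GitPython (the `git` import); `pull_with_stash` and `repo_path` are hypothetical stand-ins, not anything that currently exists in models.py:

```python
from git import Repo, GitCommandError

def pull_with_stash(repo_path: str) -> bool:
    """Pull upstream changes without nuking local edits.

    Local (human or AI) edits are stashed, the pull runs, and `stash pop`
    replays them with a 3-way merge. Returns False if a conflict is left
    for a human to resolve. Sketch only; error handling is minimal.
    """
    repo = Repo(repo_path)
    stashed = False
    if repo.is_dirty(untracked_files=True):
        # Set the in-progress edits aside instead of discarding them.
        repo.git.stash("push", "--include-untracked", "-m", "llm4papers-autostash")
        stashed = True

    repo.remotes.origin.pull()

    if stashed:
        try:
            # `git stash pop` replays the saved edits with a 3-way merge.
            repo.git.stash("pop")
        except GitCommandError:
            # Conflict: the stash entry is preserved, so nothing is destroyed.
            return False
    return True
```

If the pop conflicts, git keeps the stash entry, so the worst case is a manual merge rather than a destroyed checkout.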
Quick footnote: from the perspective of #10, note that Overleaf internally appears to handle "live" edits using operational transforms. The relevant source line might be this one?
Upshot is that their internals allow them to gracefully handle rapid-fire edits by multiple users in the same line of a document. This gets around git-style merge errors.
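For intuition, the insert-vs-insert case of an operational transform fits in a few lines of Python. This is toy code only, nothing to do with Overleaf's actual implementation; the `Insert`/`transform` names are made up:

```python
from dataclasses import dataclass

@dataclass
class Insert:
    pos: int   # character offset within the line
    text: str

def transform(a: Insert, b: Insert) -> Insert:
    """Rewrite op `a` so it still applies correctly after `b` has been applied."""
    if b.pos <= a.pos:
        return Insert(a.pos + len(b.text), a.text)
    return a

def apply(line: str, op: Insert) -> str:
    return line[:op.pos] + op.text + line[op.pos:]

line = "The quick fox"
human = Insert(pos=10, text="brown ")   # human inserts "brown " before "fox"
ai = Insert(pos=13, text=" jumps")      # AI appends " jumps" concurrently
# Apply the human edit first, then the AI edit transformed against it:
merged = apply(apply(line, human), transform(ai, human))
assert merged == "The quick brown fox jumps"
```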
Unclear at this stage how much the long delay in LLM calls affects our strategy here. Git merge may still be the natural thing to use if the text is a few seconds old by the time the LLM is ready.
Especially if we can stream text from the model into the editor (which arrives at roughly the same order-of-magnitude speed as a human typing), I imagine that resolves merge issues entirely!