Support for unpack #226
Comments
Hi, @tombelieber! I personally use Claude and get artifacts as individual files, so I can just copy and paste.
From what I understand, Cursor does something similar. It asks the LLM for diffs and then runs those diffs through a llama3.1-70b model to reconstruct the full files (someone can confirm). You should look into that. Unpacking directly might not be possible otherwise, as @yamadashy said.
@yoloyash I think you mean only outputting code changes, like this project: https://github.com/mckaywrigley/o1-xml-parser

By only outputting changes like this:

<code_changes>
...changes go here
</code_changes>

people can build a parser for it; the benefit (and really the only one) is massively reducing output tokens.
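As a rough sketch of the kind of parser this would need (illustrative only, not code from repomix or o1-xml-parser; the file, file_path, and file_code tag names are assumptions about what the response format might look like):

```typescript
// Sketch of a parser for an LLM response that wraps edits in <code_changes>.
// The inner tag names (<file>, <file_path>, <file_code>) are assumed here
// purely for illustration; a real format would need to be agreed on first.
interface FileEdit {
  path: string;
  code: string;
}

function parseCodeChanges(response: string): FileEdit[] {
  const block = response.match(/<code_changes>([\s\S]*?)<\/code_changes>/);
  if (!block) return [];

  const edits: FileEdit[] = [];
  for (const [, fileXml] of block[1].matchAll(/<file>([\s\S]*?)<\/file>/g)) {
    const path = fileXml.match(/<file_path>([\s\S]*?)<\/file_path>/)?.[1]?.trim();
    const code = fileXml.match(/<file_code>([\s\S]*?)<\/file_code>/)?.[1];
    if (path && code !== undefined) edits.push({ path, code });
  }
  return edits;
}
```

Each returned entry could then be written back to the matching file on disk, which is what keeps the model's output down to just the changed files.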
@yamadashy it would be super cool if we could do this:
We could literally copy-paste https://github.com/mckaywrigley/o1-xml-parser 's prompt: https://github.com/mckaywrigley/o1-xml-parser/blob/main/README.md?plain=1#L33-L81

The XML Prompt

You are an expert software engineer. You are tasked with following my instructions. Use the included project instructions as a general guide. You will respond with 2 sections: a summary section and an XML section. Here are some notes on how you should respond in the summary section:

Here are some notes on how you should respond in the XML section:

Here is how you should structure the XML:

<code_changes>

So the XML section will be: __XML HERE__
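For context, the payload that prompt asks the model to emit looks roughly like the string below. This is an approximation from memory of the o1-xml-parser README, not the canonical format, and the example file and its contents are made up; check the linked file before relying on the tag names.

```typescript
// Illustrative only: an approximation of the <code_changes> payload the
// prompt above asks for. Tag names are assumptions; verify them against
// the o1-xml-parser README.
const exampleCodeChanges: string = `
<code_changes>
  <changed_files>
    <file>
      <file_summary>Add a hello-world entry point</file_summary>
      <file_operation>CREATE</file_operation>
      <file_path>src/hello.ts</file_path>
      <file_code><![CDATA[
console.log("hello");
]]></file_code>
    </file>
  </changed_files>
</code_changes>
`;
```

A response shaped like this would yield one edit per file entry when run through a small parser like the sketch a few comments up.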
Since this tool packs a repo (codebase) into a prompt, after asking ChatGPT or another LLM to review it, the LLM should be able to edit the repomix-output.txt, and repomix should support an unpack step to apply the LLM-edited changes back to the repo.
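A minimal sketch of what such an unpack step could look like, assuming the packed output introduces each file with a header made of a line of equals signs, a "File: path" line, and another line of equals signs (the real repomix separator may differ, and no unpack command exists today):

```typescript
// Hypothetical unpack sketch (not an existing repomix feature).
// Assumes each file in the packed output is introduced by a header like:
//   ================
//   File: src/index.ts
//   ================
// followed by the file contents; the real delimiter may differ.
import * as fs from "node:fs";
import * as path from "node:path";

function unpack(outputFile: string, targetDir: string): void {
  const text = fs.readFileSync(outputFile, "utf8");
  // Splitting on the header keeps the captured paths in the result array:
  // [preamble, path1, body1, path2, body2, ...]
  const parts = text.split(/^=+\r?\nFile: (.+?)\r?\n=+\r?\n/m);
  for (let i = 1; i < parts.length; i += 2) {
    const filePath = parts[i].trim();
    const body = parts[i + 1] ?? "";
    const dest = path.join(targetDir, filePath);
    fs.mkdirSync(path.dirname(dest), { recursive: true });
    fs.writeFileSync(dest, body);
  }
}

// Illustrative usage: apply LLM edits from the packed file back onto the repo.
// unpack("repomix-output.txt", ".");
```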