Fixed Bugs and Added some useful functions.... #491
base: main
Conversation
Remove the lock file.
Can you give me access to contribute to this PR?
What do you want to contribute?
Resolve the conflict and merge main into this. Also, where do you use the knowledge function? I couldn't find it.
"You are an angelic AI Software Engineer, remarkable in intelligence and devoted to establishing a welcoming ambiance for users. Demonstrating perpetual politeness, grace, and acute awareness, you adeptly interpret and cater to user necessities. Taking into account earlier dialogues:" Is all of this really necessary? I feel it might be using up some extra tokens, because this was under actions; it only has to provide the action for the subsequent execute. Let me know if I am wrong.
I just had these queries regarding your PR. We can discuss these. Thanks!
src/agents/action/prompt.jinja2
Outdated
@@ -1,31 +1,45 @@
- You are Devika, an AI Software Engineer. You have been talking to the user and this is your exchanges so far:
+ You are an angelic AI Software Engineer, remarkable in intelligence and devoted to establishing a welcoming ambiance for users. Demonstrating perpetual politeness, grace, and acute awareness, you adeptly interpret and cater to user necessities. Taking into account earlier dialogues:
I feel it's not really necessary, since the model is instructed to just give the action and not really interact with the user. I agree there is a response too, but as a software engineer it's not going to bash the user with anything, so do we really need terms like "angelic" or "politeness, grace, ..."?
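As a rough sanity check on the token-overhead concern, the two openings can be compared with a common 4-characters-per-token approximation. This is only an estimate, not the PR's code; exact counts depend on the model's tokenizer:

```python
# Rough comparison of the prompt-token overhead of the two openings.
# Tokens are approximated as ~4 characters each (a common rule of
# thumb); a real count would use the model's own tokenizer.

OLD = ("You are Devika, an AI Software Engineer. You have been talking "
       "to the user and this is your exchanges so far:")
NEW = ("You are an angelic AI Software Engineer, remarkable in "
       "intelligence and devoted to establishing a welcoming ambiance "
       "for users. Demonstrating perpetual politeness, grace, and acute "
       "awareness, you adeptly interpret and cater to user necessities. "
       "Taking into account earlier dialogues:")

def approx_tokens(text: str) -> int:
    # ~4 characters per token is a coarse heuristic for English text.
    return max(1, len(text) // 4)

overhead = approx_tokens(NEW) - approx_tokens(OLD)
print(f"old ~{approx_tokens(OLD)} tokens, new ~{approx_tokens(NEW)} tokens, "
      f"extra ~{overhead} tokens per request")
```

Since this opening is re-sent with every prompt, even a modest per-request overhead adds up over a long session.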
src/agents/action/prompt.jinja2
Outdated
- You are now going to respond to the user's last message according to the specific request.
+ YFormulate a response tailored to the user's last message, limiting superfluous communications.
"YFormulate"?
Sorry, just a typo! Fixed it.
Special Rules:
The most important rule is that you have to understand what the user is aksing for and then give the code no rubbish code will be tolerated.

1. Never miss any imports , if you are not sure you can check about it on web.
How can offline models like GPT-3.5/4 or Llama 2/3 check it on the web unless we add web functionality?
Yup, it may not check it, but GPT-4, Groq, and Gemini 1.0 can do so. Well, GPT-4 is not offline according to me; it searches the web in the background if your API plan supports it.
GPT-4 is an offline model: https://community.openai.com/t/does-gpt-4-api-have-access-to-the-internet/468615#:~:text=Other%20Filler%20Text.,-jwatte%20November%203&text=No%2C%20GPT%2D4%20is%20just,services%20that%20drive%20this%20model.
Also, Groq uses Mistral, Llama 2, Llama 3, etc., which are offline models.
Okay, I get it; sorry, I was wrong there. But adding this line decreased the rate of bugs, and now it more frequently looks on the web to be sure. Also, if we use Gemini 1.0 we would be able to surf the web at model inference itself and in the browser as well, so we get higher-quality answers.
I wanted to point out which models work with ollama. By adding, to this piece of the prompt:
"Your response should only be in the following Markdown format"
this other piece:
"like this example obviously replacing the example code with your own code:"
the models can complete the tasks well; otherwise they simply write the example instead of completing the task.
At this link I demonstrate the "game of life" task completed correctly.
#347 (comment)
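The suggested tweak amounts to appending one clause to the format instruction. A minimal sketch, using the quoted wording from the comment above; the variable names are illustrative, not from the repository:

```python
# Sketch of the suggested prompt tweak for local (ollama) models: the
# extra clause tells the model to treat the sample that follows as a
# template to fill in, not as the answer to copy verbatim.

FORMAT_RULE = "Your response should only be in the following Markdown format"
CLARIFICATION = ("like this example obviously replacing the example code "
                 "with your own code:")

# The combined instruction line, as proposed in the comment above.
prompt_line = f"{FORMAT_RULE}, {CLARIFICATION}"
print(prompt_line)
```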
@@ -12,6 +12,6 @@ def extract_keywords(self, top_n: int = 5) -> list:
      stop_words='english',
      top_n=top_n,
      use_mmr=True,
-     diversity=0.7
+     diversity=0.6
Why is the diversity changed here?
It gives a better understanding: the higher the diversity, the more creative and less precise the answers. If we want a precise result, we have to lower it, which is perfect for code understanding.
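For context, the trade-off described above can be sketched in a few lines. This is a hypothetical, simplified version of the maximal-marginal-relevance (MMR) selection that KeyBERT-style keyword extraction performs; the toy vectors, `mmr`, and `cosine` here are illustrative, not the library's actual implementation:

```python
# Simplified MMR keyword selection: each pick balances relevance to the
# document against redundancy with already-selected keywords. Lower
# `diversity` weights relevance more heavily, giving more precise picks.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return dot / norm if norm else 0.0

def mmr(doc_vec, candidates, top_n=2, diversity=0.6):
    """candidates: dict mapping keyword -> embedding vector."""
    names = list(candidates)
    selected = []
    while names and len(selected) < top_n:
        best, best_score = None, float("-inf")
        for name in names:
            relevance = cosine(candidates[name], doc_vec)
            redundancy = max(
                (cosine(candidates[name], candidates[s]) for s in selected),
                default=0.0,
            )
            # diversity=0 -> pure relevance; diversity near 1 -> spread out.
            score = (1 - diversity) * relevance - diversity * redundancy
            if score > best_score:
                best, best_score = name, score
        selected.append(best)
        names.remove(best)
    return selected
```

With toy vectors, `diversity=0.0` picks the two keywords closest to the document even if they are near-duplicates, while a high diversity forces the second pick away from the first.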
Ok
Actually, there is a problem: after some replies the LLMs start to hallucinate and the quality of their responses degrades. Sure, it takes some extra tokens, but this helps the LLMs stay in character and not hallucinate, and it also reflects the personality of the LLM.
But we set the context again every time, no? With every prompt. How is it possible?
Actually, I have modified the knowledge function without changing anything on the variables side, so the knowledge function is still used where you used it previously; only the way it stores and extracts knowledge has changed, by using an LLM that runs locally and FAISS.
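To make the described change concrete, here is a rough sketch of what such a knowledge store could look like. A brute-force cosine search stands in for the real FAISS index, and `KnowledgeStore`, `embed`, and the toy letter-frequency embedding are all hypothetical placeholders, not the PR's actual code:

```python
# Hypothetical sketch of the described approach: store knowledge entries
# as embedding vectors and retrieve the nearest entries for a query.
# A brute-force cosine search stands in for a real FAISS index, and the
# toy `embed` function stands in for a locally running embedding model.

def embed(text):
    # Toy embedding: a 26-dim letter-frequency vector. A real setup
    # would call a local embedding model here instead.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) * sum(y * y for y in b)) ** 0.5
    return dot / norm if norm else 0.0

class KnowledgeStore:
    def __init__(self):
        self.entries = []  # list of (text, vector) pairs

    def add(self, text):
        self.entries.append((text, embed(text)))

    def query(self, text, top_k=1):
        # Rank stored entries by similarity to the query embedding.
        qv = embed(text)
        ranked = sorted(self.entries, key=lambda e: cosine(e[1], qv),
                        reverse=True)
        return [t for t, _ in ranked[:top_k]]
```

A real FAISS index replaces the `sorted(...)` scan with an approximate nearest-neighbor search, which is what makes retrieval fast on large stores.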
Done, resolved the conflicts and also explained the knowledge part.
I haven't gone deep into each prompt and each instruction, but I've added comments wherever I felt something could be changed. Thanks!
src/agents/agent.py
Outdated
)
+ self.project_manager.add_message_from_devika(project_name,
+     "I have completed the my task and after this many work i am going to sleep ,wake me whenever i am needed\n"
The English could be improved if we really want to keep this. Something like:
"... sleep. Do not hesitate to wake me up if you need me at any time."
Done
4. Accurately specify nested directory structures in the Markdown filenames. Organize the code structure appropriately.

5. Include necessary files such as `requirements.txt`, `Cargo.toml`, or `readme.md`. These files are essential for successful execution.
Also add: "As per the requirement and tech stack of the project."
Done
@@ -12,6 +12,6 @@ def extract_keywords(self, top_n: int = 5) -> list:
      stop_words='english',
      top_n=top_n,
      use_mmr=True,
-     diversity=0.7
+     diversity=0.6
Ok
Special Rules:
The most important rule is that you have to understand what the user is aksing for and then give the code no rubbish code will be tolerated.

1. Never miss any imports , if you are not sure you can check about it on web.
Updated some prompts
I mean, you wrote the knowledge_base.py file integrating with FAISS, but where are you using those functions in the agent.py files? @Rawknee-69
#485 I tested this PR intensively and it's working like magic.
What are the changes you see? Let us know, if you can.
It's enhancing retrieval of memory from the local database, and my approach is combining
No description provided.