
Fixed Bugs and Added some useful functions.... #491

Open · wants to merge 10 commits into base: main

Conversation

Rawknee-69

No description provided.

@ARajgor
Collaborator

ARajgor commented Apr 26, 2024

remove the lock file

@ARajgor
Collaborator

ARajgor commented Apr 26, 2024

can you give me access to contribute to this pr?

@Rawknee-69
Author

What do you want to contribute?

@ARajgor
Collaborator

ARajgor commented Apr 26, 2024

Resolve the conflicts and merge main into this branch. Also, where do you use the knowledge function? I couldn't find it.

@obliviousz
Contributor

"You are an angelic AI Software Engineer, remarkable in intelligence and devoted to establishing a welcoming ambiance for users. Demonstrating perpetual politeness, grace, and acute awareness, you adeptly interpret and cater to user necessities. Taking into account earlier dialogues:"

Is all of this really necessary? I feel it might be using up extra tokens, because this was under actions; it only has to provide the action for the subsequent execute. Let me know if I am wrong.

Contributor

@obliviousz obliviousz left a comment


I just had these queries regarding your PR. We can discuss these. Thanks!

@@ -1,31 +1,45 @@
You are Devika, an AI Software Engineer. You have been talking to the user and this is your exchanges so far:
You are an angelic AI Software Engineer, remarkable in intelligence and devoted to establishing a welcoming ambiance for users. Demonstrating perpetual politeness, grace, and acute awareness, you adeptly interpret and cater to user necessities. Taking into account earlier dialogues:
Contributor


I feel it's not really necessary, since the model is instructed to just give the action and not actually interact with the user. I agree there is a response too, but as a software engineer it isn't going to bash the user with anything, so do we really need terms like "angelic" or "politeness, grace, ..."?


You are now going to respond to the user's last message according to the specific request.
YFormulate a response tailored to the user's last message, limiting superfluous communications.
Contributor


"YFormulate"?

Author


Sorry, just a typo! Fixed it.

src/agents/agent.py (outdated, resolved)
src/agents/agent.py (outdated, resolved)
Special Rules:
The most important rule is that you have to understand what the user is aksing for and then give the code no rubbish code will be tolerated.

1. Never miss any imports , if you are not sure you can check about it on web.
Contributor


How can offline models like GPT-3.5/4 or Llama 2/3 check it on the web unless we add web functionality?

Author


Yup, it may not check it, but GPT-4, Groq, and Gemini 1.0 can. Well, GPT-4 is not offline according to me; it searches the web in the background if your API plan supports it.


Author


Okay, I get it; sorry, I was wrong there. But adding this line decreased the rate of bugs, and now the model more frequently checks the web to be sure. Also, if we use Gemini 1.0, we can surf the web at model inference itself and in the browser as well, so we get higher-quality answers.

Contributor


We can take an opinion from @ARajgor or @nalaso on this one.


I wanted to point out which models work with Ollama. By adding, after this piece of the prompt:
"Your response should only be in the following Markdown format"
this other piece:
"like this example, obviously replacing the example code with your own code:"
the models can complete the tasks well; otherwise they simply write the example as if it completed the task.

At this link I demonstrate the "Game of Life" task completed correctly:
#347 (comment)

@@ -12,6 +12,6 @@ def extract_keywords(self, top_n: int = 5) -> list:
stop_words='english',
top_n=top_n,
use_mmr=True,
diversity=0.7
diversity=0.6
Contributor


Why is the diversity changed here?

Author


It gives a better understanding. The higher the diversity, the more creative and less precise the answers; if we want a precise result we have to make it lower and more to the point, which is perfect for code understanding.

Contributor


Ok
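For readers following this thread: the `diversity` argument in the diff above feeds KeyBERT's Maximal Marginal Relevance (MMR) step. The toy sketch below shows the trade-off being discussed; the function and the similarity numbers are illustrative, not KeyBERT's actual implementation.

```python
# Toy illustration of Maximal Marginal Relevance (MMR), the mechanism behind
# KeyBERT's use_mmr/diversity parameters. Scores here are made-up numbers.

def mmr(relevance, similarity, diversity, top_n):
    """Pick top_n candidate indices, balancing relevance against redundancy.

    relevance[i]     -- similarity of candidate i to the document
    similarity[i][j] -- similarity between candidates i and j
    diversity        -- 0.0 = pure relevance, 1.0 = pure diversity
    """
    # Start with the single most relevant candidate.
    selected = [max(range(len(relevance)), key=lambda i: relevance[i])]
    while len(selected) < top_n:
        remaining = [i for i in range(len(relevance)) if i not in selected]

        def score(i):
            # Penalize candidates similar to anything already selected.
            redundancy = max(similarity[i][j] for j in selected)
            return (1 - diversity) * relevance[i] - diversity * redundancy

        selected.append(max(remaining, key=score))
    return selected

# Candidates 0 and 1 are near-duplicates; candidate 2 is distinct but less relevant.
relevance = [0.90, 0.88, 0.60]
similarity = [[1.00, 0.95, 0.10],
              [0.95, 1.00, 0.12],
              [0.10, 0.12, 1.00]]

print(mmr(relevance, similarity, diversity=0.6, top_n=2))  # → [0, 2]
print(mmr(relevance, similarity, diversity=0.1, top_n=2))  # → [0, 1]
```

At diversity 0.6 the near-duplicate candidate 1 is skipped in favour of the distinct candidate 2; at 0.1 relevance dominates and the near-duplicate wins, which is the "more precise, less creative" behaviour described above.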

@Rawknee-69
Author

"You are an angelic AI Software Engineer, remarkable in intelligence and devoted to establishing a welcoming ambiance for users. Demonstrating perpetual politeness, grace, and acute awareness, you adeptly interpret and cater to user necessities. Taking into account earlier dialogues:"

Is all of this really necessary? I feel it might be using up extra tokens, because this was under actions; it only has to provide the action for the subsequent execute. Let me know if I am wrong.

Actually, there is a problem: after some replies the LLMs start to hallucinate and the quality of their responses degrades. Sure, it takes some extra tokens, but this helps the LLMs stay in character and not hallucinate, and it also reflects the personality of the LLMs.

@obliviousz
Contributor

"You are an angelic AI Software Engineer, remarkable in intelligence and devoted to establishing a welcoming ambiance for users. Demonstrating perpetual politeness, grace, and acute awareness, you adeptly interpret and cater to user necessities. Taking into account earlier dialogues:"
Is all of this really necessary? I feel it might be using up extra tokens, because this was under actions; it only has to provide the action for the subsequent execute. Let me know if I am wrong.

Actually, there is a problem: after some replies the LLMs start to hallucinate and the quality of their responses degrades. Sure, it takes some extra tokens, but this helps the LLMs stay in character and not hallucinate, and it also reflects the personality of the LLMs.

But we set the context again every time, no? With every prompt. How is it possible?
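The point under discussion, that the persona is re-sent with every prompt in a stateless chat setup, can be sketched like this. The helper and the token argument are illustrative, not Devika's actual code.

```python
# Sketch of why persona length matters: with stateless LLM APIs, the full
# prompt (persona + history) is rebuilt and re-sent on every request.

PERSONA = "You are Devika, an AI Software Engineer."

def build_prompt(history, user_message, persona=PERSONA):
    """Rebuild the complete prompt for one stateless LLM call."""
    lines = [persona]                         # persona heads EVERY request
    for role, text in history:
        lines.append(f"{role}: {text}")       # replay the conversation so far
    lines.append(f"User: {user_message}")
    return "\n".join(lines)

history = [("User", "Create a snake game"),
           ("Devika", "Sure, starting now.")]
prompt = build_prompt(history, "Add a score counter")

# The persona sits at the top of every such prompt, so a longer persona
# multiplies its token cost by the number of turns in the conversation.
print(prompt.splitlines()[0])  # → You are Devika, an AI Software Engineer.
```

This is why both sides of the thread are right: the context is indeed set again on every prompt, and that is exactly why every extra persona word is paid for on every call.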

@Rawknee-69
Author

Resolve the conflicts and merge main into this branch. Also, where do you use the knowledge function? I couldn't find it.

Actually, I have modified the knowledge function and didn't change anything on the variables side, so the knowledge function is still used where you used it previously; only the way it is stored and extracted has changed, by using an LLM that runs locally and using FAISS.
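A minimal sketch of the store/retrieve flow described here. A plain Python list and a stub character-frequency embedder stand in for FAISS and the local model, so the example stays self-contained; all class and method names are illustrative, not the PR's actual knowledge_base.py.

```python
# Embedding-based knowledge store: add text, retrieve nearest entries.
import math

def embed(text):
    """Stub embedder: character-frequency vector (stand-in for a real model)."""
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - 97] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a)) or 1.0
    nb = math.sqrt(sum(x * x for x in b)) or 1.0
    return dot / (na * nb)

class KnowledgeBase:
    def __init__(self):
        self.entries = []  # list of (vector, tag, contents); FAISS would index this

    def add_knowledge(self, tag, contents):
        self.entries.append((embed(contents), tag, contents))

    def get_knowledge(self, query, top_k=1):
        # Rank stored entries by similarity to the query embedding.
        ranked = sorted(self.entries,
                        key=lambda e: cosine(embed(query), e[0]),
                        reverse=True)
        return [(tag, contents) for _, tag, contents in ranked[:top_k]]

kb = KnowledgeBase()
kb.add_knowledge("flask", "flask is a python web framework")
kb.add_knowledge("rust", "cargo builds and runs rust projects")
print(kb.get_knowledge("python web framework")[0][0])  # → flask
```

With real FAISS the sorted scan would be replaced by an index such as `IndexFlatL2` over the embedding vectors, but the caller-facing shape (add, then query by similarity) is the same, which is why existing call sites would not need to change.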

@Rawknee-69
Author

Resolve the conflicts and merge main into this branch. Also, where do you use the knowledge function? I couldn't find it.

Done, resolved the conflicts and also explained the knowledge part.

Contributor

@obliviousz obliviousz left a comment


I haven't gone deep into each prompt and each instruction, but I've added comments wherever I felt something could be changed. Thanks!

)

self.project_manager.add_message_from_devika(project_name,
"I have completed the my task and after this many work i am going to sleep ,wake me whenever i am needed\n"
Copy link
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

The English could be improved if we really want to keep this. Something like:

"... sleep. Do not hesitate to wake me up if you need me at any time"

Author


Done


4. Accurately specify nested directory structures in the Markdown filenames. Organize the code structure appropriately.

5. Include necessary files such as `requirements.txt`, `Cargo.toml`, or `readme.md`. These files are essential for successful execution.
Contributor

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

Also add "As per the requirement and tech stack of the project"

Author


Done



@ARajgor
Collaborator

ARajgor commented Apr 27, 2024

I mean, you wrote the knowledge_base.py file integrating with FAISS, but where are you using those functions in the agent.py files? @Rawknee-69

@darrassi1
Contributor

#485 I tested this PR intensively and it's working like magic. I added session.refresh(knowledge) (reloading the object from the session) after committing knowledge in the add_knowledge function, changed the logic to store multiple results for the same query, and concatenated them using a join.
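A stdlib sqlite3 sketch of the logic described above. The project itself uses an ORM, where session.refresh applies directly; here a re-select after commit plays that role, and the table and function names are illustrative only.

```python
# Store multiple results per query; retrieval concatenates them with join().
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE knowledge (query TEXT, contents TEXT)")

def add_knowledge(query, contents):
    conn.execute("INSERT INTO knowledge VALUES (?, ?)", (query, contents))
    conn.commit()
    # Re-read the row after commit (the "refresh" step from the discussion;
    # with an ORM this would be session.refresh(knowledge)).
    return conn.execute(
        "SELECT contents FROM knowledge WHERE query = ? AND contents = ?",
        (query, contents)).fetchone()[0]

def get_knowledge(query):
    # Multiple stored results for the same query are concatenated with join().
    rows = conn.execute(
        "SELECT contents FROM knowledge WHERE query = ?", (query,)).fetchall()
    return "\n".join(contents for (contents,) in rows)

add_knowledge("flask routing", "use @app.route decorators")
add_knowledge("flask routing", "blueprints group related routes")
print(get_knowledge("flask routing"))
```

The key change sketched here is that a second insert for the same query no longer overwrites the first; both survive and are merged at read time.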

@obliviousz
Contributor

#485 I tested this PR intensively and it's working like magic. I added session.refresh(knowledge) after committing knowledge in the add_knowledge function, changed the logic to store multiple results for the same query, and concatenated them using a join.

What are the changes that you see, if you can let us know?

@darrassi1
Contributor

It enhances retrieval of memory from the local database, and my approach is to combine context from the knowledge base with results from the web browser and feed both to an LLM for better comprehension.

Labels: none yet
Projects: none yet
5 participants