Feat/llm responses #376
base: dev
Conversation
It appears this PR is a release PR (change its base from master if that is not the case).
Here's a release checklist:
- Update package version
- Update poetry.lock
- Change PR merge option
- Update template repo
- Search for objects to be deprecated
… type annotations
…alog_flow_framework into feat/llm_responses
I got an idea for more complex prompts: we can allow passing responses as prompts instead of just strings. It would then be possible to incorporate slots into a prompt:

model = LLM_API(prompt=rsp.slots.FilledTemplate(
    "You are an experienced barista in a local coffee shop. "
    "Answer your customers' questions about coffee and barista work.\n"
    "Customer data:\nAge: {person.age}\nGender: {person.gender}\nFavorite drink: {person.habits.drink}"
))
… "prompt" to "message" in Prompt class
class PositionConfig(BaseModel):
    system_prompt: float = 0
    history: float = 1
    misc_prompt: float = 2
    call_prompt: float = 3
    last_request: float = 4
Allow None positions to disable certain prompts.
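A minimal sketch of what that could look like, keeping the fields from the snippet above; treating `None` as "skip this prompt" when the prompts are assembled is an assumption about the surrounding code:

```python
from typing import Optional

from pydantic import BaseModel


class PositionConfig(BaseModel):
    # A position of None disables the corresponding prompt entirely
    system_prompt: Optional[float] = 0
    history: Optional[float] = 1
    misc_prompt: Optional[float] = 2
    call_prompt: Optional[float] = 3
    last_request: Optional[float] = 4
```

The code that orders prompt groups by position would then have to drop entries whose position is `None`.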
call_prompt_message = await message_to_langchain(call_prompt_text, ctx, source="human")
prompts.append(([call_prompt_message], call_prompt.position or position_config.call_prompt))

prompts.append(([await message_to_langchain(ctx.last_request, ctx, source="human")], position_config.last_request))
Remove last turn from history; add last turn here instead of last request.
Aside from the comments attached to this review, there are 4 comments from Rami that I did not mark as resolved.
I think it might be a good idea to run the tutorials through an LLM to check whether they are clear and to ask for improvements.
pattern: str
"""
pattern that will be searched in model_result.
Capitalize
:param str target_token: token to check (e.g. `"TRUE"`)
:param float threshold: threshold to bypass. by default `-0.5`
Move these out of the class docstring as well.
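Presumably this means documenting them the same way as `pattern` above, i.e. as per-field docstrings rather than `:param:` lines in the class docstring. A sketch, with a made-up class name:

```python
from pydantic import BaseModel


class LogProbCheck(BaseModel):  # hypothetical name, for illustration only
    target_token: str = "TRUE"
    """
    Token to check (e.g. `"TRUE"`).
    """
    threshold: float = -0.5
    """
    Threshold to bypass. `-0.5` by default.
    """
```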
chatsky/llm/llm_api.py
Outdated
        result.annotations = {"__generated_by_model__": self.name}
        return result

    async def condition(self, prompt: str, method: BaseMethod, return_schema=None):
This is still not resolved.
        return self.__dict_to_extracted_slots(nested_result)

    # Convert nested dict to ExtractedGroupSlot structure
    def __dict_to_extracted_slots(self, d):
Make the name start with a single underscore.
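For context (my illustration, not part of the diff): a single leading underscore marks the helper as internal without triggering Python's name mangling, so the rename would look roughly like this:

```python
        return self._dict_to_extracted_slots(nested_result)

    # Convert nested dict to ExtractedGroupSlot structure
    def _dict_to_extracted_slots(self, d):
        ...
```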
chatsky/llm/filters.py
Outdated
        raise NotImplemented

    def __call__(self, ctx, request, response, model_name):
        return self.call(ctx, request, model_name) + self.call(ctx, response, model_name)
I can't find tests for that.
Group test cases in classes.
E.g.
async def test_llm_slot(pipeline, context):
    ...

async def test_llm_group_slot(pipeline, context):
    ...

====>

class TestSlots:
    async def test_llm_slot(self, pipeline, context):
        ...

    async def test_llm_group_slot(self, pipeline, context):
        ...
# misc_prompt is the default position for misc prompts
# Misc prompts may override it and be ordered in a different way
This needs clarification. It also needs to be clearly distinct from the previous sentence (e.g. separated by a period).
Description
Added functionality for calling LLMs via the langchain API so that they can be used in responses and conditions.
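For orientation, a rough sketch of how the pieces quoted in this conversation fit together; the constructor arguments, the way the model plugs into responses, and the condition call are assumptions pieced together from the diffs above, not the final API:

```python
from chatsky.llm.llm_api import LLM_API  # module path taken from the diff header above

# A model wrapping a langchain chat model; the prompt acts as a system prompt.
# (A langchain chat model instance would likely also be passed in; omitted here.)
model = LLM_API(prompt="You are an experienced barista in a local coffee shop.")

# In a condition, the same model is asked a question and the answer is checked
# with a BaseMethod (signature quoted from llm_api.py above):
#
#     result = await model.condition(
#         "Does the user want decaf? Answer TRUE or FALSE.",
#         method=some_method,  # e.g. a token/log-probability check
#     )
```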
Checklist
List here tasks to complete in order to mark this PR as ready for review.
To Consider