
Feat/llm responses #376

Open · wants to merge 206 commits into base: dev
Conversation

NotBioWaste905 (Collaborator) commented Jul 24, 2024

Description

Added functionality for calling LLMs via the langchain API so that they can be used in responses and conditions.
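For context, a rough sketch of the intended usage. The import path and constructor arguments below are assumptions based on this thread, not the exact API in the diff:

# Hedged sketch only: `LLM_API` appears later in this thread, but the import
# path and constructor signature used here are assumptions, not the final API.
from langchain_openai import ChatOpenAI

from chatsky.llm import LLM_API  # assumed import path

# Wrap any langchain chat model so it can back responses and conditions.
model = LLM_API(
    ChatOpenAI(model="gpt-4o-mini"),  # assumed positional model argument
    prompt="You are a helpful assistant for a small coffee shop.",
)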

Checklist

  • I have performed a self-review of the changes

List here tasks to complete in order to mark this PR as ready for review.

To Consider

  • Add tests
  • Update API reference / tutorials / guides
  • Update CONTRIBUTING.md

NotBioWaste905 requested a review from RLKRo on July 24, 2024 12:22

github-actions bot left a comment:


It appears this PR is a release PR (change its base from master if that is not the case).

Here's a release checklist:

  • Update package version
  • Update poetry.lock
  • Change PR merge option
  • Update template repo
  • Search for objects to be deprecated

NotBioWaste905 changed the base branch from master to dev on July 24, 2024 12:22
(Resolved, outdated review threads on chatsky/llm/methods.py, chatsky/llm/filters.py, chatsky/llm/wrapper.py, and tests/llm/test_model_response.py.)
RLKRo (Member) commented Aug 8, 2024

I got an idea for more complex prompts: we can allow passing responses as prompts instead of just strings.

And then it'd be possible to incorporate slots into a prompt:

model = LLM_API(
    prompt=rsp.slots.FilledTemplate(
        "You are an experienced barista in a local coffee shop. "
        "Answer your customers' questions about coffee and barista work.\n"
        "Customer data:\nAge: {person.age}\nGender: {person.gender}\nFavorite drink: {person.habits.drink}"
    )
)

(Resolved, outdated review threads on tutorials/llm/2_prompt_usage.py.)
Comment on lines +6 to +11
class PositionConfig(BaseModel):
    system_prompt: float = 0
    history: float = 1
    misc_prompt: float = 2
    call_prompt: float = 3
    last_request: float = 4
Member:

Allow None positions to disable certain prompts.
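A sketch of how that could look, based on the PositionConfig quoted above; the Optional fields and the "None means skip" behaviour are my illustration, not code from the diff:

from typing import Optional

from pydantic import BaseModel


class PositionConfig(BaseModel):
    # None means "do not include this prompt at all" rather than a position.
    system_prompt: Optional[float] = 0
    history: Optional[float] = 1
    misc_prompt: Optional[float] = 2
    call_prompt: Optional[float] = 3
    last_request: Optional[float] = 4


# Example: disable misc prompts entirely.
config = PositionConfig(misc_prompt=None)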

call_prompt_message = await message_to_langchain(call_prompt_text, ctx, source="human")
prompts.append(([call_prompt_message], call_prompt.position or position_config.call_prompt))

prompts.append(([await message_to_langchain(ctx.last_request, ctx, source="human")], position_config.last_request))
Member:

Remove last turn from history; add last turn here instead of last request.
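One possible reading of the suggestion, reusing the snippet above; ctx.last_response and source="ai" are assumptions on my part, not names confirmed by the diff:

# Append the whole last turn (request + response) instead of only the last
# request, and exclude that turn from the history block built earlier.
last_turn = [await message_to_langchain(ctx.last_request, ctx, source="human")]
if ctx.last_response is not None:  # assumed attribute name
    last_turn.append(await message_to_langchain(ctx.last_response, ctx, source="ai"))
prompts.append((last_turn, position_config.last_request))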

RLKRo (Member) left a comment:


Aside from the comments attached to this review, there are 4 comments from Rami that I did not mark as resolved.

I think it might be a good idea to run the tutorials through an LLM to check whether they are clear and to ask for improvements.


pattern: str
"""
pattern that will be searched in model_result.
Member:

Capitalize

Comment on lines +61 to +62
:param str target_token: token to check (e.g. `"TRUE"`)
:param float threshold: threshold to bypass. by default `-0.5`
Member:

Move these out of the class docstring as well.

        result.annotations = {"__generated_by_model__": self.name}
        return result

    async def condition(self, prompt: str, method: BaseMethod, return_schema=None):
Member:

This is still not resolved.

        return self.__dict_to_extracted_slots(nested_result)

    # Convert nested dict to ExtractedGroupSlot structure
    def __dict_to_extracted_slots(self, d):
Member:

Make the name start with a single underscore.

        raise NotImplemented

    def __call__(self, ctx, request, response, model_name):
        return self.call(ctx, request, model_name) + self.call(ctx, response, model_name)
Member:

I can't find tests for that.
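For instance, a unit test along these lines might cover the combined __call__; the base class name, import path, and return values here are assumptions, not the ones in this diff:

from chatsky.llm.filters import BaseHistoryFilter  # assumed name and path


class _DummyFilter(BaseHistoryFilter):
    """Minimal filter: count non-empty messages."""

    def call(self, ctx, message, model_name):
        return 1 if message is not None else 0


def test_call_sums_request_and_response_results():
    f = _DummyFilter()
    # Per the quoted __call__, the result is call(request) + call(response).
    assert f(None, "req", "resp", model_name="model") == 2
    assert f(None, "req", None, model_name="model") == 1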

Member:

Group test cases in classes.

E.g.

async def test_llm_slot(pipeline, context):
    ...

async def test_llm_group_slot(pipeline, context):
    ...

====>

class TestSlots:
    async def test_llm_slot(self, pipeline, context):
        ...

    async def test_llm_group_slot(self, pipeline, context):
        ...

Comment on lines +74 to +75
# misc_prompt is the default position for misc prompts
# Misc prompts may override it and be ordered in a different way
Copy link
Member

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

This needs clarification. And needs to be clearly different from the previous sentence (e.g. separated with a period).
