Fix incorrect arguments and resulting prompts in prompts.py files for lessons 5, 6, and 9 of prompt_evaluations course #48

Open · wants to merge 3 commits into master

Conversation

docmparker

In the prompt evaluations course, lessons 5, 6, and 9 use a prompts.py file so that promptfoo composes the prompts. However, the existing versions incorrectly assume that the needed variable is passed directly to the prompt function; in fact, promptfoo passes a context dict with a 'vars' key, from which the desired variable must be extracted. This affects the prompts actually sent to the models and, in turn, the resulting eval scores. The prompt that is actually passed can be seen by clicking the magnifying glass in any cell of the promptfoo viewer. This is a somewhat insidious error: everything runs without complaint, and one wouldn't notice unless drilling down to inspect the final prompts passed to the models.
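
A minimal sketch of the kind of change involved, assuming an illustrative variable name (`animal_statement`) and prompt text; the actual names and templates in the lessons may differ:

```python
# Broken version: assumes promptfoo passes the template variable directly.
# In reality promptfoo calls the prompt function with a single context dict,
# so `animal_statement` is bound to the whole dict and the rendered prompt
# contains something like "{'vars': {...}}" instead of the statement.
def simple_prompt_broken(animal_statement):
    return f"Count the legs of the animal in this statement: {animal_statement}"

# Fixed version: accept the context dict and pull the variable out of 'vars'.
def simple_prompt(context):
    animal_statement = context['vars']['animal_statement']
    return f"Count the legs of the animal in this statement: {animal_statement}"
```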

In lesson 5, in my runs, Haiku goes from 0% passing to 75% passing with the simple_prompt after the fix, and from 66.67% to 75% passing with the better_prompt. Lessons 6 and 9 also show changes in eval metrics after fixing the prompts, though less dramatic than in lesson 5. For lesson 5, the new outcomes would require a small change to the narrative in the notebook (happy to do this, but I assumed you'd prefer someone at Anthropic make those changes) and perhaps to the simple_prompt.
