better acceptance test metadata feedback #118

mikeAdamss commented Apr 23, 2024

What is this

We have an acceptance test that compares the metadata we're generating with a fixture of expected metadata.

It works well, but the feedback on a failure just prints the contents of a JSON file, which in some cases will be quite large, making it hard to drill down to where the failure actually is.

This task is to see if we can get more specific feedback from it.

What to do

So this line:

```gherkin
And the metadata should match 'fixtures/correct_metadata.json'
```

calls this function:

```python
def step_impl(context, correct_metadata):
```

We need more specific feedback than just printing the dict.

Take some time to investigate the options. One possible option I stumbled across is https://github.com/inveniosoftware/dictdiffer, but there might be others.
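
As a rough illustration, here's a minimal sketch of what the step could look like with dictdiffer. It assumes the step loads the expected metadata from the fixture path and that the generated metadata is available on the behave context as `context.metadata` - both of those details are assumptions, not necessarily how the step is currently wired up:

```python
import json

from behave import then
from dictdiffer import diff


@then("the metadata should match '{correct_metadata}'")
def step_impl(context, correct_metadata):
    # Load the expected metadata fixture (path assumed to be
    # resolvable from wherever behave is run).
    with open(correct_metadata) as f:
        expected = json.load(f)

    # context.metadata is assumed to hold the metadata generated
    # earlier in the scenario.
    differences = list(diff(expected, context.metadata))

    # Fail with only the deltas, rather than dumping both documents.
    assert not differences, (
        "Generated metadata does not match the fixture:\n"
        + "\n".join(str(d) for d in differences)
    )
```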

Note - use `make feature` to run the acceptance tests.

Acceptance Criteria

  • Comparing two dictionaries with the acceptance tests results in feedback telling you what is different.
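
For reference, dictdiffer reports each delta as an (action, path, values) tuple, which is roughly the kind of feedback this criterion describes (the dictionaries below are made-up examples):

```python
>>> from dictdiffer import diff
>>> expected = {"title": "old title", "keywords": ["a"]}
>>> actual = {"title": "new title", "keywords": ["a", "b"]}
>>> list(diff(expected, actual))
[('change', 'title', ('old title', 'new title')), ('add', 'keywords', [(1, 'b')])]
```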