What is this
We have an acceptance test that compares the metadata we're generating with a fixture of expected metadata.
It works well, but the feedback on a failure just prints the contents of a JSON file, which in some cases is quite big, making it hard to drill down to where the failure is.
This task is to see if we can get more specific feedback from it.
What to do
This line:
dp-data-pipelines/features/data_ingress_v1.feature (line 21 at commit c134415)
calls this function:
dp-data-pipelines/features/steps/data.py (line 111 at commit c134415)
We need more specific feedback than just printing the dict.
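For context, here is a minimal sketch of the kind of step being described. The step text, the context attributes, and the fixture handling are assumptions for illustration, not the repo's actual code:

```python
# Hypothetical sketch of the current step (names and step text are assumed).
import json

from behave import then


@then("the metadata matches the expected fixture")  # assumed step text
def step_impl(context):
    with open(context.fixture_path) as f:  # context.fixture_path is assumed
        expected = json.load(f)
    # On failure this dumps both full dicts, which is hard to scan
    # when the metadata JSON is large.
    assert context.metadata == expected, f"{context.metadata} != {expected}"
```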
Take some time to investigate the options. One possible option I stumbled across is https://github.com/inveniosoftware/dictdiffer, but there might be others.
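As a rough illustration of what dictdiffer could give us, the sketch below diffs two small metadata dicts. The example data is made up; the diff call is the library's documented API:

```python
from dictdiffer import diff

# Made-up metadata for illustration only.
expected = {"title": "CPI", "edition": "2024", "dimensions": ["time", "geography"]}
actual = {"title": "CPIH", "edition": "2024", "dimensions": ["time"]}

# diff() yields (action, path, change) tuples, so a failure message can
# point at the exact keys that differ instead of dumping both dicts.
for change in diff(expected, actual):
    print(change)
# Example output:
# ('change', 'title', ('CPI', 'CPIH'))
# ('remove', 'dimensions', [(1, 'geography')])
```

If dictdiffer doesn't fit, the deepdiff package solves the same problem and is also worth evaluating.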
Note - use
make feature
to run the acceptance tests.
Acceptance Criteria