
Make it possible to also check if token values are from expected pool #141

Closed · wants to merge 14 commits

Conversation
Conversation

jhutar (Member) commented May 16, 2019

No description provided.

mirekdlugosz (Contributor) left a comment

@jhutar thanks for working on this. Looks good overall.

But one issue I have is that this new method is quite similar to the already existing validate_docstring_report, and it seems to duplicate some code. I wonder if we could incorporate the changes into validate_docstring_report instead of creating a whole new command. What do you think?

If you don't want to work on this PR anymore, I can take your commits and submit a new PR with this change. But I would like to get your opinion first.

Other than that, there are also some variable names that are not exactly to my liking, but I'll leave them up to @elyezer to decide.

Resolved review threads: testimony/__init__.py, testimony/cli.py, tests/sample_output.txt (three threads)
echo "= validate_values report ="
echo "=========================="
echo
testimony -n --tokens="Status,Feature" --minimum-tokens="Status,Feature" --token-values=tests/validate-values.yaml validate-values tests
Collaborator commented on the diff above:

As a next step it would be great to have a YAML config file for all of Testimony, so the tokens, minimum tokens, and the validation rules could live in a single place.

Thank you for bringing in the initial version of this.

jhutar (Member, Author) replied:

If @Mirzal agrees here (and whoever else could be considered a stakeholder for Testimony), we can create an issue and I can ask one of our interns to work on it.

Collaborator replied:

Sounds good. I can create the issue if you want.

Contributor replied:

Regarding a "roadmap" for Testimony, I would put down the following things:

  • full implementation of Improve Testimony token validation #121 (add support for types like unique, int, ranges, etc.)
  • config.py:TokenConfig begs for the factory pattern (each type should have its own subclass, with a custom validate implementation)
  • printing reports (__init__.py lines 419 to 472) is a mess. Probably each check should be its own method, and we should just run them in a loop.

I believe these would be nice exercises for CS students who would like to try out some design patterns and clean up code, but, honestly, I would consider them very low on the Satellite priorities list. @jhutar, do you think this is something we can assign to interns, or would their help be better used elsewhere?
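The factory idea from the second bullet could look roughly like this. This is a hypothetical sketch, not Testimony's actual config.py; the class names (TokenConfig, ChoiceToken, IntToken) and the make_token_config helper are assumptions for illustration:

```python
# Hypothetical sketch: one TokenConfig subclass per token type, with a
# factory that picks the subclass based on the 'type' key in the config.
# Not Testimony's real API.

class TokenConfig:
    """Base class: each token type implements its own validate()."""
    def __init__(self, name, required=False):
        self.name = name
        self.required = required

    def validate(self, value):
        # The base type accepts any value.
        return True


class ChoiceToken(TokenConfig):
    """Token whose value must come from a fixed pool of choices."""
    def __init__(self, name, choices, required=False):
        super().__init__(name, required)
        self.choices = set(choices)

    def validate(self, value):
        return value in self.choices


class IntToken(TokenConfig):
    """Token whose value must parse as an integer."""
    def validate(self, value):
        try:
            int(value)
            return True
        except (TypeError, ValueError):
            return False


_TYPES = {'choice': ChoiceToken, 'int': IntToken}


def make_token_config(name, spec):
    """Factory: build the right TokenConfig subclass from a config entry."""
    spec = dict(spec or {})
    token_type = spec.pop('type', None)
    cls = _TYPES.get(token_type, TokenConfig)
    return cls(name, **spec)
```

With this shape, adding a new token type (e.g. a range check) is just another subclass plus a `_TYPES` entry, and the report code only ever calls `validate()`.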

Resolved review thread: tests/validate-values.yaml
jhutar (Member, Author) commented May 19, 2019

@Mirzal, regarding "incorporate values validation into validate_docstring_report": IMO it depends on whether we want Testimony to do all the checks at once or to have multiple separate checks. Merging all the checks would be out of scope of this PR, I think. But, well, I do not have a strong opinion.

mirekdlugosz (Contributor) commented:

> IMO it depends whether we want to have testimony doing all the checks at once or to have multiple separate checks.

I prefer to run all checks at once. There are a couple of reasons:

  1. You can control the number and scope of checks to perform through the config file. If you only want to check that expected tokens are provided, and that each test has the minimum set of tokens defined (basically mimicking the current behavior), you may pass a config file like this:
---
CaseImportance:
    required: true
CaseAutomation:
Component:
    required: true

If you also want to check whether correct values are provided, then you may use a config file like this:

---
CaseImportance:
    required: true
    type: choice
    choices:
        - Critical
        - High
        - Medium
        - Low
CaseAutomation:
Component:
    required: true

If you want to do atomic checks, you only need to keep a bunch of config files around. If you want all checks done in a single pass, you dump everything into a single config file.
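To illustrate how a config file like the examples above drives the checks, here is a minimal sketch of validating one test case's tokens against the parsed config. The check_tokens helper is hypothetical (not Testimony's real internals), and the config dict stands in for the result of parsing the YAML (e.g. with yaml.safe_load):

```python
def check_tokens(config, tokens):
    """Return a list of problems for one test case's token dict.

    config is the parsed YAML mapping (token name -> spec or None);
    tokens maps token names to the values found in a docstring.
    Hypothetical helper for illustration only.
    """
    problems = []
    for name, spec in (config or {}).items():
        spec = spec or {}  # a bare "TokenName:" entry parses as None
        if name not in tokens:
            if spec.get('required'):
                problems.append('missing required token: %s' % name)
            continue
        if spec.get('type') == 'choice':
            if tokens[name] not in spec.get('choices', []):
                problems.append('invalid value for %s: %s'
                                % (name, tokens[name]))
    return problems
```

The point of the sketch is that both the "minimum tokens" check and the "expected values" check fall out of a single loop over one config, so one pass over the docstrings covers everything.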

  2. With the little benchmarking I did, I am confident that parsing metadata out of docstrings is the slowest part of Testimony. On my machine, it takes about 1 second to parse a single file, and about 45 seconds to parse the entire robottelo/tests/foreman. Running validate or validate-values on all tests takes 0.01 seconds. For me, it's clear that we should try to do as much as possible in a single reading of the test docstrings.

  3. It seems to be more in line with the initial design decisions and intentions of Testimony's authors.

I am happy to take this PR over from here and work on it further. You have already done a great deal of very valuable work.

jhutar (Member, Author) commented May 20, 2019

@Mirzal OK, thank you! I consider it done from my point of view then.

elyezer (Collaborator) commented May 20, 2019

I agree with @Mirzal about having a single command to validate. I kind of see Testimony as a linter for docstrings, and linters usually check everything with a single command; for Testimony that command would be validate. We can create specialized commands later, but I think having a command that checks everything is a good thing to have.

elyezer (Collaborator) left a comment

I think this needs a couple more changes and then we should be fine to merge. Can you please update?

Also, about having a single command to validate everything, as @Mirzal said: it may be worth improving the validate command to report on the values, and avoid adding a new command that may go away in the future.

Edit: I meant to request the changes here, but I am also approving, since it is really close to being ready to merge.

Resolved review threads: testimony/cli.py, testimony/config.py (two threads)
token, testcase.tokens[token])
)
invalid_token_value_count += 1
if missing_token_count_local > 0:
Collaborator commented on the diff above:
What about creating a list of missing tokens instead of counting them, so the message could report which tokens are missing?
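The suggestion amounts to collecting names rather than a count. An illustrative sketch (the helper name and arguments are assumptions, not the PR's actual code):

```python
def missing_tokens(minimum_tokens, testcase_tokens):
    """Names of required tokens absent from a test case's token dict.

    Illustrative helper: returning the names (not just a count) lets the
    report say exactly which tokens are missing.
    """
    return sorted(t for t in minimum_tokens if t not in testcase_tokens)
```

The report line then becomes something like `'Missing tokens: ' + ', '.join(missing)` instead of a bare number.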

Contributor replied:

I believe this is done in #142:

ConfigurationFileTestCase::test_multiple_invalid_keys:202
---------------------------------------------------------

* Tokens with invalid values:
  Caseimportance: Lowest (type: 'choice')
  Status: Invalid (type: 'choice')

If I misunderstood, please let me know in the other PR.

Resolved review thread: testimony/__init__.py
mirekdlugosz (Contributor) commented:
@elyezer I will take these changes, update them according to your latest review, and incorporate the other changes we discussed earlier (i.e. let's do everything as part of validate instead of adding a new command). I will create a new PR from my repo, as I can't commit to @jhutar's fork :).

So, let's not merge this yet. We can close it once the new PR is submitted.

elyezer (Collaborator) commented May 20, 2019

@Mirzal sounds good. Please let me know if you need any help and thank you very much for handling it.

mirekdlugosz (Contributor) commented:
Closing, as these changes were merged as part of #142 .
