coverage: include tests #39

Open
wants to merge 9 commits into master

Conversation

@blueyed (Contributor) commented Jan 20, 2019

No description provided.

@Delgan (Owner) commented Jan 20, 2019

I don't think we need coverage of the tests/ folder.

As you can see, it fails the tests because of implementation details and other hacks.

As long as the actual source code in loguru/ reaches 100%, I think that's enough. I see little added value in measuring coverage of the tests themselves; it's not worth the hassle in my opinion.

@blueyed (Contributor, Author) commented Jan 20, 2019

I don't think we need coverage of the tests/ folder.

It is useful to see if tests are accidentally not executed/run.

@Delgan (Owner) commented Jan 20, 2019

Yeah, you are right, this can be useful to avoid some mistakes.

I need to see how well it integrates. For example, here: tests/test_datetime.py#L81
The line is never executed, but it should not matter; it's just an implementation detail of Loguru.
That means I need to somehow refactor the tests just to avoid the uncovered line.
Alternatively, I could lower the required "100%" coverage, but then it loses its purpose, as I would not be notified if a test is missed inadvertently. 😕

Also, the branch = True configuration seems to prevent me from doing things which are fine in my opinion:

if foo:
    bar()
baz()
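
For context, branch measurement is enabled through the branch option in the coverage configuration; a minimal sketch, using the setup.cfg/tox.ini section naming (where exactly this lives depends on the project setup):

[coverage:run]
branch = True

With this enabled, coverage marks the if foo: line as a partially covered branch whenever only the true path is exercised, which is what the snippet above runs into.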

@blueyed (Contributor, Author) commented Jan 20, 2019

Regarding the test example, could you use an assert on the attribute name and always raise AttributeError? Given the extra assertion, this would improve the test.

As for the if foo: case: if it is only covered for the true case, then either the branch is not needed or a test for the false case is missing. Apart from that, there is also # pragma: no branch.
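
A minimal sketch of that suggestion, assuming a test fake along the lines discussed below (the class name is made up and the attribute names are taken from the later comments, not from the actual test code):

class StructTimeWithoutTimezone:
    # Simulates a struct_time on a platform lacking the timezone attributes.
    def __getattr__(self, attr):
        # Assert that only the attributes the test cares about are ever looked up,
        # then always raise, so no fallback "return getattr(...)" line is left uncovered.
        assert attr in ("tm_gmtoff", "tm_zone")
        raise AttributeError(attr)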

@blueyed (Contributor, Author) commented Jan 20, 2019

We could maybe split this into two PRs: one for covering the tests, and another for branch coverage, which could then be worked on over time?

@Delgan (Owner) commented Jan 20, 2019

Regarding the test example, could you use an assert on the attribute name and always raise AttributeError? Given the extra assertion, this would improve the test.

Sorry, I struggle to visualize your solution. I need to simulate the fact that tm_gmtoff and tm_zone don't exist on some platforms, while other attributes are perfectly fine. How would I implement this if I always raise AttributeError?

As for the if foo: case: if it is only covered for the true case, then either the branch is not needed or a test for the false case is missing.

Oh, yes that makes sense, I see. Thanks!

Apart from that, there is also # pragma: no branch.

I would very much prefer to avoid such comments in the source code.

We could maybe split this into two PRs: one for covering tests and then another for the branch coverage, which could be worked on over time then?

Sure, why not. That will be easier to work on.

Again, thanks for all your quality improvements. 👍

@blueyed (Contributor, Author) commented Jan 20, 2019

Sorry, I struggle to visualize your solution. I need to simulate the fact that tm_gmtoff and tm_zone don't exist on some platforms, while other attributes are perfectly fine.

Then it appears to just be a matter of the other attributes not being tested/covered, doesn't it?

I've changed this PR/branch to only include the tests in coverage, and it uses tox.ini already.

@blueyed (Contributor, Author) commented Jan 20, 2019

Let's wait until this is merged before enabling branch coverage.

@blueyed changed the title from "coverage: branch coverage, include tests" to "coverage: include tests" on Jan 20, 2019
@Delgan (Owner) commented Jan 20, 2019

Then it appears to just be a matter of the other attributes not being tested/covered, doesn't it?

Yes, but theoretically, the other attributes don't need to be tested. They are just there in case some function within Loguru needs to use them. It appears that this is not the case, but it's an implementation detail; we don't know it from the point of view of the tests.
So we can remove the return getattr(self._struct, attr), of course. But if, in the future, some function decides to access struct_time.some_attr, then the test will break for the wrong reason.
I'm just worried about tweaking the tests' implementation just to satisfy coverage.

Also, the line here is unreachable anyway, as the test is about wrong function arguments: tests/test_add_option#L364.

I guess this is a situation where there is no solution other than using # pragma: no cover, right? I'm fine with that as long as it doesn't appear in loguru/; for the tests it's OK.

@blueyed (Contributor, Author) commented Jan 21, 2019

For unreachable code I suggest using raise NotImplementedError, which can be ignored by default. Tox uses this config: https://github.com/tox-dev/tox/blob/f67064e3e6941883a69602ed6fcf66071e008f21/tox.ini#L119-L125
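
A sketch of what such a configuration can look like, using the setup.cfg/tox.ini section naming (note that defining exclude_lines overrides the default, so the pragma pattern has to be listed explicitly):

[coverage:report]
exclude_lines =
    pragma: no cover
    raise NotImplementedError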

Please feel free to push anything you have already while investigating - I've not looked closely at the coverage report yet myself.

@Delgan (Owner) commented Jan 21, 2019

Nice trick. I will update your branch when I have some time.

@Delgan (Owner) commented Jan 21, 2019

I changed my mind again.

As explained in this StackOverflow comment, I think coverage is not the appropriate tool to detect problems in the test suite.

My trouble with this:

  • We don't care about coverage of the tests, we care about test functions not being missed
  • I do not want to have to write my tests while carefully watching that every line is executed
  • I do not want to introduce workarounds just to satisfy coverage
  • I do not want to add more constraints to potential contributors
  • Detecting missing tests only works here because there is a "100% coverage" target; otherwise this metric would not suffice
  • The inconveniences seem to outweigh the benefits; errors in tests are rare and would generally be spotted by a decrease in coverage of the actual source code

So, ideally there would be some tool to ensure that all test functions are run. Unfortunately, I didn't find any such tool. Maybe we can use coverage of the tests as a clue for noticing problems, but I think it should not fail the CI checks.

@blueyed (Contributor, Author) commented Feb 17, 2019

I can see that it's not a perfect solution, but it appears to be the only option there is so far.

A possible option might be to report coverage twice, once with and once without the tests, and use flags on codecov to report this (e.g. "code" and "tests"). This will not help with the overall coverage report (e.g. for diffs), but it would display them differently.
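
A rough sketch of how that double reporting could look (the coverage and codecov commands are real, but the file names and flag names here are assumptions based on this thread):

coverage run -m pytest
coverage xml --include 'loguru/*' -o coverage-loguru.xml
codecov --flags loguru --file coverage-loguru.xml
coverage xml --include 'tests/*' -o coverage-tests.xml
codecov --flags tests --file coverage-tests.xml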

@blueyed (Contributor, Author) commented Feb 17, 2019

Detecting missing tests only works here because there is a "100% coverage" target; otherwise this metric would not suffice

It also works without 100% coverage, because you can browse/see the coverage for tests themselves.

@blueyed (Contributor, Author) commented Feb 17, 2019

We don't care about coverage of the tests, we care about test functions not being missed

It also helps with dead code there in general, e.g. compatibility code that is no longer used after dropping support for older versions of Python, etc.

I do not want to have to write my tests while carefully watching that every line is executed

Why not? You do not have to be too careful about this, but then you get feedback if something is not being executed.

I do not want to introduce workarounds just to satisfy coverage

I think using raise NotImplementedError instead of pass or pass # pragma: no cover is a good workaround for a function that is not expected to be called, for example (this can be achieved by extending the exclude_lines setting).

I do not want to add more constraints to potential contributors

Mostly they would not be affected, and if they were, it would be easy to fix - and it is about maintainability in general, so it is in their interest too.

@Delgan (Owner) commented Feb 17, 2019

Thanks for taking the time to answer my concerns.

I understand and I see how this can be useful.
I'm interested in integrating it into the CI, but this should stay an informative metric, not a goal.
I don't know how this should be done: by running coverage twice, or by configuring coveralls adequately.

@blueyed (Contributor, Author) commented Feb 17, 2019

Are PRs not built on Travis anymore? https://travis-ci.org/Delgan/loguru/pull_requests

I don't think we can configure codecov, but I've pushed some trivial fixes for now.

@Delgan (Owner) commented Feb 17, 2019

Oops, I played with the Travis webhook the other day; it seems I broke it.
I updated the settings so that PRs should trigger the CI tests.
I could not find how to trigger the PR build manually; I will look into it tomorrow. Otherwise, you may try to re-push your previous commit to see if it works now.

Thanks for the improvements. 👍

@blueyed closed this Feb 17, 2019
@blueyed reopened this Feb 17, 2019
@blueyed (Contributor, Author) commented Feb 17, 2019

Cool!
A usual workaround is to close and re-open them.

@@ -28,5 +28,6 @@ def a(x):
f(0)
except ZeroDivisionError:
logger.exception("")
raise
except Exception:
logger.exception("")
@blueyed (Contributor, Author) commented:

The outer exception handler is not covered, but the tests fail with this change.
I think that might indicate a missing test (i.e. you want to also test the exception being re-raised / another exception being caught in the outer block).

@Delgan (Owner) commented:

Actually, the whole nested try block could be removed; it is not testing anything useful. Previously, these test cases were generated automatically through some kind of template strings. That is no longer the case; I did not realize I could remove it.

@blueyed (Contributor, Author) commented Feb 17, 2019

Btw: in general, a good metric for PRs is whether the diff is covered and there are no unexpected changes (and coverage does not decrease), rather than whether the coverage is at 100% (but we're almost there again anyway).

@blueyed (Contributor, Author) commented Feb 17, 2019

Only two places are left: one win32-specific branch, and the one already commented on above.
See https://codecov.io/gh/Delgan/loguru/tree/87854669325269b65153973306cde98eeb863728/tests

@blueyed (Contributor, Author) commented Feb 18, 2019

What about the win32 branch?
I'd say it is fine to leave it uncovered.

But it could maybe also be covered by simulating win32, until there is something like AppVeyor to test it for real.

@Delgan (Owner) commented Feb 20, 2019

@blueyed Sorry, I missed your comments. Thanks for the hard work.

It's ok if some branches in the tests are not covered. As I said, I don't plan to make sure that every line is executed each time I publish a new commit, so there is no reason to push for 100% here. I may take a look at the reports from time to time to make sure there is no problem or missed test, though.

The only thing left to settle is to find a way to separate the coverage reports of loguru/ and tests/, and to make sure the CI checks do not fail regardless of the tests' coverage value.

@blueyed (Contributor, Author) commented Feb 21, 2019

The only thing left to settle is to find a way to separate the coverage reports of loguru/ and tests/, and to make sure the CI checks do not fail regardless of the tests' coverage value.

That is not really possible.

I've added a change to report it twice to codecov, with flags "loguru" and "tests", so it at least shows them separately - but there is still only a single total coverage.

@Delgan (Owner) commented Mar 3, 2019

Hey @blueyed.

So, with these changes, coverage of the project is no longer checked? That's problematic. :/

It seems to me that we are moving away from the primary purpose of this suggestion. Quoting you:

It is useful to see if tests are accidentally not executed/run.

So, what about some kind of pytest plugin or fixture that would do just that: ensure that all functions in the tests are run? Is that even possible, first of all?
I know pytest has some hook functionality that users could use to perform checks at initialization or while collecting tests.

@blueyed (Contributor, Author) commented Mar 4, 2019

So, with these changes, coverage of the project is no longer checked? That's problematic. :/

That was a bug; it should be fixed now.

So, what about some kind of pytest plugin or fixture that would do just that: ensure that all functions in the tests are run? Is that even possible, first of all?
I know pytest has some hook functionality that users could use to perform checks at initialization or while collecting tests.

I do not think that is really possible without using coverage.

But using a pytest plugin sounds like a nice idea for this; maybe it could be integrated with pytest-cov.

OTOH coverage report -m --include 'loguru/*' --fail-under=100 could also be used for this, with the downside that it would fail CI. Maybe a custom GitHub status could be used here (it is rather easy to do using curl, but requires a token to be set up, which would/should not be available for pull request builds).
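
A rough sketch of that idea (the GitHub statuses API endpoint is real; the status context, token variable, and use of Travis's TRAVIS_COMMIT are illustrative assumptions):

if coverage report -m --include 'loguru/*' --fail-under=100; then STATE=success; else STATE=failure; fi
curl -s -X POST \
  -H "Authorization: token $GITHUB_TOKEN" \
  -d "{\"state\": \"$STATE\", \"context\": \"coverage/loguru\", \"description\": \"loguru/ line coverage\"}" \
  "https://api.github.com/repos/Delgan/loguru/statuses/$TRAVIS_COMMIT"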

@blueyed (Contributor, Author) commented Mar 4, 2019

It would be a useful feature for codecov to allow an extra status that would handle the tests separately: codecov/project would then only look at "loguru/", while "codecov/tests" could be configured to look at "tests/".

@blueyed (Contributor, Author) commented Mar 4, 2019

As for now, using the flags this would get reported in the comment differently, similar to how it looks on https://codecov.io/gh/Delgan/loguru/compare/acaeeb31689b75a003bb378f46de12f3ebfcacd2...820e180bafd1d252f1fb66d10eaa5600e478ee76 already.

@Delgan (Owner) commented Mar 5, 2019

I do not think that is really possible without using coverage.

I don't understand. How does using coverage relate to writing a custom pytest plugin checking called functions?

But using a pytest plugin sounds like a nice idea for this; maybe it could be integrated with pytest-cov.

Maybe this could be proposed. But a first proof of concept restricted to the Loguru package would be fine too.

OTOH coverage report -m --include 'loguru/*' --fail-under=100 could also be used for this, with the downside that it would fail CI. Maybe a custom GitHub status could be used here (it is rather easy to do using curl, but requires a token to be set up, which would/should not be available for pull request builds).

This sounds too complicated; it isn't worth the trouble in my opinion.

It would be a useful feature for codecov to allow an extra status that would handle the tests separately: codecov/project would then only look at "loguru/", while "codecov/tests" could be configured to look at "tests/".
As for now, using the flags this would get reported in the comment differently, similar to how it looks on https://codecov.io/gh/Delgan/loguru/compare/acaeeb31689b75a003bb378f46de12f3ebfcacd2...820e180bafd1d252f1fb66d10eaa5600e478ee76 already.

That would be nice, but I don't expect codecov to implement such a feature anytime soon. 😕

@blueyed (Contributor, Author) commented Mar 5, 2019

I don't understand. How does using coverage relate to writing a custom pytest plugin checking called functions?

How do you define "called functions"?
Is it about test functions only?
I have seen projects that had testFoo, where only test_* was collected by pytest.

While a pytest plugin that e.g. wraps every function found in the collected test files to track whether it was called would be possible, it would still not catch try / except code that is not necessary anymore (compat related).

OTOH coverage report -m --include 'loguru/*' --fail-under=100 could also be used for this, with the downside that it would fail CI. Maybe a custom GitHub status could be used here (it is rather easy to do using curl, but requires a token to be set up, which would/should not be available for pull request builds).

This sounds too complicated; it isn't worth the trouble in my opinion.

Ok, apart from the custom GitHub status - would you be comfortable with failing the CI job then?
I think that would be OK: the pytest/tox run succeeds, but then there is a message that the code itself (without the tests) is not covered completely. It would still report coverage to codecov before this.

@blueyed (Contributor, Author) commented Mar 5, 2019

But for now I think this PR is good by itself, and we should iterate based on what comes up.

@blueyed (Contributor, Author) commented Mar 5, 2019

...but the codecov config should be changed to not require 100%, only that the current percentage is kept (i.e. it does not decrease).
I think you have this configured for the project to be in the range of 100 only, correct?

@Delgan (Owner) commented Mar 10, 2019

How do you define "called functions"?
Is it about test functions only?
I have seen projects that had testFoo, where only test_* was collected by pytest.

While a pytest plugin that e.g. wraps every function found in the collected test files to track whether it was called would be possible, it would still not catch try / except code that is not necessary anymore (compat related).

This is not exactly what I had in mind. pytest is in charge of collecting, selecting, and executing test functions, so I would expect pytest to provide hooks corresponding to these actions. Using these hooks, I was thinking of checking for functions which were collected but not executed, and raising an exception if such a function is detected (unless the function was somehow explicitly marked as being a utility function not intended to be executed).
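
A rough sketch of that idea as a conftest.py plugin (the hook names are real pytest hooks; handling of skipped/deselected tests and of explicitly marked utility functions is left out):

_collected = set()
_executed = set()

def pytest_collection_modifyitems(session, config, items):
    # Remember every test function that pytest collected.
    _collected.update(item.nodeid for item in items)

def pytest_runtest_logreport(report):
    # Remember every test whose "call" phase was actually executed.
    if report.when == "call":
        _executed.add(report.nodeid)

def pytest_terminal_summary(terminalreporter, exitstatus):
    # Report the difference (a real plugin would probably also fail the session here).
    missed = _collected - _executed
    if missed:
        terminalreporter.section("collected but never executed")
        for nodeid in sorted(missed):
            terminalreporter.line(nodeid)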

Ok, apart from the custom GitHub status - would you be comfortable with failing the CI job then?
I think that would be OK: the pytest/tox run succeeds, but then there is a message that the code itself (without the tests) is not covered completely. It would still report coverage to codecov before this.

Sorry, I think I'm a little bit lost. Which case are you referring to? If the code in loguru/ is not properly covered, the CI job has to fail, as is done currently.

But for now I think this PR is good by itself, and we should iterate based on what comes up.

...but the codecov config should be changed to not require 100%, only that the current percentage is kept (i.e. it does not decrease).
I think you have this configured for the project to be in the range of 100 only, correct?

Indeed, the project is configured to fail the job if loguru/ coverage doesn't reach 100%. This is the behavior I expect, so I don't want to change the settings, as that would no longer reflect what I want.

I think the "pytest plugin" solution should be explored; it seems to satisfy everything we need.

@blueyed (Contributor, Author) commented Mar 10, 2019

If the code in loguru/ is not properly covered, the CI job has to fail, as is done currently.

Travis does not fail - only codecov complains, like it would with a custom GitHub status.

@Delgan (Owner) commented Mar 11, 2019

Travis does not fail - only codecov complains, like it would with a custom GitHub status.

Oh yeah, I see, thanks. I was referring to the "CI job" as a whole, as in "whatever is automatically run on push and returns an error or success status". Not sure that is an appropriate definition, though.

Failing the Travis job because of a coverage problem seems acceptable. Still, I would prefer the "pytest plugin" solution.
