running counter of test results at -2 #12800

Open
xmo-odoo opened this issue Sep 10, 2024 · 3 comments
Labels
topic: reporting (related to terminal output and user-facing messages and errors)
type: proposal (proposal for a new feature, often to gather opinions or design the API around the new feature)

Comments

xmo-odoo commented Sep 10, 2024

Currently, at both -1 and -2 verbosity, test case reporting uses the "sea of dots".

For test suites which are both very large and relatively slow, this representation is not very helpful: counting dozens of dots is difficult, and the only easy-to-understand progress reporting is the percentage at the end of each line.

The "default" progress reporting yields a more precise view as it checkpoints the percentage every file, but that's at the cost of enormous amounts of space if the suite uses a large number of files (which is a good reason to use the dots).

I would propose an alternative representation which is a lot terser and easier to track for slow test suites (though not necessarily for very small or fast ones): display a running counter, in the same format as the summary stats line (maybe with the number of selected tests added), as the sole tally line, e.g.

collected 5 items
1 passed / 5 selected

which on the next test would become

2 passed / 5 selected

and on failure

1 failed / 2 passed / 5 selected

This provides for a much improved "at a glance" summary when running tests interactively.
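For anyone who wants to play with the idea, here is a rough conftest.py sketch that approximates it with public hooks (pytest_collection_modifyitems and pytest_runtest_logreport are real hooks; the tallying and terminal handling are deliberately simplified, and the actual feature would of course live in the terminal reporter, not a plugin):

# conftest.py -- a minimal sketch only, not the proposed pytest change.
# Keeps a tally of outcomes and rewrites a single status line after each
# test, in the same format as the summary stats line. Terminal handling is
# naive: a plain "\r" rewrite, no colors, no width handling.
import sys
from collections import Counter

_counts = Counter()
_selected = 0


def pytest_collection_modifyitems(session, config, items):
    # remember how many tests were selected so the line can end in "N selected"
    global _selected
    _selected = len(items)


def pytest_runtest_logreport(report):
    # tally each test once: the "call" phase normally, plus setup-phase
    # skips/errors (simplified compared to pytest's real bookkeeping)
    if report.when == "call" or (report.when == "setup" and report.outcome != "passed"):
        _counts[report.outcome] += 1
        _render()


def _render():
    parts = [f"{n} {outcome}" for outcome, n in sorted(_counts.items())]
    parts.append(f"{_selected} selected")
    # "\r" returns to the start of the line; "\x1b[K" clears any leftovers
    sys.stderr.write("\r" + " / ".join(parts) + "\x1b[K")
    sys.stderr.flush()

Run with -qq to keep pytest's own dots from interleaving with the rewritten line; a real implementation would instead replace the dot output at the relevant verbosity level.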

@naveens800
Hey, I would like to work on this issue, but I need some context about -1 and -2 (I did not get that point). If this feature is approved, I'd like to be assigned to it.

xmo-odoo commented Oct 9, 2024

@naveens800 those are the internal verbosity levels (specifically verbosity_test_cases); the default is 0, -v increases it by 1 and -q reduces it by 1, so -1 is what you get when you run pytest -q, and -2 with pytest -qq.
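If it helps, here is a tiny conftest.py illustration (my own, not part of the proposal) that prints which level an invocation ends up at; the fine-grained get_verbosity / VERBOSITY_TEST_CASES API is only available on recent pytest versions, hence the guard:

# conftest.py -- sketch for checking what verbosity a given invocation maps to
def pytest_configure(config):
    # net -v/-q counter: 0 by default, 1 for -v, -1 for -q, -2 for -qq
    print("net verbosity:", config.getoption("verbose"))
    # fine-grained test-case verbosity is a newer, version-dependent API
    if hasattr(config, "get_verbosity") and hasattr(type(config), "VERBOSITY_TEST_CASES"):
        print("test case verbosity:", config.get_verbosity(type(config).VERBOSITY_TEST_CASES))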

I started some work on this locally; IIRC the tests were failing and then other things got in the way, but I can post the diff here or push it to a branch to link to if you want a starting point.

@RonnyPfannschmidt (Member)

Replacing the sea of dots with a self-rewriting line of stats may help; this needs some discussion and potentially an opt-in.

Zac-HD added the type: proposal and topic: reporting labels on Oct 30, 2024