Add failfast option #207

Open
masayukig opened this issue Dec 11, 2018 · 6 comments

Comments

@masayukig
Collaborator

Issue description

When we run tests with stestr, stestr executes all of the tests even if a test fails during the run.
I think this is fine for most CI/CD cases. However, when running a large number of test cases manually (which takes a long time), a failfast option would be useful: it saves users from having to wait for the whole run to finish.

NOTE: This is not a bug report but a feature request.

Expected behavior and actual behavior

Expected: With the failfast option enabled, the test run stops as soon as a test case fails.

Actual: The run continues executing all remaining tests regardless of failures.

Steps to reproduce the problem

N/A

Specifications like the version of the project, operating system, or hardware

N/A

System information

N/A

Additional information

@masayukig
Collaborator Author

This might be the relevant testtools API:
https://testtools.readthedocs.io/en/latest/api.html#testtools.TestResult
class testtools.TestResult(failfast=False, tb_locals=False)

I haven't dug into it much yet, though.
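
For reference, here is a minimal sketch of what that API seems to offer, assuming testtools.TestResult follows the same failfast semantics as unittest's TestResult (shouldStop gets set after the first failure, and the suite checks it between tests); the test class is purely illustrative:

```python
import unittest

import testtools


class ExampleTests(unittest.TestCase):
    def test_a_fails(self):
        self.fail("boom")

    def test_b_would_run_next(self):
        self.assertTrue(True)


if __name__ == "__main__":
    # Assumption: failfast=True makes addFailure() call stop(), which sets
    # shouldStop; unittest.TestSuite.run() checks shouldStop between tests.
    result = testtools.TestResult(failfast=True)
    suite = unittest.TestLoader().loadTestsFromTestCase(ExampleTests)
    result.startTestRun()
    suite.run(result)
    result.stopTestRun()
    print("tests run:", result.testsRun)        # expected: 1
    print("stopped early:", result.shouldStop)  # expected: True
```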

@mtreinish
Owner

Well, part of the complexity for an option like this is how we handle multiple workers. Since each worker is a subprocess, we don't have a lot of control over them. If one worker fails while failfast is set, how do we gracefully stop running all the other tests?
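
Just to illustrate the coordination problem, here is a generic sketch with cooperating in-process workers sharing a stop signal; this is not stestr's actual architecture, where each worker is a separate subunit subprocess and no such shared signal exists:

```python
import multiprocessing


def run_one(test):
    # Hypothetical stand-in for running a single test; "bad" fails.
    return test != "bad"


def worker(tests, stop_event, failures):
    for test in tests:
        if stop_event.is_set():
            break  # another worker already failed; wind down gracefully
        if not run_one(test):
            failures.append(test)
            stop_event.set()  # ask the other workers to stop


if __name__ == "__main__":
    manager = multiprocessing.Manager()
    failures = manager.list()
    stop_event = manager.Event()
    partitions = [["a", "b", "c"], ["bad", "d", "e"]]
    procs = [multiprocessing.Process(target=worker,
                                     args=(tests, stop_event, failures))
             for tests in partitions]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print("failures:", list(failures))
    print("stopped early:", stop_event.is_set())
```

With out-of-process subunit workers, the parent would presumably have to either terminate the remaining subprocesses once a failure appears on one stream, or give the workers some out-of-band stop signal to poll, which is exactly the complexity described above.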

@masayukig
Collaborator Author

Yeah, that's a good question. I don't think we have a smart way to do that, so how about making this --failfast option work only together with the --serial option?

@ssbarnea
Contributor

ssbarnea commented Jan 4, 2019

This feature is more important than it seems: without it, a failure to set up the test environment can easily produce disasters where a huge number of failures, each with a lot of extra debug information, floods the console and the logs.

This has happened to me on multiple occasions with stestr, on multiple projects. In at least one case it also brought the worker to its knees, because each test was trying to start a heavy service: 10 GB of RAM used and all threads maxed out indefinitely.

@masayukig
Collaborator Author

@ssbarnea Sorry for the delay. I'll revisit this and implement the feature. But if you already have any code, please feel free to submit a PR :)

Also, #230 looks like an almost identical issue? At least the title is nearly the same :) Do you think we need to keep both issues, or can we merge them?

@aspiers

aspiers commented Jun 17, 2019

I'm feeling the need for this feature right now :)
