Tree format 'Fixture list' and grouping 'Outcome' or 'Duration' #1135
Comments
When I designed the three strategies, I came up with them as follows:
A dyed-in-the-wool NUnit guy coming to Visual Studio's test window misses the concept that fixtures are tests too. For VS, only test cases are tests, which is the source of a lot of problems in interfacing with it. Anything modern VS does with fixtures doesn't even come from NUnit but is deduced from the names of the test methods. As you know, NUnit doesn't even use the names for anything important: all your tests could have the same name and NUnit would work so long as the ids were different. IMO that's a feature.

So, if I use fixture list with either of those two groupings, I want to see each fixture only once, in one category only. The fixture is a test and it either passes or fails (or something else), but it can't do both. Also, a fixture is either slow or fast, not both. It looks to me as if we are using the test times rather than the fixture times, and we probably need different cutoff values for fixtures and tests. This takes us into the logic of the strategy.

However, that's only how I designed it. If the way it works doesn't make sense, then we have a failure in design. Let's discuss how it should be before coding. I suggest renaming this issue to be only about Outcome, because the Duration issue feels bigger to me. I suggest also looking at and comparing FixtureList with TestList grouped by Fixture. When would you use one or the other? Is one or the other redundant? Is FixtureList completely redundant?

Here's a little thing few people remember... The NUnit GUI, back in the V2 days, had an option to display the full tree with namespaces or just a fixture list. I'm not sure which versions had it, but that's another reason I put a FixtureList in the TC GUI.
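Roughly, what I mean could be sketched like this (the model names are hypothetical, not the GUI's actual classes):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical model, for illustration only -- not the GUI's real classes.
record Fixture(string Name, string Outcome);

static class FixtureListSketch
{
    // Original design: a fixture is itself a test, so it lands in exactly
    // one outcome bucket ("Passed", "Failed", ...), based on its overall result.
    public static IEnumerable<IGrouping<string, Fixture>> ByOutcome(
        IEnumerable<Fixture> fixtures) => fixtures.GroupBy(f => f.Outcome);
}
```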
Question: Should the slow/fast cutoff times be the same for all fixtures, or should they depend on how many cases are in the fixture? Clearly a fixed time is simpler. Maybe we could make those cutoffs for both test cases and fixtures editable in a future release. :-)

Use case: I want to decide which fixture to work on to improve performance, so I'd like to see the timing for each fixture, not the individual cases. It's often better to work on one fixture at a time, especially if there is a lot of shared code.
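For illustration, the two options might look like this as a rough sketch (the numbers are placeholders, not values from the code):

```csharp
// Sketch of the two options: a fixed cutoff vs. one scaled by the number of
// cases in the fixture. The numbers are placeholders, not values from the code.
static class FixtureCutoffSketch
{
    const double FixedSlowSeconds = 2.0;   // option 1: same cutoff for all fixtures
    const double PerCaseSeconds = 0.2;     // option 2: a time budget per test case

    public static bool IsSlowFixed(double fixtureSeconds)
        => fixtureSeconds > FixedSlowSeconds;

    public static bool IsSlowScaled(double fixtureSeconds, int caseCount)
        => fixtureSeconds > PerCaseSeconds * caseCount;
}
```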
I can imagine that this discussion will be tricky, as we have to anticipate the expectations of the users. Of course I understand the considerations that led to the design decisions in the past, and of course (as always) there are pros and cons for each decision.

In my opinion, the NUnit tree is still a good option if you simply want to run all tests in one assembly, and I need a good reason to switch to a different view. I also believe that the more test cases there are in the assembly, the greater the advantage of the other views. For example, if there are only 10 test cases, I immediately have an overview of all faulty tests; that's different if I have 1000 test cases.

In addition to the NUnit view, I also like the Fixture List view. However, I would rarely use the Test List view, because I miss any structure in this view - it's just a very long list of test cases. But that's my personal opinion.

I will continue to describe my expected behavior for 'Duration' and 'Outcome' grouping in the next comments.
I'll try to describe my expectations regarding the 'Duration' grouping.

I'll start with something simple: I believe that our hard-coded boundaries for the categories 'Slow', 'Medium' and 'Fast' won't fit every user or every use case. One user executes pure unit tests, which will probably run fast; another user runs some UI tests, which will run slower. But both would like to get an overview of which are the fast tests and which are the slow tests in their test suite.

I believe that if a user switches to the 'Duration' grouping, he wants to get some insight into test performance, so that he can quickly identify which tests are the bottleneck in his test assembly. After some thought, I like the current system behavior more. :-)

While thinking about this use case, I propose to go one step further: showing the exact time in the tree is not only possible for test cases, but also for test fixtures, isn't it? In that case I would show it there as well - so if some OneTimeSetUp method takes a long time, it can be identified too.

So all in all, I am happy with the 'Duration' grouping and we can keep the current system behavior unchanged. If you like my considerations and also see this as an improvement, we can create a new issue about it and discuss/solve it in the future.
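To make concrete what I mean by hard-coded boundaries, something like this (the boundary values are placeholders, not necessarily the ones in the GUI):

```csharp
// The kind of hard-coded classification meant above; the boundary values
// here are placeholders, not necessarily the ones actually used in the GUI.
static class DurationGroupSketch
{
    public static string Classify(double seconds) =>
        seconds < 0.1 ? "Fast" :
        seconds < 1.0 ? "Medium" :
        "Slow";
}
```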
I'll try to describe my expectations regarding the 'Outcome' grouping.

I think I'll use this grouping if I have a large test assembly with a huge number of test cases, and several test cases are categorized as 'Failed' or 'Ignored'. These tests are distributed across the entire assembly, so it's hard to get an overview with the default grouping. By switching to the 'Outcome' view, I get a quick overview of all Failed/Ignored tests, and I can debug/fix them one by one.

Or in other words: I don't want a list of fixtures which contain at least one Failed/Ignored test case; instead I want a list of all failed/ignored test cases grouped by their fixture.

I also imagine that the user would like to run all failed/ignored tests again, by clicking on the 'Failed' or 'Ignored' tree node or the fixture node. It would be strange if those test cases that were previously successful were also executed now.

So overall, my expectation for the 'Outcome' grouping is not to assign each fixture to exactly one category. A fixture might contain 'Passed', 'Ignored' and 'Failed' test cases. Instead I'd like to view these test cases in their related category, grouped by their fixture.
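A rough, runnable sketch of the grouping I have in mind, using a hypothetical model:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical model, for illustration only.
record TestCase(string Fixture, string Name, string Outcome);

class OutcomeGroupingSketch
{
    static void Main()
    {
        var allTestCases = new List<TestCase>
        {
            new("MyFixture", "TestA", "Passed"),
            new("MyFixture", "TestB", "Passed"),
            new("MyFixture", "TestC", "Failed"),
        };

        // Bucket the individual cases by their own outcome first, then group
        // them by fixture inside each bucket: each case appears exactly once,
        // while a fixture may appear under several outcomes.
        foreach (var outcome in allTestCases.GroupBy(t => t.Outcome))
        {
            Console.WriteLine(outcome.Key);
            foreach (var fixture in outcome.GroupBy(t => t.Fixture))
            {
                Console.WriteLine("  " + fixture.Key);
                foreach (var test in fixture)
                    Console.WriteLine("    " + test.Name);
            }
        }
    }
}
```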
Long day and I'm tired, so I won't comment at length tonight. One idea comes to mind: maybe Duration is not really a group at all but a selection criterion or filter - like, show all tests taking more than 5 seconds. Or maybe it's a sort - like list all fixtures, sorted by duration, longest first. Not saying we should forget the grouping right now, but maybe it's not the final iteration and it will evolve into something else. More tomorrow.
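Something along these lines, as a very rough sketch (hypothetical model names):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical model, for illustration only.
record Test(string Name, TimeSpan Duration);

static class DurationQuerySketch
{
    // Filter: show all tests taking more than some limit, e.g. 5 seconds.
    public static IEnumerable<Test> SlowerThan(IEnumerable<Test> tests, TimeSpan limit)
        => tests.Where(t => t.Duration > limit);

    // Sort: list everything by duration, longest first.
    public static IEnumerable<Test> LongestFirst(IEnumerable<Test> tests)
        => tests.OrderByDescending(t => t.Duration);
}
```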
To help keep my head straight, I'll answer inline. :-)
Makes sense. It's in the nature of failures that they occur where you least expect them. :-)
Hmm. Should we allow more than one grouping level? If yes, it's probably a future thing though.
So would you want the fixture to potentially appear four or five times, under different outcomes, but only show the test cases that match the particular outcome in each case? That begins to make sense.
OK. I see that.
So a fixture may appear multiple times, but a test case would appear only under one category. I assume that the fixture would keep its correct color based on overall failure/success and would not be different in each appearance. I can buy this. :-)
And again, for Duration:
Agreed, but let's create a future issue for this. The "where to store it" question can be answered then.
But we aren't timing test fixtures right now, which is what I imagined we should do. That is, we should identify slow test fixtures as test fixtures that take more than a certain amount of time, rather than as fixtures containing one or more slow cases. As a first cut, we could try some multiple, e.g. 5× the level we use for test cases. Then we could try it out on our own tests and adjust. As with cases, user settings could come later.
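A first-cut sketch of what I mean (both numbers are placeholders, to be tuned against real test runs):

```csharp
// First-cut sketch: a fixture counts as slow when its own elapsed time
// exceeds a multiple of the test-case cutoff. Both numbers are placeholders,
// not values taken from the codebase.
static class FixtureDurationSketch
{
    const double CaseSlowCutoffSeconds = 1.0;
    const double FixtureFactor = 5.0;   // the 5x guess from the comment above

    public static bool IsSlowFixture(double fixtureElapsedSeconds)
        => fixtureElapsedSeconds > FixtureFactor * CaseSlowCutoffSeconds;
}
```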
Totally agree!
Would you do this via a context menu or a hover? If it's a hover, how to keep the tooltip out of the way for right clicking?
Am I mistaken? I'm not in the code as I type this, but AFAIK the current behavior is NOT based on the duration of the fixture but on that of the individual cases. I prefer to be wrong here. :-)
All right then. I must have done that at some point. What about the cutoff levels for the fixture duration groups? Do they make sense to you in comparison to the levels for TestCases? Will you do a PR on this?
I propose that I create two new issues to handle the 'Duration' improvements:

1. Make the 'Slow'/'Medium'/'Fast' cutoff times user-configurable.
2. Base the 'Duration' grouping on the measured fixture times and show the exact time on fixture nodes too.

I would start to work on the second one, and the first one can be handled at a later point in time (maybe I'll label it with 'Idea'). And I would use this issue for the 'Outcome' grouping improvement which we discussed and agreed upon (see above). I'll try to prepare a PR for that one.
Sounds good! Please go ahead. On my side, I'll prepare some general "Design Issues", for example how we support categories, how we support durations, etc. They will be outside of any milestone and just serve as a point for us (and anyone else) to discuss questions about the best way to do something.

Durations are particularly interesting to me because I am wondering if we actually need to group the tests at all. Maybe queries are better, or filters, or highlighting slow tests, etc. In fact, hold off on the issue for users setting the boundaries for now. Maybe we don't need boundaries at all, i.e. "NUnit, list the ten slowest tests for me." If the groups are boxes, then this is literally thinking outside the box. :-)
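Roughly this, as a sketch (hypothetical model, just to illustrate the idea):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Hypothetical model, for illustration only.
record Test(string Name, TimeSpan Duration);

static class SlowestQuerySketch
{
    // "List the ten slowest tests for me" -- no fixed boundaries needed.
    public static IEnumerable<Test> Slowest(IEnumerable<Test> tests, int n = 10)
        => tests.OrderByDescending(t => t.Duration).Take(n);
}
```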
Describe the bug
I have the feeling that the system behavior is not quite right yet when working with tree format 'Fixture list' and grouping 'Outcome' or 'Duration'. The system behavior is great if all tests of a fixture fall into the same group, but it becomes unexpected when some tests fall in one group and other tests in another.
Here's an example:
One test case of a test fixture fails, two other test cases succeed. If I use 'Fixture list' and 'Outcome' grouping, this is the result:
All of the test cases are listed beneath the 'Failed' group node - including the successful ones.
I expect that the two successful test cases of the test fixture are listed under 'Passed' and only the failing test case under 'Failed', so that I can rerun all failed tests of a TestFixture easily. By the way, that's also how Visual Studio and ReSharper handle this use case.
The same observation can be made with the 'Duration' grouping. There's one slow test case in the test fixture and two fast test cases; now all of them are listed beneath the 'Slow' node.
Again, Visual Studio and ReSharper handle this use case differently.
Expected behavior
I expect that only those test cases which match the criterion ('Passed', 'Slow', ...) are listed in the category. The test cases in that category are grouped by test fixture.
So overall I have an overview of the matching category ('Passed', 'Slow', ...) of my test cases, and I can easily execute that category.
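For the example above (two passing cases and one failing case in one fixture), the expected tree would look roughly like this (names are illustrative):

```
Failed
└─ MyTestFixture
   └─ FailingTest
Passed
└─ MyTestFixture
   ├─ PassingTest1
   └─ PassingTest2
```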