Replies: 14 comments 1 reply
-
I like the idea. JUnit has something similar:
-
@Aliazzzz I've assigned this task to you. You're most welcome to implement it.
-
@Aliazzzz Do you see this as:
-
I regard it as 3).
-
Well, that's the hardest one. But at least the issue is better defined now.
-
@Aliazzzz I don't see any test case status of "aborted" in the standard. I only see failed, skipped, passed, and error. We need to be careful not to implement our own standard.
Edit: Also, is it possible to see how others have done it (JUnit/Google Test)?
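For reference, a minimal sketch (in Python, with made-up test names) of how those four statuses appear in a JUnit-style XML report: a plain testcase element counts as passed, and a failure, skipped, or error child element marks the other outcomes.

```python
import xml.etree.ElementTree as ET

# A minimal JUnit-style report: one test case per status.
report = """<testsuite name="Suite" tests="4">
  <testcase name="test_passes"/>
  <testcase name="test_fails"><failure message="expected 1, got 2"/></testcase>
  <testcase name="test_skipped"><skipped/></testcase>
  <testcase name="test_errors"><error message="exception raised"/></testcase>
</testsuite>"""

root = ET.fromstring(report)
counts = {"passed": 0, "failed": 0, "skipped": 0, "error": 0}
for case in root.iter("testcase"):
    if case.find("failure") is not None:
        counts["failed"] += 1
    elif case.find("skipped") is not None:
        counts["skipped"] += 1
    elif case.find("error") is not None:
        counts["error"] += 1
    else:
        counts["passed"] += 1    # no child element means the case passed
print(counts)
```

Note there is no "aborted" element anywhere in this vocabulary, which is the point above.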
-
I don't understand the need for a timeout. A unit test that can hang must have an external dependency. Generally such a dependency (FileOpen, for example) will have a timeout associated with it. If it times out, the test fails.
In any case, the timeout should be part of the test implementation, not the framework, as it is test-specific. Most tests will not need it.
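As a sketch of that argument (Python stand-in for a test body; the queue is a hypothetical external dependency, not anything from TcUnit): the timeout lives on the blocking call inside the test, so an unresponsive dependency shows up as an ordinary test failure without any framework-level watchdog.

```python
import queue

def test_worker_produces_result() -> bool:
    # Hypothetical external dependency: a worker queue that, in this
    # failure scenario, never receives a result.
    results = queue.Queue()
    try:
        # The timeout belongs to the blocking call inside the test,
        # not to the test framework.
        results.get(timeout=0.1)
        return True
    except queue.Empty:
        return False

print("PASS" if test_worker_produces_result() else "FAIL")  # FAIL: dependency timed out
```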
> On Sun, 31 May 2020, Jakob Sagatowski wrote:
> @Aliazzzz I don't see any test case status of "aborted" in the standard (https://llg.cubic.org/docs/junit/). I only see failed, skipped, passed, error. We need to be careful and not implement our own standard.
-
Maybe the code you are testing accidentally introduced an infinite loop, and you want to know the rest of the unit-testing results? So I could say, "if any test runs for 10 seconds, kill it and report it as an error."
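A sketch of that policy (in Python, using a child process as a stand-in for the runtime executing each test; the test names and the `run_with_timeout` helper are invented for illustration): the hung test is killed after the deadline and reported as an error, and the remaining tests still run.

```python
import multiprocessing as mp
import time

def looping_test():
    while True:          # simulates an accidentally introduced infinite loop
        time.sleep(0.01)

def quick_test():
    pass                 # finishes immediately

def run_with_timeout(test, timeout_s):
    """Run one test in a child process; kill it on timeout and report 'error'."""
    proc = mp.Process(target=test)
    proc.start()
    proc.join(timeout_s)
    if proc.is_alive():
        proc.terminate()     # kill the hung test...
        proc.join()
        return "error"       # ...and report it, so later tests still run
    return "passed" if proc.exitcode == 0 else "failed"

results = {t.__name__: run_with_timeout(t, 0.5)
           for t in (looping_test, quick_test)}
print(results)
```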
-
Agreed, we should stick to the standard. At the moment the progress feedback is sub-optimal and could be improved upon, so maybe we can log some (debug-level?) messages on the progress of the tests. That point was/is the question behind my previous question (I should have been clearer about it from the beginning). The PyTest framework has a good visual progress counter: it counts up from 0 to 100% and gives very good feedback, printing a dot for each passing test (and an "s" for each skipped one). An output example:
...............s............................. [ 2%]
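A sketch of such a counter (Python; the status names and formatting are assumptions modelled on pytest's output, not TcUnit code): one character per finished test, plus a running percentage of the whole suite.

```python
def progress_line(outcomes, total):
    # One mark per finished test, pytest-style:
    # '.' passed, 'F' failed, 's' skipped, 'E' error.
    marks = {"passed": ".", "failed": "F", "skipped": "s", "error": "E"}
    line = "".join(marks[o] for o in outcomes)
    percent = 100 * len(outcomes) // total
    return f"{line} [{percent:3d}%]"

# 45 of 2000 tests finished, one of them skipped:
print(progress_line(["passed"] * 14 + ["skipped"] + ["passed"] * 30, total=2000))
```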
-
The question is how we are going to present the status of the tests without polluting the event logger. TwinCAT doesn't have any standard output in the classical sense. The only way I see is to add a variable that presents the overall status, which I think would be of very little use.
-
Agreed, a console output is not the same as a log output; we should think of a middle ground.
-
A small visualisation defined in the library would work. It only needs to reference the test results and can be inserted into a frame in the client code. The results could be shown in a page-able table, or as a string of ............F.........F....... characters. This could then also have buttons to re-run the tests, etc., if desired.
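A sketch of the page-able table idea (Python stand-in; the test names, the `render_page` helper, and the page size are all made up for illustration): the visualisation only reads a list of results and renders one page at a time.

```python
def render_page(results, page, per_page=3):
    """Render one page of (test_name, status) tuples as aligned text rows."""
    width = max(len(name) for name, _ in results)   # align the status column
    start = page * per_page
    return [f"{name:<{width}}  {status}"
            for name, status in results[start:start + per_page]]

results = [
    ("FB_DiagnosticMessageParser_Test", "PASS"),
    ("FB_PrimeCalculator_Test", "FAIL"),
    ("FB_StringCompare_Test", "PASS"),
    ("FB_BufferOverrun_Test", "PASS"),
]
for row in render_page(results, page=0):   # first page of three rows
    print(row)
```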
> On Mon, 1 Jun 2020, Aliazzzz wrote:
> Agreed, a console output is not the same as a log output, we should think of a middle ground. A progress counter is a good idea, just how do we implement it in a clear way.
-
@Aliazzzz Do you have time to implement it and sketch a PR?
-
Implemented in #181
-
Hi,
I had this idea: it could be very beneficial to specify a predefined maximum time within which a test should finish.
The normal behaviour of the asserts is preserved. However, if an assert has not finished within the predefined time-out, it fails, a time-out message is written to the log, and the framework resumes with the next test.
This way, infinite loops are broken gracefully.
I hope this idea is considered.
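The idea could be sketched as follows (a Python model, since a PLC runtime is cyclic; all names here are invented, not TcUnit's API): each scan cycle the framework compares the running test against its deadline, and on expiry it logs a time-out message and moves on to the next test.

```python
import time

class CyclicTestWatchdog:
    """Per-test deadline checked once per scan cycle (invented names)."""

    def __init__(self, timeout_s):
        self.timeout_s = timeout_s
        self.started = None
        self.log = []

    def cycle(self, name, test_done):
        """One scan cycle. Returns True when the test is finished,
        either normally or by hitting its deadline."""
        if self.started is None:
            self.started = time.monotonic()   # first cycle of this test
        if test_done:
            self.log.append(f"{name}: PASS")
            return True
        if time.monotonic() - self.started > self.timeout_s:
            # Break the hang gracefully: log and resume with the next test.
            self.log.append(f"{name}: FAIL (timed out after {self.timeout_s} s)")
            return True
        return False

# A test that never reports done, i.e. an infinite loop in the code under test:
wd = CyclicTestWatchdog(timeout_s=0.05)
while not wd.cycle("Test_HangingLoop", test_done=False):
    time.sleep(0.01)          # one simulated scan cycle
print(wd.log)
```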