I'm running > 250 tests which compare large table structures (ASTs) using assert.same(expected, actual). The tests work great and the deep compare actually does the correct thing in this case. The problem is the output is so verbose when they fail that it's very difficult to make sense of it.
I've tried running with --no-verbose, but large chunks of both the expected and actual tables are still dumped to the output (and since that's the default, it wasn't really a surprise). -o gtest and -o tap do the same thing.
I would like some way to suppress the expected vs. actual output and just note which test failed.
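In the meantime, one stopgap that needs nothing beyond plain Lua and luassert is to wrap the deep compare in pcall so a failing comparison is reduced to a terse, fixed message. This is only a sketch of that workaround, not a busted feature; the helper name same_quiet and the expected_ast/actual_ast variables are made up for illustration.

```lua
-- Workaround sketch: swallow assert.same's verbose failure output and
-- re-raise a short, fixed message instead. Only plain Lua (pcall) and
-- luassert's assert.same / assert.is_true are used.
local function same_quiet(expected, actual)
  local ok = pcall(function() assert.same(expected, actual) end)
  assert.is_true(ok, "ASTs differ (run the external diff tool for details)")
end

describe("parser", function()
  it("builds the expected AST", function()
    -- expected_ast / actual_ast are placeholders for your own tables.
    same_quiet(expected_ast, actual_ast)
  end)
end)
```

With this, a failing test still shows up by name in the summary, but the report carries only the one-line message instead of both table dumps.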
@Tieske I'm wondering if I didn't communicate clearly what I was after. In this case I don't care about a nice diffed output, I want no output at all. I just want the name of the failed test (as specified by it(), etc.). I actually have an external tool to do diffs (by dumping all the Lua arrays to JSON, using a special sort algorithm, then comparing the trees). All I want from busted is a pass/fail list without all the noise about what it thinks is different. It actually seems like that should be an easy fix or even something already implemented that I'm just missing.
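If the goal is strictly a pass/fail list per test, one direction worth trying is a custom output handler passed via -o/--output. The sketch below follows the general shape of busted's bundled handlers (a base handler plus busted.subscribe); the exact event names and base-handler helpers can differ between busted versions, so treat every call here as an assumption to verify against the handlers that ship with your install. The file name names.lua is arbitrary.

```lua
-- names.lua -- hypothetical custom output handler: print only status + test name,
-- with no expected/actual dump. Modeled on the structure of busted's bundled
-- handlers (an assumption; check against the handlers in your busted version).
return function(options)
  local busted = require 'busted'
  local handler = require 'busted.outputHandlers.base'()

  handler.testEnd = function(element, parent, status)
    -- One line per test: its status and its full descriptive name.
    print(status .. '\t' .. handler.getFullName(element))
    return nil, true
  end

  busted.subscribe({ 'test', 'end' }, handler.testEnd)
  return handler
end
```

Assuming busted accepts a path to a handler file, it would be invoked as something like busted -o ./names.lua.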