Releases: boxine/pentf

2.0.0

26 Jan 10:24

All click functions have been reworked so that they simulate an actual mouse click. Elements that are hidden behind other elements are no longer clicked automatically (unless they can be scrolled into the viewport). This allows us to catch errors when input elements are overlaid by other elements that would block an actual user from interacting with them. This change may require adapting a few test cases.
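For instance, a test that clicks an element covered by an overlay may now need to dismiss the overlay or scroll the target into view first. A minimal Puppeteer-style sketch of such an adaptation (the selectors and the overlay-dismiss step are illustrative examples, not part of pentf's API):

```js
// Illustrative adaptation for the 2.0.0 click rework; the selectors
// ('.cookie-banner .close', '#submit') are hypothetical examples.
async function clickPossiblyCoveredButton(page) {
    // Dismiss an overlay that would intercept the click, if present
    const overlayClose = await page.$('.cookie-banner .close');
    if (overlayClose) {
        await overlayClose.click();
    }

    // Scroll the target into the viewport, then click it like a real user
    await page.evaluate(() => {
        document.querySelector('#submit').scrollIntoView();
    });
    await page.click('#submit');
}
```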

1.4.0

13 Sep 17:35

  • Add watch mode (--watch, -w) #245
  • Update usage section in README #256
  • Add support for --config FILE flag #255
  • Add support for loading pentf config from package.json #254 (see the sketch after this list)
  • Fix --ci not working in non-CI environments #257
  • Store fileName in test cases #253
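As an example of #254, a project could keep its pentf configuration in package.json. A minimal sketch, assuming a top-level "pentf" key and a concurrency option; both names are assumptions, not confirmed by these release notes:

```json
{
    "name": "my-project",
    "scripts": {
        "test": "pentf"
    },
    "pentf": {
        "concurrency": 4
    }
}
```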

1.3.8

03 Sep 09:22

  • Normalise file generation code (log files, result files) to always write to the current working directory by default

1.3.5

03 Sep 07:09

  • Add more verbose debug logging around rendering results #249

1.3.4

02 Sep 13:07

  • Add more verbose debug logging around result generation #247

1.3.3

25 Aug 11:16

  • Fix regression with module paths that was introduced in #239

1.3.2

25 Aug 10:00

  • Support full config object in pentf.main() #242 (see the sketch after this list)
  • Guard against "Execution Context destroyed" error #243
  • Export single type for suites #240
  • Fix missing newline in output #241
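For context on #242, pentf.main() is the programmatic entry point. A minimal sketch of a runner script, assuming the README-style rootDir and description options:

```js
// Hypothetical runner script; option names beyond rootDir and
// description are not shown because they are version-specific.
const pentf = require('pentf');

pentf.main({
    rootDir: __dirname,
    description: 'Tests of the example application',
});
```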

1.3.1

21 Aug 20:47

  • Fix require hooks failing on import() statement #239
  • Fix missing sub-module types #238

1.3.0

21 Aug 14:54

  • Fix tests that call expectedToFail as a function at runtime not being marked as expectedToFail (#237); see the sketch after this list
  • Add pentf cli binary (#203)
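A sketch of the kind of test affected by #237, where expectedToFail is computed at runtime rather than set as a boolean; the exact shape of a pentf test module is assumed here:

```js
// Hypothetical pentf test module; the property names and the
// config argument passed to expectedToFail are assumptions.
const assert = require('assert');

module.exports = {
    description: 'example of expectedToFail decided at runtime',
    // A predicate instead of a boolean: evaluated when the test runs,
    // which is the case whose marking #237 fixes
    expectedToFail: config => config.env === 'staging',
    run: async config => {
        assert.notStrictEqual(config.env, undefined);
    },
};
```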

1.2.0 - Flakiness detection

18 Aug 10:32

This release adds a new --repeat-flaky COUNT command line flag to detect flaky tests. It works by repeating any failing test case until it either passes or the repeat limit is reached. If the results differ between runs, the test is marked as flaky.

Pseudo examples with --repeat-flaky 3:

Test A -> error // failed, run test again
Test A 2 -> error // failed, run test again
Test A 3 -> success // result changed, this test must be flaky

Test B -> error // failed, run test again
Test B 2 -> success // result changed, this test must be flaky. No need to run again

Test C -> error // failed, run test again
Test C 2 -> error // failed, run test again
Test C 3 -> error // result consistent -> we have a real error

Test D -> success // Success, nothing to do here
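In code, the repeat logic could look roughly like this; a minimal sketch of the described behaviour, not pentf's actual implementation (runTest and the result strings are assumed names):

```js
// Sketch of the --repeat-flaky loop: rerun a failing test until it
// passes or the repeat limit is reached, then classify the outcome.
async function classifyWithRepeatFlaky(test, repeatFlaky, runTest) {
    const results = [];
    for (let run = 1; run <= repeatFlaky; run++) {
        const result = await runTest(test); // 'success' or 'error'
        results.push(result);
        if (result === 'success') break; // a passing run ends the repetition
    }
    const consistent = results.every(r => r === results[0]);
    if (consistent) {
        return results[0]; // all runs agree: real success or real error
    }
    return 'flaky'; // runs disagree: mark the test as flaky
}
```

With repeatFlaky = 3, the Test B case above returns 'flaky' after two runs, and the Test C case returns 'error' after three.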

Note that the total number of tasks in the runner status may increase when --repeat-flaky is set, because repeated runs count as additional tasks. The status displays the number of completed tasks in relation to the total:

10/100 done, 1 failed, 1 expected to fail, 3 flaky
^  ^         ^         ^                   ^
|  |         tests     tests               tests
|  total tasks
done tasks