Current state
Development of new methods and classes is normally done whilst solving a specific problem, and the code is only added to the code-base once it works. Still, later changes and deprecations may cause parts of the code to begin to fail, or the code may not work in all cases (e.g. edge cases exist that are not accounted for).
To help protect against this, the examples are designed to exercise as much of the code-base as possible in realistic scenarios. They then function as tests: they are run (at least) prior to the release of a new version, and any errors can be fixed.
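One way to run a notebook example end-to-end as a pass/fail check is sketched below, using nbformat and nbclient. The notebook path is a hypothetical placeholder, and this is not necessarily how the examples are currently executed:

```python
# Minimal sketch: execute an example notebook end-to-end and fail on any cell error.
# Requires the nbformat and nbclient packages; the notebook path is hypothetical.
import nbformat
from nbclient import NotebookClient


def run_example(path: str) -> None:
    """Execute every cell of the notebook at `path`, raising on the first failing cell."""
    nb = nbformat.read(path, as_version=4)
    NotebookClient(nb, timeout=600).execute()


if __name__ == "__main__":
    run_example("examples/basic_usage.ipynb")  # hypothetical example notebook
```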
Concerns
The examples can be quite slow to run and, being Jupyter Notebooks, may be difficult to run in an automated fashion, with limited feedback
Full unit-test coverage of all methods and classes might be difficult because of the extensive mocking required, and such tests may not represent realistic usage or capture the interdependence of functions
My experience with unit testing, though, amounts only to a 3-month industrial secondment, i.e. it is not extensive. Approaches may well exist that better capture interdependence.
If examples are used as tests, then as the code-base grows, so must the range of examples
Examples by their nature will focus only on common application cases - they may miss edge cases
Whilst code may run correctly, some changes may lead to slow-down. This can be difficult to spot without continual monitoring of timing on a fixed task (one possible approach is sketched after this list)
Similarly, code may run correctly but changes may lead to a loss of performance. This too is difficult to spot without continual monitoring of performance on a fixed task
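As a minimal sketch of the kind of monitoring meant here, a fixed task could be timed and compared against a baseline stored in the repository. The baseline file name and the body of fixed_task are hypothetical stand-ins for a real workload from the code-base:

```python
# Minimal sketch of timing-regression monitoring on a fixed task, assuming a JSON file
# of baseline timings is kept in the repository. `fixed_task` is a hypothetical stand-in
# for a representative workload from the code-base.
import json
import time


def fixed_task() -> None:
    """Hypothetical representative workload; replace with a real call into the code-base."""
    sum(i * i for i in range(1_000_000))


def test_fixed_task_timing(baseline_file: str = "timing_baseline.json",
                           tolerance: float = 1.5) -> None:
    start = time.perf_counter()
    fixed_task()
    elapsed = time.perf_counter() - start

    with open(baseline_file) as f:
        baseline = json.load(f)["fixed_task"]

    # Fail if the task has become substantially slower than the recorded baseline.
    assert elapsed <= baseline * tolerance, (
        f"fixed_task took {elapsed:.3f}s, baseline is {baseline:.3f}s"
    )
```

Regenerating the baseline file on a known-good commit would then let CI flag slow-downs automatically; the same pattern could track a performance metric instead of wall-clock time.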
Proposals
Create test versions (as .py scripts) of the examples which step through each stage of the code; these are then used as continuous integration tests (see the sketch after this list)
Allows for better coverage of functions and edge cases
Timing and performance of each function can be recorded to check for slow-down and degradation in the code-base
Faster feedback on breaking changes, rather than just prior to deployment
Check how other frameworks, like FastAI, approach testing
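A rough sketch of what such a .py test version might look like, with each stage of an example turned into a pytest test and per-stage timing reported. All stage functions below are hypothetical stubs standing in for real calls into the code-base:

```python
# Minimal sketch of a .py "test version" of an example: each stage of the example becomes
# a pytest test, with per-stage timing recorded. The stage functions are hypothetical stubs.
import time

import pytest


def load_data():
    """Hypothetical stand-in for the example's data-loading stage."""
    return list(range(1000))


def build_model(data):
    """Hypothetical stand-in for the example's model-construction stage."""
    return {"n": len(data)}


def evaluate(model, data):
    """Hypothetical stand-in for the example's evaluation stage."""
    return 0.9


def timed(name, fn, *args):
    """Run fn(*args), report how long it took, and return its result."""
    start = time.perf_counter()
    result = fn(*args)
    print(f"{name}: {time.perf_counter() - start:.3f}s")
    return result


@pytest.fixture(scope="module")
def data():
    return timed("load_data", load_data)


def test_model_builds(data):
    model = timed("build_model", build_model, data)
    assert model["n"] == len(data)


def test_performance_threshold(data):
    model = build_model(data)
    score = timed("evaluate", evaluate, model, data)
    assert score >= 0.8  # hypothetical threshold for catching performance degradation
```

Running with pytest -s would print the per-stage timings; in CI these numbers could instead be written to a file and compared against a stored baseline, as in the timing sketch above.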