I don't think we should add more data to the benchmark, because it will invalidate any old results from before we added the data. After changing the benchmark data we would need to clear the scoreboard.
> should [not] add more data to the benchmark, because it will invalidate any old results from before we added the data.
Well, I would say the point of NAB is to offer a framework for "anomaly detection benchmarks for algorithms on time-series datasets (with a focus on HTM)".
I have some remarks about the data currently provided:

- human annotations (can be prone to error)
- no synthetic datasets for benchmarking behavior under controlled conditions (e.g. the boosting effect on a "flat line"; see the sketch after this list)
- no multi-modal datasets (the advantage of HTM over (simple) threshold-based approaches is the detection of irregularities in co-occurring patterns)
- add more well-established AD datasets (e.g. the ECG data from PhysioNet)
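
To illustrate the "controlled conditions" point, here is a minimal sketch (not part of NAB) that writes a synthetic flat-line series in NAB's `timestamp,value` CSV layout with one injected level shift. The file name, length, and anomaly position are arbitrary choices, and the matching ground-truth timestamp would still have to be added to the label files.

```python
# Minimal sketch: a synthetic "flat line" file in NAB's timestamp,value
# CSV layout (5-minute intervals) with one injected level shift, so
# detector behavior can be inspected under controlled conditions.
# File name, length, and anomaly position are arbitrary choices.
import csv
from datetime import datetime, timedelta

START = datetime(2015, 1, 1)
N_POINTS = 4000          # roughly two weeks at 5-minute resolution
ANOMALY_AT = 3000        # index where the level shift is injected

with open("synthetic_flatline.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["timestamp", "value"])
    for i in range(N_POINTS):
        ts = START + timedelta(minutes=5 * i)
        value = 10.0 if i < ANOMALY_AT else 25.0   # flat line, then a step
        writer.writerow([ts.strftime("%Y-%m-%d %H:%M:%S"), value])
```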
> After changing the benchmark data we would need to clear the scoreboard.
Yes, but there are these options:

- tagged versions: even Numenta suggests keeping NAB tagged, so results stay reproducible (see the manifest sketch after this list)
- should that be a problem, would it make sense to separate NAB+detectors and "datasets+results+scoreboard"? It could be a sub-repo of NAB, and even Numenta could share it. Users could then easily run any version they want
- we can re-run the algorithms on the new datasets and update the overall results
- that's what we want: to keep an updated, overall comparison between the detectors
- for detectors that are not reproducible (not OSS, or not runnable by us), I'd say scratch them; not being able to reproduce the results renders them untrustworthy
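
One way to make it explicit which dataset version a scoreboard entry belongs to (for the tagged-versions / separate-repo options) is to publish a checksum manifest next to the results. A rough sketch, assuming the datasets live under a `data/` directory; the paths and output file name are illustrative only:

```python
# Rough sketch: build a SHA-256 manifest of all CSV files under data/,
# so every published result can record exactly which dataset revision
# it was produced against. Directory and output name are illustrative.
import hashlib
import json
from pathlib import Path

def dataset_manifest(data_dir="data"):
    manifest = {}
    for csv_path in sorted(Path(data_dir).rglob("*.csv")):
        manifest[str(csv_path)] = hashlib.sha256(csv_path.read_bytes()).hexdigest()
    return manifest

if __name__ == "__main__":
    with open("dataset_manifest.json", "w") as f:
        json.dump(dataset_manifest(), f, indent=2, sort_keys=True)
```

Results produced against a different manifest (or a different git tag) would then simply not be mixed on the same scoreboard.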
TL;DR: Suggested approaches:
- keep NAB git-tagged
- separate NAB-datasets repo
- just update the results with all detectors re-run on the new datasets (sketched below)
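
The "re-run everything" option is mechanical; a sketch along these lines, assuming NAB's documented `run.py` interface (`-d`, `--detect`, `--optimize`, `--score`, `--normalize`) and an illustrative detector list limited to what is reproducible locally:

```python
# Sketch: re-run a set of detectors against the updated data and
# regenerate the scores via NAB's run.py. The detector list is
# illustrative; only detectors that are OSS and runnable locally
# should be included.
import subprocess
import sys

DETECTORS = ["numenta", "htmcore", "null", "random"]   # illustrative selection

for detector in DETECTORS:
    cmd = [
        sys.executable, "run.py",
        "-d", detector,
        "--detect", "--optimize", "--score", "--normalize",
    ]
    print("Running:", " ".join(cmd))
    subprocess.run(cmd, check=True)
```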
If it's not already provided(?): htmcore detector results on (some) datasets.