This proposal comes after reading the article On the “usefulness” of the Netflix Prize. The article highlights that complicated ensemble methods may not be suitable for production-ready applications. We experienced a similar situation with the Digital Mammography DREAM Challenge, where the final method was an ensemble of 11 tools/docker images. This strategy is adopted in most DREAM challenges, where the final published model is an ensemble of the best-performing models submitted during the competitive or collaborative phase.
There are two bits of information that we may want to capture, possibly as properties of the Tool schemas (see the sketch after this list):

1. Whether the "tool" (the submitted docker image) is an ensemble of the outputs of multiple algorithms.
   - This could also flag tools that take a long time to run, though we will ultimately capture and report tool runtime in the future.
2. Whether the submitted tool can be trained, re-trained and/or fine-tuned.
   - We should distinguish between a tool that has not yet been trained and must be trained before it can be used for inference, and a tool that has already been trained and can be further re-trained or fine-tuned, for example periodically on new data.
   - Enabling submitted tools to train on private data from data sites should be possible before the end of 2021.
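As a rough illustration only (not the actual NLP Sandbox schema), the two properties could be expressed along the following lines in an OpenAPI component. The property names `isEnsemble` and `trainingState`, as well as the enum values, are assumptions made for the sake of the example.

```yaml
# Hypothetical sketch of additions to the Tool schema; names and values are illustrative.
components:
  schemas:
    Tool:
      type: object
      properties:
        isEnsemble:
          type: boolean
          description: >
            True if the submitted docker image combines the outputs of multiple
            algorithms (ensemble). May also hint at a longer runtime.
        trainingState:
          type: string
          description: >
            Training state of the tool. "untrained" tools must be trained before
            they can be used for inference; "pretrained" tools can be used as-is.
          enum:
            - untrained
            - pretrained
        isRetrainable:
          type: boolean
          description: >
            True if the tool supports re-training or fine-tuning, for example
            periodically on new data from data sites.
```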
I'm wondering if there are existing schemas that describe the "trainability"/training state of a model that we could reuse for the NLP Sandbox Tool schema.