However, some things are handled slightly differently in the two classes and need to be addressed before we can attempt merging them:
dropout: The single row predictor object applies it directly to the network upon initialization, whereas in the current Model object dropout is handled by the sparsechem predict function. The question here is: if we apply dropout directly to the network upon initialization, does applying it a second time in the sparsechem predict function have some unwanted effect? If not, we could go ahead and simply apply it at the initialization stage.
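A minimal sketch of why double handling may be harmless, assuming the network uses standard PyTorch `nn.Dropout` layers (not a claim about the actual sparsechem internals): dropout modules become the identity in eval mode, so as long as the module is switched to `eval()` before inference, dropout configured at construction time is not "applied twice".

```python
import torch
import torch.nn as nn

# Hypothetical network with dropout configured once, at construction.
net = nn.Sequential(nn.Linear(4, 4), nn.Dropout(p=0.5))

# In eval mode nn.Dropout is a no-op, so a predict() path that also
# touches dropout cannot stack random masking on top of it.
net.eval()

x = torch.ones(1, 4)
with torch.no_grad():
    y1 = net(x)
    y2 = net(x)

# Two forward passes agree exactly: no random masking is active.
print(torch.equal(y1, y2))
```

If, on the other hand, the predict function rescales activations itself (inverted-dropout style), applying the scaling twice would bias the outputs, so this is worth verifying against the actual sparsechem code.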
The single row predictor object doesn't support catalogue heads. But in your case the catalogue head mapping y_cat_columns also needs to be provided externally, so the solution could be to do the same for the single row predictor. One could possibly create a function that extracts this mapping from a T8c file.
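A sketch of what such an extraction helper could look like. The function name and the T8c column names used here (`catalog_id`, `cont_classification_task_id`) are assumptions for illustration, not the real schema; they would need to be adjusted to the actual T8c layout.

```python
import pandas as pd

def extract_y_cat_columns(t8c: pd.DataFrame) -> list:
    # Keep only tasks that belong to a catalogue head and return their
    # continuous task indices in order. Column names are assumed, not
    # the verified T8c schema.
    cat_tasks = t8c[t8c["catalog_id"].notna()]
    return cat_tasks["cont_classification_task_id"].astype(int).tolist()

# Tiny illustrative T8c-like table (schema assumed):
t8c = pd.DataFrame({
    "cont_classification_task_id": [0, 1, 2, 3],
    "catalog_id": [None, "CAT-A", None, "CAT-B"],
})
print(extract_y_cat_columns(t8c))  # [1, 3]
```

The same helper could wrap `pd.read_csv` if the T8c is read from disk; the point is only that the mapping could be derived once at load time rather than passed in by hand.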
Both classes memorize the statistics for inverse normalization in slightly different ways, especially with respect to when they are converted to a numpy array (immediately upon model initialization in the single row predictor class, versus upon calling predict in the Model class). To keep the prediction path of the single row predictor as lean as possible, I would prefer doing everything that can be done upon initialization and model loading at that stage, including computing the standard deviation from the variance.
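A minimal sketch of the preferred split, with all names (`SingleRowPredictor`, `inverse_normalize`) hypothetical: the numpy conversion and the variance-to-std computation happen once in `__init__`, so `predict`-time work is just a multiply and an add.

```python
import numpy as np

class SingleRowPredictor:
    """Sketch only: statistics are converted to numpy and the standard
    deviation is precomputed at load time, not at predict time."""

    def __init__(self, mean, var):
        # Done once, at initialization / model loading.
        self.mean = np.asarray(mean, dtype=np.float64)
        self.std = np.sqrt(np.asarray(var, dtype=np.float64))

    def inverse_normalize(self, y_norm):
        # Cheap per-prediction inverse of (y - mean) / std.
        return np.asarray(y_norm, dtype=np.float64) * self.std + self.mean

p = SingleRowPredictor(mean=[1.0, 2.0], var=[4.0, 9.0])
print(p.inverse_normalize([0.5, 1.0]))  # [2. 5.]
```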
Cf #20 (comment)
We could merge these two classes.