I think the difficulty with chaining WNNs is that a WNN essentially memorizes the patterns seen during training and therefore always achieves a perfect score on the training set (unless the same pattern appears in multiple classes, which means the image is ambiguous).
One way around that could be:

1. Split the training set into two sets, ts1 and ts2.
2. Train a WNN on ts1 -> it will achieve perfect scores on ts1, but not on ts2.
3. Train a second WNN on ts2, taking the outputs of the first (and maybe the image again?) as input.

This way, the second WNN could learn to correct typical mistakes of the first WNN.
Explore what this could mean and how it might work, and maybe try it out. For example, chain small WNNs together.
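A minimal sketch of what such a chain could look like, assuming a tiny WiSARD-style WNN (RAM nodes addressed by random n-tuples of input bits). All names here (`WNN`, `chain_features`) are illustrative, not from an existing library; the second stage sees the raw bits plus a unary (thermometer) encoding of the first stage's per-class scores:

```python
import random


class WNN:
    """Minimal WiSARD-style weightless NN (illustrative sketch).

    Each class has one RAM node per random n-tuple of input bit
    positions; training stores the addresses seen, scoring counts
    how many RAMs recognize the input's addresses."""

    def __init__(self, input_len, n_classes, tuple_size=4, seed=0):
        rng = random.Random(seed)
        order = list(range(input_len))
        rng.shuffle(order)
        # Partition the shuffled bit positions into n-tuples.
        self.tuples = [order[i:i + tuple_size]
                       for i in range(0, input_len, tuple_size)]
        # One set ("RAM") per tuple, per class.
        self.rams = [[set() for _ in self.tuples] for _ in range(n_classes)]

    def _addresses(self, bits):
        return [tuple(bits[j] for j in t) for t in self.tuples]

    def train(self, bits, label):
        for ram, addr in zip(self.rams[label], self._addresses(bits)):
            ram.add(addr)

    def scores(self, bits):
        addrs = self._addresses(bits)
        return [sum(addr in ram for ram, addr in zip(class_rams, addrs))
                for class_rams in self.rams]

    def predict(self, bits):
        s = self.scores(bits)
        return max(range(len(s)), key=s.__getitem__)


def chain_features(bits, first_scores, n_rams):
    """Raw input bits + thermometer encoding of stage-1 class scores."""
    encoded = []
    for s in first_scores:
        encoded += [1] * s + [0] * (n_rams - s)
    return list(bits) + encoded


# Toy usage: stage 1 trained on ts1, stage 2 trained on ts2 using
# the chained features (stage-1 outputs plus the image again).
wnn1 = WNN(input_len=8, n_classes=2, tuple_size=4, seed=0)
ts1 = [([1, 1, 1, 1, 0, 0, 0, 0], 0), ([0, 0, 0, 0, 1, 1, 1, 1], 1)]
for bits, label in ts1:
    wnn1.train(bits, label)

n_rams = len(wnn1.tuples)
wnn2 = WNN(input_len=8 + 2 * n_rams, n_classes=2, tuple_size=4, seed=1)
ts2 = ts1  # placeholder; in practice a disjoint split of the training set
for bits, label in ts2:
    wnn2.train(chain_features(bits, wnn1.scores(bits), n_rams), label)
```

Since stage 2 sees both the stage-1 scores and the original bits, it can in principle learn that a particular score pattern on a particular kind of image tends to be wrong, which is exactly the "correct typical mistakes" idea above.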