Hi,
first off: thanks for the concise and easy to follow implementation, and congrats on the work building on it, I really enjoyed it. :)
In util.py you write:
Line 49 in 2a2b4e4:
```python
contentConv = torch.mm(cF,cF.t()).div(cFSize[1]-1) + torch.eye(cFSize[0]).double()
```
Line 61 in 2a2b4e4:
```python
styleConv = torch.mm(sF,sF.t()).div(sFSize[1]-1)
```
First: why not regularize both computations? I have had the unregularized one fail on some occasions.
Second: the regularization term eye() is YUGE compared to the normalized covariance matrix. I am more used to seeing eye() * 1e-6 or the like for regularization, and changing the term to that actually makes a noticeable difference in the stylization outcome.
Since we're talking about artistic style transfer here, it's hard to judge which version is better, but on principle, the smaller the regularization, the closer we are to the actual whitening/coloring transform, no?
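For concreteness, here is a minimal sketch of what I have in mind, assuming the feature matrices have the same (channels x pixels) layout as cF/sF in util.py; the helper name, the eps value, and applying the same small regularizer to both the content and style covariances are my suggestions, not the repo's current code:

```python
import torch

def regularized_covariance(feat, eps=1e-6):
    """Covariance of a (C x N) feature matrix with a tiny diagonal term.

    eps * I keeps the matrix positive definite for the eigendecomposition
    in the whitening/coloring transform without drowning out the actual
    covariance values, unlike adding a full identity matrix.
    """
    C, N = feat.size()
    feat = feat - feat.mean(dim=1, keepdim=True)       # center each channel
    cov = torch.mm(feat, feat.t()).div(N - 1)          # (C x C) sample covariance
    return cov + eps * torch.eye(C, dtype=feat.dtype)  # small regularizer

# Hypothetical usage for both branches:
# contentConv = regularized_covariance(cF)
# styleConv   = regularized_covariance(sF)
```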