I was looking at the networks in the binary models directory, and at their usage and implementation in image_classification.py. I would like to know where exactly the input gets quantized. None of the networks seem to use the QActivation() function; only the weights seem to get quantized.
Thanks
So far we have only correctly converted the following models (the others were mostly copied from Gluon but not correctly adapted, as you noted - we should probably move or remove them):
resnet.py
resnet_e.py
densenet.py
meliusnet.py
naivenet.py
These models are binarized (both weights and activations) with the nn.activated_conv function, e.g. here. This function adds both a QActivation and a QConv layer, e.g. by adding a BinaryConvolution.
If you want to train one of the other models as a binary one (which we have not done so far), you would currently need to adapt it accordingly (replace the QConv with nn.activated_conv).
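To make the weight-plus-activation binarization concrete, here is a minimal NumPy sketch of the idea behind an "activated conv": the input activations are sign-binarized first (the QActivation step), then convolved with sign-binarized weights (the QConv/BinaryConvolution step). The function names and the 1-D shape are illustrative assumptions, not the actual BMXNet API.

```python
import numpy as np

def binarize(x):
    # Sign binarization to {-1, +1}; zero maps to +1 (conventions vary).
    return np.where(x >= 0, 1.0, -1.0)

def activated_conv(x, w):
    # Hypothetical stand-in for nn.activated_conv:
    # 1. binarize the input activations (QActivation step),
    # 2. binarize the weights and convolve (QConv step).
    xb = binarize(x)
    wb = binarize(w)
    k = len(wb)
    # 'valid' 1-D cross-correlation for simplicity.
    return np.array([np.dot(xb[i:i + k], wb) for i in range(len(xb) - k + 1)])

x = np.array([0.3, -1.2, 0.7, 2.0, -0.1])
w = np.array([0.5, -0.8, 1.1])
print(activated_conv(x, w))  # every output is a dot product of +/-1 vectors
```

A model that only wraps its weights in a quantized convolution performs the first step on weights alone; binarizing the inputs as well is what distinguishes the correctly converted models above.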