
Input Quantization #6

Open
abhivandit opened this issue Jun 28, 2020 · 1 comment

Comments

@abhivandit
Hi,

I was looking at the networks in the binary models directory and at their usage and implementation in image_classification.py. I would like to know where exactly the input gets quantized. None of the networks seem to use the QActivation() function; only the weights seem to get quantized.

Thanks

@Jopyth
Collaborator

Jopyth commented Jun 28, 2020

Hi,

So far we have only correctly converted the following models (the others were mostly copied from Gluon but not correctly adapted, as you noted; we should probably (re)move them):

  • resnet.py
  • resnet_e.py
  • densenet.py
  • meliusnet.py
  • naivenet.py

These models are binarized (both weights and activations) with the nn.activated_conv function. This function adds both QActivation and QConv layers, e.g. by adding a BinaryConvolution.
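To make the "both weights and activations" point concrete, here is a minimal NumPy stand-in for what a QActivation-style sign quantizer does conceptually. This is a sketch of the idea only, not the library's actual implementation; the function name `sign_quantize` and the zero-maps-to-+1 convention are assumptions for illustration.

```python
import numpy as np

def sign_quantize(x):
    """NumPy stand-in for a QActivation-style sign quantizer:
    every activation is mapped to {-1, +1} (zero mapped to +1
    by a common convention)."""
    return np.where(x >= 0, 1.0, -1.0)

acts = np.array([0.31, -1.2, 0.0, 2.7])
print(sign_quantize(acts))  # [ 1. -1.  1.  1.]
```

In the binarized models, a quantization step like this runs on the inputs of each binary convolution, so the convolution sees only ±1 activations as well as ±1 weights.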

If you want to train one of the other models as a binary one (which we have not done so far), you currently need to adapt them accordingly (replace the QConv layers with nn.activated_conv).
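The difference that adaptation makes can be sketched with a toy NumPy example, assuming a dot product as a stand-in for the convolution. The function names `qconv_weight_only` and `activated_conv_block` are hypothetical; they only contrast weight-only quantization with the full QActivation-then-QConv pattern described above.

```python
import numpy as np

def binarize(x):
    # Sign binarization to {-1, +1}; zero maps to +1 by convention.
    return np.where(x >= 0, 1.0, -1.0)

def qconv_weight_only(x, w):
    """Weight-only quantization (the un-adapted models): real-valued
    inputs reach the convolution; only the weights are binarized."""
    return x @ binarize(w)

def activated_conv_block(x, w):
    """Fully binarized block (the activated_conv pattern): the input
    is passed through a sign quantizer before the binarized conv."""
    return binarize(x) @ binarize(w)

x = np.array([[0.3, -1.2, 0.7]])
w = np.array([[0.5], [-0.1], [2.0]])
print(qconv_weight_only(x, w))     # [[2.2]] -- real-valued inputs leak in
print(activated_conv_block(x, w))  # [[3.]]  -- every product is +/-1 * +/-1
```

The two outputs differ because the first variant still multiplies real-valued activations against binary weights, which is exactly the gap the question points out in the un-adapted models.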
