
Question about the model parameters after quantization #9

Open
liwenwei123 opened this issue Feb 9, 2020 · 1 comment

Comments

liwenwei123 commented Feb 9, 2020

Dear Yury,
I have run the code successfully following the steps in the README. Thanks for your work! I have some questions about the model after quantization.
I tried to print the parameters after quantization with `qtype=int4` and `qweight=int8`, but the parameters appear to be float rather than int. For example:

'conv1.weight', Parameter containing:
tensor([[[[-2.4899e-03, -1.2449e-03,  0.0000e+00,  ...,  1.3694e-02,
            3.7348e-03, -2.4899e-03],
          [ 2.4899e-03,  2.4899e-03, -2.6144e-02,  ..., -6.3492e-02,
           -2.9879e-02,  1.2449e-03],
          [-1.2449e-03,  1.3694e-02,  6.8472e-02,  ...,  1.2076e-01,
            5.9758e-02,  1.4939e-02], ...

I also tried to save the model with `torch.save(self.model.state_dict(), 'resnet18_qm.pkl')`, but the file is the same size as the original pretrained ResNet-18 checkpoint. I expected it to be much smaller after quantization.
Are there any steps I missed, or have I misunderstood the code?
Thanks again, and looking forward to your reply!

Anna
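
The behavior described above is typical of simulated ("fake") quantization: weights are rounded onto an integer grid and then immediately mapped back to float32, so both the printed parameters and the saved `state_dict()` remain full-precision floats of the original size. Whether this particular repository works that way is an assumption; the sketch below (NumPy, with a hypothetical `fake_quantize` helper) only illustrates the general mechanism and how one would actually shrink the file:

```python
import numpy as np

def fake_quantize(w, num_bits=8):
    """Simulated quantization: snap to an int grid, then dequantize."""
    qmax = 2 ** (num_bits - 1) - 1               # 127 for int8
    scale = np.abs(w).max() / qmax               # per-tensor scale
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return (q * scale).astype(np.float32)        # back to float32

w = np.random.randn(16).astype(np.float32)
wq = fake_quantize(w)
print(wq.dtype)   # float32: values lie on the int8 grid but stay float

# To actually reduce the checkpoint size, store the integer codes
# plus the scale, instead of the dequantized float tensor:
scale = np.abs(w).max() / 127
codes = np.round(w / scale).astype(np.int8)      # 1 byte per weight
print(codes.nbytes, wq.nbytes)                   # 16 vs 64 bytes
```

Saving `codes` and `scale` (and dequantizing at load time) is what gives the roughly 4x size reduction one expects from int8; saving the fake-quantized float tensor does not.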

xieydd commented Feb 20, 2020

Why is the weight not int8? @liwenwei123
