Add DETR Example #47
base: main
Conversation
import util.misc as utils
from datasets.coco_eval import CocoEvaluator
from datasets.panoptic_eval import PanopticEvaluator
Where does the `datasets` package used here come from?
`sys.path.append("./detr")` is added to main.py, so the `datasets` package used here is `detr.datasets`.
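To illustrate the reply above, here is a minimal, self-contained sketch of how putting a directory on `sys.path` makes its top-level `datasets` package importable. The stub directory stands in for the vendored detr submodule and is an assumption, not the real repo; `insert(0, ...)` is used here (the PR itself uses `append`) only so the stub cannot be shadowed by an installed package of the same name.

```python
import importlib
import os
import sys
import tempfile

# Build a stand-in for the vendored detr submodule: a directory that
# contains a top-level "datasets" package (hypothetical stub).
root = tempfile.mkdtemp()
pkg_dir = os.path.join(root, "detr", "datasets")
os.makedirs(pkg_dir)
with open(os.path.join(pkg_dir, "__init__.py"), "w") as f:
    f.write("ORIGIN = 'detr/datasets'\n")

# main.py does sys.path.append("./detr"); insert(0, ...) here only
# guarantees the stub wins over any other installed "datasets" package.
sys.path.insert(0, os.path.join(root, "detr"))
importlib.invalidate_caches()

import datasets  # now resolves to <root>/detr/datasets

print(datasets.ORIGIN)
```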
@@ -202,3 +198,97 @@
    def forward(self, x_in):
        x_in = self.input_quantizer(x_in)
        out = F.hardsigmoid(x_in, inplace=self.inplace)
        return out


@register_qmodule(sources=[nn.MultiheadAttention])
- You should create a new file for the QMHSA.
- Suggested module name: transformer.py
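As context for this suggestion, a rough sketch of what a QMHSA class in its own transformer.py might look like. The `register_qmodule(sources=[nn.MultiheadAttention])` decorator appears in the diff, but its implementation is not shown, so the registry below is only a stand-in mimicking that interface, and the `QMultiheadAttention` skeleton is hypothetical.

```python
import torch
import torch.nn as nn

# Stand-in registry and decorator; the real register_qmodule from the
# library is not shown in the diff, so this only mimics sources=[...].
QMODULE_REGISTRY = {}

def register_qmodule(sources):
    def wrap(cls):
        for src in sources:
            QMODULE_REGISTRY[src] = cls
        return cls
    return wrap

@register_qmodule(sources=[nn.MultiheadAttention])
class QMultiheadAttention(nn.Module):
    """Hypothetical QMHSA skeleton, placed in its own transformer.py."""

    def __init__(self, org_module):
        super().__init__()
        self.mha = org_module
        # Placeholder for a real input quantizer on q/k/v.
        self.input_quantizer = nn.Identity()

    def forward(self, query, key, value):
        # Quantize q/k/v inputs before delegating to the float attention.
        query = self.input_quantizer(query)
        key = self.input_quantizer(key)
        value = self.input_quantizer(value)
        return self.mha(query, key, value)

# The registry now maps the float module type to its quantized wrapper.
print(QMODULE_REGISTRY[nn.MultiheadAttention].__name__)
```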
        return attn_output, attn_output_weights

    def prepare_input_quantizer(self, node, model):
        # only qkv should be quantized.
Is the matmul not quantized here?
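The reviewer's concern can be illustrated with a small sketch (`fake_quant` is a hypothetical per-tensor quantizer, not the library's): quantizing only the q/k/v inputs leaves the two matmuls inside attention, q @ k^T and attn @ v, consuming intermediates that were never passed through a quantizer.

```python
import torch

def fake_quant(x, bits=8):
    # Hypothetical symmetric per-tensor fake quantization, illustrative only.
    scale = x.abs().max() / (2 ** (bits - 1) - 1)
    if scale == 0:
        return x
    q = torch.clamp((x / scale).round(), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale

torch.manual_seed(0)
q = fake_quant(torch.randn(4, 8))   # quantized q/k/v inputs
k = fake_quant(torch.randn(4, 8))
v = fake_quant(torch.randn(4, 8))

# Attention still contains two matmuls whose operands are produced inside
# the block; a fully quantized MHSA needs quantizers at these points too:
scores = q @ k.t()                            # matmul 1: q @ k^T
attn = torch.softmax(scores / 8 ** 0.5, -1)   # softmax output is float again
out = fake_quant(attn) @ v                    # matmul 2: attn @ v
print(out.shape)
```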
DETR model from: https://github.com/facebookresearch/detr
Modifications to the original model: torch.fx doesn't support .to(device) calls in the forward function. (Details)
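The linked details are not reproduced here. As a hedged illustration only (the modified DETR code is not shown in this page), a common way to satisfy torch.fx is to keep forward() free of device logic, so symbolic tracing records a device-independent graph, and to move the model and its inputs to the target device outside the traced code:

```python
import torch
import torch.fx as fx

class DeviceAgnostic(torch.nn.Module):
    # forward() contains no .to(device) calls; the caller moves the
    # module and its inputs to the right device before tracing/inference.
    def forward(self, x):
        return x + 1

traced = fx.symbolic_trace(DeviceAgnostic())
print(traced(torch.ones(2)))
```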