add binary fully connected operator #53
Comments
This boils down to implementing a fast binary matrix-vector multiplication.
@rameshKrSah we haven't made any specific efforts towards implementing a binary dense layer. I think the most obvious and easiest way we could support binary dense layers would actually not involve adding a new op at all, but instead mapping Larq binary dense layers to an equivalent LCE 1x1 binary convolution. This would be an automated way of doing the penultimate bullet point from this page of our docs. This wouldn't be particularly fast (though it would hopefully be faster than float dense layers), because our optimised binary convolution kernels operate on 'chunks' of four input pixels at a time, whereas the 'equivalent' convolution here would have only one input pixel. This is, however, something that we could very easily solve once we switch over to using the (currently experimental) indirect bgemm kernels, by adding a micro-kernel that operates on one input pixel at a time.
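
For reference, here is a minimal Keras/Larq sketch of that dense-to-1x1-convolution mapping. The layer and quantizer arguments are illustrative assumptions, not the output of any converter: the point is only that a binary dense layer on a `(batch, N)` input is equivalent to a 1x1 binary convolution over a 1x1 "image" with N channels, once the dense kernel of shape `(N, K)` is copied into the conv kernel of shape `(1, 1, N, K)`.

```python
# Sketch only: expressing a binary dense layer as an "equivalent" 1x1 binary
# convolution over a single input pixel. Assumes larq and tensorflow are
# installed; the quantizer/constraint arguments are illustrative.
import tensorflow as tf
import larq as lq

N, K = 512, 256  # input features, output units (arbitrary example sizes)

# Binary dense layer on a flat (batch, N) input.
dense = tf.keras.Sequential([
    tf.keras.Input(shape=(N,)),
    lq.layers.QuantDense(
        K,
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
    ),
])

# The same computation as a 1x1 convolution over a 1x1 "image" with N
# channels, which is the form a binary convolution kernel can consume.
conv = tf.keras.Sequential([
    tf.keras.Input(shape=(N,)),
    tf.keras.layers.Reshape((1, 1, N)),
    lq.layers.QuantConv2D(
        K,
        kernel_size=1,
        input_quantizer="ste_sign",
        kernel_quantizer="ste_sign",
        kernel_constraint="weight_clip",
    ),
    tf.keras.layers.Reshape((K,)),
])

# Copy the dense weights into the convolution so both models match exactly.
kernel, bias = dense.layers[0].get_weights()
conv.layers[1].set_weights([kernel.reshape(1, 1, N, K), bias])
```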
Hi there @AdamHillier @Tombana @arashb @lgeiger,
It seems that the 1x1 binary convolution is 4x slower than its fully optimized version. Are there any guidelines or instructions on how to bridge this gap?
A binary fully connected operator is in essence a binary matrix-matrix multiplication (BGEMM). Assume that the input is M×N and the weight is N×K, where M is the batch size, N is the number of neurons of the previous layer, and K is the number of neurons of the current layer.
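
As a concrete illustration of what such a BGEMM computes, here is a minimal NumPy sketch of the XNOR-and-popcount trick, assuming {-1, +1} activations and weights. Optimised kernels bit-pack the operands and use hardware popcount instructions rather than this per-element loop; this is purely for exposition.

```python
# Sketch of the XNOR-popcount arithmetic behind a binary GEMM (NumPy, slow).
import numpy as np

def binary_gemm(A_signs, B_signs):
    """A_signs: (M, N) in {-1, +1}, B_signs: (N, K) in {-1, +1}.

    Returns the same result as A_signs @ B_signs, computed from the 0/1
    bit representation with XNOR + popcount.
    """
    M, N = A_signs.shape
    _, K = B_signs.shape
    # Encode +1 -> 1, -1 -> 0.
    A_bits = (A_signs > 0).astype(np.uint8)
    B_bits = (B_signs > 0).astype(np.uint8)
    out = np.empty((M, K), dtype=np.int32)
    for i in range(M):
        for j in range(K):
            # XNOR is 1 exactly where the two sign vectors agree.
            agree = np.count_nonzero(~(A_bits[i] ^ B_bits[:, j]) & 1)
            # dot = (#agree) - (#disagree) = 2 * #agree - N
            out[i, j] = 2 * agree - N
    return out

# Quick check against the floating-point reference.
rng = np.random.default_rng(0)
A = rng.choice([-1, 1], size=(4, 512)).astype(np.int32)
B = rng.choice([-1, 1], size=(512, 8)).astype(np.int32)
assert np.array_equal(binary_gemm(A, B), A @ B)
```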