Missing attention mask #158

Open
LaurentMazare opened this issue Nov 21, 2024 · 0 comments
Labels
bug Something isn't working

Comments

@LaurentMazare
Member

Backend impacted

The MLX implementation

Operating system

Linux

Hardware

Metal with MLX

Description

There is an issue with the current MLX transformer implementation: the attention mask used in the transformer here turns out never to be set.
This does not break the current inference setup, since only the big transformer and the depformer use this implementation (the codec does not), and in both cases data is processed one step at a time, so no mask is needed; see the sketch below.
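To illustrate why the unset mask is harmless for single-step decoding, here is a minimal sketch of scaled-dot-product attention with an explicit causal mask. This is not the actual moshi code; the function names and shapes are hypothetical, assuming a single head and a standard additive `-inf` mask:

```python
import math

import mlx.core as mx


def causal_mask(t: int) -> mx.array:
    # Upper-triangular -inf mask: position i cannot attend to j > i.
    return mx.triu(mx.full((t, t), float("-inf")), k=1)


def attention(q: mx.array, k: mx.array, v: mx.array, mask=None) -> mx.array:
    # q: (t_q, d), k/v: (t_kv, d) for a single head.
    scores = (q @ k.T) / math.sqrt(q.shape[-1])
    if mask is not None:
        scores = scores + mask
    return mx.softmax(scores, axis=-1) @ v


t, d = 4, 8
q = mx.random.normal((t, d))
k = mx.random.normal((t, d))
v = mx.random.normal((t, d))

# Processing all t positions in one call: the causal mask is required,
# otherwise every position would attend to future tokens.
out_batch = attention(q, k, v, mask=causal_mask(t))

# Processing one step at a time against cached keys/values: each
# single-row query only ever sees past positions, so no mask is needed.
out_step = mx.concatenate(
    [attention(q[i : i + 1], k[: i + 1], v[: i + 1]) for i in range(t)]
)

assert mx.allclose(out_batch, out_step).item()
```

In the step-by-step regime (the second path above) the mask being unset changes nothing, which matches the current inference setup; a multi-token call through the same code (e.g. batched prompt processing) would, however, require the mask to actually be set.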

Extra information

.

Environment

.

@LaurentMazare LaurentMazare added the bug Something isn't working label Nov 21, 2024