
MLA #789

Open

quic-mamta wants to merge 8 commits into main from mla_fusion

Conversation

@quic-mamta
Contributor

  • Caching compressed KV
  • Online/offline MLA K/Q up-projection absorption (see the sketch below)
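
A minimal sketch of the K/Q up-projection absorption idea, with hypothetical per-head weight shapes and names (none taken from this PR): folding the key up-projection into the query side lets attention scores be computed directly against the cached compressed KV latent, so only that latent needs caching.

import torch

def absorb_k_up_projection(w_uq: torch.Tensor, w_uk: torch.Tensor) -> torch.Tensor:
    # q.k = (W_UQ c_q).(W_UK c_kv) = c_q^T (W_UQ^T W_UK) c_kv, so the key
    # up-projection can be folded into the query side ahead of time.
    # w_uq: (num_heads, head_dim, q_rank), w_uk: (num_heads, head_dim, kv_rank)
    return torch.einsum("hdq,hdk->hqk", w_uq, w_uk)

# Toy usage: score one query token against 16 cached compressed KV latents.
num_heads, head_dim, q_rank, kv_rank = 4, 64, 192, 128
w_uq = torch.randn(num_heads, head_dim, q_rank)
w_uk = torch.randn(num_heads, head_dim, kv_rank)
absorbed = absorb_k_up_projection(w_uq, w_uk)       # (num_heads, q_rank, kv_rank)

c_q = torch.randn(1, q_rank)                        # compressed query latent
c_kv_cache = torch.randn(16, kv_rank)               # cached compressed KV latents
scores_head0 = (c_q @ absorbed[0]) @ c_kv_cache.T   # (1, 16) attention logits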

The export hash needs to be different for different MLA absorption configs; this needs to be fixed.
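
One way to make the export hash depend on the absorption config is to fold that setting into the parameters fed to the hash. A hedged sketch with illustrative names (export_hash and its inputs are hypothetical, not this library's actual helper):

import hashlib
import json

def export_hash(export_params: dict, enable_mla_absorption: bool) -> str:
    # Include the MLA absorption setting so exports with different
    # absorption configs land in different cache entries.
    params = dict(export_params, enable_mla_absorption=enable_mla_absorption)
    blob = json.dumps(params, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]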

ochougul and others added 6 commits January 28, 2026 07:37
Signed-off-by: Onkar Chougule <ochougul@qti.qualcomm.com>
Signed-off-by: Onkar Chougule <ochougul@qti.qualcomm.com>
Signed-off-by: Onkar Chougule <ochougul@qti.qualcomm.com>
Signed-off-by: Onkar Chougule <ochougul@qti.qualcomm.com>
…e sorted

Signed-off-by: Onkar Chougule <ochougul@qti.qualcomm.com>
Signed-off-by: Mamta Singh <mamtsing@qti.qualcomm.com>
Signed-off-by: Mamta Singh <mamtsing@qti.qualcomm.com>
Signed-off-by: Mamta Singh <mamtsing@qti.qualcomm.com>
enable_chunking = kwargs.get("enable_chunking", False)

# TODO: HACK handle better
if enable_mla := kwargs.get("enable_mla", False):

Why do we need this boolean in kwargs?
If the model has MLA, it should just be enabled.
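
A minimal sketch of this suggestion, assuming a DeepSeek-style config that exposes kv_lora_rank when the model uses MLA (the attribute name is an assumption, not taken from this PR):

def model_uses_mla(config) -> bool:
    # MLA models expose a compressed-KV rank in their config; its absence
    # (or a None value) means plain multi-head attention.
    return getattr(config, "kv_lora_rank", None) is not None

# Usage (model being the loaded Hugging Face model), instead of the kwarg:
# enable_mla = model_uses_mla(model.config)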

