
[QEff.finetune] Integrated test for HF Trainer #800

Open

tchawada wants to merge 38 commits into quic:ft_experimental from tchawada:ft_integrated

Conversation

@tchawada
Contributor

Added an end-to-end integrated test for the HF Trainer stack.

ochougul and others added 30 commits January 6, 2026 14:57
Carry over patch quic#693

Signed-off-by: Onkar Chougule <ochougul@qti.qualcomm.com>
Signed-off-by: Mohit Soni <mohisoni@qti.qualcomm.com>
Signed-off-by: vtirumal <vtirumal@qti.qualcomm.com>
Co-authored-by: Mohit Soni <mohisoni@qti.qualcomm.com>
Co-authored-by: vtirumal <vtirumal@qti.qualcomm.com>
Signed-off-by: Vahid Janfaza <vjanfaza@qti.qualcomm.com>
Updated the README and the custom script with 2-layer instructions for Wan

Signed-off-by: vtirumal <vtirumal@qti.qualcomm.com>
Added step-wise instructions for multi-node finetuning.

---------

Signed-off-by: Ann Kuruvilla <akuruvil@qti.qualcomm.com>
Add support for multi-node Distributed Data Parallel (DDP) training to
the QEfficient finetuning pipeline. This enables scaling training across
multiple nodes while keeping the existing single-node behavior
unchanged.

Commands for DDP across 2 servers:

For the master node (primary machine), use node-rank 0:

QAIC_VISIBLE_DEVICES=0,1,2,3 torchrun --nnodes=2 --nproc-per-node=4 --node-rank=0 \
  --master_addr=<MASTER_NODE_IP> --master_port=8000 \
  -m QEfficient.cloud.finetune --device qaic --enable_ddp \
  --model_name "meta-llama/Llama-3.2-1B" --dataset alpaca_dataset \
  --train_batch_size 1 --val_batch_size 1 --num_epochs 1 \
  --max_train_step 200 --max_eval_step 50 --seed 0

For node 1, use node-rank 1:

QAIC_VISIBLE_DEVICES=0,1,2,3 torchrun --nnodes=2 --nproc-per-node=4 --node-rank=1 \
  --master_addr=<MASTER_NODE_IP> --master_port=8000 \
  -m QEfficient.cloud.finetune --device qaic --enable_ddp \
  --model_name "meta-llama/Llama-3.2-1B" --dataset alpaca_dataset \
  --train_batch_size 1 --val_batch_size 1 --num_epochs 1 \
  --max_train_step 200 --max_eval_step 50 --seed 0

---------

Signed-off-by: Sharvari Medhe <smedhe@qti.qualcomm.com>
QEfficient should skip adding `-mdp-load-partition-config` when `-mdp-dump-partition-config` is provided in the compiler_options of the compile API.
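For illustration, a minimal sketch of what this looks like from the user side. The model name and the exact kwarg spelling for the dump flag are assumptions, not verified API:

```python
# Hedged sketch: the user requests a partition-config dump via compiler_options.
# After this change, QEfficient should not also inject -mdp-load-partition-config.
from QEfficient import QEFFAutoModelForCausalLM

model = QEFFAutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")
qpc_path = model.compile(
    num_cores=16,
    num_devices=4,
    mdp_dump_partition_config="./mdp_partition_config.json",  # assumed kwarg spelling
)
print(qpc_path)
```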

---------

Signed-off-by: Asmita Goswami <asmigosw@qti.qualcomm.com>
Signed-off-by: Ann Kuruvilla <quic_akuruvil@quicinc.com>
Handled the edge case where the number of samples in a dataset is less than 20.
Corrected the dataset link in grammar_dataset.py.

Signed-off-by: Sharvari Medhe <smedhe@qti.qualcomm.com>
Since CCL is deactivated by default, the CCL lists (ccl_prefill and ccl_decode) should default to None. In the infer.py script these lists did not default to None, which caused CCL to be activated by default. This PR addresses that issue.
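A minimal sketch of the intended defaults, assuming the CCL lists are exposed as argparse options in infer.py (flag names and element types below mirror the description and are assumptions):

```python
import argparse

parser = argparse.ArgumentParser()
# CCL stays deactivated unless the user passes these explicitly, so the
# defaults must be None rather than an empty or pre-populated list.
parser.add_argument("--ccl_prefill", type=int, nargs="+", default=None)
parser.add_argument("--ccl_decode", type=int, nargs="+", default=None)
```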

---------

Signed-off-by: Vahid Janfaza <vjanfaza@qti.qualcomm.com>
In this PR:
1) Modified the code to support PP+DDP on a multi-server setup
2) Added a preprocessing file for the grammar dataset
3) Modified the output-dir naming convention to include the node rank of the server (see the sketch below)
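A rough sketch of the naming convention described in (3), assuming the node rank is read from the standard torchrun environment variable; the directory name itself is illustrative:

```python
import os

# torchrun exports NODE_RANK for each node in a multi-node launch.
node_rank = int(os.environ.get("NODE_RANK", "0"))

# Suffix the output directory with the node rank so checkpoints from different
# servers in a PP+DDP run do not overwrite each other.
output_dir = f"meta-llama-Llama-3.2-1B-finetuned_node{node_rank}"
```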

---------

Signed-off-by: Sharvari Medhe <smedhe@qti.qualcomm.com>
Added a default NPI file for Gemma3.

1. Eliminates the need for the user to provide the NPI file as an extra argument. The NPI file is added as a default, so it no longer needs to be provided explicitly in the example script.

---------

Signed-off-by: Ann Kuruvilla <akuruvil@qti.qualcomm.com>
Signed-off-by: Ann Kuruvilla <quic_akuruvil@quicinc.com>
Signed-off-by: Abukhoyer Shaik <abukhoye@qti.qualcomm.com>
Signed-off-by: vtirumal <vtirumal@qti.qualcomm.com>
Signed-off-by: Amit Raj <amitraj@qti.qualcomm.com>
Co-authored-by: Abukhoyer Shaik <abukhoye@qti.qualcomm.com>
Co-authored-by: Amit Raj <amitraj@qti.qualcomm.com>
…WQ and FP8 models. (quic#735)

Signed-off-by: Dhiraj Kumar Sah <dhirajku@qti.qualcomm.com>
Removed the OpenGVLab/InternVL2_5-1B and OpenGVLab/InternVL3_5-1B tests due to a compiler issue, to unblock the CI.

---------

Signed-off-by: Rishin Raj <rishinr@qti.qualcomm.com>
Updated QEff version to mainline

---------

Signed-off-by: Rishin Raj <rishinr@qti.qualcomm.com>
Reverts quic#741

Signed-off-by: Rishin Raj <rishinr@qti.qualcomm.com>
Signed-off-by: Abhishek Kumar Singh <sabhis@qti.qualcomm.com>
Signed-off-by: abhishek-singh591 <sabhis@qti.qualcomm.com>
Signed-off-by: Abhishek Kumar Singh <sabhis@qti.qualcomm.com>
Signed-off-by: abhishek-singh591 <sabhis@qti.qualcomm.com>
Signed-off-by: Abhishek kumar singh <sabhis@qti.qualcomm.com>
The decode-only GPT-OSS model was failing when executing subfunctions because a dynamic dimension value was being picked up during the reduce-sum calculation. This caused incorrect tensor reduction and resulted in compilation errors.
The fix replaces the reduction logic with an einsum-based computation, ensuring stable and deterministic summation regardless of the dimension shape.
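As an illustration of the einsum-based approach (not the actual GPT-OSS code; tensor names and shapes are assumptions), a weighted reduce-sum written with fixed einsum subscripts so no dynamic dimension gets folded into the reduction:

```python
import torch

def weighted_expert_sum(expert_out: torch.Tensor, routing_weights: torch.Tensor) -> torch.Tensor:
    """expert_out: (tokens, experts, hidden); routing_weights: (tokens, experts)."""
    # Equivalent to (expert_out * routing_weights.unsqueeze(-1)).sum(dim=1),
    # but einsum pins the reduction axis explicitly, which keeps the summation
    # deterministic regardless of how the dimensions are traced at export time.
    return torch.einsum("teh,te->th", expert_out, routing_weights)
```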

---------

Signed-off-by: asmigosw <asmigosw@qti.qualcomm.com>
- Updated the random-sampling gold text and IDs for InternVL2_5-1B

Signed-off-by: vtirumal <vtirumal@qti.qualcomm.com>
Added support to skip export and compilation if the QPC already exists (see the sketch below).
 - Updated the Flux and Wan configs and pipelines with qpc_path changes
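A minimal sketch of the skip-if-exists behavior, under the assumption that the pipeline config carries a qpc_path and that export and compile are separate steps (helper names here are illustrative, not the actual implementation):

```python
import os

def get_or_build_qpc(pipeline, qpc_path: str | None) -> str:
    # Reuse a previously compiled QPC when it is already on disk; otherwise
    # fall back to the usual export + compile flow.
    if qpc_path and os.path.isdir(qpc_path):
        return qpc_path
    onnx_path = pipeline.export()        # assumed export step
    return pipeline.compile(onnx_path)   # assumed compile step returning the QPC dir
```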

---------

Signed-off-by: vtirumal <vtirumal@qti.qualcomm.com>
The sliding-window (SW) issue occurred when prompt + generation length > SW.

Fix
1. Updated the cache to HybridSlidingWindowCache in cache utils

---------

Signed-off-by: Dipankar Sarkar <dipankar@qti.qualcomm.com>
Signed-off-by: Ann Kuruvilla <quic_akuruvil@quicinc.com>
Fix Gemma3 to support CB (continuous batching) with the new SW code

Signed-off-by: Dipankar Sarkar <dipankar@qti.qualcomm.com>
This PR fixes subfunction-based export issues for the following models:

1. `bigcode/starcoder`  
2. `ibm-granite/granite-20b-code-base-8k`  
3. `ibm-granite/granite-20b-code-instruct-8k`  
4. `Qwen3-30B-A3B-Instruct-2507`  
5. `Mixtral-8x7B`

In addition, it updates the Causal LM subfunction test file to make it
more robust and resilient across models.

---------

Signed-off-by: Abhishek Kumar Singh <sabhis@qti.qualcomm.com>
Updated the mainline version to 1.22.0.dev0

Signed-off-by: Rishin Raj <rishinr@qti.qualcomm.com>
qaic-exec is going to be deprecated. Updated the code to use the new qaic-compile for the compile API.

---------

Signed-off-by: Asmita Goswami <asmigosw@qti.qualcomm.com>
- Skip subfunction handling in export utils for diffusers; this is handled in export() of the diffuser models

---------

Signed-off-by: vtirumal <vtirumal@qti.qualcomm.com>
Signed-off-by: Abhishek Kumar Singh <sabhis@qti.qualcomm.com>
Co-authored-by: Abhishek Kumar Singh <sabhis@qti.qualcomm.com>
abhishek-singh591 and others added 8 commits February 12, 2026 14:26
Signed-off-by: Abhishek Kumar Singh <sabhis@qti.qualcomm.com>
Added support for the
[Llama-Prompt-Guard-2-22M](https://huggingface.co/meta-llama/Llama-Prompt-Guard-2-22M) model.
PyTorch vs AIC MAD: 0.0031892061233520508

---------

Signed-off-by: Amit Raj <amitraj@qti.qualcomm.com>
Split the Run Non-CLI Non-QAIC Tests into LLMs and Features tests, added duration reporting to check the top 10 slowest tests in Jenkins, and updated a few of the slowest tests.

---------

Signed-off-by: Rishin Raj <rishinr@qti.qualcomm.com>
Signed-off-by: Abukhoyer Shaik <abukhoye@qti.qualcomm.com>
Co-authored-by: Abukhoyer Shaik <abukhoye@qti.qualcomm.com>
Signed-off-by: Tanisha Chawada <tchawada@qti.qualcomm.com>
Signed-off-by: Tanisha Chawada <tchawada@qti.qualcomm.com>
Signed-off-by: Tanisha Chawada <tchawada@qti.qualcomm.com>
Signed-off-by: Tanisha Chawada <tchawada@qti.qualcomm.com>
# HuggingFace Dataset Names
HF_DATASET_ALPACA = "tatsu-lab/alpaca"
HF_DATASET_GSM8K = "openai/gsm8k"
HF_DATASET_GSM8K_CONFIG = "main"
Contributor


Why is it mentioned as main?

Contributor Author


There are two subsets of the gsm8k dataset; to load the `main` subset, we need this configuration.
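For context, openai/gsm8k on the Hub ships two configurations ("main" and "socratic"), so a config name is required when loading it, e.g.:

```python
from datasets import load_dataset

HF_DATASET_GSM8K = "openai/gsm8k"
HF_DATASET_GSM8K_CONFIG = "main"  # the other available subset is "socratic"

train_split = load_dataset(HF_DATASET_GSM8K, HF_DATASET_GSM8K_CONFIG, split="train")
```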

"""
assert train_result is not None
assert hasattr(train_result, "training_loss")
logger.warning(f"Training loss: {train_result.training_loss:.4f}")
Contributor


Compare the train and eval loss here, allowing some atol threshold.
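One possible shape for that check (illustrative only; the tolerance value is hypothetical, and the in-scope trainer/train_result names are assumed from the surrounding test):

```python
import math

ATOL = 0.5  # hypothetical tolerance; would need tuning for this test

eval_metrics = trainer.evaluate()            # HF Trainer returns a dict with "eval_loss"
train_loss = train_result.training_loss
eval_loss = eval_metrics["eval_loss"]

# Instead of only logging the training loss, assert the two losses agree
# within an absolute tolerance.
assert math.isclose(train_loss, eval_loss, abs_tol=ATOL), (
    f"train loss {train_loss:.4f} vs eval loss {eval_loss:.4f} exceeds atol={ATOL}"
)
```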
