
Conversation

@shangguanqituan
Collaborator

Description

This PR fixes three issues in the training configuration and scheduler:

  1. WarmupLR_withStepDecay:

    • Fixed a bug where warmup_step was treated as an iteration count instead of an epoch count. It is now multiplied by epoch_iter so the warmup phase spans the configured number of epochs (see the first sketch after this list).
  2. Config Compatibility:

    • Modified train.py to use .pop('initial_lr'). This prevents a KeyError/TypeError when initial_lr is not defined in scheduler_args (e.g., in W2V tasks), while keeping the behavior of the other tasks unchanged (see the second sketch after this list).
  3. CMVN Default:

    • Fixed the default-value logic for CMVN in executor.py (see the diffs and review comments below).
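
For item 1, a minimal sketch of the intended epochs-to-iterations conversion, under the assumption that the scheduler steps once per batch; the class and argument names here (WarmupThenConstant, warmup_epochs, epoch_iter) are illustrative and not the actual wespeaker WarmupLR_withStepDecay interface:

```python
class WarmupThenConstant:
    """Toy scheduler: linear warmup measured in epochs, then a flat LR."""

    def __init__(self, optimizer, initial_lr, warmup_epochs, epoch_iter):
        self.optimizer = optimizer
        self.initial_lr = initial_lr
        # warmup is configured in epochs, but step() is called per batch,
        # so convert it to an iteration count once, up front.
        self.warmup_iters = warmup_epochs * epoch_iter
        self.iter_num = 0

    def step(self):
        self.iter_num += 1
        # linear warmup over the converted iteration count, then constant
        # (the real scheduler applies step decay after warmup instead).
        scale = min(1.0, self.iter_num / max(1, self.warmup_iters))
        for group in self.optimizer.param_groups:
            group['lr'] = self.initial_lr * scale
```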
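
For item 2, a small runnable sketch of the dictionary handling; the config dicts are hypothetical and the exact call in train.py may differ (this assumes .pop is given a default):

```python
# Hypothetical scheduler_args: a speaker task that defines initial_lr and a
# W2V task that does not.
speaker_args = {'initial_lr': 0.1, 'warmup_epochs': 6}
w2v_args = {'warmup_epochs': 6}

for scheduler_args in (speaker_args, w2v_args):
    args = dict(scheduler_args)  # copy so the original config is untouched
    # pop with a default removes 'initial_lr' so it is not forwarded to a
    # scheduler that does not accept it, and returns None instead of
    # raising KeyError when the key is absent (the W2V case).
    initial_lr = args.pop('initial_lr', None)
    print(initial_lr, args)
# 0.1 {'warmup_epochs': 6}
# None {'warmup_epochs': 6}
```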


```diff
  # apply cmvn
- if test_conf.get('cmvn', True):
+ if test_conf.get('cmvn', False):
```

Collaborator (review comment): On the other tasks, CMVN is enabled by default. To keep compatibility, it is recommended to specify cmvn = false in the config file here instead.
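
A short runnable sketch of the compatibility point, using illustrative config dicts rather than the actual wespeaker configs:

```python
# A config that never mentions cmvn (most existing recipes) vs. a W2V-style
# config that disables it explicitly.
legacy_conf = {}
w2v_conf = {'cmvn': False}

# Default True: legacy recipes keep CMVN without any config change, and W2V
# tasks opt out via `cmvn: false` in their config file.
print(legacy_conf.get('cmvn', True), w2v_conf.get('cmvn', True))    # True False

# Default False: every legacy recipe silently loses CMVN unless its config is
# updated, which is the compatibility concern raised above.
print(legacy_conf.get('cmvn', False), w2v_conf.get('cmvn', False))  # False False
```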

```diff
  with torch.cuda.amp.autocast(enabled=configs['enable_amp']):
      # apply cmvn
-     if configs['dataset_args'].get('cmvn', True):
+     if configs['dataset_args'].get('cmvn', False):
```

Collaborator (review comment): Same as above.

@cdliang11 merged commit 8f53b64 into wenet-e2e:master on Dec 31, 2025
4 checks passed
@JiJiJiang mentioned this pull request on Jan 9, 2026