When I try to run inference using a checkpoint I created:
python .\inference.py -c .\config\default.yaml -p .\checkpoints\output\output_fastspeech_d7ef3cf_1k_steps.pyt --out output --text "ModuleList can be indexed like a regular Python list but modules it contains are properly registered."
I get the following error:
RuntimeError: Calculated padded input size per channel: (8). Kernel size: (9). Kernel size can't be greater than actual input size
I trained using the following setting in the default.yaml file:
positionwise_conv_kernel_size : 9
When I attempt to train with
positionwise_conv_kernel_size : 8
instead of 9, I get a training error. Any help would be appreciated.
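For reference, a minimal sketch of how PyTorch produces this RuntimeError. It assumes the position-wise layer is an nn.Conv1d applied along the time axis (as in typical FastSpeech implementations) with no padding; the channel size of 256 is illustrative and not taken from this repository's code.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for the position-wise convolution: kernel_size=9, no padding.
conv = nn.Conv1d(in_channels=256, out_channels=256, kernel_size=9)

# An input with only 8 time steps is shorter than the kernel, so Conv1d raises
# the same "Kernel size can't be greater than actual input size" RuntimeError.
x = torch.randn(1, 256, 8)
try:
    conv(x)
except RuntimeError as e:
    print(e)

# Adding symmetric padding of (kernel_size - 1) // 2 = 4 keeps the padded length
# (8 + 2*4 = 16) at least as large as the kernel, and the output length stays 8.
padded_conv = nn.Conv1d(256, 256, kernel_size=9, padding=4)
print(padded_conv(x).shape)  # torch.Size([1, 256, 8])
```

In other words, the error suggests that at inference time some intermediate sequence ends up with fewer than 9 frames before the kernel-9 convolution is applied, so the kernel cannot fit; whether the fix belongs in the padding used by this layer or in the input length is something the maintainers would need to confirm.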