Some improvements / bug fixes #476
base: master
Conversation
I'm already using this PR on a local training run. @mitchelldehaven, could you explain more about the usage of …
I don't know where it could be fixed, but it would look better to have a space or line break between '2' and 'Metric'; it's printing the logs right after the progress bar, like:
I also don't see the matching ']' to close the opening one after '8/48'. Not sure if this is intended.
one more update...
The idea is essentially that if you reach a low validation loss in one epoch and cannot beat it for several successive epochs, your validation loss improvements have likely plateaued, and further training will not yield improvements. You could try higher values, maybe 20 or so, but I only have ~2000 training samples, so overfitting happens somewhat quickly when starting from a checkpoint. I can take a look at the preprocessing issues after my workday today; likely the fix is simply changing …
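For illustration, here is a minimal sketch of this kind of early-stopping setup with PyTorch Lightning; the monitored metric name and patience value are assumptions, not necessarily what this PR wires up:

```python
# Minimal early-stopping sketch with PyTorch Lightning (values are assumptions).
from pytorch_lightning import Trainer
from pytorch_lightning.callbacks import EarlyStopping

early_stop = EarlyStopping(
    monitor="val_loss",  # metric to watch; assumed name
    mode="min",          # lower validation loss is better
    patience=5,          # epochs without improvement before stopping
)
trainer = Trainer(callbacks=[early_stop], max_epochs=1000)
```

With only ~2000 samples, a small patience keeps the run from continuing long past the best checkpoint.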
Hi,
These look like really good changes. Is Piper abandoned? There are a lot of unanswered questions over in Discussions, and good PRs like this one are still open.
Hello @mitchelldehaven, I've tried changing `enabled=True` to `enabled=False` in the autocast contexts, but unfortunately it didn't resolve the issue. Could you suggest any alternative approaches? Thank you very much for your help!
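For reference, this is the kind of change that was tried (a minimal sketch; the model and batch here are stand-ins, not piper's actual code):

```python
# Sketch: force a region to run in full precision by disabling autocast.
import torch

model = torch.nn.Linear(4, 4).cuda()
batch = torch.randn(2, 4, device="cuda")

with torch.cuda.amp.autocast(enabled=False):
    # Ops in this context run in float32, even inside an outer
    # autocast(enabled=True) region.
    output = model(batch)
```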
Thanks for the repo, it's really awesome to use!
Using it I noticed a few problems / personal preferences that I fixed to improve it, so I thought I would create a PR with these changes:

- Test audio generation was running for each validation batch; it now happens in `on_validation_end` so it only runs once (see the first sketch after this list). Likely the test audio process could be batched looking at the code, but this already saves quite a bit of time as is.
- `abs` needs to be taken before `max` so that the maximum negative amplitude is taken into consideration. Currently `abs` is essentially a no-op, and this is the cause of the warnings about amplitudes being clipped (illustrated in the second sketch below).
- Allows `16` to be used as a precision (see the third sketch below). From what I can tell, it's only marginally faster for some reason (usually I see a ~2x improvement in speed), so there may be more going on here. Also, PyTorch 1.13 didn't have the best support for `bf16`, so it is actually slower than using full precision.

If you want some of these changes pulled in, but not others, I can remove the commits you don't want pulled in, if you want any of these changes at all (the only actual bug commits are 379bee1, e6af4c6, de98f64).
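For the first bullet, a minimal sketch of the shape of the change, assuming a PyTorch Lightning module; the method names are illustrative, not the PR's actual code:

```python
import torch
import pytorch_lightning as pl

class TTSModule(pl.LightningModule):
    def validation_step(self, batch, batch_idx):
        loss = torch.tensor(0.0)  # placeholder for the real loss computation
        self.log("val_loss", loss)
        return loss  # no per-batch test audio generation anymore

    def on_validation_end(self):
        # Runs once after all validation batches, so the slow test audio
        # synthesis happens a single time per validation pass.
        self.generate_test_audio()

    def generate_test_audio(self):
        # Placeholder for synthesizing and saving test utterances.
        pass
```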
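The amplitude bug from the second bullet, illustrated with hypothetical values:

```python
import numpy as np

audio = np.array([0.2, -0.9, 0.5], dtype=np.float32)

# Buggy order: max() of the raw signal ignores large negative samples,
# so a later abs() is a no-op on an already-positive scalar.
peak_buggy = abs(audio.max())      # 0.5 -- misses the -0.9 peak
# Fixed order: abs() first, then max(), so negative peaks count too.
peak_fixed = np.abs(audio).max()   # 0.9

normalized = audio / peak_fixed    # stays within [-1, 1]; no clipping warnings
```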
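And the precision options from the third bullet, expressed as PyTorch Lightning `Trainer` flags (assuming a Lightning version that accepts these values):

```python
from pytorch_lightning import Trainer

trainer_fp16 = Trainer(precision=16)      # fp16 mixed precision
trainer_bf16 = Trainer(precision="bf16")  # bf16; weak support on PyTorch 1.13
trainer_fp32 = Trainer(precision=32)      # full precision (the default)
```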
Below is the reported time difference, likely all due to e6af4c6, on a small 10-epoch finetuning run using 200 test samples and ~200 validation samples:
EDIT: I don't have a multi-GPU setup, so I didn't test any of this with DP or DDP; specifically, 8eda759 could cause issues for multi-GPU setups. I also haven't tested training without a validation set, but I can test to make sure it still works fine.