Soft DTW with ignore_padding_token #515
Hello @anupsingh15, for now I do not know how to help you deal with the padding to compute the DTW loss efficiently. If you want, I can try to look for a better solution if you post your code in this conversation, together with a small dataset, detailing what you would like to do and your current solution.
Hi @YannCabanes, I have two sets, each containing N sequences: X = {x_i} and Y = {y_i}, i = 1, ..., N. Each x_i and y_i has a different length, lx_i and ly_i respectively, but they are padded (left and right for my task) to the same length. Since the padded elements should not contribute to the alignment, I currently remove the padding from each pair (x_i, y_i) individually before computing its soft-DTW loss.
This way of computing the loss over the batch is slow. I wonder if you could add a functionality that accepts X and Y along with masks M_X and M_Y indicating the padding indices of each sequence x_i and y_i, so that the padded elements are ignored and the loss can be computed over the whole batch at once.
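For concreteness, here is a minimal sketch of what such a functionality could look like. The function name soft_dtw_masked_batch and the mask convention (True marks real, non-padded frames) are hypothetical, and the body still loops over pairs, so it only illustrates the requested interface, not an efficient batched implementation:

```python
import numpy as np
from tslearn.metrics import soft_dtw

def soft_dtw_masked_batch(X, Y, M_X, M_Y, gamma=1.0):
    """Naive reference for the requested behaviour.

    X, Y     : arrays of shape (N, T, d), padded to a common length T
    M_X, M_Y : boolean arrays of shape (N, T); True marks non-padded frames
    Returns an array of N soft-DTW values, one per pair (x_i, y_i).
    """
    losses = np.empty(len(X))
    for i in range(len(X)):
        x_i = X[i][M_X[i]]  # drop the padded frames of x_i
        y_i = Y[i][M_Y[i]]  # drop the padded frames of y_i
        losses[i] = soft_dtw(x_i, y_i, gamma=gamma)
    return losses
```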
Hello,
I have a batch of pairs of sequences. Each pair contains sequences of different lengths, which are padded to equal lengths. Is there a way to ignore these padded elements when computing the soft-DTW alignment? For example, https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html provides an ignore_index argument to exclude a particular class index from the cross-entropy loss.
Can you suggest any workaround to compute the DTW loss efficiently in such a case? I can only think of processing each pair individually (removing the padding first), but this will be too slow.
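A minimal sketch of that per-pair workaround, assuming the padded frames are filled with a known padding value (PAD = 0.0 here is illustrative) and using tslearn.metrics.soft_dtw:

```python
import numpy as np
from tslearn.metrics import soft_dtw

PAD = 0.0  # assumed padding value (illustrative)

def trim_padding(ts):
    """Keep only the frames that are not entirely equal to the padding value."""
    keep = ~np.all(ts == PAD, axis=-1)
    return ts[keep]

# one padded pair from the batch; shapes (T, d)
x_padded = np.array([[PAD], [1.0], [2.0], [3.0], [PAD]])
y_padded = np.array([[1.5], [2.5], [PAD], [PAD], [PAD]])

loss = soft_dtw(trim_padding(x_padded), trim_padding(y_padded), gamma=0.1)
print(loss)
```

Repeating this for every pair in the batch is exactly the slow loop described above, which is why a mask-aware batched version would help.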