tfrs.tasks.Retrieval uses tf.keras.losses.CategoricalCrossentropy as its default loss function.
I think that when num_hard_negatives is not set, the candidates for each query are the other examples in the same batch, so the effective number of classes in tf.keras.losses.CategoricalCrossentropy equals the batch size.
The last step of an epoch usually has a smaller batch than the configured batch size, so the loss in that step tends to be smaller than in the preceding steps.
This confused me, because the per-epoch logs only display the loss from the last step, and those values looked misleadingly small.
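A small sketch of why the last step's loss looks smaller, under the assumption stated above (in-batch negatives make the number of softmax classes equal the batch size): for roughly uniform scores, the cross-entropy over N candidates is about log(N), so a smaller final batch yields a smaller loss even if the model hasn't improved. The batch sizes below are illustrative, not from the original report.

```python
import math

def uniform_inbatch_loss(batch_size: int) -> float:
    """Cross-entropy of a uniform softmax over `batch_size` in-batch
    candidates: -log(1 / batch_size) = log(batch_size)."""
    return math.log(batch_size)

# Hypothetical configured batch size vs. a smaller final partial batch.
full_batch_loss = uniform_inbatch_loss(4096)  # ~8.32
last_batch_loss = uniform_inbatch_loss(100)   # ~4.61

# The drop comes purely from the smaller candidate set, not from learning.
print(full_batch_loss, last_batch_loss)
```

This is why comparing the last step's loss against earlier steps (or logging only the last step per epoch) can understate the true training loss.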