Here is an explanation I found in a TensorFlow tutorial:

The MNIST dataset contains vectorized images of size 28x28. Therefore we define a new function to reshape each batch of MNIST images to 28x28 and then resize them to 32x32. The reason for resizing to 32x32 is that it is a power of two, so we can easily use a stride of 2 for downsampling and upsampling.
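The reshape-then-resize step described above can be sketched as follows. This is a minimal illustration using NumPy, assuming the common approach of zero-padding the 28x28 images by 2 pixels per side (the tutorial may instead use an interpolating resize); the `batch` array here is a hypothetical placeholder for a batch of flattened MNIST vectors.

```python
import numpy as np

# Hypothetical batch of flattened MNIST images: shape (batch_size, 784).
batch = np.zeros((4, 784), dtype=np.float32)

# Reshape each flattened vector back to 28x28, then zero-pad 2 pixels
# on every side to reach 32x32. Since 32 is a power of two, stride-2
# down/upsampling divides evenly: 32 -> 16 -> 8 -> 4.
images = batch.reshape(-1, 28, 28)
padded = np.pad(images, ((0, 0), (2, 2), (2, 2)), mode="constant")

print(padded.shape)  # (4, 32, 32)
```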
Hello there,
I don't understand why the MNIST dataset from the Torch Tutorial is made of 32x32 images instead of 28x28 as in the original dataset.
Do I have to crop the images using `narrow`?
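If the goal is just to recover the original 28x28 region, a center crop that drops the 2-pixel border would be one option. A minimal sketch with plain NumPy slicing, assuming the extra pixels are a symmetric border (in Lua Torch, two chained `narrow` calls along the height and width dimensions would be the analogous operation):

```python
import numpy as np

# Hypothetical 32x32 image as served by the tutorial's dataset.
img32 = np.zeros((32, 32), dtype=np.float32)

# Center-crop back to 28x28 by dropping the 2-pixel border on each side.
img28 = img32[2:30, 2:30]

print(img28.shape)  # (28, 28)
```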
Thanks