how to keep the resolution #4
Comments
Hi @darrylbobo, we sample the input patches (of size 27x27x27) so that the predicted patches, of size 9x9x9, are spatially adjacent to each other. This means that some regions of the input patches overlap. For example, the first input patch could be at (0,0,0) and the next one sampled at (0,0,9). Jose
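A minimal sketch of that sampling scheme, assuming the 27x27x27 / 9x9x9 sizes mentioned above (this is not the repository's actual code; the function name and arguments are illustrative). Stepping the patch origin by the prediction size (9) makes the 9x9x9 outputs tile the volume while the 27x27x27 inputs overlap:

```python
def sample_patch_origins(volume_shape, in_size=27, out_size=9):
    """Return top-left corners of input patches so that their central
    out_size**3 predictions tile the volume without gaps.

    volume_shape: shape (D, H, W) of the (padded) input volume.
    in_size: side length of the input patch (27 in the paper).
    out_size: side length of the predicted patch (9 in the paper).
    """
    origins = []
    for z in range(0, volume_shape[0] - in_size + 1, out_size):
        for y in range(0, volume_shape[1] - in_size + 1, out_size):
            for x in range(0, volume_shape[2] - in_size + 1, out_size):
                origins.append((z, y, x))
    return origins

# Example: two consecutive origins differ by 9 along one axis,
# e.g. (0, 0, 0) and (0, 0, 9), so their 27^3 inputs overlap
# but their 9^3 predictions sit side by side.
```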
@josedolz Thanks a lot for the reply. But I am still a little confused. Maybe you can refer me to the corresponding code. Thanks!
Hi @darrylbobo The ground truth is cropped according to the predicted patch, not the input. If you have an input patch of size 27x27x27, after all the convolutions you end up with a patch of size 9x9x9 located at the center of the input patch, because you get rid of 9 voxels per side. So, basically, what you have to do for the GT is to take the center of the patch (i.e., the voxel 13,13,13) and crop the GT 9 voxels around it (from 9,9,9 to 17,17,17). This GT is the one used in Eq. 4 of the paper, which has the same size as $\hat{p}_v(x_s)$. Jose
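A short sketch of that GT cropping, assuming 0-based indexing and the 27 -> 9 sizes from the thread (illustrative only, not the repository's code):

```python
import numpy as np

def crop_gt_to_prediction(gt_patch, in_size=27, out_size=9):
    """Crop the central out_size**3 block of a ground-truth patch so it
    aligns with the network prediction (indices 9..17 for 27 -> 9)."""
    margin = (in_size - out_size) // 2          # 9 voxels lost per side
    sl = slice(margin, margin + out_size)       # 9:18 in 0-based indexing
    return gt_patch[sl, sl, sl]

# Example: a 27x27x27 GT patch becomes the 9x9x9 target compared
# against the prediction in Eq. 4.
gt_patch = np.zeros((27, 27, 27), dtype=np.int64)
target = crop_gt_to_prediction(gt_patch)
assert target.shape == (9, 9, 9)
```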
For the code, the sampling process is here:
Hi, I may have misread or missed some details of the paper.
But from the network, the input spatial size is 27x27x27 and the output is 9x9x9. So how do you generate the segmentation results at the original size?
Thanks!