
how to keep the resolution #4

Open
darrylbobo opened this issue Feb 21, 2019 · 4 comments

Comments

@darrylbobo

Hi, I may misread or miss some details of the paper.
But from the network, the input spatial size is 27x27x27 and the output is 9x9x9. So how do you generate the segmentation results at the original size?

Thanks!

@josedolz
Owner

Hi @darrylbobo

We sample the input patches (of size 27x27x27) so that the predictions, of size 9x9x9, are spatially adjacent to each other. This means that some regions of the input patches overlap. For example, the first input patch could be at (0,0,0) and the next one sampled at (0,0,9).

Jose
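As a rough sketch of this sampling scheme (not the repository's actual code; `sample_patch_origins` and its defaults are made up for illustration), the patch origins can be generated with a stride equal to the prediction size, so neighbouring 27x27x27 inputs overlap while their 9x9x9 central predictions tile the volume:

```python
def sample_patch_origins(volume_shape, patch_size=27, pred_size=9):
    """Choose input-patch origins so that the central 9x9x9 predictions
    tile the volume: consecutive origins are pred_size voxels apart, so
    neighbouring 27x27x27 input patches overlap by 18 voxels per axis."""
    origins = []
    for z in range(0, volume_shape[0] - patch_size + 1, pred_size):
        for y in range(0, volume_shape[1] - patch_size + 1, pred_size):
            for x in range(0, volume_shape[2] - patch_size + 1, pred_size):
                origins.append((z, y, x))
    return origins

# The first two origins along the last axis match the example above:
print(sample_patch_origins((45, 45, 45))[:2])  # [(0, 0, 0), (0, 0, 9)]
```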

@darrylbobo
Author

@josedolz Thanks a lot for the reply.

But I am still a little confused.
For an input patch covering (0,0,0) to (26,26,26) in the original volume, the corresponding ground-truth segmentation also covers (0,0,0) to (26,26,26). But the model's prediction is a volume of spatial size 9x9x9. So my question is: how do you correlate each of the 9x9x9 voxels in the prediction with the 27x27x27 voxels in the ground truth?
In other words, in Eq. 4 of the paper (1712.05319.pdf), y^s_v is not mapped one-to-one to p^v(x_s).

Maybe you can refer me to the corresponding code. Thanks!

@josedolz
Owner

Hi @darrylbobo

The ground truth is cropped according to the predicted patch, not the input. If you have an input patch of size 27x27x27, after all the convolutions you end up with a patch of size 9x9x9 located at the center of the input patch, because you get rid of 9 voxels per side. So, basically, what you have to do for the GT is to take the center of the patch (i.e., the voxel at (13,13,13)) and crop the GT to the 9x9x9 region around it (from (9,9,9) to (17,17,17)). This cropped GT is the one used in Eq. 4 of the paper, and it has the same size as p^v(x_s).

Jose
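As a minimal sketch of this GT cropping (assuming the ground truth is a NumPy array; `crop_gt_to_prediction` is a hypothetical helper, not the repository's code):

```python
import numpy as np

def crop_gt_to_prediction(gt_patch, patch_size=27, pred_size=9):
    """Crop a 27x27x27 ground-truth patch down to the central 9x9x9 region
    that the network actually predicts: 9 voxels are removed per side, so
    the crop spans indices 9..17 around the centre voxel (13, 13, 13)."""
    margin = (patch_size - pred_size) // 2   # 9 voxels per side
    return gt_patch[margin:margin + pred_size,
                    margin:margin + pred_size,
                    margin:margin + pred_size]

gt_patch = np.zeros((27, 27, 27), dtype=np.int64)
assert crop_gt_to_prediction(gt_patch).shape == (9, 9, 9)
```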

@josedolz
Owner

For the code, the sampling process is here:

# Extract samples from computed coordinates
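The comment above is only a pointer to that step in the repository. As a rough, hypothetical illustration of what extracting a sample from a computed coordinate involves (`extract_sample` is not the actual function from the repo), it slices the 27x27x27 image patch at the given origin together with the matching central 9x9x9 GT target:

```python
import numpy as np

def extract_sample(image, gt, origin, patch_size=27, pred_size=9):
    """Given one sampled origin in the full volume, return the 27x27x27
    image patch and the matching central 9x9x9 ground-truth target."""
    z, y, x = origin
    img_patch = image[z:z + patch_size, y:y + patch_size, x:x + patch_size]
    m = (patch_size - pred_size) // 2  # 9-voxel margin on each side
    gt_patch = gt[z + m:z + m + pred_size,
                  y + m:y + m + pred_size,
                  x + m:x + m + pred_size]
    return img_patch, gt_patch

image = np.random.rand(45, 45, 45).astype(np.float32)
gt = np.zeros((45, 45, 45), dtype=np.int64)
img_patch, gt_patch = extract_sample(image, gt, (0, 0, 9))
assert img_patch.shape == (27, 27, 27) and gt_patch.shape == (9, 9, 9)
```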
