Use Poisson noise instead of white noise #4

Open
Fedour opened this issue Jul 24, 2019 · 1 comment
Fedour commented Jul 24, 2019

Hi, I'm currently using your learned gradient tomography in a 3D case. I would like to use Poisson noise instead of white noise.

In your generate_data function, I replaced:

data = operator(phantom)
noisy_data = data + odl.phantom.white_noise(operator.range) * np.mean(np.abs(data)) * 0.05
fbp = pseudoinverse(noisy_data)

with:

data = operator(phantom)
noise = np.random.poisson(0.05, size=operator.range.shape)
noisy_data = data + noise
fbp = pseudoinverse(noisy_data)

But I don't get a good result; the image is only noisy in some places.

After some research, I noticed that it was due to the following code:

# Ensure operator has fixed operator norm for scale invariance
opnorm = odl.power_method_opnorm(operator)
operator = (1 / opnorm) * operator
pseudoinverse = pseudoinverse * opnorm

But if I remove this part of the code, the network does not learn anymore.

I would like to know if there is a clean way to apply proper Poisson noise to the data.
Thank you in advance for your assistance with this.

adler-j commented Jul 24, 2019

Hello, great to hear that you are interested.

With respect to Poisson noise, you need to provide a data vector as input, e.g.:

data = operator(phantom)
noisy_data = np.random.poisson(data)
fbp = pseudoinverse(noisy_data)
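
Note that np.random.poisson ties the noise level to the magnitude of the data itself (the variance of a Poisson variable equals its mean), so the sinogram values need to be on a realistic count scale. A minimal sketch of one way to control the noise level, assuming a hypothetical mean_photon_count parameter that is not part of this repository:

import numpy as np

mean_photon_count = 1000  # hypothetical; larger counts give less relative noise
data = operator(phantom)
# Scale the (nonnegative) sinogram up to expected photon counts, draw
# Poisson samples, then scale back so the magnitude matches the clean data.
scale = mean_photon_count / np.mean(np.asarray(data))
noisy_data = np.random.poisson(np.asarray(data) * scale) / scale
fbp = pseudoinverse(noisy_data)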

With respect to the operator norm scaling, this is needed to ensure that all values in the network are "approximately normal". For more information, you could have a look at, e.g., this paper: http://proceedings.mlr.press/v9/glorot10a.html

One way to solve this is to apply the scaling inside the network rather than on the operator.
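
For example, a minimal sketch of that idea (the gradient_step function below is illustrative, not this repository's actual code): generate the data with the raw operator so the Poisson statistics act on physical values, and fold the normalization into the network's update step instead.

import odl

# Keep the raw operator for data generation; precompute its norm once.
opnorm = odl.power_method_opnorm(operator)

def gradient_step(x, y):
    """One learned-gradient-style update with the scaling moved inside."""
    residual = operator(x) - y
    # Dividing by opnorm**2 is equivalent to scaling the operator by
    # 1/opnorm in both the forward and adjoint evaluations, so the
    # gradient keeps roughly unit magnitude without touching the data.
    return operator.adjoint(residual) / opnorm ** 2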
