
combined uncertainty from multiple independent measurements? #38

Closed
den-run-ai opened this issue Jan 31, 2016 · 11 comments
@den-run-ai

Can this package handle uncertainty calculations like this?

http://physics.stackexchange.com/questions/57317/multiple-measurements-of-the-same-quantity-combining-uncertainties

If not, why?

@lebigot
Collaborator

lebigot commented Jan 31, 2016

The uncertainties package is meant to handle uncertainty calculations like these, and even does so transparently when the uncertainties are correlated. In the particular case of the question, uncertainties would give you exactly the quoted uncertainty. If you apply the formula that transforms the measurements (with uncertainty) into the best (i.e. minimal chi-squared) estimate, you automatically get the uncertainty on the best estimate.

Does this answer your question?

@den-run-ai
Author

@lebigot yes and no :)

is there any example I can follow for this particular combined uncertainty?

@lebigot
Collaborator

lebigot commented Feb 2, 2016

Maybe I don't understand the question: it looks to me like everything is explained in the documentation, so I am not sure what the blocking point is. Let me have a stab at it, though: the question on Physics Stack Exchange considers a few random variables; we can take, for example:

>>> from uncertainties import ufloat
>>> v1 = ufloat(1, 0.1)
>>> v2 = ufloat(2, 0.2)
>>> v3 = ufloat(1.5, 0.3)

If you calculate their average, uncertainties automatically calculates the uncertainty on the average:

>>> (v1+v2+v3)/3
1.5+/-0.12472191289246472

You can check that the formula from the question gives the same result (after you transform the incorrect denominator 1/N into the correct 1/N^2)—which you don't have to calculate, thanks to uncertainties.
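For instance, a quick manual check of the corrected formula (the variables are independent, so the variance of the sum is the sum of the variances, divided by N^2 = 9):

>>> (0.1**2 + 0.2**2 + 0.3**2)**0.5 / 3
0.12472191289246472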

uncertainties even transparently handles correlations: things like v1-v1 are always equal to 0, with no uncertainty (in this case the formula from the question does not apply, but uncertainties does the right thing):

>>> v1-v1
0.0+/-0

Does this clear things up?

@den-run-ai
Author

Yes, definitely! Now let me look at this closer myself.


@lebigot
Collaborator

lebigot commented Feb 2, 2016

PS: The formula for the uncertainty in the question is incorrect: the denominator should be N^2, not N.

PPS: uncertainties works not only with averages but with any kind of function (like sin(v1/v2)).
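For example (a quick illustration using the umath module from uncertainties, with v1 and v2 from above):

>>> from uncertainties import umath
>>> umath.sin(v1/v2)  # uncertainty propagated automatically (roughly 0.48+/-0.06)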

@den-run-ai
Author

What happens when uncorrelated uncertainties are combined?

@den-run-ai
Author

Also, your example does not seem correct: the combined value is not the arithmetic average; the weights in the sum are based on the corresponding uncertainties.

@lebigot
Collaborator

lebigot commented Feb 2, 2016

As I was writing, I was translating for you the math from the question, not the math from the answer that you just referred to. If you want to translate the math from Steve B's answer then, again, as I wrote in my first reply above, you simply replace the average (v1+v2+v3)/3 by the linear combination of v1, v2 and v3 that the usual chi^2 minimization method gives you. At this point, I can only recommend that you learn about chi^2 minimization and fitting values with uncertainty to a constant (a simpler case than linear regression with non-identical errors).

If you want to know in general how uncorrelated uncertainties are combined by uncertainties (and by most uncertainty calculations done by hand), you must learn about uncertainty propagation.
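For concreteness, here is a minimal sketch of that chi^2-minimizing combination done directly with uncertainties (the inverse-variance weights are the textbook result for fitting a constant to uncorrelated measurements; the variables are those from the example above):

from uncertainties import ufloat

v1 = ufloat(1, 0.1)
v2 = ufloat(2, 0.2)
v3 = ufloat(1.5, 0.3)

# Inverse-variance weights from the chi^2 minimization for a constant:
weights = [1 / v.std_dev**2 for v in (v1, v2, v3)]
best = sum(w * v for w, v in zip(weights, (v1, v2, v3))) / sum(weights)
print(best)  # the uncertainty on the best estimate comes out automatically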

@lebigot lebigot closed this as completed Feb 2, 2016
@doronbehar

doronbehar commented Oct 21, 2024

Sorry to wake up a 9-year-old issue, but I was wondering about this subject too, and I would have liked a unumpy.mean function that does the following:

$$ \mu = \frac{\sum_i (x_i/\sigma_i^2)}{\sum_i \sigma_i^{-2}}$$

$$ \sigma_\mu = 1/\sqrt{\sum_i \sigma_i^{-2}} $$

Would that be acceptable as a PR @lebigot?
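As a minimal sketch of what such a function could compute for uncorrelated inputs (the name weighted_mean is hypothetical, for illustration only):

import math
from uncertainties import ufloat

def weighted_mean(values):
    # Inverse-variance weights w_i = 1/sigma_i^2 (assumes uncorrelated inputs)
    weights = [1 / v.std_dev**2 for v in values]
    total = sum(weights)
    mu = sum(w * v.nominal_value for w, v in zip(weights, values)) / total
    sigma_mu = 1 / math.sqrt(total)
    return ufloat(mu, sigma_mu)

print(weighted_mean([ufloat(1, 0.1), ufloat(2, 0.2), ufloat(1.5, 0.3)]))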

doronbehar added a commit to doronbehar/uncertainties that referenced this issue Oct 21, 2024
@doronbehar doronbehar mentioned this issue Oct 21, 2024
@lebigot
Collaborator

lebigot commented Oct 21, 2024

I have three points regarding this:

  1. What is the goal of such a calculation? I'm asking because I'm guessing that these formulas are mostly only meaningful in the particular case where the values are uncorrelated. I think that it would be dangerous to provide a formula that gives relatively meaningless results in the general case.
  2. If what you need is the maximum likelihood mean of (possibly correlated) variables with the same mean, then I'm guessing that the formula should be more general than this.
  3. There is already a function that almost immediately gives these values (from the nominal values and the uncertainties): numpy.average(), as it can take weights (see the sketch below).
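A minimal sketch of that numpy.average() route, assuming uncorrelated values held in a unumpy array:

import numpy as np
from uncertainties import unumpy

arr = unumpy.uarray([1, 2, 1.5], [0.1, 0.2, 0.3])
x = unumpy.nominal_values(arr)
w = 1 / unumpy.std_devs(arr)**2  # inverse-variance weights

mu = np.average(x, weights=w)
sigma_mu = 1 / np.sqrt(w.sum())  # valid only for uncorrelated inputs
print(mu, sigma_mu)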

doronbehar added a commit to doronbehar/uncertainties that referenced this issue Oct 22, 2024
@doronbehar

1/2. Yes, I meant the maximum likelihood that is also mentioned here. You are correct that the formula I wrote assumes the correlations are all $0$, and I'm definitely up for generalizing it!
3. Yes, but it would be very nice if uncertainties took care of all the details of this.

Here are the revised formulas for the uncertainty, based on a complementary answer in the same Stack Exchange Q&A, with the same formula as before for the weighted mean:

$$ \mu = \frac{\sum_i (x_i/\sigma_i^2)}{\sum_i \sigma_i^{-2}}$$

$$\sigma_\mu = \frac{\sqrt{\sum_{i,j} \sigma_i^{-2} \sigma_j^{-2} \cdot Cov(x_i, x_j)}}{\sum_i \sigma_i^{-2}}$$

Where of course $Cov(x_i, x_i) = \sigma_i^2$.

I think that implementing the above is quite non-trivial, and it is therefore worth adding to uncertainties. I have also managed to generalize it to averaging over any set of axes in #265.
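As an illustration of the generalized formula (not the implementation proposed in #265), using covariance_matrix() from uncertainties to obtain $Cov(x_i, x_j)$:

import numpy as np
from uncertainties import ufloat, covariance_matrix

def general_weighted_mean(values):
    # w_i = 1/sigma_i^2, as in the weighted-mean formula above
    w = np.array([1 / v.std_dev**2 for v in values])
    x = np.array([v.nominal_value for v in values])
    cov = np.array(covariance_matrix(values))  # Cov(x_i, x_j)
    mu = (w @ x) / w.sum()
    sigma_mu = np.sqrt(w @ cov @ w) / w.sum()
    return mu, sigma_mu

v1 = ufloat(1, 0.1)
v2 = v1 + ufloat(1, 0.2)  # correlated with v1
print(general_weighted_mean([v1, v2, ufloat(1.5, 0.3)]))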

doronbehar added a commit to doronbehar/uncertainties that referenced this issue Oct 22, 2024
doronbehar added a commit to doronbehar/uncertainties that referenced this issue Oct 24, 2024
doronbehar added a commit to doronbehar/uncertainties that referenced this issue Oct 25, 2024