
fitting with emcee works poorly #601

Open
newville opened this issue Nov 10, 2019 · 11 comments

@newville
Member

Fitting with emcee works poorly. Poorly enough that claiming to have an emcee "solver" that can be used to solve minimization problems is kind of a disservice to our users. I do not know if there is something wrong with the code that is new to emcee v3, but I suspect this may have actually never worked that well.

As a minimal, complete example showing the problem, let's look at
https://lmfit.github.io/lmfit-py/examples/example_emcee_Model_interface.html?highlight=emcee

This shows emcee getting a solution: it looks like we're claiming that it works. And the corner plot is cool. But that solution is actually found by starting with the (much, much faster) solution from Nelder-Mead. In fact, that solution is not actually very close to the right answer itself, and emcee does not overcome the problem stemming from the huge parameter correlation in the double-exponential model. If you change that example to have emcee start at the same starting point as Nelder-Mead, the results are much worse. And 100x slower.
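For reference, the degeneracy in that double-exponential example can be seen directly in the log-probability: swapping the two (amplitude, lifetime) pairs leaves it unchanged, so the posterior has symmetric modes and strong correlations that a sampler started far from the answer must navigate. A minimal sketch (pure NumPy with made-up synthetic data; the parameter names are illustrative, not lmfit's):

```python
import numpy as np

def double_exp(t, a1, tau1, a2, tau2):
    """Sum of two decaying exponentials."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

def log_prob(theta, t, y, yerr):
    """Gaussian log-likelihood for the double-exponential model."""
    a1, tau1, a2, tau2 = theta
    if tau1 <= 0 or tau2 <= 0:
        return -np.inf  # keep the lifetimes physical
    resid = (y - double_exp(t, a1, tau1, a2, tau2)) / yerr
    return -0.5 * np.sum(resid**2)

t = np.linspace(0, 10, 101)
y = double_exp(t, 3.0, 1.0, 1.5, 4.0)   # noise-free synthetic data
theta = (3.0, 1.0, 1.5, 4.0)
swapped = (1.5, 4.0, 3.0, 1.0)
# The two labelings are exactly degenerate:
# log_prob(theta, ...) == log_prob(swapped, ...)
```

A sampler that is not started near one of these modes has to wander through the strongly correlated, multimodal surface, which is part of why the example only looks good when seeded with the Nelder-Mead result.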

I'm perfectly willing to believe that the code in lmfit using emcee has some error that is making it not work as well as it should. I don't see anything obvious, but I'm definitely not an MCMC expert. Some of the code did change for #600 and emcee3. So, suggestions for fixing the code would be most welcome.

Having submitted #600, I can definitely say that "fitting with emcee" takes up quite a bit of code and has been a fair amount of maintenance. So the cost is definitely not zero, and the benefit seems vanishingly small to me. I never use emcee. With ampgo and brute available, I'm not sure anyone should use this.

Is there a compelling reason to retain emcee in lmfit? Can anyone provide an example of fitting with emcee in which it finds a solution that is not derived from another solver or actually improves upon a previous fit result?

And, yes, to be blunt and clear: I would propose we drop emcee unless it can be shown to give correct results.

@reneeotten
Contributor

I think that MCMC-methods like emcee certainly have a place in lmfit; being able to get posterior distributions for parameters and deal with data uncertainty seems useful to me. Sure, it might take some time to support it - but I think that's mainly because of the recent major version change/overhaul of the emcee codebase.

It's quite possible that the emcee method is much slower and gives worse results when not starting from the Nelder-Mead results in the example we currently include. But did we, for example, check whether the emcee method actually converged in that case? Perhaps we should include the example from the emcee tutorial and see how it compares to the other minimizers present in lmfit. That would also be a way of making sure that the implementation in lmfit gives the same results as calling emcee directly.

May I also recommend that you send this to the mailing list to solicit input from users who might use the emcee method in lmfit more extensively than either of us does?

@newville
Member Author

@reneeotten OK, I basically agree with all of that, at least in principle. My main concern is that it is actually not working well. It seems like the main argument in favor of using emcee is that the error bars on your data might be systematically skewed. But it's not entirely clear to me if it is capable of finding a good solution to any non-trivial problem.

I was just looking at the one nearly-trivial-but-heavily-cooked example in the emcee docs. I believe it is heavily biased, actually to the point of being more or less wrong. There's a lot of mumbo-jumbo and then a curve labeled "truth". They start by making a line

y = m * x + b

but then added

y = y + abs(f*y) + noise

While they label "m*x+b" as "truth", they have also added an additional term to their model that is not linear in x. As it turns out their regularization term exactly compensates for a bias that is proportional to the magnitude of y. The word that comes to mind is "fraud". They call a linear fit "not bad" but are unhappy with the error bars based on deliberately-and-obviously bad errors on y. They could have properly scaled the uncertainties but don't.
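For context on the term being discussed: the likelihood in that emcee tutorial inflates the error bars by a fractional amount f of the model value, so the variance used in the fit is yerr² + f²·model². A minimal sketch of that likelihood shape (pure NumPy; the variable names follow the tutorial's convention, but the data point here is made up):

```python
import math
import numpy as np

def log_likelihood(theta, x, y, yerr):
    """Gaussian log-likelihood with the error bars inflated by a
    fractional amount f of the model value (emcee-tutorial style)."""
    m, b, log_f = theta
    model = m * x + b
    sigma2 = yerr**2 + model**2 * np.exp(2 * log_f)
    return -0.5 * np.sum((y - model)**2 / sigma2 + np.log(2 * np.pi * sigma2))

# On a single point lying exactly on the line, with yerr = 1 and f ~ 0,
# this reduces to -0.5 * log(2*pi):
x = np.array([0.0])
y = np.array([1.0])          # equals m*x + b for m = 2, b = 1
yerr = np.array([1.0])
ll = log_likelihood((2.0, 1.0, -20.0), x, y, yerr)
```

The f²·model² term is exactly what absorbs the extra scatter that was deliberately added in proportion to y, which is the compensation being objected to above.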

I can believe it is useful in some contexts. I am also certain that I do not ever intend to use this code, and cannot really envision ever recommending using this to anyone.

@PetMetz

PetMetz commented Dec 9, 2019

My formal mathematics and statistics are not really up to the task of articulating this well, but it seems that you may have a misapprehension of the case for employing the MCMC algorithm.

emcee and the MCMC algorithm itself are not solvers or optimizers, but rather a means of sampling a particular parameter space given a user-defined prior and target function. The benefit is not the ability of the algorithm to reveal a global optimum in parameter space, but the ability to estimate the parameter distributions themselves, which provide a richer context for understanding the model you are applying to your data.

I think this has great utility, separate from extracting correctly weighted uncertainties. What may be missing is the mathematical formalism for correctly constructing the target function.

Perhaps @dfm can make a comment here?

A discussion has also been started here.

@Immudzen

Immudzen commented Dec 9, 2019

MCMC algorithms are not optimizers and should not be used as optimizers. Best practices would be to run an optimizer first and then start off the MCMC algorithm around any high probability points you find.

If you just start off emcee to randomly sample over some search space and want it to find the high probability areas it could take a VERY long time since that is not what MCMC is built for. I have seen systems where it is likely the sun will go out before it finds the high probability regions depending on the size of the search space.

I have used emcee in my own research and it does work very well. For many problems, to get a good distribution you might need to run chains thousands to tens of thousands of steps long. There are even metrics, like the integrated autocorrelation time, to determine when you have enough samples.
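The best practice described above (optimize first, start the chain from the optimum, and check the integrated autocorrelation time) can be sketched with a toy random-walk Metropolis sampler on a 1-D Gaussian posterior. This is pure NumPy, not emcee, and the auto-windowing heuristic is a simplified version of the Sokal-style estimate that emcee's `get_autocorr_time` implements:

```python
import numpy as np

def log_prob(x):
    """Toy posterior: a standard normal."""
    return -0.5 * x * x

def metropolis(start, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis chain started at `start`
    (in practice: the point found by an optimizer)."""
    rng = np.random.default_rng(seed)
    chain = np.empty(n_steps)
    x, lp = start, log_prob(start)
    for i in range(n_steps):
        prop = x + step * rng.normal()
        lp_prop = log_prob(prop)
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        chain[i] = x
    return chain

def integrated_autocorr_time(chain, c=5):
    """Simplified Sokal-style estimate of the integrated
    autocorrelation time, via an FFT-based autocorrelation."""
    x = chain - chain.mean()
    n = len(x)
    f = np.fft.rfft(x, n=2 * n)
    acf = np.fft.irfft(f * np.conj(f))[:n].real
    acf /= acf[0]
    taus = 2.0 * np.cumsum(acf) - 1.0
    window = np.arange(n) < c * taus  # stop once the lag exceeds c * tau
    w = np.argmin(window) if not window.all() else n - 1
    return taus[w]

chain = metropolis(start=0.0, n_steps=20000)
tau = integrated_autocorr_time(chain)
# n_steps / tau is roughly the effective number of independent samples;
# a chain needs to be many tau long before its posterior summary is trustworthy.
```

This is only an illustration of the convergence check being described, not of emcee's ensemble moves.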

@newville
Member Author

@PetMetz @Immudzen Thanks -- I think that we all basically agree that MCMC is really not designed for optimization itself. Here in lmfit the situation is that there has been an option to run a minimization using emcee. I myself did not write this code to optimize using emcee but did approve it going into the codebase. The developer who added the code was very good, adding many other improvements, but has since decided to step away from lmfit. I do not have any ill-will to any of the people involved, it's simply that the code does not work very well and I do not want to give the impression that we think it does.

We had a recent request for what I thought would be some minor changes to this emcee method, which coincided with the change from emcee 2 to 3. In going through these changes (see #594), I came to the conclusion that this code was not working well, and so opened this issue. I don't know whether this is related to the change to emcee 3 and the loss of the PTSampler or not.

This issue is completely within the context of lmfit: Can we support an optimizer based on emcee within lmfit? We do say that we support such an optimizer. I now believe it does not work well in general. As you both suggest it may have been unwise for us to say it ever did.

Of course, MCMC has real merit in exploring parameter space and can be useful for understanding parameter uncertainties and correlations after a fit is complete. It turns out that we have other routines for such exploration. We are not opposed to supporting MCMC methods for doing this. Perhaps we could remove emcee as an optimization method but retain it as a way to explore uncertainties. At this point, no one has proposed doing that. The simplest thing for us to do might be to drop the use of emcee as an optimizer and view any other changes as possible in the future.

Given both of your assessments that "MCMC should not be used for optimization", it is certainly curious that one of the few tutorial examples for emcee is labeled "Fitting a model to data" and starts with the sentence "If you’re reading this right now then you’re probably interested in using emcee to fit a model to some noisy data." It seems that the emcee folks are claiming that anyone reading their tutorial can use emcee to fit data. I had not looked at this example until after raising this issue. I see very serious problems with it and find it quite misleading.

To be clear, saying we may need to drop support for using emcee within lmfit is not the same as saying that MCMC is useless. We have already decided that fitting with emcee (whether it works well or not) will be in lmfit 1.0. I would not be surprised if it gets dropped after that, but this Issue is the start of that conversation and no decisions have been made. We would need to deprecate it first. I think that deprecation could be done for 1.0, and I am probably +0.5 on that deprecation, but I would defer to @reneeotten and I think getting other opinions on that is very helpful. So, thanks again @PetMetz @Immudzen -- those comments are very helpful.

@Gabriel-p

@newville just to clarify: what do you mean by "does not work well"? Do you mean that the emcee algorithm does not work (implying it either has some bug in the code, or that the maths that support it are wrong), or that MCMC in general does not "work well" for the intended purpose of this package?

@PetMetz

PetMetz commented Dec 11, 2019

@newville Perhaps the solution here, if support for MCMC methods within lmfit is to continue into the future, is to delineate between optimizers and statistics tools in the lmfit codebase?

Without looking too closely at the lmfit source, the ideas that come to mind are:

  1. If emcee wasn't explicitly a class method of Minimizer it might dissuade users from naively applying it as an optimization algorithm

  2. One could conceive of a PostFitting class to house some statistical tools (like emcee) and perhaps some of the visualization odds-and-ends.

PostFitting could initialize from a MinimizerResult, and absorb all the non-lmfit pieces currently sprinkled into Minimizer.
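The PostFitting idea could look something like the sketch below. Everything here is hypothetical: the class name comes from the comment above, and the attribute names and the `explore_posterior` hook are invented for illustration; none of this is lmfit's API:

```python
class PostFitting:
    """Hypothetical container for post-fit statistical tools.

    Wraps the best-fit parameters and residuals from a completed
    fit, keeping samplers and visualization helpers out of the
    optimizer class itself.
    """

    def __init__(self, params, residual):
        self.params = dict(params)   # best-fit parameter values
        self.residual = residual     # residual array from the fit

    @classmethod
    def from_result(cls, result):
        """Build from an object shaped like a MinimizerResult
        (a ``result.params`` mapping and a ``result.residual`` array)."""
        return cls(result.params, result.residual)

    def explore_posterior(self, sampler, n_steps=1000):
        """Hand the best-fit point to an external sampler (e.g. emcee),
        rather than presenting the sampler as an optimizer."""
        return sampler(self.params, n_steps)

# Hypothetical usage with a stand-in result object:
class _FakeResult:
    params = {"amp": 3.0, "tau": 1.2}
    residual = [0.1, -0.2, 0.05]

post = PostFitting.from_result(_FakeResult())
```

The design point being illustrated is only the separation of concerns: `Minimizer` produces a result, and a separate post-fit object owns everything that samples or plots around it.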

@reneeotten
Contributor

dear all,

I do use emcee myself quite often to get posterior distributions for parameters after a fit (so not as a "real" optimizer) and - as I have said before - think it's very useful for that purpose. To be clear, I personally have no intention to drop support for emcee in lmfit; in fact, I would be in favor of fully supporting all (new) features of version 3. It would just be helpful to have someone involved who uses this method a lot and knows more about its inner workings to make this happen.

It's worth considering though, as is brought up here, to move emcee from an all-purpose optimizer to something more specialized, like we have for calculating confidence intervals. I would not be opposed to doing that; we would just need to think this through and consider how to (re)structure it. On the other hand, perhaps making our view on this more clear in the documentation could suffice as well - I don't know.

Finally, I will not have time myself to do any of this before the holidays and with the currently planned time-frame of releasing v1.0 ("within a few weeks"), it will probably not happen for that version.

@newville
Member Author

@Gabriel-p Well, most of all I mean that the `lmfit.Minimizer.emcee` fitting method within lmfit does not work well. That is what this issue is about -- it's raised in lmfit and is about lmfit. The `Minimizer.emcee` method had been using `emcee.PTSampler`, which was dropped from the `emcee` package for its 3.0 release, so now lmfit uses `emcee.EnsembleSampler`. As I said in the original message of this Issue, the problem could certainly be in the lmfit code. It could be due to the change from `PTSampler` to `EnsembleSampler`. Or it could be that it never really worked well. I suspect this is the case, and that fitting with MCMC will not work well in general.

@PetMetz yes, thanks - those are all fine approaches to think about.

@reneeotten I agree: let's release 1.0 with no real code changes. Maybe we could document a warning about using lmfit.Minimizer.emcee for fitting or issue a deprecation warning? No more than that, and maybe not even that -- full parity with 0.9.15 was the original plan.

At some time in the nebulous future, we can try to think about dropping the emcee minimization method and adding an MCMC-based method for estimating confidence intervals.

@nedahejazi

Hello,
I seriously need help with emcee. I am trying to fit model spectra to observed ones. After working hard on the code, the results are very strange. The chain values of the parameters are the same for each walker; I mean the initial values do not change over the chain. Do you have any clue? If needed, I can send the code.
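Frozen chains like this are commonly caused by every walker starting at the identical point, or by a log-probability that returns -inf (or NaN) at the starting position, so no proposal is ever accepted. A usual remedy is to initialize the walkers in a small Gaussian ball around the best-fit point and to sanity-check the log-probability there before sampling. A minimal sketch (pure NumPy; the function and variable names are illustrative):

```python
import numpy as np

def init_walkers(best_fit, nwalkers, scale=1e-4, seed=0):
    """Scatter walkers in a tiny Gaussian ball around the best-fit
    point so every walker starts from a distinct position."""
    rng = np.random.default_rng(seed)
    best_fit = np.asarray(best_fit, dtype=float)
    ndim = best_fit.size
    return best_fit + scale * rng.normal(size=(nwalkers, ndim))

def check_start(log_prob, p0):
    """Fail fast if any walker starts where the posterior is
    -inf or NaN -- a classic cause of frozen chains."""
    lp = np.array([log_prob(p) for p in p0])
    return np.all(np.isfinite(lp))

best = [3.0, 1.2]                      # e.g. from a least-squares fit
p0 = init_walkers(best, nwalkers=32)
ok = check_start(lambda p: -0.5 * np.sum(p**2), p0)
```

If `check_start` fails, the priors or bounds in the log-probability are the first thing to inspect; if it passes but the chain still never moves, the proposal scale or the parameter scaling is the next suspect.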

@newville
Member Author

@reneeotten OK, I'm back to: why in the world are we trying to support this code? #781 still states "fitting with emcee". Ugh.

I propose the removal of any code in minimizer.py that uses emcee. Moving it somewhere else so that one could explore parameter space after a fit would be OK, but I would not volunteer for that.

6 participants