
Same code in Python #2

Open
Sandy4321 opened this issue Aug 15, 2019 · 5 comments
Comments

@Sandy4321

Great code, thanks!
But do you know of a similar approach coded in Python?

@ojdo
Owner

ojdo commented Aug 22, 2019

Well, scipy's optimize.minimize gives you over 80% of this code in one line. All that you need to implement around it is the probabilistic restart functionality and the "select the best solution" scaffold around the giant for iRestart=1:options.maxRestarts Nelder-Mead block.
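For illustration, here is a minimal Nelder-Mead call with scipy; the objective function and start point are made up for this sketch:

```python
from scipy import optimize

# Toy objective: a shifted paraboloid with its minimum at (1, 2)
def objective(x):
    return (x[0] - 1) ** 2 + (x[1] - 2) ** 2

# One Nelder-Mead run from a single start point
result = optimize.minimize(objective, x0=[0.0, 0.0], method='Nelder-Mead')
print(result.x)    # converges near [1., 2.]
print(result.fun)  # objective value at the found minimum
```

The restart logic then only has to wrap this call in a loop and keep the best `result` seen so far.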

@Sandy4321
Author

Could you clarify what the scaffold is?

@ojdo
Owner

ojdo commented Aug 27, 2019

This is what I meant by the scaffold (mainly the comparison against best_objective):

from math import inf
from scipy import optimize

best_solution = [0, 0, 0]  # actually n-dimensional
best_objective = inf

for r in range(num_restarts):
    initial_simplex = probabilistic_restart(...)
    # minimize returns an OptimizeResult: best value in .fun, location in .x
    result = optimize.minimize(objective, initial_simplex[0],
                               method='Nelder-Mead',
                               options={'initial_simplex': initial_simplex})

    if result.fun < best_objective:
        best_solution = result.x
        best_objective = result.fun

@Sandy4321
Author

I see, thanks.
But this key line:

initial_simplex = probabilistic_restart(...)

needs to sample some grid in the space, and choosing the grid is the art...

@ojdo
Owner

ojdo commented Oct 1, 2019

The GBNM way of "sampling the grid" is this (as implemented here):

  • Create N (nPoints) uniformly distributed random points in the search space (box between xmin, xmax)
  • For each of those random points, determine their "newness" by evaluating gauss(...) (less probable == better).

What gauss does: it erects a Gaussian probability distribution around the previously visited points (named usedPoints here). This is a list of all initial and final simplex points of the previous restarts. Each such visited point contributes a single Gaussian spike to the function. That way, points farther from already visited regions are favoured in a restart, slightly biasing the algorithm towards exploring uncharted regions of the search space.

But yes, this algorithm is very parameterisable (e.g. sigmainv for the width of a single Gaussian spike, i.e. how far an already visited point's influence pushes into the search space).
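The two bullet points above can be sketched in a few lines. This is not the repository's implementation, just an illustration under assumptions: the function name probabilistic_restart matches the earlier pseudocode, and a plain isotropic sigma stands in for the sigmainv parameterisation:

```python
import numpy as np

def probabilistic_restart(used_points, xmin, xmax, n_points=10, sigma=0.1):
    """Sketch of a GBNM-style restart point selection.

    Draws n_points uniformly distributed random candidates in the box
    [xmin, xmax] and returns the one that is least probable under a sum
    of Gaussian spikes centred on the previously visited points.
    """
    xmin = np.asarray(xmin, dtype=float)
    xmax = np.asarray(xmax, dtype=float)
    candidates = xmin + np.random.rand(n_points, len(xmin)) * (xmax - xmin)

    def gauss(x):
        # Sum of isotropic Gaussian spikes around the visited points
        return sum(np.exp(-np.sum((x - p) ** 2) / (2 * sigma ** 2))
                   for p in used_points)

    # Less probable == better: pick the candidate farthest from visited regions
    return min(candidates, key=gauss)
```

The caller would then build an initial simplex around the returned point and append both the initial and final simplex points of each run to used_points before the next restart.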

@ojdo ojdo self-assigned this Oct 28, 2019