Since user equilibrium and system optimum have already been introduced, this folder focuses on exact line search algorithms for solving the user equilibrium problem. In addition, a course project from Network Optimization and Modelling, lectured by Dr. Wang Xiaolei, is presented.
Reference: https://www.cnblogs.com/jeromeblog/p/3801025.html
In a general gradient method, the search direction $d_k = -B_k \nabla f(x_k)$ should satisfy the descent condition $\nabla f(x_k)^T d_k < 0$, which holds whenever $B_k$ is positive definite. Depending on the choice of $B_k$:

- Steepest gradient descent: $B_k = I$
- Newton's method: $B_k = H(x_k)^{-1}$, where $H(x_k) = \nabla^2 f(x_k)$ is a positive definite Hessian
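As a small self-contained sketch (the quadratic and variable names are illustrative, not from this repo), the two choices of $B_k$ can be compared on $f(x) = \tfrac{1}{2}x^T A x - b^T x$, where a single Newton step reaches the minimizer exactly:

```python
import numpy as np

# f(x) = 0.5 x^T A x - b^T x, so grad f = A x - b and the Hessian is A
A = np.array([[4.0, 1.0], [1.0, 2.0]])   # positive definite
b = np.array([1.0, 1.0])

x = np.array([2.0, 2.0])
grad = A @ x - b

d_gd = -grad                             # steepest descent: B_k = I
d_newton = -np.linalg.solve(A, grad)     # Newton: B_k = H(x_k)^{-1}

# Both are descent directions; for a quadratic, one Newton step
# lands on the minimizer A^{-1} b
x_new = x + d_newton
```

Since $A$ is positive definite, both directions make a negative inner product with the gradient, but Newton's direction also accounts for curvature.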
However, determining the step size is also an important problem: a step that is too small or too large makes convergence difficult. Many line search algorithms have therefore been proposed.
```python
import numpy as np
from copy import deepcopy
import LineSearch

def gradientDescent(fun, dfun, theta, _type="WolfLineSearch", show=False, maxiter=1e4):
    x, y, y_ = [theta[0]], [fun(theta)[0]], [dfun(theta)[0]]
    i = 0
    eps = 1e-6
    while i < maxiter:
        last_theta = deepcopy(theta)
        d = -dfun(theta)
        # obtain the step size via the chosen line search
        if _type == "WolfLineSearch":
            stepsize = LineSearch.WolfeLineSearch(fun, dfun, theta, d)
        elif _type == "ArmijoBackTrack":
            stepsize = LineSearch.ArmijoBacktrack(fun, dfun, theta, d)
        elif _type == "ArmijoLineSearch":
            stepsize = LineSearch.ArmijoLineSearch(fun, dfun, theta, d)
        else:
            stepsize = LineSearch.WolfeLineSearch(fun, dfun, theta, d)
        # stop when the gradient or the step size is (numerically) zero
        if np.linalg.norm(d) < eps or stepsize < eps:
            break
        theta = last_theta + stepsize * d
        i = i + 1
        x.append(theta[0]), y.append(fun(theta)[0]), y_.append(dfun(theta)[0])
    return x, y, y_
```
Source Code: Backtracking
First of all, the Armijo condition is a useful criterion for finding an acceptable range of step sizes.
```python
import numpy as np

def ArmijoBacktrack(fun, dfun, theta, d, args=np.array([]), stepsize=1, tau=0.5, c1=1e-3):
    # directional derivative of f along d
    slope = np.sum(dfun(theta, args) * d.T)
    obj_old = fun(theta, args)
    theta_new = theta + stepsize * d
    obj_new = fun(theta_new, args)
    # shrink the step until the sufficient-decrease (Armijo) condition holds
    while obj_new > obj_old + c1 * stepsize * slope:
        stepsize *= tau
        theta_new = theta + stepsize * d
        obj_new = fun(theta_new, args)
    return stepsize
```
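As a quick sanity check (a minimal 1-D reimplementation for illustration, not the repo's `LineSearch` module), the backtracking rule can be exercised on $f(x) = e^x - 2x$:

```python
import numpy as np

def armijo_backtrack(fun, dfun, x, d, stepsize=1.0, tau=0.5, c1=1e-3):
    """Shrink stepsize until the Armijo (sufficient decrease) condition holds."""
    slope = dfun(x) * d                  # directional derivative, 1-D case
    while fun(x + stepsize * d) > fun(x) + c1 * stepsize * slope:
        stepsize *= tau
    return stepsize

fun = lambda x: np.exp(x) - 2 * x        # minimizer at x = ln 2
dfun = lambda x: np.exp(x) - 2

x = -5.0
d = -dfun(x)                             # steepest descent direction
s = armijo_backtrack(fun, dfun, x, d)
```

The returned step always satisfies the sufficient-decrease inequality, so each gradient iteration is guaranteed to reduce the objective.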
Using backtracking to find the step size is very efficient. For example, the iteration trace below minimizes $f(x) = e^x - 2x$, whose minimizer is $x = \ln 2 \approx 0.6931$:
```
# Iteration Process
x     [-5.000000, -3.006738, -1.056191, 0.596031, 0.781130, 0.689161, 0.697118, 0.693139, 0.693147]
f(x)  [10.006738, 6.062929, 2.460159, 0.622839, 0.621679, 0.613722, 0.613721, 0.613706, 0.613706]
f'(x) [-1.993262, -1.950547, -1.652222, -0.185098, 0.183938, -0.007957, 0.007957, -0.000016, -0.000000]
```
Source Code: LineSearch.py
- Wolfe Line Search
- Newton's Method
- Quasi-Newton Method
- ......

These will be added to this README.
After the Beckmann transformation, the user equilibrium problem becomes a convex optimization problem. We can then use the Frank-Wolfe algorithm to find the descent direction, determining the step size with a line search.
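A minimal Frank-Wolfe sketch on a toy convex problem may help fix ideas (this stands in for the traffic case, where the linear subproblem is an all-or-nothing assignment; the simplex problem and function names here are illustrative, not the repo's code):

```python
import numpy as np

def frank_wolfe(grad_f, x0, n_iter=2000):
    """Frank-Wolfe over the probability simplex with step size 2/(k+2)."""
    x = x0.copy()
    for k in range(n_iter):
        g = grad_f(x)
        # Linear subproblem over the simplex: the minimizer is the
        # vertex with the smallest gradient component
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0
        gamma = 2.0 / (k + 2.0)      # standard diminishing step
        x = x + gamma * (s - x)      # (the repo chooses gamma by line search)
    return x

# Minimize ||x - c||^2 over the simplex; c lies in the simplex,
# so the iterates should approach c
c = np.array([0.2, 0.5, 0.3])
x_fw = frank_wolfe(lambda x: 2 * (x - c), np.array([1.0, 0.0, 0.0]))
```

Because each iterate is a convex combination of feasible points, the method never leaves the feasible set, which is exactly why it suits the traffic assignment polytope.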
[Traffic Assignment-Course Project](Traffic%20Assignment-Course%20Project.pdf)
In the Sioux Falls network, the capacity and free-flow travel time of each link are given.
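The link performance function for such networks is commonly the BPR formula (an assumption here; the course project may use different coefficients), whose integral gives the corresponding term of the Beckmann objective:

```python
def bpr_time(v, t0, c, alpha=0.15, beta=4):
    """BPR link travel time: t0 * (1 + alpha * (v/c)^beta)."""
    return t0 * (1 + alpha * (v / c) ** beta)

def bpr_integral(v, t0, c, alpha=0.15, beta=4):
    """Integral of the BPR function from 0 to v (one Beckmann objective term)."""
    return t0 * (v + alpha * v * (v / c) ** beta / (beta + 1))

# At zero flow the travel time equals the free-flow time t0
t_empty = bpr_time(0.0, t0=6.0, c=25900.0)
```

Summing `bpr_integral` over all links yields the objective that the Frank-Wolfe line search minimizes along the descent direction.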
We can use the above two equations to obtain the objective as a function of the step size. Then update the link flows until convergence.
Source Code: solution_sious_falls.py