Performance issue in Dulmage-Mendelsohn #8
This is not the current performance bottleneck. Testing on some medium-sized instances from PGLib, a 2k-bus ACOPF model takes 70 s to compute the DM partition. In a cursory test, the dominant computational cost appears to be computing the maximum matching. However, BipartiteMatching.jl reports excellent performance on much larger graphs than this. The bottleneck may be translating our Graphs.jl graph into the format required by BipartiteMatching.jl?
BipartiteMatching.jl accepts as input a two-dimensional bit array. The quadratic loop over both node sets used to construct this array is likely the bottleneck. To get around this, we need an implementation that takes a sparse graph or matrix as input.
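Even while keeping the bit-array interface, the quadratic pair loop can be avoided by iterating over edges only. A minimal sketch (the function and argument names here are illustrative, not part of our code; it assumes we have an edge list plus dictionaries mapping each vertex set to 1-based row/column indices):

```julia
# Build the BitMatrix input for BipartiteMatching.jl by iterating over the
# edges (O(|E|) assignments), rather than testing edge existence for every
# (u, v) pair (O(|V1||V2|) lookups).
function edge_bitmatrix(edges, row, col)
    A = falses(length(row), length(col))
    for (u, v) in edges
        A[row[u], col[v]] = true
    end
    return A
end
```

Note that allocating the `BitMatrix` is still O(|V1||V2|) memory, so a truly sparse input format would require a change to BipartiteMatching.jl's interface, as suggested above.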
The BlossomV URL appears to be back up, so I can test with the GraphsMatching Hungarian algorithm implementation:

```julia
import Graphs
import GraphsMatching as GM
import SparseArrays

function maximum_matching(graph::Graphs.Graph, set1::Set)
    if !_is_valid_bipartition(graph, set1)
        throw(ArgumentError("set1 is not a valid bipartition of graph"))
    end
    nvert = Graphs.nv(graph)
    weights = SparseArrays.spzeros(nvert, nvert)
    for e in Graphs.edges(graph)
        weights[Graphs.src(e), Graphs.dst(e)] = 1.0
    end
    println("Beginning maximum weight maximal matching")
    result = GM.maximum_weight_maximal_matching(
        graph,
        weights;
        algorithm = GM.HungarianAlgorithm(),
    )
    println("Done with maximum weight maximal matching")
    matching = Dict(
        # The GraphsMatching convention is that mate[n] is -1 if n is unmatched.
        # Calling functions need a map from set1 nodes to set2 (other) nodes.
        n1 => result.mate[n1] for n1 in set1 if result.mate[n1] != -1
    )
    return matching
end
```

Despite accepting sparse data structures, this is actually significantly slower than the BipartiteMatching.jl implementation. The time complexity of the Hungarian algorithm is O(n^3), although I'm not sure if this is O(nm) or a true O(n^3). We may need a Hopcroft-Karp implementation?
Both of these GraphsMatching algorithms are significantly slower than BipartiteMatching (which computes DM for IEEE-118 in 0.7 s).
Timing results on small PGLib instances, with an initial implementation of Hopcroft-Karp in Graphs.jl.
The IEEE-118 case is comparable to BipartiteMatching. On GOC-2742, BipartiteMatching takes 37.4 s.
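For reference, the core of Hopcroft-Karp fits in a short, self-contained sketch in plain Julia. This is an illustration of the algorithm, not the Graphs.jl implementation; the adjacency-list input format and function name are assumptions:

```julia
# Minimal Hopcroft-Karp sketch. `adj[u]` lists the right-side neighbors of
# left-side vertex u; `nright` is the size of the right vertex set.
# Returns a Dict mapping matched left vertices to their right partners.
function hopcroft_karp(adj, nright)
    nleft = length(adj)
    INF = typemax(Int)
    match_l = zeros(Int, nleft)   # 0 means unmatched
    match_r = zeros(Int, nright)
    dist = fill(INF, nleft)

    # BFS phase: layer the free left vertices; report whether any
    # augmenting path exists.
    function bfs!()
        queue = Int[]
        for u in 1:nleft
            dist[u] = match_l[u] == 0 ? 0 : INF
            match_l[u] == 0 && push!(queue, u)
        end
        found = false
        i = 1
        while i <= length(queue)
            u = queue[i]; i += 1
            for v in adj[u]
                w = match_r[v]
                if w == 0
                    found = true
                elseif dist[w] == INF
                    dist[w] = dist[u] + 1
                    push!(queue, w)
                end
            end
        end
        return found
    end

    # DFS phase: augment along a shortest layered path from u, if any.
    function dfs!(u)
        for v in adj[u]
            w = match_r[v]
            if w == 0 || (dist[w] == dist[u] + 1 && dfs!(w))
                match_l[u] = v
                match_r[v] = u
                return true
            end
        end
        dist[u] = INF
        return false
    end

    while bfs!()
        for u in 1:nleft
            match_l[u] == 0 && dfs!(u)
        end
    end
    return Dict(u => match_l[u] for u in 1:nleft if match_l[u] != 0)
end
```

Each BFS/DFS phase is O(|E|), and O(sqrt(|V|)) phases suffice, giving the O(|E| sqrt(|V|)) bound that makes HK much faster than the Hungarian algorithm for unweighted matching.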
The time spent computing the matching itself seems small: HK on GOC-2742 takes 0.15 s. The rest of the time is probably the quadratic loop mentioned above. When I fix this quadratic loop, GOC-2742 takes about 1 s.
Results on all PGLib instances with HK matching and fixed quadratic loop:
This is more like what I would expect.
Now I need to:
Fixed in #15
`dulmage_mendelsohn.jl` uses the following: As `filter` is a vector, this is quadratic time and will become slow if `nodes`, the vector of nodes in the bipartite graph, becomes large. As we do expect this set of nodes to become large, we should implement `filter` as a set.