
Re-running Wagyu on its own output sometimes makes additional changes #60

Closed
e-n-f opened this issue Jan 4, 2017 · 13 comments

e-n-f commented Jan 4, 2017

tests/geometry-test-data/input-polyjson/croatia1.json is an uncleaned border polygon for Croatia at z0, the one that Tippecanoe calls Wagyu to clean when you run

grep Croatia tests/border/in.json | ./tippecanoe -z1 --detect-shared-borders -f -o foo.mbtiles

If you run Wagyu on this polygon:

./fixture-tester -t union -f positive tests/geometry-test-data/input-polyjson/croatia1.json

you get the cleaned polygon in tests/geometry-test-data/input-polyjson/croatia2.json. However, if you rerun Wagyu on that second polygon:

./fixture-tester -t union -f positive tests/geometry-test-data/input-polyjson/croatia2.json

you get a third polygon, the one in tests/geometry-test-data/input-polyjson/croatia3.json. If you run Wagyu again on the third polygon, it is stable and you get an identical polygon back.

I had expected that giving Wagyu a strictly simple polygon that was previously the output of Wagyu would return the identical polygon rather than making additional changes to it.
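The expectation here is that cleaning is idempotent: running the cleaner on its own output should be a no-op. A minimal sketch of such a stability check (the `snap` cleaner below is a toy stand-in for Wagyu, not its actual algorithm):

```python
def is_stable(clean, geom):
    """Return True if `clean` is idempotent on `geom`:
    cleaning its own output changes nothing."""
    once = clean(geom)
    twice = clean(once)
    return once == twice

# Toy stand-in for a cleaner: snapping coordinates to an integer
# grid is idempotent, so this one passes the check.
def snap(ring):
    return [(round(x), round(y)) for x, y in ring]

print(is_stable(snap, [(0.4, 0.6), (2.2, 0.1), (1.5, 3.0)]))  # True
```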

e-n-f commented Jan 4, 2017

It looks like the first-stage output may actually have some overlapping rings:

[screenshot: screen shot 2017-01-03 at 4.20.41 pm]

but ./tests/run-geometry-tests.sh ./fixture-tester does not complain.

e-n-f commented Jan 4, 2017

It looks like fixture-tester only guarantees that each outer ring of a MultiPolygon is valid with respect to its inner rings, not that the outer rings are free from conflict with each other.
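The missing check is whether outer rings conflict with each other. A minimal sketch of a pairwise edge-crossing test between two rings (illustrative only; it does not catch full containment or shared-boundary cases, which a real validity check would also need):

```python
def orient(a, b, c):
    # Sign of the cross product (b - a) x (c - a).
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def properly_cross(p, q, r, s):
    # True if segments pq and rs cross at an interior point.
    return (orient(p, q, r) * orient(p, q, s) < 0 and
            orient(r, s, p) * orient(r, s, q) < 0)

def edges(ring):
    return [(ring[i], ring[(i+1) % len(ring)]) for i in range(len(ring))]

def rings_cross(a, b):
    # Pairwise edge test between two outer rings.
    return any(properly_cross(p, q, r, s)
               for p, q in edges(a) for r, s in edges(b))

sq1 = [(0, 0), (4, 0), (4, 4), (0, 4)]
sq2 = [(2, 2), (6, 2), (6, 6), (2, 6)]          # overlaps sq1
sq3 = [(10, 10), (12, 10), (12, 12), (10, 12)]  # disjoint from sq1
print(rings_cross(sq1, sq2))  # True
print(rings_cross(sq1, sq3))  # False
```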

flippmoke commented

This has been really hard to test properly due to #34, but it is something I have been concerned about. Good find!

e-n-f commented Jan 4, 2017

Thanks. I just made a PR (#61) to test for stability (after putting the rings in a canonical order) that I hope will help track down some other similar problems.
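Putting rings in a canonical order before comparing avoids false positives from rotations of the same ring. A minimal sketch of one such normalization, assuming rings may carry a duplicated closing point (note it does not normalize winding direction, which a fuller version would also handle):

```python
def canonical(ring):
    # Drop a duplicated closing point, then rotate so the
    # lexicographically smallest vertex comes first.
    if ring[0] == ring[-1]:
        ring = ring[:-1]
    i = ring.index(min(ring))
    return ring[i:] + ring[:i]

def canonical_rings(rings):
    # Sort the normalized rings so ring order doesn't matter either.
    return sorted(canonical(r) for r in rings)

# Two rotations of the same square compare equal once canonicalized.
a = [(2, 0), (2, 2), (0, 2), (0, 0)]
b = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(canonical(a) == canonical(b))  # True
```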

e-n-f commented Jan 4, 2017

The stability test finds 1105 of 3120 outputs to be unstable, which is far more than the 105 that boost::is_valid complains about, so instability is not necessarily an indicator of invalidity.

springmeyer commented

Stability seems like a critical issue - thanks for raising this @ericfischer and for adding a test harness for it. What are the next steps toward getting Wagyu to produce stable output on the first run?

flippmoke commented

In order to really pursue this with vigor, we would need to have #34 properly sorted out so that we can actually debug without all the false negatives. I would also be curious how many of these differences are simply due to a change in the initial position of a polygon's ring. We might have a large number of "stability" issues that are nothing more than a change in the order of the points.

e-n-f commented Jan 9, 2017

@flippmoke I did already make my test code normalize the order of points within each ring, so these are cases where it changes in some other way.

springmeyer commented

In order to really pursue this with vigor, we would need to have #34 properly sorted out so that we can actually debug without all the false negatives.

Hrm, this feels important to de-couple from #34 and possible boost bugs. Why not write our own function to determine validity and stability?

flippmoke commented Jan 9, 2017

Why not write our own function to determine validity and stability?

@springmeyer I have considered such a thing already, but this could be a big undertaking, I am afraid.


springmeyer commented Jan 10, 2017 via email

flippmoke commented

@springmeyer we have code to detect where rings meet at the same point, but not any code that actually does intersection testing between different segments. Additionally, the code within Wagyu might not be ideal to port to an algorithm such as this (although the knowledge from writing Wagyu would help in writing the validity algorithm).

This is in some respects its own algorithm, which is why I have always trusted Boost.Geometry; up until this point it has always been very good.

flippmoke commented

I dug into some of the results of these tests, and they come down to two separate situations:

The first is a situation where snap rounding, when repeated multiple times, results in more snapping than originally occurred. This causes feature shapes to change over successive runs, and unless we ran snap rounding until it stabilized, it will continue to behave this way. However, our current limited use of snap rounding preserves the original shapes of rings better, in my opinion. Therefore, there is likely a speed savings and a good technical reason not to expect stable results when repeatedly running Wagyu.

[screenshot: screen shot 2017-02-08 at 4.05.07 pm]
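The non-idempotence of snap rounding can be illustrated with a toy example (the coordinates below are invented for illustration, not taken from the Wagyu code): snapping an intersection point to the grid moves it, and the displaced edge can then cross a segment it previously missed, so a further pass would be needed.

```python
def orient(a, b, c):
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def properly_cross(p, q, r, s):
    # True if segments pq and rs cross at an interior point.
    return (orient(p, q, r) * orient(p, q, s) < 0 and
            orient(r, s, p) * orient(r, s, q) < 0)

def snap(pt, grid=10):
    # Snap a point to the nearest grid vertex.
    return (round(pt[0]/grid)*grid, round(pt[1]/grid)*grid)

s1 = ((0, 0), (40, 30))     # intersects s2 = (0,30)-(40,0) at (20, 15)
s3 = ((10, 8), (12, 13))    # a third, nearby segment
x = snap((20, 15))          # snapped intersection: (20, 20)

# s3 misses the original segment s1 ...
print(properly_cross(*s1, *s3))        # False
# ... but crosses the snapped piece (0,0)-(20,20), so one
# snapping pass creates a crossing that needs another pass.
print(properly_cross((0, 0), x, *s3))  # True
```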

Additionally, there are sometimes extra collinear points that are kept during Wagyu's processing. They occur in the output because other rings or vectors might touch these points, and they are important to keep because the current validity check would otherwise falsely throw at these points due to rounding errors. During a repeated run, the points that turn out not to touch anything are removed. We could consider adding this to the topology-correction portion of Wagyu; however, I do not view it as urgent to fix, as it might degrade performance.

[screenshot: screen shot 2017-02-08 at 4.02.27 pm]
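The collinear-point cleanup described above can be sketched as follows: a vertex is redundant when it lies exactly on the line between its neighbours, i.e. the cross product of the adjacent edges is zero. This is a generic illustration, not Wagyu's actual topology-correction code:

```python
def drop_collinear(ring):
    # Remove any vertex that lies on the straight line between
    # its neighbours (zero cross product of adjacent edges).
    out = []
    n = len(ring)
    for i in range(n):
        a, b, c = ring[i-1], ring[i], ring[(i+1) % n]
        cross = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
        if cross != 0:
            out.append(b)
    return out

# (2, 0) sits on the bottom edge of the square and is dropped.
square = [(0, 0), (2, 0), (4, 0), (4, 4), (0, 4)]
print(drop_collinear(square))  # [(0, 0), (4, 0), (4, 4), (0, 4)]
```

A real implementation would additionally have to keep a collinear vertex when another ring touches it, which is exactly the case described above.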
