Proposal: Use images as unit of observation instead of sampled points #81

@jayqi

Description

Status quo

Currently, the output of step 2, "Match an image to each point" (assign_images.py), keeps the sampled points as the unit of observation.

This is reflected in a few design choices:

  • The output geodataframe/GeoPackage file has the points as the rows.
  • We attempt to match as many sampled points as possible to images.
  • We don't allow multiple sampled points to map to the same image. If an image has already been "claimed", a point will fall back to the next-closest unclaimed image.
  • The primary geometry of the Point features is still the geolocation of the sampled points, not the geolocations of the images.
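The one-to-one "claiming" behavior described above can be sketched as a greedy nearest-unclaimed search. This is an illustrative sketch, not the actual assign_images.py implementation; the data structures and coordinates are hypothetical:

```python
from math import hypot

def assign_images(points, images):
    """Greedily match each sampled point to its nearest unclaimed image.

    points, images: dicts mapping an id to an (x, y) coordinate pair.
    Returns a dict mapping point id -> claimed image id.
    """
    claimed = set()
    matches = {}
    for pid, (px, py) in points.items():
        # Rank all images by distance to this point, nearest first.
        ranked = sorted(
            images,
            key=lambda iid: hypot(images[iid][0] - px, images[iid][1] - py),
        )
        for iid in ranked:
            if iid not in claimed:  # skip images already claimed by another point
                claimed.add(iid)
                matches[pid] = iid
                break
    return matches

points = {"p1": (0.0, 0.0), "p2": (0.1, 0.0)}
images = {"img_a": (0.05, 0.0), "img_b": (5.0, 0.0)}
# Both points are closest to img_a, but p2 must fall back to the much
# farther img_b because img_a is already claimed.
print(assign_images(points, images))  # → {'p1': 'img_a', 'p2': 'img_b'}
```

Note how p2 ends up matched to an image 4.9 units away even though a near-identical image sits 0.05 units from it; this is the behavior the proposal below questions.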

Proposed change

I propose that the output should instead have the images as the unit of observation, with the geolocation of the images as the primary geometry of the geospatial dataset.

  • The sampled points are somewhat artificial. We provided roads that we want to analyze, and from those roads we sampled points, but there isn't actually any data associated with those points. The real data is associated with the imagery and is physically located at the images' geolocations.
  • If multiple points have the same closest image, we probably just care about that one image. It doesn't make sense to pull in a farther-away image just to hit an arbitrary target number of images.

We should think of the "matching" step as more like a "spatial query": given a dataset of street-level imagery, we are querying a subset of that imagery based on the intersection with a set of evenly spaced points along roads we care about.
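The "spatial query" framing can be sketched as follows: each sampled point looks up its nearest image, and the result set is the deduplicated set of images, keyed and geolocated by the images themselves. Again this is a hypothetical sketch (in a real implementation this might be something like a nearest-neighbor spatial join followed by dropping duplicate image rows):

```python
from math import hypot

def query_images(points, images):
    """Return the subset of images that are the nearest image to at least
    one sampled point. Images are the unit of observation, so two points
    sharing a nearest image collapse naturally into one row.

    points, images: dicts mapping an id to an (x, y) coordinate pair.
    """
    hits = set()
    for px, py in points.values():
        nearest = min(
            images,
            key=lambda iid: hypot(images[iid][0] - px, images[iid][1] - py),
        )
        hits.add(nearest)
    # Output rows are keyed by image id; the geometry is the image's
    # own location, not the sampled point's.
    return {iid: images[iid] for iid in hits}

points = {"p1": (0.0, 0.0), "p2": (0.1, 0.0)}
images = {"img_a": (0.05, 0.0), "img_b": (5.0, 0.0)}
# Both points query down to the single nearby image; img_b is never pulled in.
print(query_images(points, images))  # → {'img_a': (0.05, 0.0)}
```

Compared with the greedy one-to-one matching, the same two points now yield a single image row rather than forcing a match to a distant second image.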

This change would have the following interactions or implications with these open issues:

Metadata

Status: Todo - high priority