Right now, rendered scenes are pretty opaque: they are hard for machines to parse to extract information about what is being shown and where it is in 3D space.
I would like to propose a solution where the user creates an object graph and attaches it to an entry point on the session, with each object assigned a colour.
There would also be a stencil buffer into which these colours are rendered, so that the device knows what is in the scene.
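To make the shape of this a bit more concrete, here is a minimal sketch of what the author-facing side could look like. Every name in it (`XRSemanticObject`, `semanticGraph`, `describeScene`) is a made-up placeholder for discussion, not proposed spec text:

```ts
// Hypothetical shape of a node in the author-created object graph. None of
// these names exist in WebXR today; they are placeholders for discussion.
interface XRSemanticObject {
  label: string;                     // short description of what the object is
  color: [number, number, number];   // flat colour written into the id/stencil buffer
  boundingBox?: number[];            // optional extras (e.g. an axis-aligned box)
  children?: XRSemanticObject[];     // nested objects form the graph
}

// Stand-in for the "entry point on the session" the proposal mentions.
interface SemanticSession {
  semanticGraph?: XRSemanticObject[];
}

// The app describes its scene once (and again whenever it changes)...
function describeScene(session: SemanticSession): void {
  session.semanticGraph = [
    {
      label: "chair",
      color: [1, 0, 0],
      children: [{ label: "seat cushion", color: [0, 1, 0] }],
    },
    { label: "table", color: [0, 0, 1] },
  ];
}

// ...and, each frame, renders every object with its flat colour into a second
// render target, so the device can map any pixel back to a node in the graph.
```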
Does this sound useful or interesting?
The spec side is pretty light, but we should describe what is expected in the graph:

- What kind of information would you want (e.g. visibility, bounding box)?
- What kind of description is useful? We should make this extensible.
- Should the user pick the colours, or should they be generated by a hash (see the sketch after this list)?
- What carrots should we provide to get developers to actually use it?
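On the colour question from the list above: one purely illustrative option for the hash route is to derive an RGB triple from a stable object id, for example:

```ts
// Illustrative only: derive a stable RGB colour from an object id by hashing,
// so authors never have to pick colours themselves. Collisions are possible
// with only 24 bits, so a real design would need to handle or forbid them.
function colorFromId(id: string): [number, number, number] {
  // FNV-1a style 32-bit hash; any stable hash would do here.
  let h = 0x811c9dc5;
  for (let i = 0; i < id.length; i++) {
    h ^= id.charCodeAt(i);
    h = Math.imul(h, 0x01000193);
  }
  h >>>= 0;
  // Spread the low 24 bits across the three colour channels.
  return [(h & 0xff) / 255, ((h >> 8) & 0xff) / 255, ((h >> 16) & 0xff) / 255];
}

console.log(colorFromId("chair")); // the same id maps to the same colour on every run
```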
/facetoface
Mentioned in an editors' meeting: there's a possibility that this information could also be used as a generic input assist, where we could start surfacing which semantic object a target ray intersected during select events. This could make some types of input easier for developers.
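To sketch what that input assist could look like (again with made-up names, since nothing here is specced), a select event could carry a reference to the graph node whose colour the target ray hit:

```ts
// Hypothetical sketch of the input-assist idea: the UA looks up the colour
// under the target ray in the id buffer and attaches the matching graph node
// to the select event. None of these fields exist in WebXR today.
interface XRSemanticObject {
  label: string;
  color: [number, number, number];
}

interface SemanticSelectEvent {
  semanticTarget?: XRSemanticObject;   // graph node the target ray intersected
}

function onSelect(event: SemanticSelectEvent): void {
  if (event.semanticTarget) {
    // The app learns "what" was picked without doing its own ray casting.
    console.log(`Selected: ${event.semanticTarget.label}`);
  }
}
```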