Efficient way to infer visibility of object endpoints #1153
Comments
Hi @zadaninck, we are actually facing a similar question in our latest post in issue #1150.
Hey @zadaninck and @march038, there are multiple ways to do this:
import blenderproc as bproc
import numpy as np

# Build a BVH tree over all objects for fast ray casting
bvh_tree = bproc.object.create_bvh_tree_multi_objects(objs)
# Project your 3D points into 2D pixel space
points2D = bproc.camera.project_points(points3D)
# Send rays through 2D points and get hit distance
hit_distances = bproc.camera.depth_at_points_via_raytracing(bvh_tree, points2D, return_dist=True)
# Compute distance of original 3D points from camera
point3D_distances = np.linalg.norm(points3D - bproc.camera.get_camera_pose()[None, :3,3], axis=-1)
# If hit_distance equals actual distance, then the 3D points are visible in the image
points_visible = np.abs(point3D_distances - hit_distances) < 1e-2
You might need to adjust the threshold in the last line. Further, each method supports a frame parameter to select a specific camera frame/pose that you want to use.
Hi @cornerfarmer, thank you for your response! Personally, we think it would be a good idea to put this into an extra method. It would make sense to add functionality so that it can be applied either to points or to objects. For objects, as you proposed, a threshold parameter would let the user decide whether the whole object needs to be visible or only e.g. 70% of it for the object to be classified as visible. I looked for a bpy functionality to sample e.g. 1000 points on a mesh and then run the script you gave above on these points, but couldn't find one, so I am open to your ideas.
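Lacking a built-in sampler, surface points can be drawn with area-weighted barycentric sampling over the mesh triangles. A pure-NumPy sketch (the helper name `sample_surface_points` is made up here, not a BlenderProc or bpy API; the vertex and face arrays would have to be extracted from the Blender mesh first):

```python
import numpy as np

def sample_surface_points(vertices, faces, n_samples=1000, rng=None):
    """Draw uniform random points on a triangle mesh (hypothetical helper).

    vertices: (V, 3) float array, faces: (F, 3) int array of triangle indices.
    """
    rng = np.random.default_rng() if rng is None else rng
    tris = vertices[faces]                                  # (F, 3, 3)
    # Triangle areas via the cross product of two edge vectors
    cross = np.cross(tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0])
    areas = 0.5 * np.linalg.norm(cross, axis=-1)
    # Pick triangles with probability proportional to their area
    idx = rng.choice(len(faces), size=n_samples, p=areas / areas.sum())
    # Uniform barycentric coordinates (square-root trick avoids corner bias)
    r1, r2 = rng.random(n_samples), rng.random(n_samples)
    s1 = np.sqrt(r1)
    a, b, c = 1.0 - s1, s1 * (1.0 - r2), s1 * r2
    t = tris[idx]
    return a[:, None] * t[:, 0] + b[:, None] * t[:, 1] + c[:, None] * t[:, 2]
```

The returned (n_samples, 3) array could then be used directly as the points3D input of the ray-casting snippet above. Note that quads would need to be triangulated first, and the vertices must be in world coordinates for the camera-distance comparison to hold.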
Describe the issue
Hi,
I am using BlenderProc to simulate the drop of 2000+ objects onto a flat surface, with a camera located above the flat surface. The objects are approximately a rectangular cuboid.
After rendering the simulation, I want to know for each individual object whether the endpoints are visible or occluded. The endpoints are defined as the 10% of the object at each edge along the length axis of the cuboid (an approximate drawing can be seen in the image). Is it possible to perform this detection using BlenderProc and, if so, can it be done in an efficient way? At this moment, I am already calculating the occlusion % by iterating over each vertex of each object and verifying whether it is visible from the camera POV, but I'd also like to detect whether the endpoints are visible.
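One way to restrict the per-vertex visibility check to the endpoints is to select only the vertices lying in the two end regions along the longest axis of the object's bounding box. A minimal NumPy sketch, assuming local (unrotated) vertex coordinates and an axis-aligned box; for rotated objects in world space, an oriented bounding box or PCA on the vertices would be more robust:

```python
import numpy as np

def endpoint_vertex_mask(vertices, fraction=0.10):
    """Mark vertices lying in the two end regions of a cuboid-like mesh.

    vertices: (V, 3) float array of local coordinates.
    fraction: size of each end region as a fraction of the object length.
    Returns a boolean mask over the vertices.
    """
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    axis = int(np.argmax(hi - lo))          # longest bounding-box dimension
    length = hi[axis] - lo[axis]
    coords = vertices[:, axis]
    near_end = coords <= lo[axis] + fraction * length
    far_end = coords >= hi[axis] - fraction * length
    return near_end | far_end
```

The masked vertices (transformed to world space) can then be run through the same camera-visibility test as the full vertex set, giving a per-endpoint visible fraction per object.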
Minimal code example
Files required to run the code
No response
Expected behavior
Detection of object endpoints
BlenderProc version
v2.7.1