3D (Oriented) Bounding Boxes and projection on 2D rendered images #1150
Hey @march038, this should be pretty easy to do nowadays in BlenderProc:

```python
points = bproc.camera.project_points(obj.get_bound_box(), frame=0)
```

For a given object, this will draw red points into the RGB image for each bbox corner:

```python
data = bproc.renderer.render()
for p in points:
    data["colors"][0][int(p[1])][int(p[0])] = [255, 0, 0]
```

Let me know if you have any further questions.
Hi @cornerfarmer, and thank you for the fast reply! I'm not sure where to add this code, as I don't fully understand the whole workflow BlenderProc uses yet. Since we don't want bboxes only in one specific module, I think it would make sense to adapt the renderer that is called in every module. I tried to work out which part of BlenderProc is responsible for rendering the "normal" image and could be modified for this use case; I suspect it is the render function in python/renderer/RendererUtility.py. If you could explain where and how to modify BlenderProc for this, we would very much appreciate your help!
You don't need to modify the bproc code. The code I wrote simply goes into your own script.
Thanks a lot @cornerfarmer, we really appreciate your help and support! After thoroughly reviewing the function get_bound_box, we finally understood it; apologies for not catching it earlier. We extended the semantic_segmentation main.py to not only create the .hdf5 container but also write all the bbox points as lists into a text file. We realized that get_bound_box exports the points in a fixed order, which we used to implement our own bbox functionality.

After generating the text file and the .hdf5 container in the first step, we wrote a second script that saves the 'colors' / normal image from the hdf5 as a PNG, and a third script that draws the points onto the image in different colors according to their scheme and connects them with white lines. To establish the patterns of which points to connect, we simply looked at how the points of each bbox are distributed and hard-coded this into the script. This works really well, as you can see in the picture below, where a tank's parts are segmented into multiple meshes.

Problem to solve: looking at our current code from above, the function is_point_inside_camera_frustum only checks whether a point lies inside the camera's view frustum, not whether it is actually visible (i.e. not occluded by other geometry), so that is not the way to go. Are there any functionalities in BlenderProc that we could use for this?

P.S.: We could for sure share our code for drawing the bbox points and edges onto the image if you think a 3D bounding box module might be something to implement in the future.
Describe your feature request
Hi everyone,
We are currently using and testing BlenderProc at Fraunhofer IOSB to set up synthetic data creation pipelines in Blender for training visual language models, and we would also like to collaborate with you as the dev team to implement new functions. As a team of two who are really new to Blender and BlenderProc, we would very much welcome your collaboration, as we think BlenderProc is absolutely fantastic to work with and to develop further.
For our case we need segmentation masks, annotations and bounding boxes which are mostly already implemented.
As we aim for 3D object detection and 6D pose estimation, we are missing a 3D bounding box feature. I saw that issue #444 already asked for this and @andrewyguo offered to work on it, but it seems the feature was abandoned.
What we need is a functionality that transforms the three-dimensional coordinates of an object's bounding box vertices into the two-dimensional coordinates they would have in the rendered image, depending on the camera perspective.
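For reference, the underlying math is a standard pinhole projection: transform the world-space corner into camera space via the inverse camera pose, then apply the intrinsics and a perspective divide. This is a minimal self-contained sketch (the matrix names `K` and `cam2world` are our own; in BlenderProc itself `bproc.camera.project_points` already covers this):

```python
import numpy as np

def project_points(points_world, K, cam2world):
    """Project Nx3 world-space points to pixel coordinates.

    K:          3x3 camera intrinsics matrix.
    cam2world:  4x4 camera-to-world matrix (as Blender stores it);
                its inverse maps world space into camera space.
    Returns an Nx2 array of (x, y) pixel coordinates.
    """
    world2cam = np.linalg.inv(cam2world)
    pts = np.hstack([points_world, np.ones((len(points_world), 1))])
    cam = (world2cam @ pts.T).T[:, :3]          # camera-space coordinates
    cam[:, 1:] *= -1        # Blender cameras look down -Z with +Y up;
                            # flip to the usual computer-vision convention
    pix = (K @ cam.T).T
    return pix[:, :2] / pix[:, 2:3]             # perspective divide

# Example: fx = fy = 100, principal point at (50, 50),
# camera at the origin looking down -Z.
K = np.array([[100., 0., 50.], [0., 100., 50.], [0., 0., 1.]])
cam2world = np.eye(4)
pt = np.array([[0.0, 0.0, -2.0]])               # 2 units in front of the camera
print(project_points(pt, K, cam2world))         # -> [[50. 50.]]
```

A point on the optical axis lands exactly on the principal point, which is a quick sanity check for the sign conventions.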
We are thinking about defining our own JSON format containing the X and Y coordinates of the bbox corners in the 2D image.
The format could look like:
```json
{
  "image_id": "image_001",
  "objects": [
    {
      "class": "car",
      "bounding_box_3d": {
        "vertices": [
          {"x": 100, "y": 200},
          {"x": 150, "y": 200},
          {"x": 150, "y": 250},
          {"x": 100, "y": 250},
          {"x": 105, "y": 205},
          {"x": 155, "y": 205},
          {"x": 155, "y": 255},
          {"x": 105, "y": 255}
        ]
      }
    }
  ]
}
```
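Producing this format from projected points could look like the following sketch (the image id, class name, and the helper `bbox_to_annotation` are placeholders of ours; the eight (x, y) pairs are assumed to come from a projection step):

```python
import json

def bbox_to_annotation(image_id, class_name, points_2d):
    """Build one annotation entry from eight projected bbox corners.

    points_2d: iterable of eight (x, y) pixel-coordinate pairs.
    """
    return {
        "image_id": image_id,
        "objects": [{
            "class": class_name,
            "bounding_box_3d": {
                "vertices": [{"x": round(x), "y": round(y)} for x, y in points_2d]
            }
        }],
    }

corners = [(100, 200), (150, 200), (150, 250), (100, 250),
           (105, 205), (155, 205), (155, 255), (105, 255)]
annotation = bbox_to_annotation("image_001", "car", corners)
print(json.dumps(annotation, indent=2))
```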
We would love to see this implemented, as we would rather leverage the awesome functions and architecture BlenderProc already has than try to build something separately.
Describe a possible solution
code_base.zip
We already experimented with a GPT-generated script for projecting an object's bounding box in Blender onto the rendered image without needing BlenderProc, but it seems the transformation from the bounding box's three-dimensional world coordinates to pixel coordinates in the 2D rendered picture has some issues, as the box is not at the right spot. We also used a fixed camera instead of camera_positions, as we didn't use BlenderProc for this.
The process consists of two scripts: first, the transformation script converts the bbox world coordinates into pixel coordinates depending on the camera perspective and writes the output to a JSON file; then a visualization script draws these pixel coordinates onto the rendered image.
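The visualization step can be sketched in plain NumPy. The `EDGES` list below encodes which corners to connect and assumes Blender's `bound_box` corner ordering (corners 0-3 on one face, 4-7 on the opposite face); the drawing helpers are our own illustration, not library code:

```python
import numpy as np

# Edge list assuming Blender's bound_box corner order:
# 0..3 form one face, 4..7 the opposite face, each ordered the same way.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0),      # first face
         (4, 5), (5, 6), (6, 7), (7, 4),      # opposite face
         (0, 4), (1, 5), (2, 6), (3, 7)]      # connecting edges

def draw_line(img, p0, p1, color):
    """Rasterize a line by sampling evenly between p0 and p1."""
    n = int(max(abs(p1[0] - p0[0]), abs(p1[1] - p0[1]))) + 1
    for t in np.linspace(0.0, 1.0, n):
        x = int(round(p0[0] + t * (p1[0] - p0[0])))
        y = int(round(p0[1] + t * (p1[1] - p0[1])))
        if 0 <= y < img.shape[0] and 0 <= x < img.shape[1]:
            img[y, x] = color

def draw_bbox(img, corners_2d):
    """Draw the 12 box edges in white and the 8 corners in red."""
    for i, j in EDGES:
        draw_line(img, corners_2d[i], corners_2d[j], [255, 255, 255])
    for x, y in corners_2d:
        if 0 <= int(y) < img.shape[0] and 0 <= int(x) < img.shape[1]:
            img[int(y), int(x)] = [255, 0, 0]

# Example on a blank 300x300 RGB image with hand-picked corner pixels.
img = np.zeros((300, 300, 3), dtype=np.uint8)
corners = [(100, 200), (100, 150), (150, 150), (150, 200),
           (120, 220), (120, 170), (170, 170), (170, 220)]
draw_bbox(img, corners)
```

If the corner ordering differs from this assumption, only the `EDGES` table needs adjusting; the drawing code stays the same.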
In the end, the real feature could either render a second image with the 3D bounding box projected onto the image/object, or be a dedicated module for this functionality. I also think it can't hurt to keep a feature that exports the pixel coordinates of the bounding box in JSON format.
I attach our code as well as an image showing the rendered result and how the bounding box got projected.