Describe the issue
I'm using BlenderProc to build a dataset for a semantic segmentation task. I have to render the objects at different distances, so most of the time they are very small in the frame. The segmentation maps come out disconnected, which is bad for model training.
Minimal code example
import blenderproc as bproc  # must be the first import in a BlenderProc script
import os
import random
from glob import glob

import numpy as np

# sample_sun_loc(), create_tracking() and write_image() are project helpers
# that are not part of this snippet.


def render_scene(scene_dir, distance="random", num_renders=100, resolution=3072):
    metadata = {}
    print(f"scene_dir: {scene_dir}")
    scene = glob(str(scene_dir) + "/*.blend")[0]

    # load the objects into the scene
    # blend_file = scene
    # objs = bproc.loader.load_blend(blend_file)

    # define a light and set its location and energy level
    light = bproc.types.Light()
    light.set_type("SUN")
    light.set_location(sample_sun_loc())
    field_of_view = np.deg2rad(0.4)
    # decrease the energy level to mimic d1-images
    light.set_energy(5)

    # define the camera intrinsics
    bproc.camera.set_intrinsics_from_blender_params(lens=field_of_view, lens_unit="FOV", clip_end=np.inf)
    bproc.camera.set_resolution(resolution, resolution)

    # create camera poses for num_renders renders; each camera pose corresponds to one image
    for i in range(num_renders):
        if isinstance(distance, str):
            distance = random.randint(50000, 200000)
        # distance = random.randint(10000, 20000)
        # distance = 5000
        position = np.random.uniform([-distance, -distance, -distance], [distance, distance, distance])
        x = position[0]
        y = position[1]
        # project the sampled point back onto a sphere of radius `distance`
        z = np.sqrt(np.abs(distance**2 - x**2 - y**2))
        position = [x, y, z]
        # orient the camera so it tracks the origin
        quat, rot_matrix = create_tracking(position, [0, 0, 0])
        # print('position ', position, 'rot ', quat, rot_matrix.shape)
        matrix_world = bproc.math.build_transformation_mat(position, rot_matrix)
        bproc.camera.add_camera_pose(matrix_world)

    # activate segmentation rendering, then render the whole pipeline
    # bproc.renderer.set_world_background([0, 0, 0])
    bproc.renderer.enable_segmentation_output(
        map_by=["category_id", "instance", "name"], default_values={"category_id": 1}, pass_alpha_threshold=0.01
    )
    data = bproc.renderer.render()

    satellite_name = scene.split("/")[-1].split(".")[0]
    try:
        os.mkdir("output/" + satellite_name)
    except OSError:
        print("Folder exists...")
    bproc.writer.write_hdf5("output/" + satellite_name, data)
    bproc.clean_up()
    for hdf5_file in glob("output/" + satellite_name + "/*.hdf5"):
        write_image(hdf5_file, "output")
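For completeness, a hypothetical entry point for the function above (the scene path is a placeholder). BlenderProc scripts are launched with "blenderproc run script.py", and bproc.init() must run before any other bproc call:

import blenderproc as bproc  # must stay the first import

bproc.init()
render_scene("scenes/my_satellite", distance="random", num_renders=10, resolution=3072)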
Files required to run the code
No response
Expected behavior
BlenderProc version
4.2
These seem to be aliasing effects of Blender's raytracing; I am not sure there is anything we can do here.
One alternative would be to render the mask at a higher resolution and scale it down afterwards.
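For illustration, a minimal sketch of that workaround in plain NumPy: render at k times the target resolution, then reduce each k x k block with a max, so a thin structure that covers even one supersampled pixel survives (a plain nearest-neighbour resize can still drop it). The segmap key name below is an assumption and may differ between BlenderProc versions.

import numpy as np

def downscale_segmap(segmap, k):
    """Reduce a (k*H, k*W) integer label map to (H, W) by per-block max."""
    h, w = segmap.shape[0] // k, segmap.shape[1] // k
    blocks = segmap[: h * k, : w * k].reshape(h, k, w, k)
    # max keeps any nonzero (foreground) label present in the block;
    # for maps with many classes a per-block mode would be more principled
    return blocks.max(axis=(1, 3))

# e.g. render 4x larger: bproc.camera.set_resolution(4 * 3072, 4 * 3072)
# then: small = downscale_segmap(data["category_id_segmaps"][0], k=4)  # key name may vary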
I see. Do you think increasing the number of rays could remedy that?
Also, your alternative solution just might work! I'll try it and report back. Thanks for the reply.
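On the ray-count idea: recent BlenderProc versions expose the Cycles sample count, sketched below. Note that the segmentation pass is typically rendered without anti-aliasing, so raising samples mainly smooths the RGB output and is unlikely to reconnect the masks; the higher-resolution route is the more promising fix.

import blenderproc as bproc

bproc.init()
# raise the Cycles sample count; this smooths the colour render,
# but the segmentation pass itself is not anti-aliased
bproc.renderer.set_max_amount_of_samples(1024)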