Hi @Volcomix, thanks for this nice work.
I'm interested in the light wrapping function of the post-process pipeline and tried to rewrite the following function in Python.
virtual-background/src/pipelines/webgl2/backgroundImageStage.ts
Lines 68 to 78 in 8530c56
However, the output differs from the output of your original code...
I've pasted my Python implementation here:
```python
import cv2
import numpy as np


def screen(a, b):
    return 1.0 - (1.0 - a) * (1.0 - b)


def linear_dodge(a, b):
    return a + b


def clamp(x, lowerlimit, upperlimit):
    if x < lowerlimit:
        x = lowerlimit
    if x > upperlimit:
        x = upperlimit
    return x


def smooth_step(edge0, edge1, x):
    # Scale, bias and saturate x to 0..1 range
    x = clamp((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    # Evaluate polynomial
    return x * x * (3 - 2 * x)


def light_wrapping(segmentation_mask, image, background,
                   cov_x=0.6, cov_y=0.8, wrap_cof=0.3, blend_mode='linear'):
    light_wrap_mask = 1.0 - np.maximum(0, segmentation_mask - cov_y) / (1 - cov_y)
    light_wrap = wrap_cof * light_wrap_mask[:, :, np.newaxis] * background
    if blend_mode == 'linear':
        frame_color = linear_dodge(image, light_wrap)
    elif blend_mode == 'screen':
        frame_color = screen(image, light_wrap)
    smooth = lambda i: smooth_step(cov_x, cov_y, i)
    vectorized_smooth = np.vectorize(smooth)
    person_mask = vectorized_smooth(segmentation_mask)
    return person_mask, frame_color


def combind_frond_back_ground(person_mask, image, background):
    # Fixed: the blend weight was referenced as an undefined name (mix_value)
    mix_value = person_mask[:, :, np.newaxis]
    output_image = image * mix_value + background * (1.0 - mix_value)
    # Scale back to 8-bit since the blend math is done in [0, 1]
    output_image = (output_image * 255.0).astype(np.uint8)
    return output_image


def main():
    image_src = "path to input image"
    # The blend formulas assume colors in [0, 1], as in the original shader,
    # so normalize the uint8 images read by OpenCV
    image = cv2.imread(image_src).astype(np.float32) / 255.0  # (720, 1280, 3)
    background_src = "path to background image"
    background = cv2.imread(background_src).astype(np.float32) / 255.0
    # assuming image and background have the same size (720, 1280, 3)
    segmentation_mask = "segmentation result from model"  # (720, 1280), values in [0, 1]
    person_mask, frame_color = light_wrapping(segmentation_mask, image, background)
    output_frame = combind_frond_back_ground(person_mask, frame_color, background)
    return output_frame
```
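As a side note, `np.vectorize` calls the lambda once per pixel in Python, which is slow for a 720×1280 mask. A minimal array-level sketch of the same smoothstep (the name `smooth_step_np` is my own, not from the original code) would be:

```python
import numpy as np


def smooth_step_np(edge0, edge1, x):
    # Vectorized equivalent of GLSL smoothstep: works on scalars or whole arrays
    t = np.clip((x - edge0) / (edge1 - edge0), 0.0, 1.0)
    return t * t * (3.0 - 2.0 * t)


# Applies to the whole mask at once, no per-element Python loop
# person_mask = smooth_step_np(cov_x, cov_y, segmentation_mask)
```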
I suspect something might be wrong in smooth_step, but I cannot find more information apart from this link.
Could you help me check whether there is an obvious error in this Python code, please?
Thanks in advance~
Hi @carter54, I don't see anything obvious from reading the code. smooth_step also looks fine, as it seems consistent with the OpenGL documentation.
It might help if you could post the resulting output and maybe some intermediate outputs, like the masks, the output before smooth_step, and a few others.
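For collecting those intermediate outputs, something like this could convert each float array (assumed to hold values in [0, 1], as in the pipeline above) to 8-bit for saving; the helper name `to_uint8` is a placeholder of my own:

```python
import numpy as np


def to_uint8(array):
    # Clip to [0, 1] and scale to 8-bit so the array can be saved as an image
    return (np.clip(array, 0.0, 1.0) * 255.0).astype(np.uint8)


# Example usage with OpenCV (file names are placeholders):
#   cv2.imwrite("light_wrap_mask.png", to_uint8(light_wrap_mask))
#   cv2.imwrite("person_mask.png", to_uint8(person_mask))
```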
Hi @carter54, did you ever manage to implement the light wrapping in Python?