Backend: Browser (remote) #40

Open
almarklein opened this issue Nov 22, 2024 · 0 comments

@almarklein (Member):

Would be interesting to have a generic browser backend. A bit like jupyter_rfb but without Jupyter. It may also make it easier to test / benchmark new techniques.

Overview in short

The simplified and naive approach:

  • Context must present as image.
  • Canvas backend sends image over network.
  • Browser presents the image.
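
A minimal sketch of that naive loop, assuming a plain binary websocket (the `websockets` package here) and a placeholder `render_frame()` standing in for "context presents as image" — none of these names are part of the rendercanvas API:

```python
# Hypothetical sketch: stream raw RGBA frames over a binary websocket.
# render_frame() is a stand-in for the context presenting as an image.
import asyncio

import numpy as np
import websockets  # pip install websockets


def render_frame(width=640, height=480):
    """Produce one RGBA frame as a numpy array (placeholder)."""
    return np.zeros((height, width, 4), dtype=np.uint8)


async def stream_frames(ws):
    while True:
        frame = render_frame()
        await ws.send(frame.tobytes())  # raw rgba, no encoding yet
        await asyncio.sleep(1 / 30)  # ~30 fps


async def main():
    async with websockets.serve(stream_frames, "localhost", 8765):
        await asyncio.Future()  # run until cancelled


if __name__ == "__main__":
    asyncio.run(main())
```

The browser side would receive the bytes and draw them onto a canvas (or set them as an image source), which is where the encoding choices below start to matter.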

More detailed steps and notes on performance

See also pygfx/wgpu-py#378

We can also use a custom present-method, so that the image can be encoded. That broadens the options quite a bit. Assuming rendering with wgpu here:

  • Encoding the image on the GPU (optional):
    • The purpose is to reduce the image size (in bytes), so the next steps are faster.
    • To make this step fast, we have to stick to encodings that parallelize well.
    • There are compressed texture formats in wgpu.
    • YUV encoding (yuv420 stores 1.5 bytes per pixel versus 4 for rgba, i.e. 37.5% of the size).
    • JPEG encoding with variable quality.
    • Diff images.
    • MP4 encoding.
  • Downloading from GPU into RAM:
    • The bottleneck is the number of bytes to transfer.
    • For small images, as typical in a notebook, this step does not cost a lot of time.
    • For larger images, going towards full screen, the impact becomes very significant (when downloading raw rgba data).
    • Also see the little benchmark in wgpu-py#586 (Refactor canvas context to allow presenting as image).
  • Encoding the image on the CPU:
    • Can be done by the context, or in canvas._rc_present_xx(), or a bit in both.
    • Let's assume we have a binary websocket, but if we don't, then base64 encoding is part of this step too.
    • jupyter_rfb does png and jpg encoding (a small sketch follows this list).
    • Making the image size smaller will make the next step faster.
    • But the encoding also greatly affects the presentation in the browser.
  • Sending over the network (IO):
    • The speed of this step is pretty good if you're on localhost.
    • Otherwise, the speed can vary a lot, depending on where the server and browser are.
    • Speed can vary over time too.
    • It would be nice to measure this speed and use it to control the encoding steps above (a small sketch of this follows below).
  • Presenting in the browser:
    • This step includes any necessary decoding. Browsers have built-in support for some decoding mechanisms.
    • jupyter_rfb sets the src of an <img> object to the base64 encoded png/jpg. This is surprisingly fast.
    • Using a canvas. IIRC drawing an image to a canvas is pretty slow.
    • Using WebGL / WebGPU.
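
For the CPU-encoding step, here is a rough sketch of the jupyter_rfb-style approach: JPEG via Pillow, optionally base64-encoded for a text websocket. The function name and defaults are illustrative only:

```python
# Hypothetical sketch of CPU-side encoding (JPEG, similar to what jupyter_rfb does).
# Pillow is used here for simplicity; any encoder would do.
import base64
import io

import numpy as np
from PIL import Image


def encode_frame(rgba: np.ndarray, quality: int = 80, as_base64: bool = False):
    """Encode an RGBA frame to JPEG bytes (or a base64 data URL)."""
    img = Image.fromarray(rgba).convert("RGB")  # JPEG has no alpha channel
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    data = buf.getvalue()
    if as_base64:
        return "data:image/jpeg;base64," + base64.b64encode(data).decode("ascii")
    return data
```

In the browser such a payload can be decoded natively, e.g. by assigning it to the src of an <img> element, which is what jupyter_rfb does.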

There are multiple paths through these steps. What's tricky is that a choice in one step affects the others, so we cannot simply pick "the best" option for each step and stack them together.
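
As an example of how a later step could feed back into an earlier one: measure how long each send takes and use that to steer the JPEG quality. A naive sketch — the class, thresholds, and step sizes are made up, and it reuses the hypothetical encode_frame() from the sketch above:

```python
import time


class AdaptiveQuality:
    """Naive controller: lower JPEG quality when sends are slow, raise it when fast."""

    def __init__(self, quality=80, lo=30, hi=95, target_ms=20.0):
        self.quality = quality
        self.lo, self.hi = lo, hi
        self.target_ms = target_ms  # time budget per frame send

    def update(self, send_time_s: float) -> int:
        """Adjust the quality based on how long the last send took."""
        ms = send_time_s * 1000.0
        if ms > 1.5 * self.target_ms:
            self.quality = max(self.lo, self.quality - 5)
        elif ms < 0.5 * self.target_ms:
            self.quality = min(self.hi, self.quality + 2)
        return self.quality


async def send_encoded(ws, frame, controller):
    """Time the send and feed the measurement back into the controller."""
    payload = encode_frame(frame, quality=controller.quality)  # from the sketch above
    t0 = time.perf_counter()
    await ws.send(payload)
    controller.update(time.perf_counter() - t0)
```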

MP4 has nice properties: it holds up well in situations with low or variable bandwidth, and it very naturally improves the result over time for static images.

Something like WebRTC can possibly be used to implement some of these steps.
