[V3 proposal] Improved defaults for quantization and device selection #960
Current logic for session selection: https://github.com/xenova/transformers.js/blob/6505abb164a3eea1dd5e80e56a72f7d805715f0a/src/models.js#L148-L262
Some thoughts:
Currently, the distinction between device and quantization settings is unclear. I propose a class that will encapsulate all of the logic with respect to devices and quantization conversion.
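A minimal sketch of what such an encapsulating class could look like. The class name, option names, and the dtype defaults below are assumptions for illustration, not the actual transformers.js implementation:

```javascript
// Hypothetical sketch (names are assumptions, not the real transformers.js
// API): a single class that owns the device -> quantization defaulting rules
// applied when creating an inference session.
class DeviceConfig {
  constructor(device = "auto", dtype = "auto") {
    this.device = device; // "auto" | "webgpu" | "wasm" | ...
    this.dtype = dtype;   // "auto" | "fp32" | "fp16" | "q8" | ...
  }

  // Resolve "auto" placeholders into concrete settings, given whether
  // WebGPU is available in the current environment.
  resolve(webgpuAvailable) {
    const device = this.device === "auto"
      ? (webgpuAvailable ? "webgpu" : "wasm")
      : this.device;
    // Assumed policy: full precision on WebGPU, quantized weights on WASM.
    const dtype = this.dtype === "auto"
      ? (device === "webgpu" ? "fp32" : "q8")
      : this.dtype;
    return { device, dtype };
  }
}
```

Centralizing the defaulting rules in one place would replace the scattered branching currently spread across the session-selection logic in `src/models.js`.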
Feature request
Currently, Transformers.js V3 defaults to using the CPU (WASM) instead of the GPU (WebGPU) due to lack of support and instability across browsers (specifically Firefox, Safari, and Chrome on Ubuntu). However, this provides a poor user experience, since performance is left on the table. As browser support for WebGPU grows (currently ~70%), this will become more important: users may experience poor performance even when better settings are available.
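The fallback described above can be sketched as a capability check at load time. The function name and fallback policy are assumptions for illustration; `navigator.gpu` and `requestAdapter()` are the standard WebGPU entry points:

```javascript
// Hypothetical sketch: pick a device based on runtime capability detection,
// falling back to WASM when WebGPU is unavailable (e.g. Firefox/Safari, or
// Chrome on Ubuntu without flags). Not the actual transformers.js logic.
async function resolveDevice(requested = "auto", nav = globalThis.navigator) {
  if (requested !== "auto") return requested; // explicit user choice wins

  // `navigator.gpu` is the WebGPU entry point; requestAdapter() resolves
  // to null when no suitable adapter exists.
  if (nav?.gpu) {
    try {
      const adapter = await nav.gpu.requestAdapter();
      if (adapter) return "webgpu";
    } catch {
      // Fall through to WASM on any adapter failure.
    }
  }
  return "wasm";
}
```

Because `requestAdapter()` can resolve to `null` even when `navigator.gpu` exists (e.g. blocklisted drivers), checking the adapter rather than just the property avoids selecting a device that cannot actually run.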
A better default would be `device: "auto"` instead of `device: null`, which should select (1) the quantization and (2) the device based on the following:

Motivation
Improve user experience and performance with better defaults
Your contribution
Will work with @FL33TW00D on this