Loading the model can be time-consuming because it may need to be downloaded from the network.
It would be beneficial to make modelOperations.loadModel public so that developers can preload the model whenever they need to; the current workaround is to load it implicitly by calling modelOperations.runModel('').
Current implementation (vscode-languagedetection/lib/index.ts, lines 152 to 170 in 100acd1):

```ts
private async loadModel() {
	if (this._model) {
		return;
	}

	// These 2 env set's just suppress some warnings that get logged that
	// are not applicable for this use case.
	const tfEnv = env();
	tfEnv.set('IS_NODE', false);
	tfEnv.set('PROD', true);

	if(!(await setBackend('cpu'))) {
		throw new Error('Unable to set backend to CPU.');
	}

	const resolvedModelJSON = await this.getModelJSON();
	const resolvedWeights = await this.getWeights();

	this._model = await loadGraphModel(new InMemoryIOHandler(resolvedModelJSON, resolvedWeights));
}
```
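For context, here is a minimal sketch of the preloading pattern, contrasting today's workaround with what a public loadModel would allow. The import path is assumed to be the published npm package name; adjust it to however the package is consumed in your setup.

```ts
// Assumed import path for the published package.
import { ModelOperations } from '@vscode/vscode-languagedetection';

const modelOperations = new ModelOperations();

async function preloadModel(): Promise<void> {
  // Current workaround: force the lazy model load by running inference on an
  // empty string and discarding the result.
  await modelOperations.runModel('');

  // Proposed alternative, if loadModel were made public:
  // await modelOperations.loadModel();
}
```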
Furthermore, a new isReady method would be very helpful for developers:
```ts
public isReady() {
	return !!this._model;
}
```

By the way, this package is awesome! I deployed it in guesslang-worker, used it in blocksuite, and its performance was excellent.