Is your feature request related to a problem? Please describe.
While cloud-hosted large models provide performance that cannot be matched locally due to infrastructure cost or model unavailability, requiring this kind of external connectivity can be a no-go for some robot applications. Even though running fully locally is possible, it is currently not clear how to set RAI up that way, or what its platform requirements are.
Local does not necessarily mean on-board; restricting to on-board hardware would be too limiting at the moment.
Describe the solution you'd like
Documentation pages and ready-made configuration files for a fully local RAI setup, clearly stating requirements for GPU memory and other platform parameters. If these depend on the selection of models, give numbers for a few configurations that make sense and cover the lower, mid, and higher end. Indicate the trade-offs, such as lower overall task performance.
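As a sketch of what such a ready-made configuration could boil down to, a local chat model can be wired in through LangChain against a local Ollama server. The model names, tiering, and VRAM figures below are illustrative assumptions, not benchmarked requirements:

```python
# Hypothetical sketch: pointing the chat model at a local Ollama server
# instead of a cloud endpoint. Model choices and VRAM figures are
# illustrative assumptions, not measured numbers.
from langchain_ollama import ChatOllama

# Assumed tiers: lower end ~8 GB VRAM, mid ~16 GB, higher end ~48+ GB.
LOCAL_MODELS = {
    "low": "llama3.2:3b",    # small model, weakest task performance
    "mid": "llama3.1:8b",    # reasonable middle ground
    "high": "llama3.1:70b",  # needs workstation-class GPU memory
}

llm = ChatOllama(model=LOCAL_MODELS["mid"], base_url="http://localhost:11434")
print(llm.invoke("Summarize the robot's current task.").content)
```

The documentation would then mainly need to state, per tier, which models fit and what task performance to expect.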
Look at packages such as llama_ros to see whether they can be used directly with the project (see the sketch below).
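If llama_ros turns out to be directly usable, the integration could be as thin as its LangChain wrapper. The snippet below is only an illustration based on the llama_ros README; the class name and call pattern are assumptions to verify against the current release:

```python
# Hypothetical sketch: consuming a llama.cpp model served by llama_ros
# through its LangChain wrapper. Assumes a llama_ros node is already
# running (e.g. started via a llama_bringup launch file).
import rclpy
from llama_ros.langchain import ChatLlamaROS  # class name per llama_ros README

rclpy.init()
chat = ChatLlamaROS()  # talks to the running llama_ros node over ROS 2
response = chat.invoke("Describe the obstacle ahead.")
print(response.content)
rclpy.shutdown()
```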
Describe alternatives you've considered
There is no alternative that resolves the problem.
Additional context
See https://github.com/mgonzs13/llama_ros