find a way to optimize AI loading times #166
@caquino this is a nice Hacktoberfest entry
👍
I think these are the two main things that take time:
EDIT: Regarding issue #49, CMA-ES (which has a nice Python package) is usually a good start
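A minimal sketch of that CMA-ES suggestion, using the `cma` package from PyPI; the objective `score_params` and the 8-dimensional parameter vector are hypothetical placeholders, not something from this project:

```python
import cma

def score_params(x):
    # Hypothetical objective: in practice this would run an episode with the
    # candidate parameters and return a cost to minimize (e.g. -reward).
    return sum(v * v for v in x)

# Start from an 8-dim zero vector with initial step size 0.5 (both illustrative).
es = cma.CMAEvolutionStrategy(8 * [0.0], 0.5)
es.optimize(score_params)
print(es.result.xbest)  # best parameter vector found
```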
@araffin thanks for your feedback!
For #49 I already have my own implementation, but thanks :D
Importing stable-baselines pulls in all of old TensorFlow (it works on v1, not v2), which loads the whole TF graph machinery into memory on first import instead of compiling at run time, and that makes it way too heavy. Maybe one could rewrite A2C tuned for this application and use a version compiled for the Raspberry Pi (not sure if you are already doing this). That should reduce loading time quite a bit. As a side note: LSTMs are computationally heavy to run (complexity scales with the size of the features), and they make sense if you have long time dependencies in your time-series. Is this the case? One could try 1D convolutions instead (complexity scales with the number of samples), which are more suitable for this kind of time-series signal (see the sketch below).
yes |
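As a rough illustration of that trade-off, here is a minimal sketch contrasting the two feature extractors (assuming `tf.keras` from TF 1.x and a hypothetical observation window of 50 timesteps × 8 features; none of these names come from the project):

```python
import tensorflow as tf

TIMESTEPS, FEATURES = 50, 8  # hypothetical window size and feature count

def conv1d_extractor():
    # Cost scales with the number of samples (timesteps), and the
    # convolutions parallelize well.
    return tf.keras.Sequential([
        tf.keras.layers.Conv1D(32, kernel_size=5, activation="relu",
                               input_shape=(TIMESTEPS, FEATURES)),
        tf.keras.layers.Conv1D(32, kernel_size=3, activation="relu"),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(64, activation="relu"),
    ])

def lstm_extractor():
    # Cost scales with the hidden/feature size, and the recurrence is
    # inherently sequential, which is heavy on a Pi Zero.
    return tf.keras.Sequential([
        tf.keras.layers.LSTM(64, input_shape=(TIMESTEPS, FEATURES)),
    ])
```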
TensorFlow takes minutes to import on a Raspberry Pi Zero W and that's probably because of the huge .so file with native primitives it has to load, among other things. Given the nature of the project, that stuff is imported only once, so caching it in memory wouldn't speed things up. Switching frameworks is not feasible, unless we have the same exact features (unlikely given that stable-baselines is TF based). For instance, there's no stable-baselines port for TF-lite.
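For reference, a minimal way to confirm where that time goes on the device (assuming Python 3; `python3 -X importtime -c "import tensorflow"` is a standard CPython option that also gives a per-module breakdown):

```python
import time

start = time.time()
import tensorflow as tf  # loads the large native .so with all the TF primitives
print("tensorflow import took %.1fs (version %s)"
      % (time.time() - start, tf.__version__))
```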