v0.9.10
🚀 Added

`inference` Benchmarking 🏃‍♂️
A new command has been added to `inference-cli` for benchmarking performance. You can now test inference in different environments, with different configurations, and measure its performance. Look at us testing the speed and scalability of hosted inference on the Roboflow platform 🤯
(demo video: scaling_of_hosted_roboflow_platform.mov)
Run your own benchmark with a simple command:

```bash
inference benchmark python-package-speed -m coco/3
```
See the docs for more details.
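If you are curious what a speed benchmark like this measures, below is a minimal hand-rolled sketch using the `inference` Python package, for illustration only. It assumes `get_model` is importable from the package top level, that `model.infer()` accepts a numpy array, and that a Roboflow API key is available via the `ROBOFLOW_API_KEY` environment variable; the `inference benchmark` command remains the supported way to do this.

```python
import time

import numpy as np

from inference import get_model  # assumed import path; see package docs

# Load the same model used in the CLI example above.
model = get_model(model_id="coco/3")

# Random frame as a stand-in input; use a real image for meaningful numbers.
image = np.random.randint(0, 255, size=(640, 640, 3), dtype=np.uint8)

# Warm up so one-off initialisation cost does not skew the measurement.
for _ in range(10):
    model.infer(image)

# Time repeated inferences and report average latency and throughput.
latencies = []
for _ in range(100):
    start = time.perf_counter()
    model.infer(image)
    latencies.append(time.perf_counter() - start)

print(f"avg latency: {np.mean(latencies) * 1000:.1f} ms")
print(f"throughput: {1.0 / np.mean(latencies):.1f} inferences/s")
```

The built-in command reports this kind of measurement for you, and the `api-speed` mode lets you point the same methodology at a remote deployment instead of the local Python package.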
🌱 Changed
- Improved the serialisation logic for requests and responses, which helps the Roboflow platform improve model monitoring
🔨 Fixed
- Bug #260, which caused `inference` API instability in multi-worker setups and when shuffling a large number of models; from now on, the API container should no longer raise unexpected HTTP 5xx errors due to model management
- Faulty logic for getting `request_id`, which caused errors in the `parallel-http` container
🏆 Contributors
@paulguerrie (Paul Guerrie), @SolomonLake (Solomon Lake), @robiscoding (Rob Miller), @PawelPeczek-Roboflow (Paweł Pęczek)
Full Changelog: v0.9.9...v0.9.10