🚀 Added
🧠 Support for fine-tuned Florence-2 💥
As part of onboarding Florence-2 fine-tuning on the Roboflow platform, @probicheaux made it possible to run your fine-tuned models in inference. Just complete the training on the platform and deploy the model with inference, like any other model we support 🤯
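As a rough sketch (not an official snippet), running such a fine-tuned Florence-2 model could look like the following; the model ID, API key, image path, and prompt are placeholders, and the exact call may differ for your setup:

```python
# Hypothetical example: run a fine-tuned Florence-2 model with the inference package.
# Replace the model ID, API key, and image path with values from your workspace.
from inference import get_model

model = get_model(
    model_id="your-workspace/florence-2-project/1",  # placeholder fine-tuned model ID
    api_key="YOUR_ROBOFLOW_API_KEY",                 # placeholder API key
)

# Florence-2 is a prompt-driven model; "<CAPTION>" is one example task prompt.
result = model.infer("path/to/image.jpg", prompt="<CAPTION>")
print(result)
```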
🚦 JetPack 6 Support
We are excited to announce support for JetPack 6, which enables more flexible development on NVIDIA Jetson devices.
Test the image with the following commands on a Jetson device running JetPack 6:
```bash
pip install inference-cli
inference server start
```
or pull the image directly:
```bash
docker pull roboflow/roboflow-inference-server-jetson-6.0.0
```
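Once the server is running on the device, you can send requests to it from Python, for instance with the inference-sdk HTTP client; the sketch below assumes a placeholder model ID, API key, and image path:

```python
# Hypothetical example: query the inference server started on the Jetson device.
from inference_sdk import InferenceHTTPClient

client = InferenceHTTPClient(
    api_url="http://localhost:9001",   # default port of the inference server
    api_key="YOUR_ROBOFLOW_API_KEY",   # placeholder API key
)

# Run a model from your Roboflow workspace against a local image.
result = client.infer("path/to/image.jpg", model_id="your-workspace/your-model/1")
print(result)
```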
🏗️ Changed
InferencePipeline FPS subsampling for video files
We've discovered that the behaviour of the max_fps parameter does not match clients' expectations for video file processing. The current implementation waits before processing the next video frame instead of dropping frames to modulate the video FPS. Release v0.26.0 adds a way to opt into the corrected behaviour of InferencePipeline: set the environment variable flag ENABLE_FRAME_DROP_ON_VIDEO_FILE_RATE_LIMITING=True.
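As an illustration of opting in, here is a minimal sketch; the model ID, video path, and sink are placeholders, and the flag is set before the pipeline is created:

```python
# Hypothetical example: enable frame dropping for video files, then run a pipeline.
import os

# Opt into the new rate-limiting behaviour before creating the pipeline.
os.environ["ENABLE_FRAME_DROP_ON_VIDEO_FILE_RATE_LIMITING"] = "True"

from inference import InferencePipeline
from inference.core.interfaces.stream.sinks import render_boxes

pipeline = InferencePipeline.init(
    model_id="your-workspace/your-model/1",  # placeholder model ID
    video_reference="path/to/video.mp4",     # local video file
    on_prediction=render_boxes,              # example sink that draws predictions
    max_fps=10,  # with the flag enabled, frames are dropped to honour this rate
)
pipeline.start()
pipeline.join()
```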
❗ Breaking change planned
Please note that the new behaviour will become the default at the end of Q4 2024!
See details: #779
Stay tuned for future updates!
Other changes
- Pass countinference to usage collector by @SolomonLake in #774
- Do not run tests if branch is not up to date with main by @grzegorz-roboflow in #767
- Include resource_details.billable in workflows usage by @SolomonLake in #776
- Add hostname with optional DEDICATED_DEPLOYMENT_ID to usage payload by @grzegorz-roboflow in #778
- Return single top class confidence for multi-label predictions by @EmilyGavrilenko in #781
- Aggregate usage for streams and photos separately by @grzegorz-roboflow in #786
- Add gzip support by @alexnorell in #783
- Avoid downloading images if possible by @isaacrob-roboflow in #782
🔧 Fixed
- Vulnerability issue with cryptography by @PawelPeczek-Roboflow in #790
- Fix model type for classification by @robiscoding in #773
- Fix case where there are no good matches for the prompt by @isaacrob-roboflow in #770
- Bugfix: keypoint visualization block by @EmilyGavrilenko in #769
- Do not store usage in cache when API key is not available by @grzegorz-roboflow in #772
- Fix the bug with two stage workflow and continue-if failing when nothing gets detected by primary model by @PawelPeczek-Roboflow in #777
- Remove debug step from 'Test package install - inference-gpu' by @grzegorz-roboflow in #780
- Allow easier inheritance of pipeline by @RossLote in #789
🏅 New Contributors
Full Changelog: v0.25...v0.26.0