
v0.26.0

@PawelPeczek-Roboflow PawelPeczek-Roboflow released this 08 Nov 17:19
· 14 commits to main since this release
b8a5f64

🚀 Added

🧠 Support for fine-tuned Florence-2 💥

As part of onboarding Florence-2 fine-tuning on the Roboflow platform, @probicheaux made it possible to run your fine-tuned Florence-2 models in inference. Just complete the training on the platform and deploy the model with inference, like any other model we support 🤯
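As a rough sketch of what "deploy it like any other model" can look like with the `inference` SDK: the model ID and prompt below are placeholders, not values from this release, and the exact Florence-2 calling convention may differ, so treat this as an assumption to verify against the docs.

```python
# Hedged sketch: loading a fine-tuned Florence-2 model via the `inference`
# SDK. The model ID is whatever the Roboflow platform assigns after
# training finishes; the prompt value is a placeholder assumption.
def run_fine_tuned_florence2(image_path: str):
    from inference import get_model  # requires `pip install inference`

    model = get_model(model_id="your-workspace/your-florence2-model/1")
    return model.infer(image_path, prompt="<CAPTION>")

if __name__ == "__main__":
    print(run_fine_tuned_florence2("image.jpg"))
```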

🚦 Jetpack 6 Support

We are excited to announce support for Jetpack 6, which enables more flexible development on Nvidia Jetson devices.

Test the image with the following commands on a Jetson device running Jetpack 6:

```shell
pip install inference-cli
inference server start
```

or pull the image directly:

```shell
docker pull roboflow/roboflow-inference-server-jetson-6.0.0
```
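Once pulled, the container can be started along these lines. The GPU runtime flag and port mapping are typical for Roboflow inference server images on Jetson, but they are assumptions here, so check the official deployment docs for your device:

```shell
# Run the Jetpack 6 inference server image (flags are a typical example,
# not taken from this release note)
docker run -d \
  --runtime nvidia \
  -p 9001:9001 \
  roboflow/roboflow-inference-server-jetson-6.0.0
```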

🏗️ Changed

InferencePipeline video files FPS subsampling

We've discovered that the behaviour of the max_fps parameter is not in line with inference clients' expectations for video file processing. The current implementation waits before processing the next video frame, instead of dropping frames to modulate the effective FPS.

Release v0.26.0 adds a way to change this suboptimal behaviour - the new InferencePipeline behaviour can be enabled by setting the environment variable flag ENABLE_FRAME_DROP_ON_VIDEO_FILE_RATE_LIMITING=True.
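To illustrate the difference, here is a minimal sketch (not the library's actual implementation) of drop-based subsampling: instead of sleeping between frames, roughly every (source_fps / max_fps)-th frame is kept and the rest are discarded.

```python
# Minimal sketch of frame-drop FPS subsampling: to go from source_fps
# down to max_fps, keep roughly every (source_fps / max_fps)-th frame
# rather than waiting before processing each one.
def subsample_frames(frame_indices, source_fps, max_fps):
    if max_fps >= source_fps:
        return list(frame_indices)  # nothing to drop
    stride = source_fps / max_fps
    kept, next_keep = [], 0.0
    for i in frame_indices:
        if i >= next_keep:
            kept.append(i)
            next_keep += stride
    return kept

# A 30 FPS file limited to max_fps=10: every 3rd frame survives
print(subsample_frames(range(9), source_fps=30, max_fps=10))  # [0, 3, 6]
```

With waiting (the pre-v0.26.0 behaviour) every frame of the file is still processed, just more slowly; with dropping, the output genuinely runs at the requested rate.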

❗ Breaking change planned

Please note that the new behaviour will become the default at the end of Q4 2024!

See details: #779

Stay tuned for future updates!

Other changes

🔧 Fixed

🏅 New Contributors

Full Changelog: v0.25...v0.26.0