
v0.9.16

@PawelPeczek-Roboflow PawelPeczek-Roboflow released this 11 Mar 15:48
· 2714 commits to main since this release
14fe2a9

🚀 Added

🎬 InferencePipeline can now process the video using your custom logic

Prior to v0.9.16, InferencePipeline could only run inference against Roboflow models. Now you can inject arbitrary logic of your choice and process videos (both files and streams) with a custom function you create. Just look at the example:

import os
import json
from inference.core.interfaces.camera.entities import VideoFrame
from inference import InferencePipeline

TARGET_DIR = "./my_predictions"
os.makedirs(TARGET_DIR, exist_ok=True)  # make sure the output directory exists

class MyModel:

  def __init__(self, weights_path: str):
    self._model = your_model_loader(weights_path)

  def infer(self, video_frame: VideoFrame) -> dict:
    return self._model(video_frame.image)


def save_prediction(prediction: dict, video_frame: VideoFrame) -> None:
  # open in write mode so the prediction can be dumped as JSON
  with open(os.path.join(TARGET_DIR, f"{video_frame.frame_id}.json"), "w") as f:
    json.dump(prediction, f)

my_model = MyModel("./my_model.pt")

pipeline = InferencePipeline.init_with_custom_logic(
  video_reference="./my_video.mp4",
  on_video_frame=my_model.infer,   # <-- your custom video frame processing function
  on_prediction=save_prediction,  # <-- your custom sink for predictions
)

# start the pipeline
pipeline.start()
# wait for the pipeline to finish
pipeline.join()
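To make the callback contract above concrete, here is a minimal, self-contained sketch (no `inference` install required) of how the pipeline drives `on_video_frame` and `on_prediction` once per decoded frame. `FakeVideoFrame` and `run_pipeline` are hypothetical stand-ins for illustration only, not the library's actual internals:

```python
from dataclasses import dataclass
from typing import Any, Callable, Dict, List


@dataclass
class FakeVideoFrame:
    # stand-in for inference's VideoFrame: carries image data and a frame id
    image: Any
    frame_id: int


def run_pipeline(
    frames: List[FakeVideoFrame],
    on_video_frame: Callable[[FakeVideoFrame], Dict],
    on_prediction: Callable[[Dict, FakeVideoFrame], None],
) -> None:
    # for each frame, call the processing function, then hand the result
    # to the prediction sink together with the frame it came from
    for frame in frames:
        prediction = on_video_frame(frame)
        on_prediction(prediction, frame)


collected: List[Dict] = []
frames = [FakeVideoFrame(image=f"pixels-{i}", frame_id=i) for i in range(3)]
run_pipeline(
    frames,
    on_video_frame=lambda f: {"frame": f.frame_id, "detections": []},
    on_prediction=lambda pred, f: collected.append(pred),
)
# collected now holds one prediction dict per frame
```

The same shape applies to the real pipeline: `on_video_frame` turns a frame into a prediction `dict`, and `on_prediction` consumes that `dict` as a sink.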

That's not everything! Remember our workflows feature? We've just added workflows support to InferencePipeline (in experimental mode). Check out InferencePipeline.init_with_workflow(...) to test the feature.

❗ Breaking change: we've reverted the changes introduced in v0.9.15 that made InferencePipeline.init(...) compatible with the YOLOWorld model. From now on, you need to use InferencePipeline.init_with_yolo_world(...) instead, as shown here:

pipeline = InferencePipeline.init_with_yolo_world(
    video_reference="YOUR-VIDEO",
    on_prediction=...,
    classes=["person", "dog", "car", "truck"],
)

We've updated the 📖 docs to make the new features easy to use.

Thanks @paulguerrie for the great contribution!

🌱 Changed

  • Huge changes in 📖 docs - thanks @capjamesg, @SkalskiP, @SolomonLake for the contributions
  • Improved the contributor experience by adding a contributor guide and splitting the GHA CI, so that the most important tests can run against repository forks
  • OpenVINO is now the default ONNX Execution Provider for x86-based docker images, improving inference speed (@probicheaux)
  • Camera properties in InferencePipeline can now be set by the caller (@sberan)

🔨 Fixed

  • added the missing structlog dependency to the package (@paulguerrie)
  • clarified the models licence (@yeldarby)
  • fixed bugs in lambda HTTP inference
  • fixed a portion of known security vulnerabilities
  • breaking: two exceptions (WorkspaceLoadError, MalformedWorkflowResponseError) now yield an HTTP 502 error when raised, instead of HTTP 500 as previously
  • fixed a bug in workflows where a class filter at the level of detection-based model blocks was not applied
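The new exception-to-status mapping can be illustrated with a small, framework-agnostic sketch. The exception classes and `status_for` helper here are hypothetical stand-ins for illustration, not the library's actual error-handling code:

```python
# hypothetical stand-ins for the library's exception types
class WorkspaceLoadError(Exception):
    pass


class MalformedWorkflowResponseError(Exception):
    pass


# exceptions that signal an upstream (Roboflow API) failure map to 502;
# any other unexpected exception still maps to 500
STATUS_BY_EXCEPTION = {
    WorkspaceLoadError: 502,
    MalformedWorkflowResponseError: 502,
}


def status_for(exc: Exception) -> int:
    # exact-type lookup; subclasses would need an isinstance-based walk
    return STATUS_BY_EXCEPTION.get(type(exc), 500)
```

Returning 502 (Bad Gateway) for these two errors tells clients the failure came from a dependency upstream of the server, rather than from the server's own logic.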

New Contributors

Full Changelog: v0.9.15...v0.9.16