
Releases: obss/sahi

v0.10.8

26 Oct 07:26
84cc62e

Full Changelog: 0.10.7...0.10.8

v0.10.7

27 Sep 16:36
0e60975

Full Changelog: 0.10.6...0.10.7

v0.10.6

24 Sep 23:58
4412ba3

Full Changelog: 0.10.5...0.10.6

v0.10.5

04 Sep 08:12
9c08275

Full Changelog: 0.10.4...0.10.5

v0.10.4

12 Aug 20:26
072ea62

Full Changelog: 0.10.3...0.10.4

v0.10.3

02 Aug 07:43
1a9ae25

Full Changelog: 0.10.2...0.10.3

v0.10.2

28 Jul 07:24
23b2754

What's Changed

  • Add automatic slice size calculation by @mcvarer in #512
  • Fix bug where prediction time was not printed when verbose=2 by @youngjae-avikus in #521
  • pybboxes: allow out-of-bounds boxes with strict=False by @devrimcavusoglu in #528
  • Update predict verbose by @fcakyon in #514
  • Update pybboxes version by @fcakyon in #531
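
The automatic slice size calculation from #512 can be illustrated with a small sketch. The function below is hypothetical, not sahi's actual formula; it only shows the general idea of deriving a slice edge length from the input resolution and clamping it to a sensible range.

```python
# Hypothetical sketch of automatic slice-size selection; NOT sahi's exact
# formula (see PR #512 for the real implementation). The idea: take a
# slice edge length proportional to the image's shorter edge, clamped to
# a sane range, so high-resolution images get larger slices.
def auto_slice_size(image_width: int, image_height: int,
                    min_size: int = 256, max_size: int = 1024) -> int:
    # Use the shorter edge as the reference and take roughly half of it.
    shorter_edge = min(image_width, image_height)
    size = shorter_edge // 2
    # Clamp between min_size and max_size.
    return max(min_size, min(size, max_size))

print(auto_slice_size(1920, 1080))  # 540 for a Full HD image
print(auto_slice_size(640, 480))    # 256 (clamped up)
print(auto_slice_size(8000, 6000))  # 1024 (clamped down)
```

With a scheme like this, callers can omit slice_height/slice_width and let the library pick values suited to the input resolution.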

New Contributors

  • @youngjae-avikus made their first contribution in #521

Full Changelog: 0.10.1...0.10.2

v0.10.1

25 Jun 20:07
5912293

Full Changelog: 0.10.0...0.10.1

v0.10.0

21 Jun 14:15
4934db9

New Features

- Layer.ai integration

from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = AutoDetectionModel.from_layer("layer/yolov5/models/yolov5s")

result = get_sliced_prediction(
    "image.jpeg",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

- HuggingFace Transformers object detectors

from sahi.model import HuggingfaceDetectionModel
from sahi.predict import get_sliced_prediction

detection_model = HuggingfaceDetectionModel(
    model_path="facebook/detr-resnet-50",
    image_size=640,
    confidence_threshold=0.5,
)

result = get_sliced_prediction(
    "image.jpeg",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

- TorchVision object detectors

import torchvision
from sahi.model import TorchVisionDetectionModel
from sahi.predict import get_sliced_prediction

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

detection_model = TorchVisionDetectionModel(
    model=model,
    image_size=640,
    confidence_threshold=0.5,
)

result = get_sliced_prediction(
    "image.jpeg",
    detection_model,
    slice_height=512,
    slice_width=512,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)

- Support for exporting predictions in COCO format

from sahi.utils.coco import Coco, CocoImage, CocoAnnotation, CocoPrediction
from sahi.utils.file import save_json
from pycocotools.cocoeval import COCOeval
from pycocotools.coco import COCO

coco_obj = Coco()

# add n images to coco_obj
for _ in range(n):
    image = CocoImage(**kwargs)
    
    # add n annotations to the image
    for _ in range(n):
        image.add_annotation(CocoAnnotation(**kwargs))
    
    # add n predictions to the image
    for _ in range(n):
        image.add_prediction(CocoPrediction(**kwargs))
    
    # add image to coco object
    coco_obj.add_image(image)

# export ground truth annotations
coco_gt = coco_obj.json
save_json(coco_gt, "ground_truth.json")

# export predictions 
coco_predictions = coco_obj.prediction_array
save_json(coco_predictions, "predictions.json")

coco_ground_truth = COCO(annotation_file="ground_truth.json")
coco_predictions = coco_ground_truth.loadRes("predictions.json")
coco_evaluator = COCOeval(coco_ground_truth, coco_predictions, "bbox")
coco_evaluator.evaluate()
coco_evaluator.accumulate()
coco_evaluator.summarize()

Full Changelog: 0.9.4...0.10.0

v0.9.4

28 May 18:14
a9122e7

Full Changelog: 0.9.3...0.9.4