Real time tracks saving for a long running experiment #253
Replies: 6 comments
-
Hi @lesptizami, unfortunately we don't have a way of serializing the TrackedObjects. If you only need their positions and ids, those should be easy to extract into a simpler data structure (maybe a numpy array or a Python list) and then serialize; it sounds like you are doing something like that with h5py. Regarding the slowdown of the tracker: we haven't observed this behavior, but we also haven't tested it on videos as long as you describe. We have seen some performance degradation when the video has many objects at the same time, so a few questions to try to narrow down the problem:
Happy that you're enjoying the library and looking forward to that citation 🙌 Let me know if I can help with anything else!
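As a concrete sketch of the extraction described above (assuming each tracked object exposes `id` and `estimate`, as norfair's `TrackedObject` does; the row layout and the helper name `tracks_to_rows` are made up here):

```python
import numpy as np

# Hypothetical helper: flatten tracked objects into one numpy row per object,
# [frame, id, x, y], ready to append to an h5py dataset or any other sink.
# `id` and `estimate` are the attributes norfair's TrackedObject exposes;
# everything else here is illustrative.
def tracks_to_rows(frame_idx, tracked_objects):
    if not tracked_objects:
        return np.empty((0, 4))
    return np.array(
        [[frame_idx, obj.id, obj.estimate[0][0], obj.estimate[0][1]]
         for obj in tracked_objects],
        dtype=np.float64,
    )
```

A fixed-width float array like this compresses well and appends cheaply, which is what you want for a multi-day run.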
-
-
Also, one other question. To implement my tracking analysis I needed to keep track of the last n detections, so I modified `TrackedObject` like this:

```python
from typing import Optional

from norfair.tracker import Tracker, TrackedObject, _TrackedObjectFactory

#################################################
# Subclassing
#################################################

class UpTracker(Tracker):
    """Modify the tracker with a new factory."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._obj_factory = Modified_TrackedObjectFactory()


class Modified_TrackedObjectFactory(_TrackedObjectFactory):
    """Make the factory create the modified TrackedObject."""
    def create(
        self,
        initial_detection: "Detection",
        hit_counter_max: int,
        initialization_delay: int,
        pointwise_hit_counter_max: int,
        detection_threshold: float,
        period: int,
        filter_factory: "FilterFactory",
        past_detections_length: int,
        reid_hit_counter_max: Optional[int],
        coord_transformations: "CoordinatesTransformation",
    ) -> "TrackedObject":
        return UpTrackedObject(
            obj_factory=self,
            initial_detection=initial_detection,
            hit_counter_max=hit_counter_max,
            initialization_delay=initialization_delay,
            pointwise_hit_counter_max=pointwise_hit_counter_max,
            detection_threshold=detection_threshold,
            period=period,
            filter_factory=filter_factory,
            past_detections_length=past_detections_length,
            reid_hit_counter_max=reid_hit_counter_max,
            coord_transformations=coord_transformations,
        )


class UpTrackedObject(TrackedObject):
    """Patch of norfair's TrackedObject."""
    def _conditionally_add_to_past_detections(self, detection):
        """Patched so past_detections keeps the n most recent detections."""
        if self.past_detections_length == 0:
            return
        detection.age = self.age
        if len(self.past_detections) >= self.past_detections_length:
            self.past_detections.pop(0)
        self.past_detections.append(detection)
```

Could it be a method for subclassing within the package without modifying the factory? Or maybe a
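Side note: the rolling buffer in `_conditionally_add_to_past_detections` is a fixed-size FIFO, which `collections.deque(maxlen=n)` implements directly, dropping the O(n) `pop(0)`. A standalone sketch of just that buffer behavior (not norfair API):

```python
from collections import deque

# A deque with maxlen discards the oldest element automatically on append,
# so "keep the last n detections" needs no explicit pop.
def make_past_detections(length):
    return deque(maxlen=length)

buf = make_past_detections(3)
for det in ["d1", "d2", "d3", "d4"]:
    buf.append(det)
# buf now holds the 3 most recent detections: d2, d3, d4
```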
-
Hola! With this very dirty patch I get a 2-5x speed increase by using `cdist`; I found the hotspot with a little profiling. Maybe I'm missing something.

```python
from typing import List, Optional, Sequence, Union

import numpy as np
from scipy.spatial.distance import cdist


class UpTracker(Tracker):
    """Modify the tracker with a new factory and a vectorized distance."""
    def __init__(self, *args, **kwargs):
        self.distance_function_name = kwargs.get('distance_function', "euclidean")
        kwargs['distance_function'] = False
        super().__init__(*args, **kwargs)
        self._obj_factory = Modified_TrackedObjectFactory()
        self.distance_function = "perso"

    def _get_distances(
        self,
        distance_function,
        distance_threshold,
        objects: Sequence["TrackedObject"],
        candidates: Optional[Union[List["Detection"], List["TrackedObject"]]],
    ):
        if distance_function == "perso":
            if len(objects) > 0:
                candidates_list = np.array(
                    [[candidate.points[0][0], candidate.points[0][1]] for candidate in candidates]
                )
                objects_list = np.array(
                    [[obj.estimate[0][0], obj.estimate[0][1]] for obj in objects]
                )
                distance_matrix = cdist(
                    candidates_list, objects_list, metric=self.distance_function_name
                )
                distance_matrix[distance_matrix > distance_threshold] = distance_threshold + 1
            else:
                distance_matrix = np.ones((len(candidates), len(objects)), dtype=np.float32)
                distance_matrix *= distance_threshold + 1
            return distance_matrix
        else:
            # Original per-pair loop, kept as a fallback.
            distance_matrix = np.ones((len(candidates), len(objects)), dtype=np.float32)
            distance_matrix *= distance_threshold + 1
            for c, candidate in enumerate(candidates):
                for o, obj in enumerate(objects):
                    if candidate.label != obj.label:
                        distance_matrix[c, o] = distance_threshold + 1
                        if (candidate.label is None) or (obj.label is None):
                            print("\nThere are detections with and without label!")
                        continue
                    distance = distance_function(candidate, obj)
                    # Cap detections and objects with no chance of getting matched so we
                    # don't force the Hungarian algorithm to minimize them and therefore
                    # introduce the possibility of sub-optimal results.
                    # Note: this is probably not needed with the new distance-minimizing
                    # algorithm.
                    if distance > distance_threshold:
                        distance_matrix[c, o] = distance_threshold + 1
                    else:
                        distance_matrix[c, o] = distance
            return distance_matrix
```

`cdist` could be fully implemented, but I'm not sure why I get an error with the embedding distance, so I keep the `if`. If I try to use:

```python
def _get_distances(
    self,
    distance_function,
    distance_threshold,
    objects: Sequence["TrackedObject"],
    candidates: Optional[Union[List["Detection"], List["TrackedObject"]]],
):
    print(f"Candidates list is: {candidates}")
    print(f"Objects list is: {objects}")
    print(candidates[0].__dict__)
    if len(objects) > 0:
        candidates_list = np.array(
            [[candidate.points[0][0], candidate.points[0][1]] for candidate in candidates]
        )
        # print(candidates_list)
        objects_list = np.array(
            [[obj.estimate[0][0], obj.estimate[0][1]] for obj in objects]
        )
        # print(objects_list)
        distance_matrix = cdist(candidates_list, objects_list, metric=distance_function)
        # print(distance_matrix)
        distance_matrix[distance_matrix > distance_threshold] = distance_threshold + 1
    else:
        distance_matrix = np.ones((len(candidates), len(objects)), dtype=np.float32)
        distance_matrix *= distance_threshold + 1
    return distance_matrix
```

I get:
PS: profiling code:

```python
import cProfile
import io
import pstats

pr = cProfile.Profile()
pr.enable()
# [.....]  <- the code under test goes here
pr.disable()

s = io.StringIO()
ps = pstats.Stats(pr, stream=s).sort_stats('tottime')
ps.print_stats()
with open('profile.txt', 'w+') as f:
    f.write(s.getvalue())
```
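For reference, the euclidean case of the `cdist` call above can also be written with plain numpy broadcasting, which makes the shape of the vectorization explicit (standalone sketch, no norfair types; the function name is made up):

```python
import numpy as np

def euclidean_distance_matrix(candidates, objects, threshold):
    """(n_candidates, n_objects) matrix of euclidean distances, with entries
    above `threshold` capped at threshold + 1, like the patched _get_distances."""
    candidates = np.asarray(candidates, dtype=np.float64)
    objects = np.asarray(objects, dtype=np.float64)
    if len(candidates) == 0 or len(objects) == 0:
        return np.full((len(candidates), len(objects)), threshold + 1, dtype=np.float64)
    # Broadcast to (n_candidates, n_objects, 2) pairwise differences, then
    # take the norm over the coordinate axis: one vectorized pass instead of
    # a nested Python loop.
    d = np.linalg.norm(candidates[:, None, :] - objects[None, :, :], axis=2)
    d[d > threshold] = threshold + 1
    return d
```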
-
Here are the new timings after patching. Quite an improvement.
-
Another 10x increase in the speed of the conversion from YOLO to Norfair (5 ms to 0.5 ms for 100 objects) can be had by using the following:

```python
norfair_detections: List[Detection] = []
detections_as_xywh = np.array(yolo_detections.xywh[0].cpu())
for i in range(detections_as_xywh.shape[0]):
    centroid = detections_as_xywh[i, 0:2]
    scores = np.array([detections_as_xywh[i, 4]])
    norfair_detections.append(
        Detection(
            points=centroid,
            scores=scores,
            label=int(detections_as_xywh[i, -1]),
        )
    )
```

instead of:

```python
norfair_detections: List[Detection] = []
detections_as_xywh = yolo_detections.xywh[0]
for detection_as_xywh in detections_as_xywh:
    centroid = np.array(
        [detection_as_xywh[0].item(), detection_as_xywh[1].item()]
    )
    scores = np.array([detection_as_xywh[4].item()])
    norfair_detections.append(
        Detection(
            points=centroid,
            scores=scores,
            label=int(detection_as_xywh[-1].item()),
        )
    )
```

It seems better to touch the tensor only once; this reduces the load on the GPU.
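The pattern behind that speedup (one tensor-to-numpy conversion, then pure numpy indexing, instead of one `.item()` call per field) can be shown without torch; here `xywh_rows` stands in for `yolo_detections.xywh[0]` and plain dicts stand in for norfair's `Detection`:

```python
import numpy as np

def to_detection_dicts(xywh_rows):
    # Single bulk conversion; with a real tensor this would be
    # np.array(tensor.cpu()), crossing the device boundary exactly once.
    rows = np.asarray(xywh_rows, dtype=np.float64)
    return [
        {
            "points": rows[i, 0:2],           # x, y centroid
            "scores": np.array([rows[i, 4]]), # confidence
            "label": int(rows[i, -1]),        # class id
        }
        for i in range(rows.shape[0])
    ]
```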
-
Hey,
Thanks for the package! It's pretty straightforward to use, and the code readability keeps improving.
I am wondering: is there a way to save a few hundred tracks in real time, with compression, for an unknown number of objects over a few days at 20 FPS?
Pretty interesting for science.
Otherwise, I have something I tinkered together based on multiprocessing and h5py.
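One way to keep disk writes off the per-frame loop (the pattern a multiprocessing + h5py setup would use, shown here with a thread and a plain list as stand-ins for a writer process and an h5py dataset; names are illustrative):

```python
import queue
import threading

def start_writer(sink):
    """Background writer: drains a queue into `sink` until it sees None."""
    q = queue.Queue()

    def drain():
        while True:
            row = q.get()
            if row is None:  # sentinel: stop the writer
                break
            sink.append(row)  # a real writer would append to an h5py dataset

    t = threading.Thread(target=drain, daemon=True)
    t.start()
    return q, t

# The tracking loop only does q.put(row); slow compressed writes never
# block the per-frame work.
```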
Also, the tracker slows down after a while; I'm not sure why, but I need to characterize this.
Anyway, I hope to give you a citation soon.
Best.