First of all, thanks a lot for this huge contribution.
I've been testing Florence-2 recently and believe I've found a significant bug, specifically in the `predict()` method:
```python
ontology_classes = self.ontology.classes()  # <--
result = run_example(
    "<CAPTION_TO_PHRASE_GROUNDING>",
    self.processor,
    self.model,
    image,
    "A photo of " + ", and ".join(ontology_classes) + ".",  # <--
)
```
As can be seen, at inference time the prompt is composed from the ontology classes rather than from the actual prompts.
By the way, I've also noticed a huge improvement when removing the prefix "A photo of" and just leaving the prompts:

```python
", ".join(ontology_classes) + ".",
```
Hope it's helpful!
KR,