
Anyway to tune to reduce false positives? #7

Open
ForceConstant opened this issue Feb 26, 2024 · 9 comments
@ForceConstant

I have disabled the pause for now, because I get many warning notifications depending on what the model looks like. For example, see the screenshot below: the print looks fine, but the level has crossed into the warning zone five or six times. This is a stock P1S.

[screenshot]

@ForceConstant
Author

Is there a way to get the output image out of Obico that shows the squares for issue spots?

@ForceConstant
Author

ForceConstant commented Mar 3, 2024

Can you explain how the API to the ML works? I see that you send the image URL via /img; what data comes back, and in what form? @nberktumer

@nberktumer
Owner

> I have disabled pause for now, because I get many warning notifications, depending on what the model looks like. For example see the screenshot below. Print looks fine, but level has passed 5 or six times into the warning zone. This is a stock P1S.

The threshold values are hardcoded into the aggregation functions in the automation. It is possible to expose them as inputs by rewriting the aggregation functions: #2 (comment)
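For illustration, here is a minimal Python sketch of what user-tunable thresholds could look like. All names and default values here are hypothetical, not taken from this repository or its automation:

```python
# Hypothetical sketch: thresholds as parameters instead of hardcoded values.
# The class name, field names, and defaults are illustrative only.
from dataclasses import dataclass

@dataclass
class FailureGate:
    warn_threshold: float = 0.4    # assumed default, tune to taste
    pause_threshold: float = 0.85  # assumed default

    def classify(self, score: float) -> str:
        """Map an aggregated failure score to an action."""
        if score >= self.pause_threshold:
            return "pause"
        if score >= self.warn_threshold:
            return "warn"
        return "ok"

gate = FailureGate(warn_threshold=0.5, pause_threshold=0.9)
print(gate.classify(0.3))   # ok
print(gate.classify(0.6))   # warn
print(gate.classify(0.95))  # pause
```

In Home Assistant itself the same idea would typically live in an `input_number` helper referenced by the automation, but the decision logic is the same comparison shown above.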

> Is there a way to get the output image out of obico that shows the squares for issue spots?

The API returns the squares, but they are not shown in HA. However, it is possible to create a new image sensor that renders the detected failures.

> Can you explain how the api to the ml works? I see that you send image url via the /img , and how/what data comes back? @nberktumer

The response is in the following format:

```json
{
    "detections": [
        ["failure", <probability1>, [box1.x, box1.y, box1.width, box1.height]],
        ["failure", <probability2>, [box2.x, box2.y, box2.width, box2.height]],
        ...
    ]
}
```

The box values are the bounding boxes of the detected failures, shown as squares in the original Obico service.
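As a quick sketch of consuming that format, the snippet below parses a response shaped like the example above. The numeric values are made up for illustration; only the field layout (`["failure", probability, [x, y, width, height]]`) comes from the comment:

```python
import json

# Fabricated sample payload matching the documented response shape.
raw = json.dumps({
    "detections": [
        ["failure", 0.91, [120, 80, 40, 40]],
        ["failure", 0.33, [300, 200, 25, 30]],
    ]
})

payload = json.loads(raw)
for label, prob, (x, y, w, h) in payload["detections"]:
    # Each entry: label, confidence, and bounding box in image pixels.
    print(f"{label}: p={prob:.2f} box=({x},{y},{w},{h})")
```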

@ForceConstant
Author

@nberktumer How about values like ewm, etc.? How do they get populated?
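The thread does not answer this, but assuming "ewm" means an exponentially weighted mean of the per-frame failure probabilities (as in the original Obico service), the smoothing would look roughly like this. The `alpha` value is an assumption, not taken from the repo:

```python
def ewm(values, alpha=0.3):
    """Exponentially weighted mean: recent samples dominate older ones.
    alpha is the smoothing factor (assumed value, not from the repo)."""
    avg = None
    for v in values:
        avg = v if avg is None else alpha * v + (1 - alpha) * avg
    return avg

# A single spurious high-confidence frame only nudges the smoothed value,
# which is why a one-off false detection need not trigger a pause:
print(ewm([0.1, 0.1, 0.9, 0.1, 0.1]))
```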

@ForceConstant
Author


Ok, I understand how things work a lot better now, and I think making a card that shows the boxes wouldn't be too hard. My only request for the next drop of this code is to save the actual JSON result into an entity, so we can use it for the card.
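For such a card, one small piece of plumbing is scaling the API's boxes (in source-image pixels) to the card's display size so they can be drawn as overlays. This helper is hypothetical, not part of the integration:

```python
def scale_boxes(detections, src_size, dst_size):
    """Scale (x, y, w, h) boxes from src_size to dst_size pixel coordinates.

    detections: iterable of (label, probability, (x, y, w, h)) tuples,
    matching the API response shape described earlier in the thread.
    """
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    return [
        (label, prob,
         (round(x * sx), round(y * sy), round(w * sx), round(h * sy)))
        for label, prob, (x, y, w, h) in detections
    ]

dets = [("failure", 0.91, (160, 120, 80, 80))]
# Halving a 640x480 image halves every box coordinate:
print(scale_boxes(dets, (640, 480), (320, 240)))  # (80, 60, 40, 40)
```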

@nopoz

nopoz commented Oct 19, 2024

I would really like the ability to adjust the sensitivity threshold as well. I was getting pauses at ~40% spaghetti confidence, which makes the service kind of unusable for me right now. I would think something ≥85% would be more suitable for a pause.
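The filtering being asked for is simple to express. As a hedged sketch (the function name and default are illustrative, not from the integration), pausing only on a high-confidence detection could look like:

```python
def should_pause(detections, threshold=0.85):
    """Pause only when some detection's probability meets the threshold.

    detections: iterable of (label, probability, box) tuples, matching
    the API response shape described earlier in the thread.
    """
    return any(prob >= threshold for _label, prob, _box in detections)

dets = [("failure", 0.42, (0, 0, 10, 10)),
        ("failure", 0.61, (5, 5, 10, 10))]
print(should_pause(dets))        # False at the default 0.85 threshold
print(should_pause(dets, 0.4))   # True: 0.42 clears a 0.4 threshold
```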

@nopoz

nopoz commented Oct 19, 2024

I'm in the process of adding LED strips to the top of my P1S, so maybe the extra light will help with the false positives?

@hairyfred

I also agree a threshold would be really helpful; I end up disabling the automation after the first quarter of the print is done due to false positives. Apart from that, full credit for an easy way to set up print failure detection :)
