Basic Concepts
If you landed here, it is because you decided to cloud-free your vacuum and are looking for a way to render it in Home Assistant without additional Docker containers, SSH access, modifications to the vacuum, or traffic over the TCP network. The fact is, we know perfectly well what Valetudo means, so a bit of do-it-yourself is not inappropriate, and it is not laziness to want a comfortable way to set up and render our vacuum's images.
The scope of this project is to render the images of our vacuums in Home Assistant.
The calibration points of the images for use with the lovelace-xiaomi-vacuum-map-card should be set up automatically. As I'm a great fan of the work done by Piotr Machowski, this was one of the features I wanted from the map render, but at that time it was not possible: Piotr's map extractor supports only cloud-based vacuums, and the official Valetudo add-on didn't provide such functionality (no blame, just fact).
So here we are: this development started in August 2023, after several months of time gladly spent on it, and I need to say that developing software is not my primary occupation ;) Anyhow, let's introduce it.
The Camera works like this:
Class-wise, the Camera is represented as follows:
The main class, of course, is the Camera:
- It checks for incoming messages from MQTT via the Connector.
- It redirects the extracted data to the Hypfer or Rand256 Image Handlers.
- The Image Handlers decode (via Utils) the Hypfer or Rand256 JSON data (produced by the ReParser, which decodes the vacuum's raw data).
- The image is composed in the Image Handler using NumPy arrays (as OpenCV would do); here we also calculate the calibration, rooms and image trimming constants that we will use. We also manage the frames: at each frame 0 the image is completely redrawn, and the trimming is based on the image size produced at startup (see the sketch after this list).
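To make that flow more concrete, here is a minimal Python sketch of one update cycle. The class and method names (CameraSketch, get_vacuum_payload, decode_json, compose) are hypothetical stand-ins for illustration, not the integration's actual API; it only assumes the handler returns an RGBA NumPy array.

```python
import numpy as np
from io import BytesIO
from PIL import Image


class CameraSketch:
    """Illustrative camera: pull the MQTT data, decode it, render a frame."""

    def __init__(self, connector, hypfer_handler, rand_handler):
        self._connector = connector       # wraps the MQTT subscription
        self._hypfer = hypfer_handler     # handler for Hypfer (Valetudo) data
        self._rand = rand_handler         # handler for Rand256 (Valetudo RE) data

    def update_image(self) -> bytes:
        # 1. Check for incoming data the vacuum published over MQTT.
        payload, firmware = self._connector.get_vacuum_payload()

        # 2. Route the extracted data to the matching image handler.
        handler = self._hypfer if firmware == "hypfer" else self._rand
        map_json = handler.decode_json(payload)

        # 3. The handler composes the map as a NumPy RGBA array
        #    (shape: height x width x 4, dtype uint8).
        frame: np.ndarray = handler.compose(map_json)

        # 4. Convert the array to PNG bytes for Home Assistant to display.
        buffer = BytesIO()
        Image.fromarray(frame).save(buffer, format="PNG")
        return buffer.getvalue()
```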
There are still some adjustments to make, but in general terms this is how it works.
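As a further illustration of the calibration constants mentioned above, here is a hedged sketch of how calibration points for the lovelace-xiaomi-vacuum-map-card could be derived from the rendered NumPy image. The pixel_size value, the trim offsets and the pixel-to-coordinate scaling are assumptions for the example; the real handlers derive these values from the decoded map JSON.

```python
import numpy as np


def calibration_points(frame: np.ndarray, pixel_size: int = 5,
                       trim_left: int = 0, trim_top: int = 0) -> list[dict]:
    """Map three image corners to vacuum coordinates for the map card."""
    height, width = frame.shape[:2]
    corners = [(0, 0), (width - 1, 0), (0, height - 1)]
    points = []
    for px, py in corners:
        points.append({
            # Pixel position on the rendered (trimmed) image.
            "map": {"x": px, "y": py},
            # Assumed mapping back to the vacuum's coordinate grid:
            # undo the trim offsets, then scale pixels by pixel_size.
            "vacuum": {
                "x": (px + trim_left) * pixel_size,
                "y": (py + trim_top) * pixel_size,
            },
        })
    return points
```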