feat(microservices): add LiDAR, Weight, Peripheral Analytics, and Grafana integration #655
base: main
Conversation
…Kafka/HTTP publishing
- Implement Python microservice to ingest real or mock LiDAR data
- Publish sensor data to MQTT, Kafka, or HTTP (controlled by environment vars)
- Add logging, retry logic, and graceful shutdown handling
- Integrate with existing Makefile and Docker Compose

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
…DAR and weight data
- Subscribes to LiDAR and weight sensor MQTT topics
- Performs analytics to calculate item density (weight per LiDAR point)
- Publishes analytics results to a dedicated MQTT topic ()
- Includes robust logging and error handling

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
- Introduced Grafana container configuration in docker-compose.yml
- Configured Grafana to depend on mqtt-broker_1

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
Can you please update the title of your PR with a conventional commit style message?
@naman-ranka can you fix the security issues flagged by the github-advanced-security bot? Also, please follow the git conventional commit format for the PR title; the type here should be feat.
Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
- Added required copyright header to newly created Python files.
- Ensures compliance with repository licensing guidelines.

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
@naman-ranka From the design, app.py, Dockerfile, and requirements.txt look the same for all 3 services. What is the difference between app.py in each of these services? Can you consolidate them all into one service, then? It looks like only the sensor data type is changing here. Are you adding any business logic to any of them to make them unique? Why duplicate?
@naman-ranka I was able to run your app, but I had to make many changes in the docker-compose file, specifically to MQTT and all the networks, so the services could communicate with each other. Also, you can create a dashboard template to show the data for demo purposes.
src/docker-compose.yml (Outdated)

```yaml
peripheral-analytics:
  build:
    context: ./peripheral-analytics
```
Suggested change:

```diff
-    context: ./peripheral-analytics
+    context: ./peripheral_analytics
```
src/docker-compose.yml (Outdated)

```yaml
ports:
  - "1884:1883"  # External port 1884 mapped to internal port 1883
volumes:
  - ./mosquitto/config/mosquitto.conf:/mosquitto/config/mosquitto.conf
```
This creates some issues; the file is empty, by the way. I don't think we need this.
@NeethuES-intel
Hi Neethu, we implemented one feature at a time, so they are currently in different service directories. We can have a common publish.py for the common publishing framework and three different app.py for the three services. All in the same src dir.
For the business logic, right now we have just implemented the framework to publish the data (currently random), and no specific business logic has been added.
Can you guide us on what type of logic we can implement, if it is required for this particular issue?
@antoniomtz
Sorry for the many changes you had to make to our commit. We will review and resolve these issues, and we will also commit the suggested changes. We are working on the dashboard template and will update the code by tomorrow. Please let us know if you need any specific visualizations.
> @NeethuES-intel Hi Neethu, we implemented one feature at a time, so they are currently in different service directories. We can have a common publish.py for the common publishing framework and three different app.py for the three services. All in the same src dir. For the business logic, right now we have just implemented the framework to publish the data (currently random), and no specific business logic has been added. Can you guide us what type of logic we can implement, if it is required for this particular issue
@naman-ranka For now just consolidate everything, we don't need to duplicate the files for all 3. If you are interested we could add business logic separately after the hackathon.
@NeethuES-intel
I've made several changes in this PR to streamline and consolidate our sensor-related code:
- Directory Consolidation: Merged all sensor-specific directories into a single common-sources directory to simplify the structure and improve maintainability.
- Unified Publisher Logic with Multi-Sensor Support: Implemented a common publisher.py that now supports all types of publishing (MQTT, Kafka, HTTP) across sensor types. This update also adds support for multiple sensors of the same type, enabling better scalability and easier integration of additional sensors.
- Enhanced MQTT Configuration: Introduced a custom MQTT Docker configuration to resolve previous network configuration issues, ensuring a more robust deployment setup.
- Cleanup: Removed redundant directories, reducing complexity and potential confusion.
Please review the changes and let me know if you have any questions or further suggestions. Thanks!
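A rough sketch of what a consolidated, protocol-agnostic publisher like the one described could look like (the actual `publisher.py` is not shown in this thread, so the function and config shapes here are purely illustrative):

```python
# Illustrative sketch only: dispatch one payload to every enabled backend.
# The real publisher.py is not shown in this PR thread; backends are stubs
# so MQTT/Kafka/HTTP clients can be plugged in (or mocked) independently.
import json

def publish(payload, config, backends):
    """Send payload to each protocol whose config has enable=True.

    `config` maps protocol name -> settings dict (with an "enable" flag);
    `backends` maps protocol name -> callable(message, protocol_config).
    Returns the list of protocols the payload was delivered to.
    """
    message = json.dumps(payload)
    delivered = []
    for name, proto_cfg in config.items():
        if proto_cfg.get("enable") and name in backends:
            backends[name](message, proto_cfg)
            delivered.append(name)
    return delivered
```

Keeping the transport clients behind simple callables is what lets one service handle LiDAR and weight sensors alike: only the payload and per-protocol settings change.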
@antoniomtz
Hi Antonio, we have fixed the issues related to MQTT files and also added Grafana dashboards to visualize the published data. These dashboards are pre-saved—you only need to load the data source as per the instructions provided above.
Additionally, after implementing the consolidated package, we learned that it isn’t optimal for performance. If needed, we can explore parallel processing or other solutions in the future.
Thanks!
…blisher logic
- Consolidated sensor-specific directories into a single common-sources directory
- Implemented a common publisher.py to streamline all types of publishing (MQTT, Kafka, HTTP) across sensor types
- Added support for multiple sensors of the same type to improve scalability
- Introduced a custom MQTT Docker configuration to resolve previous network configuration issues
- Removed redundant directories to simplify the codebase

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
- Integrated a Grafana dashboard to visualize and analyze data from LiDAR and weight sensors.
- Configured dashboard panels to display time series, sensor counts, and analytics metrics.
- Dashboard pulls data from our existing MQTT data source for real-time insights.

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
@naman-ranka @JCHETAN26 Can you provide details on how to create the dashboard and add the datasource? I am getting an empty dashboard when I run your code.
I can subscribe to these topics & see the data on mqtt client.
@NeethuES-intel We have created a preloaded Grafana dashboard, which should be automatically detected in Grafana under Dashboards as "Sensor-Analytics". You must add the data source once; the instructions remain the same as in the original README ("Add an MQTT Data Source"). If the dashboard does not display the data, you might have to enter each panel individually and click refresh.

Thank you @gauravkhanduri. It worked this time. But, as you mentioned, I had to enter each panel and wait for some time to get the data. One improvement: can you set up MQTT as a datasource automatically (maybe in a dashboard provisioning YAML) so that the user doesn't have to set it up manually?
```
{
  "mqtt":  { "enable": bool, "host": str, "port": int, "topic": str },
  "http":  { "enable": bool, "url": str },
  "kafka": { "enable": bool, "bootstrap_servers": str, "topic": str }
}
```
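A concrete instance of this structure might look like the following (all values are illustrative, not taken from the PR's actual configuration):

```python
# Example values for the per-protocol config structure above.
# Hosts, ports, and topics are illustrative placeholders.
config = {
    "mqtt":  {"enable": True,  "host": "mqtt-broker_1", "port": 1883,
              "topic": "lidar/data"},
    "http":  {"enable": False, "url": "http://localhost:5000/api/lidar_data"},
    "kafka": {"enable": False, "bootstrap_servers": "kafka:9093",
              "topic": "lidar-data"},
}
```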
Is Kafka used here? I only see MQTT set up as a datasource.
@NeethuES-intel
The framework we've built is designed to support all three publishing methods—MQTT, Kafka, and HTTP. While the demo currently uses MQTT (and we visualize it via Grafana through an MQTT datasource), the publisher code includes implementations for Kafka and HTTP as well. This setup enables us to publish any type of data across these platforms. For now, we're only demonstrating MQTT, but Kafka support is there if needed.
> @NeethuES-intel The framework we've built is designed to support all three publishing methods—MQTT, Kafka, and HTTP. While the demo currently uses MQTT (and we visualize it via Grafana through an MQTT datasource), the publisher code includes implementations for Kafka and HTTP as well. This setup enables us to publish any type of data across these platforms. For now, we're only demonstrating MQTT, but Kafka support is there if needed.
@naman-ranka can you provide steps to test the Kafka & HTTP features? If not via Grafana, are there unit test cases that we can use? We won't be able to merge these in without testing.
Yes, we are working on it. We will provide test instructions for both these features.
HTTP Publishing Feature Test Instructions

Hi @NeethuES-intel,

We have added a test for the HTTP publishing feature. It can be tested in two ways:

1️⃣ Local HTTP Test (Inside Docker)
- Set `LIDAR_HTTP_URL` to `http://localhost:5000/api/lidar_data` in `docker-compose.yml`.
- Run the test inside the container:
  `docker exec asc_common_service python http_publisher_test.py`
- The test Flask server inside `common-service` will print the received data.

2️⃣ Webhook.site for External Validation
- Go to Webhook.site and generate a unique URL.
- Set `LIDAR_HTTP_URL` to this Webhook URL.
- Restart the container, and you will see the LiDAR data arriving in the Webhook.site logs.

Additionally, we did not modify `publisher.py`; our existing framework handled HTTP publishing without any code changes.

Let us know if you need further verification! 🚀
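The PR's `http_publisher_test.py` uses a Flask server; as a rough, dependency-free illustration of the same receive-and-print flow, a stdlib-only sketch could look like this (the `/api/lidar_data` path comes from the instructions above; everything else is made up for the example):

```python
# Stdlib-only sketch of a local HTTP receiver for published sensor data.
# Endpoint path /api/lidar_data is from the PR; the rest is illustrative.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

received = []  # payloads collected by the receiver, kept for inspection

class LidarHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/api/lidar_data":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        received.append(payload)
        print(f"Received LiDAR payload: {payload}")
        body = json.dumps({"status": "success"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence default per-request logging

if __name__ == "__main__":
    # No debug mode here, in line with the CodeQL finding flagged in this PR.
    HTTPServer(("127.0.0.1", 5000), LidarHandler).serve_forever()
```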
Kafka Publishing Feature Test Instructions

Hi @NeethuES-intel,

We have added a test for the Kafka publishing feature to verify that messages are correctly published and consumed. It can be tested as follows:

1️⃣ Kafka Consumer Test (Inside Docker)

Run the test inside the `common-service` container to verify messages are being received:
`docker exec asc_common_service python kafka_publisher_test.py --topic lidar-data`
- This will subscribe to the given topic (default: `lidar-data`) and print received messages.
- You can specify a different topic using the `--topic` flag.

Let us know if you need further verification! 🚀
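A hypothetical sketch of what a CLI like `kafka_publisher_test.py` might look like, based only on the flags described above (`--topic` plus a timeout). The consumer portion assumes the kafka-python package and a reachable broker, so it is kept behind a function; the default bootstrap address is an assumption based on the `PLAINTEXT://:9093` listener mentioned in the PR description:

```python
# Illustrative consumer test CLI; only the --topic flag is documented in the
# PR, the remaining arguments and defaults are assumptions.
import argparse

def parse_args(argv=None):
    parser = argparse.ArgumentParser(
        description="Consume and print sensor messages from a Kafka topic")
    parser.add_argument("--topic", default="lidar-data",
                        help="Kafka topic to subscribe to")
    parser.add_argument("--bootstrap-servers", default="kafka:9093",
                        help="assumed broker address (PR uses port 9093)")
    parser.add_argument("--timeout", type=int, default=30,
                        help="seconds to wait for messages before exiting")
    return parser.parse_args(argv)

def consume(args):
    # Requires the kafka-python package and a running broker.
    from kafka import KafkaConsumer
    consumer = KafkaConsumer(
        args.topic,
        bootstrap_servers=args.bootstrap_servers,
        consumer_timeout_ms=args.timeout * 1000,
        auto_offset_reset="earliest",
    )
    for message in consumer:
        print(f"{message.topic}: {message.value!r}")

if __name__ == "__main__":
    consume(parse_args())
```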
I updated the initial PR comment to reflect all the updates made till now :)
@NeethuES-intel
I've also added a README file inside the common-service directory. It includes details on setup, environment variables, and testing procedures. Let me know if you need any additional information! 🚀
- Implemented Grafana datasource provisioning via YAML to auto-load the MQTT datasource.
- Updated dashboard.json to improve data visualization and remove the manual refresh requirement.
- Note: the datasource URL still needs to be manually set due to Grafana's provisioning limitations.

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
Thank you for the suggestion, @NeethuES-intel. We have implemented this feature using a provisioning YAML file, so now the DataSource is pre-loaded, and refreshing each panel is no longer necessary. However, there is a small limitation: the DataSource does not automatically pick up the URL from the YAML file. Users will need to manually add it once in the DataSource settings. We thoroughly reviewed the documentation but couldn't find a solution for this. Let us know if you come across any insights on this!
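For reference, a datasource provisioning file of the kind described here might look like the following sketch. The file path, plugin id, and `jsonData` fields are assumptions about the Grafana MQTT datasource plugin, not taken from this PR:

```yaml
# Hypothetical grafana/provisioning/datasources/mqtt.yml sketch.
# Plugin id and connection fields are assumptions, not from this PR.
apiVersion: 1
datasources:
  - name: MQTT
    type: grafana-mqtt-datasource
    access: proxy
    jsonData:
      uri: tcp://mqtt-broker_1:1883
```

Files in Grafana's provisioning directory are read at startup, which is what removes the per-panel manual setup the reviewer asked about.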
- Added to test HTTP publishing inside .
- Updated to expose port 5000 for internal HTTP testing.
- Verified HTTP publishing via Webhook.site without modifying .

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
…tion
- Updated Kafka Docker configuration for improved stability and compatibility.
- Added to verify message consumption from any user-defined topic.
- Supports command-line arguments for dynamic topic selection and timeout configuration.
- Ensures Kafka messages are received correctly and provides meaningful test results.

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
- Added a README file inside the common-service directory.
- Includes details on setup, environment variables, and testing procedures.

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
```python
    return jsonify({"status": "success"}), 200

if __name__ == '__main__':
    app.run(port=5000, debug=True)
```
Check failure (Code scanning / CodeQL): Flask app is run in debug mode (High)
can you run without debug mode - to avoid attackers running code through the debugger?
```yaml
image: my-mosquitto:latest
ports:
  - "1884:1883"
```
can we just use the public mqtt broker here? there should be no need to build a custom one.
Suggested change:

```diff
-image: my-mosquitto:latest
-ports:
-  - "1884:1883"
+image: eclipse-mosquitto:2.0.18
+ports:
+  - "1883:1883"
```
This allows these components to be reused across the project instead of building a custom one. The configuration should be passed to the container, not built into it.
I built a custom Mosquitto image to copy the configuration file that enables anonymous connections during the build process. The base image is still the public eclipse-mosquitto:2.0.18, but the custom build ensures that the required configuration is in place without needing manual setup.
src/docker-compose.yml (Outdated)

```yaml
dockerfile: Dockerfile

container_name: my_grafana
image: my_grafana:latest
```
we should use the public grafana image - the configuration needs to be passed to the image so that we can add other dashboards without being tied to rebuilding the image.
Similarly, the base image for Grafana is still the public one, but the custom build allows us to copy the predefined dashboards and datasources during the build process. This ensures that the necessary configurations are in place from the start, without requiring manual setup, while still allowing additional dashboards to be added dynamically. The public image is referenced from src/grafana-custom/Dockerfile.
Fixed this by using volumes instead of a custom image. Previously, I encountered permission issues, which is why I initially built a custom image. With this update, the public Grafana image is used while ensuring dashboards and datasources are provisioned dynamically via volumes.
```python
    return jsonify({"status": "success"}), 200

if __name__ == '__main__':
    app.run(port=5000, debug=True)
```
can you run without debug mode - to avoid attackers running code through the debugger?
Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
- Replaced COPY commands with volumes in docker-compose.yml
- Allows dynamic updates to dashboards and datasources without rebuilding the image

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
Introduces a single `common-service` for managing both LiDAR and Weight sensors, configurable through environment variables for MQTT, Kafka, and HTTP publishing. Integrates Grafana (with a pre-loaded dashboard and data source) for real-time visualization over MQTT.

Closes #537
PR Checklist
What are you changing?
This PR introduces the following features:
1. Overview
- LiDAR & Weight sensors are handled within one container (`common-service`).
- Each sensor has its own configuration (e.g., sensor ID, port, mock mode).
- Publishes data to one or more protocols:
  - MQTT (e.g., broker host, port, topic)
  - Kafka (e.g., bootstrap server, topic)
  - HTTP (e.g., URL endpoint)
- Uses a shared framework (`publisher.py` & `config.py`) plus two apps (`lidar_app.py`, `weight_app.py`).
- Fully controlled via `docker-compose.yml`.

Example snippet for LiDAR (2 sensors) & Weight (2 sensors):
- Change `true/false` to enable or disable each protocol.
- Adjust intervals, logging levels, or sensor counts as needed.
- Included in the `docker-compose.yml` file.
- By default, Kafka listens on `PLAINTEXT://:9093`.
- MQTT broker: `mqtt-broker_1:1883`.
- Preloaded Dashboard: Sensor-Analytics.
- Datasource Provisioning: An MQTT data source is set up (may require a manual URI update in Grafana's Data Sources if the hostname differs).
Issue this PR will close

Closes #537 (Build a microservice to get sensor details from a Lidar sensor).
Test Instructions

Run the Demo

A. MQTT (via Grafana)
1. Go to http://localhost:3000
2. Default credentials: `admin` / `admin`
3. Look for Sensor-Analytics in the Dashboards list.
4. Panels should be configured for:
   - `lidar/data`
   - `weight/data`

If you see no data:
- Go to Configuration > Data Sources and confirm the MQTT data source URI is set to `tcp://mqtt-broker_1:1883` or `tcp://mqtt-broker:1883` (depending on your Docker network name).
- Click Test to ensure it connects.

You should see sensor data updating in each panel. Adjust the refresh interval (top-right in Grafana) if needed.
B. Kafka Publishing
1. In `docker-compose.yml`, set:
2. Run:
   `docker exec asc_common_service python kafka_publisher_test.py --topic lidar-data`
3. This script subscribes to the `lidar-data` topic. Verify it prints incoming LiDAR messages published to Kafka.
C. HTTP Publishing

This feature can be verified in two ways:

1. Local HTTP Test (Inside Docker)
   - In `docker-compose.yml`, set:
   - Run inside the `common-service` container:
     `docker exec asc_common_service python http_publisher_test.py`
   - The test server inside `common-service` will print the received LiDAR data, confirming that HTTP publishing is working.
2. Webhook.site for External Validation
   - Go to Webhook.site and generate a unique URL.
   - Set `LIDAR_HTTP_URL` in `docker-compose.yml` to that URL, for example:

4. Additional Notes
- Mock Mode: If you set `LIDAR_MOCK_1` or `WEIGHT_MOCK_1` to `"true"`, the service publishes randomized demo data instead of reading from hardware.
- Multi-Sensor Support: `LIDAR_COUNT` and `WEIGHT_COUNT` define how many sensors to handle. Each has its own environment variables (e.g., `LIDAR_SENSOR_ID_1`, `LIDAR_PORT_1`, etc.).
- Topic & Interval Config: The publish interval can be tuned (e.g., `LIDAR_PUBLISH_INTERVAL`, `WEIGHT_PUBLISH_INTERVAL`), and each sensor's MQTT/Kafka topic is customizable.
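The indexed-variable scheme above (`LIDAR_COUNT` plus `LIDAR_SENSOR_ID_1`, `LIDAR_PORT_1`, ...) could be collected into per-sensor configs with a small helper like this sketch. The variable names match the PR; the helper function itself and its defaults are illustrative, not the project's actual `config.py`:

```python
# Illustrative helper: build one config dict per sensor from <PREFIX>_COUNT
# and the indexed <PREFIX>_*_<i> environment variables described above.
import os

def load_sensor_configs(prefix, env=os.environ):
    """Return a list of per-sensor config dicts for e.g. prefix='LIDAR'."""
    count = int(env.get(f"{prefix}_COUNT", "0"))
    configs = []
    for i in range(1, count + 1):
        configs.append({
            "sensor_id": env.get(f"{prefix}_SENSOR_ID_{i}",
                                 f"{prefix.lower()}_{i}"),
            "port": env.get(f"{prefix}_PORT_{i}"),  # None if not set
            "mock": env.get(f"{prefix}_MOCK_{i}", "false").lower() == "true",
        })
    return configs
```

For example, with `LIDAR_COUNT=2` and only `LIDAR_SENSOR_ID_1`/`LIDAR_MOCK_1` set, the second sensor would fall back to a generated id and real (non-mock) mode.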