
feat(microservices): add LiDAR, Weight, Peripheral Analytics, and Grafana integration #655

Open · wants to merge 15 commits into main from I-537
Conversation


naman-ranka commented Jan 28, 2025

Introduces a single common-service for managing both LiDAR and Weight sensors, configurable through environment variables for MQTT, Kafka, and HTTP publishing. Integrates Grafana (with a pre-loaded dashboard and data source) for real-time visualization over MQTT.
Closes #537

PR Checklist

  • Added label to the Pull Request for easier discoverability and search
  • Commit Message meets guidelines as indicated in the URL https://github.com/intel-retail/automated-self-checkout/blob/main/CONTRIBUTING.md
  • Every commit is a single defect fix and does not mix feature addition or changes
  • Unit Tests have been added for new changes
  • Updated Documentation as relevant to the changes
  • All commented code has been removed
  • If you've added a dependency, you've ensured license is compatible with repository license and clearly outlined the added dependency.
  • PR change contains code related to security
  • PR introduces changes that breaks compatibility with other modules (If YES, please provide details below)

What are you changing?

This PR introduces the following features:

1. Overview

  1. Common-Service
  • LiDAR & Weight Sensors handled within one container (common-service).

  • Each sensor has its own configuration (e.g., sensor ID, port, mock mode).

  • Publishes data to one or more protocols:

    • MQTT (e.g., broker host, port, topic)

    • Kafka (e.g., bootstrap server, topic)

    • HTTP (e.g., URL endpoint)

  • Uses a shared framework (publisher.py & config.py) plus two apps (lidar_app.py, weight_app.py); a minimal sketch of this framework follows the overview below.

  2. Environment Variables & Configuration
  • Fully controlled via docker-compose.yml.

  • Example snippet for LiDAR (2 sensors) & Weight (2 sensors):

environment:
  # LiDAR environment variables
  LIDAR_COUNT: 2
  LIDAR_SENSOR_ID_1: lidar-001
  LIDAR_SENSOR_ID_2: lidar-002
  LIDAR_MOCK_1: "true"
  LIDAR_MOCK_2: "true"
  LIDAR_PORT_1: /dev/ttyUSB0
  LIDAR_PORT_2: /dev/ttyUSB1
  LIDAR_MQTT_ENABLE: "true"
  LIDAR_MQTT_BROKER_HOST: mqtt-broker_1
  LIDAR_MQTT_BROKER_PORT: 1883
  LIDAR_KAFKA_ENABLE: "true"
  KAFKA_BOOTSTRAP_SERVERS: kafka:9093
  LIDAR_KAFKA_TOPIC: "lidar-data"
  LIDAR_MQTT_TOPIC: "lidar/data"
  LIDAR_HTTP_ENABLE: "true"
  LIDAR_HTTP_URL: "http://localhost:5000/api/lidar_data"
  LIDAR_PUBLISH_INTERVAL: 1.0
  LIDAR_LOG_LEVEL: INFO

  # Weight environment variables
  WEIGHT_COUNT: 2
  WEIGHT_SENSOR_ID_1: weight-001
  WEIGHT_SENSOR_ID_2: weight-002
  WEIGHT_MOCK_1: "true"
  WEIGHT_MOCK_2: "true"
  WEIGHT_PORT_1: /dev/ttyUSB2
  WEIGHT_PORT_2: /dev/ttyUSB3
  WEIGHT_MQTT_ENABLE: "true"
  WEIGHT_MQTT_BROKER_HOST: mqtt-broker_1
  WEIGHT_MQTT_BROKER_PORT: 1883
  WEIGHT_KAFKA_ENABLE: "false"
  WEIGHT_MQTT_TOPIC: "weight/data"
  WEIGHT_HTTP_ENABLE: "false"
  WEIGHT_PUBLISH_INTERVAL: 1.0
  WEIGHT_LOG_LEVEL: INFO
  • Change true/false to enable or disable each protocol.

  • Adjust intervals, logging levels, or sensor counts as needed.

  3. Kafka & Zookeeper
  • Included in the docker-compose.yml file.

  • By default, Kafka listens on PLAINTEXT://:9093.

  4. MQTT Broker
  • An Eclipse Mosquitto container, usually accessible at mqtt-broker_1:1883.

  5. Grafana Integration
  • A Grafana container is provided with:
    • Preloaded Dashboard: Sensor-Analytics.

    • Datasource Provisioning: An MQTT data source is set up (may require a manual URI update in Grafana's Data Sources if the hostname differs).

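For orientation, the sketch below shows one way config.py and publisher.py could fit together, driven by the environment-variable names from the snippet above. It is illustrative only; the class and function names are assumptions, not the repository's actual code.

# Illustrative sketch only -- the real config.py/publisher.py may differ.
import json
import os

def load_sensor_configs(prefix: str) -> list[dict]:
    """Build one config dict per sensor from <PREFIX>_COUNT and indexed vars."""
    count = int(os.getenv(f"{prefix}_COUNT", "0"))
    return [
        {
            "sensor_id": os.getenv(f"{prefix}_SENSOR_ID_{i}"),
            "port": os.getenv(f"{prefix}_PORT_{i}"),
            "mock": os.getenv(f"{prefix}_MOCK_{i}", "false").lower() == "true",
        }
        for i in range(1, count + 1)
    ]

class Publisher:
    """Fans a reading out to every protocol enabled via environment variables."""

    def __init__(self, prefix: str):
        self.mqtt_on = os.getenv(f"{prefix}_MQTT_ENABLE", "false") == "true"
        self.kafka_on = os.getenv(f"{prefix}_KAFKA_ENABLE", "false") == "true"
        self.http_on = os.getenv(f"{prefix}_HTTP_ENABLE", "false") == "true"
        self.mqtt_topic = os.getenv(f"{prefix}_MQTT_TOPIC", "")
        self.kafka_topic = os.getenv(f"{prefix}_KAFKA_TOPIC", "")
        self.http_url = os.getenv(f"{prefix}_HTTP_URL", "")

    def publish(self, reading: dict) -> None:
        payload = json.dumps(reading)
        if self.mqtt_on:
            print(f"[mqtt] {self.mqtt_topic}: {payload}")   # paho-mqtt publish here
        if self.kafka_on:
            print(f"[kafka] {self.kafka_topic}: {payload}")  # kafka-python send here
        if self.http_on:
            print(f"[http] POST {self.http_url}")            # requests.post here

With this shape, lidar_app.py and weight_app.py reduce to a read loop that calls Publisher("LIDAR").publish(reading) (or the WEIGHT equivalent) at each publish interval.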

Issue this PR will close

Closes #537 (Build a microservice to get sensor details from a Lidar sensor).

Test Instructions:

Run the Demo

  • Start all services using the following command:
    make run-demo
    

A. MQTT (via Grafana)

  1. Access Grafana
  2. Preloaded Dashboard:
  • Look for Sensor-Analytics in the Dashboards list.

  • Panels should be configured for:

    • lidar/data

    • weight/data

  • If you see no data:

    • Go to Configuration > Data Sources and confirm the MQTT data source URI is set to tcp://mqtt-broker_1:1883 or tcp://mqtt-broker:1883 (depending on your Docker network name).

    • Click Test to ensure it connects.

  3. Real-Time Visualization:
  • You should see sensor data updating in each panel.

  • Adjust the refresh interval (top-right in Grafana) if needed.

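If panels stay empty, it can help to confirm data is actually flowing through the broker before debugging Grafana itself. Below is a small subscriber sketch using paho-mqtt (1.x callback API); the host and port are assumptions that depend on how your compose file maps the broker's port to the host.

# Quick MQTT sanity check: print everything arriving on the sensor topics.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    print("connected, rc =", rc)
    client.subscribe([("lidar/data", 0), ("weight/data", 0)])

def on_message(client, userdata, msg):
    print(msg.topic, msg.payload.decode())

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("localhost", 1884)  # adjust if your mapping is not "1884:1883"
client.loop_forever()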

B. Kafka Publishing

  1. Enable Kafka
  • In your docker-compose.yml, set:
LIDAR_KAFKA_ENABLE: "true"
WEIGHT_KAFKA_ENABLE: "true"
  2. Test from inside the common-service container:
docker exec asc_common_service python kafka_publisher_test.py --topic lidar-data
  • This script subscribes to the lidar-data topic.

  • Verify it prints incoming LiDAR messages published to Kafka.

  3. (Optional) Use other Kafka clients to confirm message flow; a minimal consumer sketch follows below.
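For reference, a consumer equivalent to what the test script does takes only a few lines with kafka-python. This is a sketch, not the repository's kafka_publisher_test.py, and it assumes it runs inside the compose network (e.g., via docker exec) so the KAFKA_BOOTSTRAP_SERVERS address resolves.

# Minimal Kafka consumer sketch; run where "kafka:9093" is resolvable.
import argparse
from kafka import KafkaConsumer

parser = argparse.ArgumentParser()
parser.add_argument("--topic", default="lidar-data")
args = parser.parse_args()

consumer = KafkaConsumer(
    args.topic,
    bootstrap_servers="kafka:9093",  # matches KAFKA_BOOTSTRAP_SERVERS above
    auto_offset_reset="latest",
)
for message in consumer:
    print(message.topic, message.value.decode())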

C. HTTP Publishing

This feature can be verified in two ways:

  1. Local HTTP Test (Inside Docker)

    1. In your docker-compose.yml, set:
    LIDAR_HTTP_ENABLE: "true"
    LIDAR_HTTP_URL: "http://localhost:5000/api/lidar_data"
    2. Start the containers:
    make run-demo
    3. Run the test script inside the common-service container:
    docker exec asc_common_service python http_publisher_test.py
    4. The Flask server within common-service will print the received LiDAR data, confirming that HTTP publishing is working.
  2. Webhook.site for External Validation

    1. Go to Webhook.site and generate a unique URL.

    2. Set LIDAR_HTTP_URL in docker-compose.yml to that URL, for example:

    LIDAR_HTTP_ENABLE: "true"
    LIDAR_HTTP_URL: "https://webhook.site/abcdef12-3456-7890-..."
    3. Restart the containers:
    make run-demo
    4. Observe the Recent Requests section on Webhook.site to see incoming LiDAR data in real time.
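For reference, the local endpoint that LIDAR_HTTP_URL points at can be as small as the Flask sketch below. The route path is taken from the URL above; everything else is illustrative, and debug mode is deliberately left off (see the code-scanning discussion later in this thread).

# Tiny HTTP sink for LIDAR_HTTP_URL; prints whatever the publisher POSTs.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/api/lidar_data", methods=["POST"])
def lidar_data():
    print("received:", request.get_json(silent=True))
    return jsonify({"status": "success"}), 200

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)  # debug left off deliberately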

4. Additional Notes

  • Mock Mode: If you set LIDAR_MOCK_1 or WEIGHT_MOCK_1 to "true", the service publishes randomized demo data instead of reading from hardware; a sketch of what such a reading might look like follows this list.

  • Multi-Sensor Support: LIDAR_COUNT and WEIGHT_COUNT define how many sensors to handle. Each has its own environment variables (e.g., LIDAR_SENSOR_ID_1, LIDAR_PORT_1, etc.).

  • Topic & Interval Config: The publish interval can be tuned (e.g., LIDAR_PUBLISH_INTERVAL, WEIGHT_PUBLISH_INTERVAL), and each sensor's MQTT/Kafka topic is customizable.
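The PR description does not spell out the mock payload shape; a plausible generator looks like the sketch below, with field names that are purely illustrative.

# Hypothetical mock LiDAR reading; field names are illustrative only.
import random
import time

def mock_lidar_reading(sensor_id: str) -> dict:
    return {
        "sensor_id": sensor_id,
        "timestamp": time.time(),
        "point_count": random.randint(100, 1000),
        "range_m": round(random.uniform(0.2, 12.0), 2),
    }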

…Kafka/HTTP publishing

- Implement Python microservice to ingest real or mock Lidar data
- Publish sensor data to MQTT, Kafka, or HTTP (controlled by environment vars)
- Add logging, retry logic, and graceful shutdown handling
- Integrate with existing Makefile and Docker Compose

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
…DAR and weight data

- Subscribes to LiDAR and weight sensor MQTT topics
- Performs analytics to calculate item density (weight per LiDAR point)
- Publishes analytics results to a dedicated MQTT topic
- Includes robust logging and error handling

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
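The item-density metric described in this commit reduces, per matched pair of weight and LiDAR messages, to a simple ratio. A hedged sketch, with names assumed rather than taken from the service code:

# Illustrative item-density computation: weight per LiDAR point.
def item_density(weight_grams: float, lidar_point_count: int) -> float:
    """Return grams per LiDAR point; 0.0 when no points were detected."""
    if lidar_point_count <= 0:
        return 0.0
    return weight_grams / lidar_point_count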
- Introduced Grafana container configuration in docker-compose.yml
- Configured Grafana to depend on mqtt-broker_1

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>

ejlee3 commented Jan 28, 2025

Can you please update the title of your PR with a conventional commit style message?

naman-ranka changed the title from "I 537" to "Add microservices for LiDAR, Weight, Peripheral Analytics, and Grafana integration (#537)" on Jan 28, 2025
src/lidar-service/requirements.txt (Fixed)
src/peripheral_analytics/requirements.txt (Fixed)
src/weight-service/requirements.txt (Fixed)
NeethuES-intel (Contributor) commented:

@naman-ranka can you fix the security issues flagged by github-advanced-security bot. Also please follow the git conventional commit format for the PR title -
https://github.com/intel-retail/automated-self-checkout/blob/main/CONTRIBUTING.md

<type>(<scope>): <subject>
<BLANK LINE>
<body>
<BLANK LINE>
<footer>

type here should be feat.

src/lidar-service/app.py (Outdated; resolved)
src/peripheral_analytics/app.py (Outdated; resolved)
src/weight-service/app.py (Outdated; resolved)
naman-ranka changed the title from "Add microservices for LiDAR, Weight, Peripheral Analytics, and Grafana integration (#537)" to "feat(microservices): add LiDAR, Weight, Peripheral Analytics, and Grafana integration Introduced microservices for LiDAR and Weight sensors with configurable data publishing. Added Peripheral Analytics service to process and publish computed metrics. Integrated Grafana for real-time data visualization via MQTT. Closes #537" on Jan 30, 2025
naman-ranka changed the title to "feat(microservices): add LiDAR, Weight, Peripheral Analytics, and Grafana integration" on Jan 30, 2025
- Added required copyright header to newly created Python files.
- Ensures compliance with repository licensing guidelines.

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
A comment from @naman-ranka was marked as off-topic.

naman-ranka deleted the I-537 branch January 30, 2025 18:31
naman-ranka restored the I-537 branch January 30, 2025 18:32
Another comment from @naman-ranka was marked as off-topic.

NeethuES-intel (Contributor) left a comment:

@naman-ranka From the design, app.py, Dockerfile, and requirements.txt look the same for all 3 services. What is the difference between app.py in each of these services? Can you consolidate all of them into one service, then? It looks like only the sensor data type is changing here. Are you adding any business logic to any of them to make them unique? Why duplicate?

antoniomtz (Contributor) left a comment:

@naman-ranka I was able to run your app, but I had to make many changes in the docker-compose file, specifically around MQTT and the networks, so the services could communicate with each other. Also, you could create a dashboard template to show the data for demo purposes.
[Screenshot: Grafana dashboard, 2025-01-30]


peripheral-analytics:
build:
context: ./peripheral-analytics
Contributor:

Suggested change
- context: ./peripheral-analytics
+ context: ./peripheral_analytics

ports:
- "1884:1883" # External port 1884 mapped to internal port 1883
volumes:
- ./mosquitto/config/mosquitto.conf:/mosquitto/config/mosquitto.conf
Contributor:

this creates some issues, the file is empty by the way. I don't think we need this

Author:

@NeethuES-intel
Hi Neethu, we implemented one feature at a time, so they are currently in different service directories. We can have a common publish.py for the common publishing framework and three different app.py for the three services. All in the same src dir.
For the business logic, right now we have just implemented the framework to publish the data (currently random), and no specific business logic has been added.
Can you guide us on what type of logic we should implement, if any is required for this particular issue?

Author:

@antoniomtz
Sorry for the many changes you had to make to our commit. We will review and resolve these issues, and we will also commit the suggested changes. We are working on the dashboard template and will update the code by tomorrow. Please let us know if you need any specific visualizations.

Contributor:


@naman-ranka For now just consolidate everything, we don't need to duplicate the files for all 3. If you are interested we could add business logic separately after the hackathon.

Author:

@NeethuES-intel
I've made several changes in this PR to streamline and consolidate our sensor-related code:

Directory Consolidation:
Merged all sensor-specific directories into a single common-sources directory to simplify the structure and improve maintainability.

Unified Publisher Logic with Multi-Sensor Support:
Implemented a common publisher.py that now supports all types of publishing (MQTT, Kafka, HTTP) across sensor types. This update also adds support for multiple sensors of the same type, enabling better scalability and easier integration of additional sensors.

Enhanced MQTT Configuration:
Introduced a custom MQTT Docker configuration to resolve previous network configuration issues, ensuring a more robust deployment setup.

Cleanup:
Removed redundant directories, reducing complexity and potential confusion.

Please review the changes and let me know if you have any questions or further suggestions. Thanks!

Author:

@antoniomtz
Hi Antonio, we have fixed the issues related to MQTT files and also added Grafana dashboards to visualize the published data. These dashboards are pre-saved—you only need to load the data source as per the instructions provided above.

Additionally, after implementing the consolidated package, we learned that it isn’t optimal for performance. If needed, we can explore parallel processing or other solutions in the future.

Thanks!

Author:

[Screenshot: Grafana dashboard, 2025-01-31]
Here is a screenshot of the Grafana dashboard.

…blisher logic

- Consolidated sensor-specific directories into a single common-sources directory
- Implemented a common publisher.py to streamline all types of publishing (MQTT, Kafka, HTTP) across sensor types
- Added support for multiple sensors of the same type to improve scalability
- Introduced a custom MQTT Docker configuration to resolve previous network configuration issues
- Removed redundant directories to simplify the codebase

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
- Integrated a Grafana dashboard to visualize and analyze data from LiDAR and weight sensors.
- Configured dashboard panels to display time series, sensor counts, and analytics metrics.
- Dashboard pulls data from our existing MQTT data source for real-time insights.

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
NeethuES-intel (Contributor) left a comment:

@naman-ranka @JCHETAN26 Can you provide details on how to create the dashboard and add the datasource? I am getting an empty dashboard when I run your code.
I can subscribe to these topics and see the data on an MQTT client.

gauravkhanduri commented Feb 4, 2025:


@NeethuES-intel
Hi Neethu,

We have created a preloaded Grafana dashboard, which should be detected automatically under Grafana's Dashboards list as "Sensor-Analytics".

You must add the data source once; the instructions remain the same as in the original README.

To add MQTT data source:

"Add an MQTT Data Source
Navigate to Configuration > Data Sources.
Add the MQTT plugin and configure it:
URI: tcp://mqtt-broker_1:1883
Save the configuration and click Test to ensure the connection works."

If the dashboard does not display the data, you might have to enter each panel individually and click refresh. The data load might appear a little slow, but that can easily be fixed in a future update.

NeethuES-intel replied: Thank you @gauravkhanduri. It worked this time. But as you mentioned, I had to enter each panel and wait for some time to get the data. One improvement: can you set up MQTT as a datasource automatically (maybe in the dashboard provisioning YAML) so that the user doesn't have to set it up manually?

{
"mqtt": { "enable": bool, "host": str, "port": int, "topic": str },
"http": { "enable": bool, "url": str },
"kafka": { "enable": bool, "bootstrap_servers": str, "topic": str }
Contributor:

Is Kafka used here? I only see MQTT set up as a datasource.

Author:

@NeethuES-intel
The framework we've built is designed to support all three publishing methods—MQTT, Kafka, and HTTP. While the demo currently uses MQTT (and we visualize it via Grafana through an MQTT datasource), the publisher code includes implementations for Kafka and HTTP as well. This setup enables us to publish any type of data across these platforms. For now, we're only demonstrating MQTT, but Kafka support is there if needed.

Contributor:


@naman-ranka can you provide steps to test the Kafka & HTTP features? If not via Grafana, are there unit test cases that we can use? We won't be able to merge this in without testing.

Author:

Yes, we are working on it. We will provide test instructions for both these features.

Author:

HTTP Publishing Feature Test Instructions

Hi @NeethuES-intel,

We have added a test for the HTTP publishing feature. It can be tested in two ways:

1️⃣ Local HTTP Test (Inside Docker)

  1. Set LIDAR_HTTP_URL to http://localhost:5000/api/lidar_data in docker-compose.yml.
  2. Run the test inside the container:
    docker exec asc_common_service python http_publisher_test.py
    
    
  3. The test Flask server inside common-service will print the received data.

2️⃣ Webhook.site for External Validation

  1. Go to Webhook.site and generate a unique URL.

  2. Set LIDAR_HTTP_URL to this Webhook URL.

  3. Restart the container, and you will see the LiDAR data arriving in the Webhook.site logs.

Additionally, we did not modify publisher.py; our existing framework handled HTTP publishing without any code changes.
Let us know if you need further verification! 🚀

Author:

Kafka Publishing Feature Test Instructions

Hi @NeethuES-intel,
We have added a test for the Kafka publishing feature to verify that messages are correctly published and consumed. It can be tested as follows:

1️⃣ Kafka Consumer Test (Inside Docker)

Run the test inside the common-service container to verify messages are being received:

docker exec asc_common_service python kafka_publisher_test.py --topic lidar-data
  • This will subscribe to the given topic (default: lidar-data) and print received messages.

  • You can specify a different topic using the --topic flag.

Let us know if you need further verification! 🚀

Author:

I updated the initial PR comment to reflect all the updates made till now :)

Author:

@NeethuES-intel
I've also added a README file inside the common-service directory. It includes details on setup, environment variables, and testing procedures. Let me know if you need any additional information! 🚀

A comment from @naman-ranka was marked as outdated.

- Implemented Grafana datasource provisioning via YAML to auto-load the MQTT datasource.
- Updated dashboard.json to improve data visualization and remove manual refresh requirement.
- Note: Datasource URL still needs to be manually set due to Grafana's provisioning limitations.

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
naman-ranka (Author) commented:


Thank you for the suggestion, @NeethuES-intel. We have implemented this feature using a provisioning YAML file, so now the DataSource is pre-loaded, and refreshing each panel is no longer necessary.

However, there is a small limitation—the DataSource does not automatically pick up the URL from the YAML file. Users will need to manually add it once in the DataSource settings. We thoroughly reviewed the documentation but couldn't find a solution for this.

Let us know if you come across any insights on this!

- Added http_publisher_test.py to test HTTP publishing inside common-service.
- Updated the service configuration to expose port 5000 for internal HTTP testing.
- Verified HTTP publishing via Webhook.site without modifying publisher.py.

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
…tion

- Updated Kafka Docker configuration for improved stability and compatibility.
- Added kafka_publisher_test.py to verify message consumption from any user-defined topic.
- Supports command-line arguments for dynamic topic selection and timeout configuration.
- Ensures Kafka messages are received correctly and provides meaningful test results.

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
- Added a README file inside the common-service directory.
- Includes details on setup, environment variables, and testing procedures.

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
return jsonify({"status": "success"}), 200

if __name__ == '__main__':
    app.run(port=5000, debug=True)

Check failure (Code scanning / CodeQL): Flask app is run in debug mode (High severity).

A Flask app appears to be run in debug mode. This may allow an attacker to run arbitrary code through the debugger.
Contributor:

can you run without debug mode - to avoid attackers running code through the debugger?
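A minimal way to address the finding, shown here as a sketch rather than the committed change:

# Run the Flask test server with the debugger disabled.
if __name__ == '__main__':
    app.run(port=5000, debug=False)  # or omit debug entirely; it defaults to off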

Comment on lines +66 to +68
image: my-mosquitto:latest
ports:
- "1884:1883"
Contributor:

can we just use the public mqtt broker here? there should be no need to build a custom one.

Suggested change
- image: my-mosquitto:latest
- ports:
-   - "1884:1883"
+ image: eclipse-mosquitto:2.0.18
+ ports:
+   - "1883:1883"

This allows these components to be reused across the project instead of building a custom one. The configuration should be passed to the container not built into it.

Author:

I built a custom Mosquitto image to copy the configuration file that enables anonymous connections during the build process. The base image is still the public eclipse-mosquitto:2.0.18, but the custom build ensures that the required configuration is in place without needing manual setup.

dockerfile: Dockerfile

container_name: my_grafana
image: my_grafana:latest
Contributor:

we should use the public grafana image - the configuration needs to be passed to the image so that we can add other dashboards without being tied to rebuilding the image.

Author:

Similarly, the base image for Grafana is still the public one, but the custom build allows us to copy the predefined dashboards and datasources during the build process. This ensures that the necessary configurations are in place from the start, without requiring manual setup, while still allowing additional dashboards to be added dynamically. The public image is called from src/grafana-custom/Dockerfile

Author:

Fixed this by using volumes instead of a custom image. Previously, I encountered permission issues, which is why I initially built a custom image. With this update, the public Grafana image is used while ensuring dashboards and datasources are provisioned dynamically via volumes.

return jsonify({"status": "success"}), 200

if __name__ == '__main__':
    app.run(port=5000, debug=True)
Contributor:

can you run without debug mode - to avoid attackers running code through the debugger?

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
naman-ranka requested a review from ejlee3 February 7, 2025 21:48
Replaced COPY commands with volumes in docker-compose.yml
Allows dynamic updates to dashboards and datasources without rebuilding the image

Signed-off-by: Naman Yeshwanth Kumar <[email protected]>
Labels: enhancement (New feature or request), Intermediate

Successfully merging this pull request may close: Build a microservice to get sensor details from a Lidar sensor

5 participants