The Retrieval Augmented Generation (RAG) Chatbot System is designed for the Municipality of Amsterdam to efficiently handle citizen signals (meldingen). When a melding is received, the system retrieves relevant information using RAG, with the aim of resolving the issue before it is forwarded to the Municipality’s official Signals system. This approach improves the overall user experience by reducing the number of meldingen that need to be escalated, ensuring faster and more efficient responses.
Demo video: `demo.mp4`
The RAG Chatbot System for Citizen Signals was developed to streamline the process of managing citizen reports (meldingen) for the Municipality of Amsterdam. By using Retrieval Augmented Generation (RAG), the system can pull in relevant information related to a melding and attempt to resolve it before escalating it to the Municipality's official Signals system. This proactive approach helps reduce the workload on the municipal team and enhances the user experience by providing quicker resolutions.
The system is designed to efficiently manage citizen-reported issues, improve response times, and minimize unnecessary entries into the Signals system. By integrating RAG, the chatbot retrieves contextual information and assists citizens directly, minimizing the need for manual handling. This innovative approach enhances overall efficiency and helps the municipality better serve its citizens.
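The resolve-or-escalate flow described above can be sketched as follows. This is an illustrative stand-in, not the repository's actual code: the function names, the in-memory knowledge base, and the templated answer are all assumptions, and in the real system an LLM generates the reply from the retrieved context.

```python
# Hypothetical sketch of the melding-handling flow; names are illustrative.

def retrieve_context(melding: str) -> str:
    """Stand-in retriever: look up information relevant to the melding."""
    knowledge_base = {
        "grof afval": "Bulky waste is collected on Tuesdays; "
                      "place it curbside before 07:30.",
    }
    for topic, info in knowledge_base.items():
        if topic in melding.lower():
            return info
    return ""

def handle_melding(melding: str) -> dict:
    """Try to resolve a melding with retrieved context; escalate otherwise."""
    context = retrieve_context(melding)
    if context:
        # The real system would have an LLM phrase the answer;
        # here we return the retrieved context directly.
        return {"resolved": True, "answer": context}
    # No relevant context found: forward to the official Signals system.
    return {"resolved": False, "answer": "Forwarded to the Signals system."}
```

A melding about bulky waste is answered directly, while an unrecognized report is escalated, which is the behavior that reduces unnecessary entries in the Signals system.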
- `src`: All source code files specific to this project.
- Clone this repository:
git clone https://github.com/Amsterdam-AI-Team/citizen-signals-rag-chatbot-system.git
- Install the PortAudio driver (required by PyAudio):
sudo apt-get install portaudio19-dev
- Install all dependencies:
pip install -r requirements.txt
The code has been tested with Python 3.10.0 on Linux/MacOS/Windows.
First, navigate to the source directory:
cd src
You can run the chatbot locally by executing the following command:
python3 app.py
This will start the chatbot on your localhost, allowing you to interact with it via a web interface.
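As a rough idea of what such a local web interface involves, the sketch below assumes `app.py` exposes a Flask application with a chat endpoint. The route name and payload shape are assumptions for illustration; the repository's actual routes may differ, and the placeholder reply stands in for the RAG pipeline.

```python
# Minimal stand-in for the chatbot web app; routes and payloads are assumed.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/chat", methods=["POST"])
def chat():
    melding = request.get_json().get("message", "")
    # Placeholder reply; the real app would invoke the RAG pipeline here.
    return jsonify({"reply": f"Received melding: {melding}"})

if __name__ == "__main__":
    # Serves the chatbot on localhost, as `python3 app.py` does.
    app.run(host="127.0.0.1", port=5000)
```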
If you wish to run this code via Azure ML services, open the repository and run `app.py` in VS Code Desktop mode; this ensures the localhost application works properly.
- You will need an OpenAI API key for answer generation and image processing. This API key should also be specified in the configuration file. It is possible to use different LLMs of your choice, but doing so will require modifying the code accordingly.
- Only the melding type 'Afval' with the subtypes 'restafval' (e.g., garbage bags) and 'grof afval' (e.g., sofas and chairs) is currently supported. You can add more protocols in the `processors.py` file.
- Retrieval using RAG for the type 'Afval' is dynamic: given an address, it pulls the garbage collection times for that location. The prompt used to format the answer containing these times could be further tweaked for better alignment with the melding conversation.
- The current implementation for reading messages out loud is not compatible with the Azure OpenAI API, because the TTS model is not (yet) supported there.
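To illustrate what adding a protocol for a new melding type might look like, the sketch below uses a simple decorator-based registry. This is a hypothetical pattern, not the actual structure of `processors.py`: the registry, decorator, and handler signatures are all assumptions.

```python
# Hypothetical protocol registry; processors.py may be organized differently.

PROTOCOLS = {}

def register_protocol(melding_type: str):
    """Register a handler function for a given melding type."""
    def decorator(func):
        PROTOCOLS[melding_type] = func
        return func
    return decorator

@register_protocol("Afval")
def handle_afval(subtype: str, address: str) -> str:
    # The real handler retrieves collection times for the address dynamically.
    return f"Collection info for {subtype} at {address}."

@register_protocol("Overlast")  # example of adding a new melding type
def handle_overlast(subtype: str, address: str) -> str:
    return f"Nuisance report ({subtype}) at {address} noted."
```

With a registry like this, supporting a new melding type is a matter of writing one handler and registering it, without touching the dispatch logic.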
We welcome contributions! Feel free to open an issue, submit a pull request, or contact us directly.
This repository was created by Amsterdam Intelligence for the City of Amsterdam.
This project is licensed under the terms of the European Union Public License 1.2 (EUPL-1.2).