A mono-repo for generative AI assistants, based on a Retrieval-Augmented Generation (RAG) architecture.
The assistant is currently deployed as a Slack application; we also plan to integrate with web applications, in particular Altinn Studio.
This diagram shows the main functional blocks and how data flows between them:
#altinn-assistant on Altinn Devops Slack
- Install dependencies:
$ yarn install
- Build all packages and apps:
$ yarn build
- Run the slack-app and admin UI:
$ yarn run:slack-app
Note: in order for your local bot endpoint to receive traffic from Slack, you need to configure a proxy service such as ngrok, and configure a Slack app to use the URL allocated by ngrok.
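For example, assuming the slack-app listens on port 3000 locally (adjust to the port your local instance actually uses), a tunnel can be opened with:
$ ngrok http 3000
ngrok then prints a public forwarding URL; use that URL as the request URL in your Slack app configuration.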
- Configure a new Slack app in your test workspace
It is recommended to create a separate Slack workspace specifically for development and testing of Slack apps
- Create a resource group
- Create a Container App Environment
- Create a container repository
- Create a Container App
For step by step instructions see: Create a Container App. A rough Azure CLI sketch of the same steps is shown below.
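The commands below are a minimal sketch of the steps above using the Azure CLI; the resource names, location and image are placeholders, not values taken from this repo, so adjust them to your environment:
$ az group create --name altinn-assistant-rg --location norwayeast
$ az containerapp env create --name altinn-assistant-env --resource-group altinn-assistant-rg --location norwayeast
$ az acr create --resource-group altinn-assistant-rg --name altinnassistantacr --sku Basic
$ az containerapp create --name slack-app --resource-group altinn-assistant-rg --environment altinn-assistant-env --image altinnassistantacr.azurecr.io/slack-app:latest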
- Update crawler index
- Update search phrase index
$ cd cli
$ yarn run:generateSearchPhrases <typesense collection name>
Example collection name: TEST_altinn-studio-docs-search-phrases
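For example, to generate search phrases for the collection named above:
$ yarn run:generateSearchPhrases TEST_altinn-studio-docs-search-phrases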
The Altinn Assistant architecture consists of the following technology components:
- Front-end (Slack app, Admin web app)
- Database-as-a-service (Supabase)
- Serverless function runtime (Supabase Functions)
- GPU-accelerated services (Python app running on a GPU-enabled VM)
- Text search index (Typesense Cloud)
Sequence diagram showing execution of a typical end user query:
```mermaid
sequenceDiagram
participant slack.com
participant slack-app
participant llm-api
participant supabase-db
participant supabase-functions
participant semantic-search
participant gpu-services
slack.com->>slack-app: new text message
slack-app->>supabase-db: GET /config (cached)
supabase-db->>slack-app: workspace + channel config
slack-app->>llm-api: stage 1: analyze user query, translate if necessary
llm-api->>slack-app: query analysis, translation
slack-app->>llm-api: stage 2: query relaxation
llm-api->>slack-app: generated search phrases
slack-app->>semantic-search: hybrid search (cosine similarity + keyword)
semantic-search->>slack-app: sorted doc list
slack-app->>gpu-services: POST /colbert/rerank w/user query
gpu-services->>slack-app: Re-ranked doc list
slack-app->>llm-api: Prompt + retrieved context
llm-api->>slack-app: Helpful response (streamed)
slack-app->>slack.com: Helpful response (streamed response)
slack-app->>slack.com: Finalized response w/metadata
slack-app->>supabase-functions: Log query, response and metadata
supabase-functions->>supabase-db: upsert team, channel, <br>user and message data
slack-app->>llm-api: Translate to user's language
llm-api->>slack-app: (streamed response)
slack-app->>slack.com: (streamed response)
slack-app->>slack.com: Finalized response w/metadata
slack-app->>supabase-db: Log query, response and metadata
```
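To make the flow above concrete, here is a minimal TypeScript sketch of the retrieval-and-generation stages. All function names and return values are hypothetical placeholders for illustration only; they are not the actual APIs in this repo.

```typescript
// Hypothetical sketch of the RAG stages in the sequence diagram above.
// None of these names are taken from the repo; they only illustrate the flow.

interface SearchHit {
  url: string;
  content: string;
  score: number;
}

// Stage 1: analyze the user query and translate it if necessary (LLM call).
async function analyzeQuery(userQuery: string): Promise<{ query: string; language: string }> {
  // A real implementation would call the LLM API here; returning the input keeps the sketch runnable.
  return { query: userQuery, language: 'en' };
}

// Stage 2: query relaxation - ask the LLM for alternative search phrases.
async function generateSearchPhrases(query: string): Promise<string[]> {
  return [query];
}

// Hybrid search (cosine similarity + keyword) against the Typesense index.
async function hybridSearch(phrases: string[]): Promise<SearchHit[]> {
  // A real implementation would query Typesense and merge/deduplicate results per phrase.
  return [];
}

// Re-rank the candidate documents with the ColBERT service on gpu-services.
async function rerank(query: string, hits: SearchHit[]): Promise<SearchHit[]> {
  return hits;
}

// Final stage: prompt the LLM with the retrieved context and return the answer.
async function generateAnswer(query: string, context: SearchHit[]): Promise<string> {
  return `Answer to "${query}" based on ${context.length} retrieved documents.`;
}

// End-to-end pipeline corresponding to one user query in the diagram.
export async function handleUserQuery(userQuery: string): Promise<string> {
  const analysis = await analyzeQuery(userQuery);
  const phrases = await generateSearchPhrases(analysis.query);
  const candidates = await hybridSearch(phrases);
  const ranked = await rerank(analysis.query, candidates);
  return generateAnswer(analysis.query, ranked);
}
```

In the deployed Slack app the answer is streamed back to Slack and the query, response and metadata are logged to Supabase, as shown in the diagram; the sketch omits streaming and logging for brevity.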