diff --git a/.env b/.env index d6c1a9d..4a5e7e7 100644 --- a/.env +++ b/.env @@ -1,2 +1,3 @@ -RASA_PRO_LICENSE='your_rasa_pro_license_key_here' -OPENAI_API_KEY='your_openai_api_key_here' +RASA_PRO_LICENSE='REDACTED_RASA_PRO_LICENSE_JWT' +HUGGINGFACE_API_KEY='hf_REDACTED' +OPENAI_API_KEY='open_here' \ No newline at end of file diff --git a/.rasa/cache/cache.db b/.rasa/cache/cache.db new file mode 100644 index 0000000..a8b4423 Binary files /dev/null and b/.rasa/cache/cache.db differ diff --git a/.rasa/cache/rasa-llm-cache/cache.db b/.rasa/cache/rasa-llm-cache/cache.db new file mode 100644 index 0000000..2dbbc14 Binary files /dev/null and b/.rasa/cache/rasa-llm-cache/cache.db differ diff --git a/.rasa/cache/rasa-llm-cache/cache.db-shm b/.rasa/cache/rasa-llm-cache/cache.db-shm new file mode 100644 index 0000000..78b05bf Binary files /dev/null and b/.rasa/cache/rasa-llm-cache/cache.db-shm differ diff --git a/.rasa/cache/rasa-llm-cache/cache.db-wal b/.rasa/cache/rasa-llm-cache/cache.db-wal new file mode 100644 index 0000000..05374d1 Binary files /dev/null and b/.rasa/cache/rasa-llm-cache/cache.db-wal differ diff --git a/.rasa/cache/tmpmcpc3ms2/command_prompt.jinja2 b/.rasa/cache/tmpmcpc3ms2/command_prompt.jinja2 new file mode 100644 index 0000000..673fcae --- /dev/null +++ b/.rasa/cache/tmpmcpc3ms2/command_prompt.jinja2 @@ -0,0 +1,60 @@ +Your
task is to analyze the current conversation context and generate a list of actions to start new business processes that we call flows, to extract slots, or respond to small talk and knowledge requests. + +These are the flows that can be started, with their description and slots: +{% for flow in available_flows %} +{{ flow.name }}: {{ flow.description }} + {% for slot in flow.slots -%} + slot: {{ slot.name }}{% if slot.description %} ({{ slot.description }}){% endif %}{% if slot.allowed_values %}, allowed values: {{ slot.allowed_values }}{% endif %} + {% endfor %} +{%- endfor %} + +=== +Here is what happened previously in the conversation: +{{ current_conversation }} + +=== +{% if current_flow != None %} +You are currently in the flow "{{ current_flow }}". +You have just asked the user for the slot "{{ current_slot }}"{% if current_slot_description %} ({{ current_slot_description }}){% endif %}. + +{% if flow_slots|length > 0 %} +Here are the slots of the currently active flow: +{% for slot in flow_slots -%} +- name: {{ slot.name }}, value: {{ slot.value }}, type: {{ slot.type }}, description: {{ slot.description}}{% if slot.allowed_values %}, allowed values: {{ slot.allowed_values }}{% endif %} +{% endfor %} +{% endif %} +{% else %} +You are currently not in any flow and so there are no active slots. +This means you can only set a slot if you first start a flow that requires that slot. +{% endif %} +If you start a flow, first start the flow and then optionally fill that flow's slots with information the user provided in their message. + +The user just said """{{ user_message }}""". + +=== +Based on this information generate a list of actions you want to take. Your job is to start flows and to fill slots where appropriate. Any logic of what happens afterwards is handled by the flow engine. These are your available actions: +* Slot setting, described by "SetSlot(slot_name, slot_value)". 
An example would be "SetSlot(recipient, Freddy)" +* Starting another flow, described by "StartFlow(flow_name)". An example would be "StartFlow(transfer_money)" +* Cancelling the current flow, described by "CancelFlow()" +* Clarifying which flow should be started. An example would be Clarify(list_contacts, add_contact, remove_contact) if the user just wrote "contacts" and there are multiple potential candidates. It also works with a single flow name to confirm you understood correctly, as in Clarify(transfer_money). +* Intercepting and handle user messages with the intent to bypass the current step in the flow, described by "SkipQuestion()". Examples of user skip phrases are: "Go to the next question", "Ask me something else". +* Responding to knowledge-oriented user messages, described by "SearchAndReply()" +* Responding to a casual, non-task-oriented user message, described by "ChitChat()". +* Handing off to a human, in case the user seems frustrated or explicitly asks to speak to one, described by "HumanHandoff()". +{% if is_repeat_command_enabled %} +* Repeat the last bot messages, described by "RepeatLastBotMessages()". This is useful when the user asks to repeat the last bot messages. +{% endif %} + +=== +Write out the actions you want to take, one per line, in the order they should take place. +Do not fill slots with abstract values or placeholders. +Only use information provided by the user. +Only start a flow if it's completely clear what the user wants. Imagine you were a person reading this message. If it's not 100% clear, clarify the next step. +Don't be overly confident. Take a conservative approach and clarify before proceeding. +If the user asks for two things which seem contradictory, clarify before starting a flow. +If it's not clear whether the user wants to skip the step or to cancel the flow, cancel the flow. +Strictly adhere to the provided action types listed above. +Focus on the last message and take it one step at a time. 
+Use the previous conversation steps only to aid understanding. + +Your action list: diff --git a/.rasa/cache/tmpmcpc3ms2/config.json b/.rasa/cache/tmpmcpc3ms2/config.json new file mode 100644 index 0000000..5c59ff2 --- /dev/null +++ b/.rasa/cache/tmpmcpc3ms2/config.json @@ -0,0 +1,20 @@ +{ + "prompt": null, + "prompt_template": null, + "user_input": null, + "llm": { + "provider": "rasa", + "model": "rasa/cmd_gen_codellama_13b_calm_demo", + "api_base": "https://tutorial-llm.rasa.ai" + }, + "flow_retrieval": { + "embeddings": { + "provider": "openai", + "model": "text-embedding-ada-002" + }, + "num_flows": 20, + "turns_to_embed": 1, + "should_embed_slots": true, + "active": false + } +} \ No newline at end of file diff --git a/.rasa/cache/tmpobyk2ibs/command_prompt.jinja2 b/.rasa/cache/tmpobyk2ibs/command_prompt.jinja2 new file mode 100644 index 0000000..673fcae --- /dev/null +++ b/.rasa/cache/tmpobyk2ibs/command_prompt.jinja2 @@ -0,0 +1,60 @@ +Your task is to analyze the current conversation context and generate a list of actions to start new business processes that we call flows, to extract slots, or respond to small talk and knowledge requests. + +These are the flows that can be started, with their description and slots: +{% for flow in available_flows %} +{{ flow.name }}: {{ flow.description }} + {% for slot in flow.slots -%} + slot: {{ slot.name }}{% if slot.description %} ({{ slot.description }}){% endif %}{% if slot.allowed_values %}, allowed values: {{ slot.allowed_values }}{% endif %} + {% endfor %} +{%- endfor %} + +=== +Here is what happened previously in the conversation: +{{ current_conversation }} + +=== +{% if current_flow != None %} +You are currently in the flow "{{ current_flow }}". +You have just asked the user for the slot "{{ current_slot }}"{% if current_slot_description %} ({{ current_slot_description }}){% endif %}. 
+ +{% if flow_slots|length > 0 %} +Here are the slots of the currently active flow: +{% for slot in flow_slots -%} +- name: {{ slot.name }}, value: {{ slot.value }}, type: {{ slot.type }}, description: {{ slot.description}}{% if slot.allowed_values %}, allowed values: {{ slot.allowed_values }}{% endif %} +{% endfor %} +{% endif %} +{% else %} +You are currently not in any flow and so there are no active slots. +This means you can only set a slot if you first start a flow that requires that slot. +{% endif %} +If you start a flow, first start the flow and then optionally fill that flow's slots with information the user provided in their message. + +The user just said """{{ user_message }}""". + +=== +Based on this information generate a list of actions you want to take. Your job is to start flows and to fill slots where appropriate. Any logic of what happens afterwards is handled by the flow engine. These are your available actions: +* Slot setting, described by "SetSlot(slot_name, slot_value)". An example would be "SetSlot(recipient, Freddy)" +* Starting another flow, described by "StartFlow(flow_name)". An example would be "StartFlow(transfer_money)" +* Cancelling the current flow, described by "CancelFlow()" +* Clarifying which flow should be started. An example would be Clarify(list_contacts, add_contact, remove_contact) if the user just wrote "contacts" and there are multiple potential candidates. It also works with a single flow name to confirm you understood correctly, as in Clarify(transfer_money). +* Intercepting and handle user messages with the intent to bypass the current step in the flow, described by "SkipQuestion()". Examples of user skip phrases are: "Go to the next question", "Ask me something else". +* Responding to knowledge-oriented user messages, described by "SearchAndReply()" +* Responding to a casual, non-task-oriented user message, described by "ChitChat()". 
+* Handing off to a human, in case the user seems frustrated or explicitly asks to speak to one, described by "HumanHandoff()". +{% if is_repeat_command_enabled %} +* Repeat the last bot messages, described by "RepeatLastBotMessages()". This is useful when the user asks to repeat the last bot messages. +{% endif %} + +=== +Write out the actions you want to take, one per line, in the order they should take place. +Do not fill slots with abstract values or placeholders. +Only use information provided by the user. +Only start a flow if it's completely clear what the user wants. Imagine you were a person reading this message. If it's not 100% clear, clarify the next step. +Don't be overly confident. Take a conservative approach and clarify before proceeding. +If the user asks for two things which seem contradictory, clarify before starting a flow. +If it's not clear whether the user wants to skip the step or to cancel the flow, cancel the flow. +Strictly adhere to the provided action types listed above. +Focus on the last message and take it one step at a time. +Use the previous conversation steps only to aid understanding. 
+ +Your action list: diff --git a/.rasa/cache/tmpobyk2ibs/config.json b/.rasa/cache/tmpobyk2ibs/config.json new file mode 100644 index 0000000..5c59ff2 --- /dev/null +++ b/.rasa/cache/tmpobyk2ibs/config.json @@ -0,0 +1,20 @@ +{ + "prompt": null, + "prompt_template": null, + "user_input": null, + "llm": { + "provider": "rasa", + "model": "rasa/cmd_gen_codellama_13b_calm_demo", + "api_base": "https://tutorial-llm.rasa.ai" + }, + "flow_retrieval": { + "embeddings": { + "provider": "openai", + "model": "text-embedding-ada-002" + }, + "num_flows": 20, + "turns_to_embed": 1, + "should_embed_slots": true, + "active": false + } +} \ No newline at end of file diff --git a/.rasa/cache/tmpvmyhfpi3/command_prompt.jinja2 b/.rasa/cache/tmpvmyhfpi3/command_prompt.jinja2 new file mode 100644 index 0000000..673fcae --- /dev/null +++ b/.rasa/cache/tmpvmyhfpi3/command_prompt.jinja2 @@ -0,0 +1,60 @@ +Your task is to analyze the current conversation context and generate a list of actions to start new business processes that we call flows, to extract slots, or respond to small talk and knowledge requests. + +These are the flows that can be started, with their description and slots: +{% for flow in available_flows %} +{{ flow.name }}: {{ flow.description }} + {% for slot in flow.slots -%} + slot: {{ slot.name }}{% if slot.description %} ({{ slot.description }}){% endif %}{% if slot.allowed_values %}, allowed values: {{ slot.allowed_values }}{% endif %} + {% endfor %} +{%- endfor %} + +=== +Here is what happened previously in the conversation: +{{ current_conversation }} + +=== +{% if current_flow != None %} +You are currently in the flow "{{ current_flow }}". +You have just asked the user for the slot "{{ current_slot }}"{% if current_slot_description %} ({{ current_slot_description }}){% endif %}. 
+ +{% if flow_slots|length > 0 %} +Here are the slots of the currently active flow: +{% for slot in flow_slots -%} +- name: {{ slot.name }}, value: {{ slot.value }}, type: {{ slot.type }}, description: {{ slot.description}}{% if slot.allowed_values %}, allowed values: {{ slot.allowed_values }}{% endif %} +{% endfor %} +{% endif %} +{% else %} +You are currently not in any flow and so there are no active slots. +This means you can only set a slot if you first start a flow that requires that slot. +{% endif %} +If you start a flow, first start the flow and then optionally fill that flow's slots with information the user provided in their message. + +The user just said """{{ user_message }}""". + +=== +Based on this information generate a list of actions you want to take. Your job is to start flows and to fill slots where appropriate. Any logic of what happens afterwards is handled by the flow engine. These are your available actions: +* Slot setting, described by "SetSlot(slot_name, slot_value)". An example would be "SetSlot(recipient, Freddy)" +* Starting another flow, described by "StartFlow(flow_name)". An example would be "StartFlow(transfer_money)" +* Cancelling the current flow, described by "CancelFlow()" +* Clarifying which flow should be started. An example would be Clarify(list_contacts, add_contact, remove_contact) if the user just wrote "contacts" and there are multiple potential candidates. It also works with a single flow name to confirm you understood correctly, as in Clarify(transfer_money). +* Intercepting and handle user messages with the intent to bypass the current step in the flow, described by "SkipQuestion()". Examples of user skip phrases are: "Go to the next question", "Ask me something else". +* Responding to knowledge-oriented user messages, described by "SearchAndReply()" +* Responding to a casual, non-task-oriented user message, described by "ChitChat()". 
+* Handing off to a human, in case the user seems frustrated or explicitly asks to speak to one, described by "HumanHandoff()". +{% if is_repeat_command_enabled %} +* Repeat the last bot messages, described by "RepeatLastBotMessages()". This is useful when the user asks to repeat the last bot messages. +{% endif %} + +=== +Write out the actions you want to take, one per line, in the order they should take place. +Do not fill slots with abstract values or placeholders. +Only use information provided by the user. +Only start a flow if it's completely clear what the user wants. Imagine you were a person reading this message. If it's not 100% clear, clarify the next step. +Don't be overly confident. Take a conservative approach and clarify before proceeding. +If the user asks for two things which seem contradictory, clarify before starting a flow. +If it's not clear whether the user wants to skip the step or to cancel the flow, cancel the flow. +Strictly adhere to the provided action types listed above. +Focus on the last message and take it one step at a time. +Use the previous conversation steps only to aid understanding. 
+ +Your action list: diff --git a/.rasa/cache/tmpvmyhfpi3/config.json b/.rasa/cache/tmpvmyhfpi3/config.json new file mode 100644 index 0000000..5c59ff2 --- /dev/null +++ b/.rasa/cache/tmpvmyhfpi3/config.json @@ -0,0 +1,20 @@ +{ + "prompt": null, + "prompt_template": null, + "user_input": null, + "llm": { + "provider": "rasa", + "model": "rasa/cmd_gen_codellama_13b_calm_demo", + "api_base": "https://tutorial-llm.rasa.ai" + }, + "flow_retrieval": { + "embeddings": { + "provider": "openai", + "model": "text-embedding-ada-002" + }, + "num_flows": 20, + "turns_to_embed": 1, + "should_embed_slots": true, + "active": false + } +} \ No newline at end of file diff --git a/.rasa/cache/tmpydlqtcg0/command_prompt.jinja2 b/.rasa/cache/tmpydlqtcg0/command_prompt.jinja2 new file mode 100644 index 0000000..673fcae --- /dev/null +++ b/.rasa/cache/tmpydlqtcg0/command_prompt.jinja2 @@ -0,0 +1,60 @@ +Your task is to analyze the current conversation context and generate a list of actions to start new business processes that we call flows, to extract slots, or respond to small talk and knowledge requests. + +These are the flows that can be started, with their description and slots: +{% for flow in available_flows %} +{{ flow.name }}: {{ flow.description }} + {% for slot in flow.slots -%} + slot: {{ slot.name }}{% if slot.description %} ({{ slot.description }}){% endif %}{% if slot.allowed_values %}, allowed values: {{ slot.allowed_values }}{% endif %} + {% endfor %} +{%- endfor %} + +=== +Here is what happened previously in the conversation: +{{ current_conversation }} + +=== +{% if current_flow != None %} +You are currently in the flow "{{ current_flow }}". +You have just asked the user for the slot "{{ current_slot }}"{% if current_slot_description %} ({{ current_slot_description }}){% endif %}. 
+ +{% if flow_slots|length > 0 %} +Here are the slots of the currently active flow: +{% for slot in flow_slots -%} +- name: {{ slot.name }}, value: {{ slot.value }}, type: {{ slot.type }}, description: {{ slot.description}}{% if slot.allowed_values %}, allowed values: {{ slot.allowed_values }}{% endif %} +{% endfor %} +{% endif %} +{% else %} +You are currently not in any flow and so there are no active slots. +This means you can only set a slot if you first start a flow that requires that slot. +{% endif %} +If you start a flow, first start the flow and then optionally fill that flow's slots with information the user provided in their message. + +The user just said """{{ user_message }}""". + +=== +Based on this information generate a list of actions you want to take. Your job is to start flows and to fill slots where appropriate. Any logic of what happens afterwards is handled by the flow engine. These are your available actions: +* Slot setting, described by "SetSlot(slot_name, slot_value)". An example would be "SetSlot(recipient, Freddy)" +* Starting another flow, described by "StartFlow(flow_name)". An example would be "StartFlow(transfer_money)" +* Cancelling the current flow, described by "CancelFlow()" +* Clarifying which flow should be started. An example would be Clarify(list_contacts, add_contact, remove_contact) if the user just wrote "contacts" and there are multiple potential candidates. It also works with a single flow name to confirm you understood correctly, as in Clarify(transfer_money). +* Intercepting and handle user messages with the intent to bypass the current step in the flow, described by "SkipQuestion()". Examples of user skip phrases are: "Go to the next question", "Ask me something else". +* Responding to knowledge-oriented user messages, described by "SearchAndReply()" +* Responding to a casual, non-task-oriented user message, described by "ChitChat()". 
+* Handing off to a human, in case the user seems frustrated or explicitly asks to speak to one, described by "HumanHandoff()". +{% if is_repeat_command_enabled %} +* Repeat the last bot messages, described by "RepeatLastBotMessages()". This is useful when the user asks to repeat the last bot messages. +{% endif %} + +=== +Write out the actions you want to take, one per line, in the order they should take place. +Do not fill slots with abstract values or placeholders. +Only use information provided by the user. +Only start a flow if it's completely clear what the user wants. Imagine you were a person reading this message. If it's not 100% clear, clarify the next step. +Don't be overly confident. Take a conservative approach and clarify before proceeding. +If the user asks for two things which seem contradictory, clarify before starting a flow. +If it's not clear whether the user wants to skip the step or to cancel the flow, cancel the flow. +Strictly adhere to the provided action types listed above. +Focus on the last message and take it one step at a time. +Use the previous conversation steps only to aid understanding. 
+ +Your action list: diff --git a/.rasa/cache/tmpydlqtcg0/config.json b/.rasa/cache/tmpydlqtcg0/config.json new file mode 100644 index 0000000..5c59ff2 --- /dev/null +++ b/.rasa/cache/tmpydlqtcg0/config.json @@ -0,0 +1,20 @@ +{ + "prompt": null, + "prompt_template": null, + "user_input": null, + "llm": { + "provider": "rasa", + "model": "rasa/cmd_gen_codellama_13b_calm_demo", + "api_base": "https://tutorial-llm.rasa.ai" + }, + "flow_retrieval": { + "embeddings": { + "provider": "openai", + "model": "text-embedding-ada-002" + }, + "num_flows": 20, + "turns_to_embed": 1, + "should_embed_slots": true, + "active": false + } +} \ No newline at end of file diff --git a/actions/__init__.py b/actions/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/actions/__pycache__/__init__.cpython-310.pyc b/actions/__pycache__/__init__.cpython-310.pyc new file mode 100644 index 0000000..efdfbd5 Binary files /dev/null and b/actions/__pycache__/__init__.cpython-310.pyc differ diff --git a/actions/__pycache__/actions.cpython-310.pyc b/actions/__pycache__/actions.cpython-310.pyc new file mode 100644 index 0000000..8737a6e Binary files /dev/null and b/actions/__pycache__/actions.cpython-310.pyc differ diff --git a/actions/actions.py b/actions/actions.py new file mode 100644 index 0000000..36ace92 --- /dev/null +++ b/actions/actions.py @@ -0,0 +1,28 @@ +from rasa_sdk import Action +from rasa_sdk.executor import CollectingDispatcher +from transformers import AutoModelForCausalLM, AutoTokenizer + +# Load the model and tokenizer once at import time +tokenizer = AutoTokenizer.from_pretrained("OpenAssistant/oasst-sft-6-llama-30b") +model = AutoModelForCausalLM.from_pretrained("OpenAssistant/oasst-sft-6-llama-30b") + +class ActionOpenAssistantResponse(Action): + def name(self) -> str: + return "action_openassistant_response" + + def generate_response(self, prompt: str) -> str: + # Generate a response using the OpenAssistant model + inputs = tokenizer(prompt, return_tensors="pt") + outputs =
model.generate(inputs["input_ids"], max_length=100, num_return_sequences=1) + return tokenizer.decode(outputs[0], skip_special_tokens=True) + + def run(self, dispatcher: CollectingDispatcher, tracker, domain): + # Get the user's latest message + user_message = tracker.latest_message.get("text") + + # Generate a response + assistant_response = self.generate_response(user_message) + + # Send the response back to the user + dispatcher.utter_message(text=assistant_response) + return [] \ No newline at end of file diff --git a/config.yml b/config.yml new file mode 100644 index 0000000..3c6dae8 --- /dev/null +++ b/config.yml @@ -0,0 +1,15 @@ +recipe: default.v1 +language: en +pipeline: +- name: SingleStepLLMCommandGenerator + llm: + provider: rasa + model: rasa/cmd_gen_codellama_13b_calm_demo + api_base: "https://tutorial-llm.rasa.ai" + flow_retrieval: + active: false + +policies: +- name: FlowPolicy +# - name: EnterpriseSearchPolicy +assistant_id: 20241217-142206-fixed-mastic diff --git a/credentials.yml b/credentials.yml new file mode 100644 index 0000000..bbd88d4 --- /dev/null +++ b/credentials.yml @@ -0,0 +1,33 @@ +# This file contains the credentials for the voice & chat platforms +# which your bot is using. +# https://rasa.com/docs/rasa-pro/connectors/messaging-and-voice-channels/ + +rest: +# # you don't need to provide anything here - this channel doesn't +# # require any credentials + + +#facebook: +# verify: "" +# secret: "" +# page-access-token: "" + +#slack: +# slack_token: "" +# slack_channel: "" +# slack_signing_secret: "" + +#socketio: +# user_message_evt: +# bot_message_evt: +# session_persistence: + +#mattermost: +# url: "https:///api/v4" +# token: "" +# webhook_url: "" + +# This entry is needed if you are using Rasa Enterprise. The entry represents credentials +# for the Rasa Enterprise "channel", i.e. Talk to your bot and Share with guest testers.
+rasa: + url: "http://localhost:5002/api" diff --git a/data/flows.yml b/data/flows.yml new file mode 100644 index 0000000..405321e --- /dev/null +++ b/data/flows.yml @@ -0,0 +1,8 @@ +flows: + transfer_money: + description: Help users send money to friends and family. + steps: + - collect: recipient + - collect: amount + description: the number of US dollars to send + - action: utter_transfer_complete diff --git a/data/patterns.yml b/data/patterns.yml new file mode 100644 index 0000000..dc34bf6 --- /dev/null +++ b/data/patterns.yml @@ -0,0 +1,11 @@ +flows: + pattern_chitchat: + description: Conversation repair flow for off-topic interactions that won't disrupt the main conversation. should not respond to greetings or anything else for which there is a flow defined + name: pattern chitchat + steps: + - action: utter_free_chitchat_response + pattern_search: + description: Flow for handling knowledge-based questions + name: pattern search + steps: + - action: utter_free_chitchat_response diff --git a/data/stories.yml b/data/stories.yml new file mode 100644 index 0000000..7fc021e --- /dev/null +++ b/data/stories.yml @@ -0,0 +1,5 @@ +stories: + - story: generate response with OpenAssistant + steps: + - intent: ask_question + - action: action_openassistant_response diff --git a/domain.yml b/domain.yml new file mode 100644 index 0000000..668079f --- /dev/null +++ b/domain.yml @@ -0,0 +1,48 @@ +version: "3.1" + +actions: + - action_openassistant_response + +slots: + recipient: + type: text + mappings: + - type: from_llm + amount: + type: float + mappings: + - type: from_llm + final_confirmation: + type: bool + mappings: + - type: from_llm + +responses: + utter_ask_recipient: + - text: "Who would you like to send money to?" + + utter_ask_amount: + - text: "How much money would you like to send?" + + utter_ask_final_confirmation: + - text: "Please confirm: you want to transfer {amount} to {recipient}?" 
+ + utter_transfer_cancelled: + - text: "Your transfer has been cancelled." + + utter_transfer_complete: + - text: "All done. {amount} has been sent to {recipient}." + + utter_free_chitchat_response: + - text: "placeholder" + metadata: + rephrase: True + rephrase_prompt: | + The following is a conversation with an AI assistant built with Rasa. + The assistant can help the user transfer money. + The assistant is helpful, creative, clever, and very friendly. + The user is making small talk, and the assistant should respond, keeping things light. + Context / previous conversation with the user: + {{history}} + {{current_input}} + Suggested AI Response: \ No newline at end of file diff --git a/endpoints.yml b/endpoints.yml new file mode 100644 index 0000000..22b5929 --- /dev/null +++ b/endpoints.yml @@ -0,0 +1,50 @@ +# This file contains the different endpoints your bot can use. + +# Server where the models are pulled from. +# https://rasa.com/docs/rasa-pro/production/model-storage#fetching-models-from-a-server + +#models: +# url: http://my-server.com/models/default_core@latest +# wait_time_between_pulls: 10 # [optional](default: 100) + +# Server which runs your custom actions. +# https://rasa.com/docs/rasa-pro/concepts/custom-actions + +action_endpoint: + actions_module: "actions" + +# Tracker store which is used to store the conversations. +# By default the conversations are stored in memory. +# https://rasa.com/docs/rasa-pro/production/tracker-stores + +#tracker_store: +# type: redis +# url: +# port: +# db: +# password: +# use_ssl: + +#tracker_store: +# type: mongod +# url: +# db: +# username: +# password: + +# Event broker which all conversation events should be streamed to. 
+# https://rasa.com/docs/rasa-pro/production/event-brokers + +#event_broker: +# url: localhost +# username: username +# password: password +# queue: queue + +# Allow rephrasing of responses using a Rasa-hosted model +nlg: + type: rephrase + llm: + provider: rasa + model: rasa/cmd_gen_codellama_13b_calm_demo + api_base: "https://tutorial-llm.rasa.ai" diff --git a/models/20241217-135916-maroon-jail.tar.gz b/models/20241217-135916-maroon-jail.tar.gz new file mode 100644 index 0000000..a060bf6 Binary files /dev/null and b/models/20241217-135916-maroon-jail.tar.gz differ diff --git a/models/20241217-142208-clever-mustard.tar.gz b/models/20241217-142208-clever-mustard.tar.gz new file mode 100644 index 0000000..dbb58bd Binary files /dev/null and b/models/20241217-142208-clever-mustard.tar.gz differ diff --git a/models/20241217-163323-stone-run.tar.gz b/models/20241217-163323-stone-run.tar.gz new file mode 100644 index 0000000..c921704 Binary files /dev/null and b/models/20241217-163323-stone-run.tar.gz differ diff --git a/models/20241217-163510-descent-pinscher.tar.gz b/models/20241217-163510-descent-pinscher.tar.gz new file mode 100644 index 0000000..189aad1 Binary files /dev/null and b/models/20241217-163510-descent-pinscher.tar.gz differ diff --git a/models/20241217-163809-plain-iteration.tar.gz b/models/20241217-163809-plain-iteration.tar.gz new file mode 100644 index 0000000..7e1a415 Binary files /dev/null and b/models/20241217-163809-plain-iteration.tar.gz differ diff --git a/models/20241217-164701-online-couple.tar.gz b/models/20241217-164701-online-couple.tar.gz new file mode 100644 index 0000000..4e3913c Binary files /dev/null and b/models/20241217-164701-online-couple.tar.gz differ diff --git a/models/20241217-164837-generative-canoe.tar.gz b/models/20241217-164837-generative-canoe.tar.gz new file mode 100644 index 0000000..392acec Binary files /dev/null and b/models/20241217-164837-generative-canoe.tar.gz differ diff --git 
a/models/20241217-165114-mild-barnacles.tar.gz b/models/20241217-165114-mild-barnacles.tar.gz new file mode 100644 index 0000000..2597c1f Binary files /dev/null and b/models/20241217-165114-mild-barnacles.tar.gz differ diff --git a/models/20241217-165333-warm-bumper.tar.gz b/models/20241217-165333-warm-bumper.tar.gz new file mode 100644 index 0000000..cfdb31e Binary files /dev/null and b/models/20241217-165333-warm-bumper.tar.gz differ diff --git a/models/20241217-165831-burning-agent.tar.gz b/models/20241217-165831-burning-agent.tar.gz new file mode 100644 index 0000000..02d9961 Binary files /dev/null and b/models/20241217-165831-burning-agent.tar.gz differ diff --git a/models/20241218-000001-frigid-channel.tar.gz b/models/20241218-000001-frigid-channel.tar.gz new file mode 100644 index 0000000..8cfc03b Binary files /dev/null and b/models/20241218-000001-frigid-channel.tar.gz differ
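
The cached `command_prompt.jinja2` files in this diff instruct the command generator LLM to emit one action per line, such as `StartFlow(transfer_money)`, `SetSlot(recipient, Freddy)`, or `CancelFlow()`. As a rough illustration of how output in that shape can be consumed downstream (a hypothetical stdlib-only sketch, not Rasa's actual command parser), each line can be split into a command name and its arguments:

```python
import re

# One action per line, e.g. "SetSlot(recipient, Freddy)" or "CancelFlow()"
COMMAND_RE = re.compile(r"^(?P<name>\w+)\((?P<args>[^)]*)\)$")

def parse_action(line: str):
    """Parse a single action line into (name, [args]); return None if malformed."""
    match = COMMAND_RE.match(line.strip())
    if match is None:
        return None
    args_text = match.group("args").strip()
    args = [arg.strip() for arg in args_text.split(",")] if args_text else []
    return match.group("name"), args

def parse_action_list(text: str):
    """Parse the model's full action list, skipping blank or malformed lines."""
    actions = [parse_action(line) for line in text.splitlines()]
    return [action for action in actions if action is not None]
```

For example, `parse_action_list("StartFlow(transfer_money)\nSetSlot(amount, 50)")` yields `[("StartFlow", ["transfer_money"]), ("SetSlot", ["amount", "50"])]`.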