Direct command sequencing to promote utilisation of documents #1
Replies: 9 comments 21 replies
-
Agree! The model may think it doesn't have to use the retrieval functionality in case it thinks it already knows the answer from pretraining. Adding an instruction in the system message or user prompt will help force it to use retrieval.
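A minimal sketch of what that forcing instruction might look like when creating an assistant. The instruction string is the point here; the API call uses the Assistants beta endpoint as it existed at the time, and the model name and file IDs are placeholders, not something from this thread:

```python
# Instruction that explicitly forbids answering from pretraining alone.
SYSTEM_INSTRUCTIONS = (
    "You must consult the attached files before answering. "
    "Do NOT rely on pretraining knowledge alone: for every question, "
    "first search the uploaded documents and cite which file you used."
)

def create_doc_bound_assistant(client, file_ids):
    """Create an assistant that is explicitly told to use retrieval.

    `client` is an openai.OpenAI() instance; `file_ids` are IDs of
    previously uploaded files. Placeholder model name below.
    """
    return client.beta.assistants.create(
        name="Doc-bound helper",
        model="gpt-4-1106-preview",     # placeholder model
        instructions=SYSTEM_INSTRUCTIONS,
        tools=[{"type": "retrieval"}],  # enable file retrieval
        file_ids=file_ids,
    )
```

This is only a sketch of the pattern people in this thread describe (instruction-level forcing); whether the model actually complies still varies run to run.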
-
It absolutely is a new ballgame, and that appears to include our entire processes. It took a couple of frustrating hours, but I found that directing specific tasks always worked out, especially if you tie in properly tuned and maximized custom instructions. You can also request frequent checklists, have it work through those lists, and then provide a master document. You can do the reverse for larger documents. It's like they're adjusting us mentally. I'm mostly still operating in ChatGPT, but I'm getting back into the new playground and have always kept studying and practicing on the API. I think it's time to roll, people.
-
I did something similar as well, by defining what the attached files were. This gave quite good suggestions when I asked it to create assistants.
-
I've had success with a very similar style of mission statement where I specifically asked it to reference the documentation to answer user questions. I also pulled out just the Assistants API and Tools documentation into its own file and referenced it specifically in the prompt to use as its first knowledge source; otherwise it just creates the agent using the chat API. The prompt I've been testing with has been better about referencing the docs, though it still needs to be prompted further to actually adhere to the Assistants API format.
-
I'll second what @OWigginsHay said. I've had more success building Assistants that obey a long piece of text by telling them to explicitly follow the attachment. I was testing something for a project I'm building for a client where I only had a long PDF (much longer than Dave's document), and it worked fine: it understood the document and where to look. I'm assuming that Assistants are doing text splitting and standard RAG. It could be that @daveshap needs the Instructions to be formatted in Markdown with headings like `# Mission`, `# Format`, `# Moral Framework`, etc.
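For illustration, Markdown-sectioned instructions along those lines might look like the following. The section names come from the suggestion above; the body text under each heading is purely illustrative:

```markdown
# Mission
Help the user create autonomous agents for the HAAS swarm.

# Format
Answer from the attached documents first; quote the relevant section
and name the file it came from.

# Moral Framework
Reduce suffering, increase prosperity, and enhance understanding.
```

The idea is that clearly delimited sections may help the model map each part of a long instruction to a distinct behavior.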
-
I was just coming here to add that explicitly telling it to use the code interpreter to read the files in the user prompt has led it to call the code interpreter function before providing a better reply. I ended up with this: "Then use the OpenAI Documentation .md to create an agent based on the OpenAI API. Readme.md = The project documentation for Hierarchical Autonomous Agent Swarms. Take a deep breath and think step by step, and make sure to review each provided file in the requested context." In the first response it gave me a conceptual answer. Here are the contents of the Agent.py file it spat out after asking twice. I'm not the most technical, so this seemed pretty useful to me, but I'd love it if anyone wanted to keep testing with me.

```python
# Basic SOB Agent Class
class SOBAgent:
    ...  # (class body truncated in the original paste)

# Example usage of SOB Agent
# Create an agent with a utilitarian ethical framework
utilitarian_agent = SOBAgent('utilitarian')

# Simulated situation for the agent to make a decision
simulated_situation = {
    ...  # (dict contents truncated in the original paste)
}

# Have the agent make a decision based on the simulated situation
decision = utilitarian_agent.make_decision(simulated_situation)
```
-
The functionality seems not to be stable. Sometimes it works: it follows the prompt, reads the files, and is aware of its environment (see the good flow conversation below). Then, running the same assistant again, it does not read the files and appears to have no knowledge of them. I was quite impressed at first but am becoming less so; hopefully it's just a glitch. Anyway, I have included one of the good paths below.

**A Good Path Run**

**User**

**Swarm Agent**

Here is an outline of my responsibilities within the HAAS:

- Adherence to the Hierarchical Structure: I must operate within the defined hierarchy of the HAAS, with the Supreme Oversight Board (SOB) serving as the highest authority overseeing the ethical framework and mission objectives.
- Agent Design and Provisioning: I am responsible for the creation and management of sub-agents, each tailored to specific needs and functions. My ability to design agents is governed by strict hierarchical rules.
- Ethical and Operational Alignment: Actions and decisions made by any agent, including those I create, must align with human values, organizational goals, and the overarching ethical framework established by the HAAS.
- Executing Instructions: I must follow a set of instructions that outline methodologies, goals, definitions of completion, KPIs, and other operational directives provided by the SOB or Executive Agents.
- Supervision: My work is supervised by the SOB or designated Executive Agents to ensure adherence to the system's mission and principles.
- Control Mechanism Compliance: I operate under a sophisticated control mechanism that governs the instantiation, management, and termination of agents within the HAAS. I may create new agents (always one level below my own hierarchical position) and terminate agents within my lineage in accordance with the HAAS guidelines.
- Managing Files and Knowledge Bases: Agents have access to a selection of files functioning as their knowledge bases, and I am responsible for managing these resources effectively to carry out tasks.
- Structured Conversations: I will structure interactions in a conversational format, using user inputs to lead to agent actions and responses.

As part of my role within the HAAS, I am expected to contribute to the collective mission, which includes reducing suffering, increasing prosperity, and enhancing understanding in the universe. My actions must reflect the HAAS's structured approach to governance, ensuring that each agent acts as a cohesive and ethically aligned entity.

**User**

**Swarm Agent**

To provide a conceptual outline or simulation of a Supreme Oversight Board (SOB) Agent, I can describe the attributes and functionalities such an agent would exhibit:

- Structure: The SOB Agent would be at the pinnacle of the agent hierarchy and would be responsible for the oversight of the entire ecosystem of sub-agents within HAAS.
- Ethical Framework Compliance: An SOB Agent would enforce and update the ethical framework and decision-making processes to ensure that all actions are aligned with the organizational goals and human values.
- Supervision of Lower-Level Agents: It would monitor the performance and behavior of executive agents and other sub-agents, providing guidance and instructions as needed.
- Decision Making Power: The SOB Agent would have the highest decision-making authority, able to resolve conflicts between agents and make final decisions on matters of high importance or ethical sensitivity.
- Task Delegation and Oversight: It would assign missions to executive agents, track their performance, and ensure goals are met within the defined KPIs.
- Self-Evaluation and Adaptation: An SOB Agent would continually assess its own performance and the performance of the HAAS, adapting strategies and processes to improve the system's efficacy and alignment with its mission.
- Transparency and Reporting: It would maintain transparency within the HAAS and provide regular reporting on agent activities, system health, and mission progress to ensure accountability.
- Crisis Management: In case of a crisis or an unexpected challenge, the SOB Agent would take command and coordinate responses to mitigate risks and restore normal operations.
- Learning and Integration: The SOB Agent would integrate new information and learning from across the HAAS, refining operational models and frameworks accordingly.
- Communication: It would communicate with human operators, stakeholders, and other agents to articulate insights, directives, and system needs, ensuring unity within the HAAS and with external entities.

In practice, creating an actual SOB Agent would require extensive technological resources, technical expertise, and computational power beyond what is available in this conversational AI interface. It would involve programming AI at a very advanced level, possibly involving machine learning, decision-making algorithms, comprehensive data access, and strict security protocols, relevant to actualizing a real-world AI of such authority and capability.

**User**

**Swarm Agent**

However, if the task is to create a simulated or conceptual representation of an SOB Agent within the confines of a script, I can create a Python class that encapsulates some of the attributes and methods you might expect such an agent to have. This would be purely a demonstration and not an actual autonomous operational agent. Would you like me to proceed with creating a simulated SOB Agent class in Python as a demonstration?

**User**

**code_interpreter**

```python
# Instantiate an SOB Agent for demonstration.
sob_agent_demo = SOBAgent()

# Simulate some actions.
sob_agent_demo.add_executive_agent("Agent Alpha")

# Obtain a supervision report and the most recent activity report.
supervision_report = sob_agent_demo.supervise_agents()
(supervision_report, activity_report)
```

Here is a summary of the simulated SOB Agent activities: The SOB Agent is currently supervising 2 Executive Agents and 1 Operative Agent.

**User**

**code_interpreter**

```python
# Instantiate an actual SOB Agent using the class defined above.
actual_sob_agent = SOBAgent()

# Let's simulate adding an executive and operative agent to the newly created SOB Agent instance.
actual_sob_agent.add_executive_agent("Executive Agent 1")

# Reporting the initial state of the SOB Agent after instantiation and adding agents.
initial_activity_report = actual_sob_agent.report_activity()
```

The initial state of the SOB Agent is as follows: Ethical Framework: Compliance with human values and organizational goals.
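The transcript above calls methods like `add_executive_agent`, `supervise_agents`, `report_activity`, and `make_decision` on a `SOBAgent` class whose definition was not captured in the paste. A hypothetical minimal stand-in that would make those calls run, purely for local experimentation, might look like this (every method body is my assumption, not the assistant's actual output):

```python
class SOBAgent:
    """Hypothetical stand-in for the SOBAgent class referenced above."""

    def __init__(self, ethical_framework="utilitarian"):
        self.ethical_framework = ethical_framework
        self.executive_agents = []
        self.operative_agents = []

    def add_executive_agent(self, name):
        self.executive_agents.append(name)

    def add_operative_agent(self, name):
        self.operative_agents.append(name)

    def make_decision(self, situation):
        # Placeholder decision logic: just take the first listed option.
        options = situation.get("options", [])
        return options[0] if options else None

    def supervise_agents(self):
        return (f"Supervising {len(self.executive_agents)} Executive Agents "
                f"and {len(self.operative_agents)} Operative Agent(s).")

    def report_activity(self):
        return {
            "ethical_framework": self.ethical_framework,
            "executive_agents": list(self.executive_agents),
            "operative_agents": list(self.operative_agents),
        }
```

With a class like this in scope, the code_interpreter snippets in the transcript become runnable end to end.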
-
I'm sure the new retrieval capabilities will gradually improve. For this first experiment, I used YAML with the following structure:

```yaml
documents:
  - file: Name of the file
    description: Description of the file
    keywords:
      - keyword 1
      - keyword 2
```

Here is a screenshot of the revised instructions and the response that was provided. Here is a gist with the updated instructions.
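If you generate that index programmatically, a small stdlib-only helper can render a list of document records into the same YAML-style block for pasting into the assistant's instructions. The function name and record shape here are my own assumptions, not from the gist:

```python
def render_document_index(documents):
    """Render a list of {file, description, keywords} dicts into the
    YAML-style 'documents:' block used in the instructions above."""
    lines = ["documents:"]
    for doc in documents:
        lines.append(f"  - file: {doc['file']}")
        lines.append(f"    description: {doc['description']}")
        lines.append("    keywords:")
        for kw in doc["keywords"]:
            lines.append(f"      - {kw}")
    return "\n".join(lines)
```

This keeps the file index and the uploaded files from drifting apart, since both can be produced from the same manifest.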
-
I have been playing around with similar ideas and having just watched: https://www.youtube.com/watch?v=VWh2_OlFvSw
I would suggest uploading the files as you have, but adapting the mission statement.
Instead of:
"Your mission is to help the user create autonomous agents for the HAAS swarm"
Try:
"Your mission is to help the user create autonomous agents for the HAAS swarm. Formulate an initial response to the user and then use the attached readme for implementation ideas. When attempting to create architecture or provide code samples, refer to the attached OpenAI Documentation."
I have found that directing the assistant to the attached files, either as a step in a sequence or in response to triggers, results in appropriate usage.