Initial commit of code for multi-agent system
haraldhob committed Oct 9, 2024
1 parent 4c81c3c commit 2baba7d
Showing 13 changed files with 7,672 additions and 1 deletion.
440 changes: 440 additions & 0 deletions .gitignore

Large diffs are not rendered by default.

55 changes: 54 additions & 1 deletion README.md
@@ -1 +1,54 @@
# harald-thesis-to-be-replaced-by-cool-name
# OffensiveSolidityAgents Crew

Welcome to the OffensiveSolidityAgents Crew project, powered by [crewAI](https://crewai.com). This template is designed to help you set up a multi-agent AI system with ease, leveraging the powerful and flexible framework provided by crewAI. Our goal is to enable your agents to collaborate effectively on complex tasks, maximizing their collective intelligence and capabilities.

## Installation

Ensure you have Python >=3.10 <=3.13 installed on your system. This project uses [Poetry](https://python-poetry.org/) for dependency management and package handling, offering a seamless setup and execution experience.

First, if you haven't already, install Poetry:

```bash
pip install poetry
```

Next, navigate to your project directory, then lock and install the dependencies using the crewAI CLI:
```bash
crewai install
```
### Customizing

**Add your `OPENAI_API_KEY` into the `.env` file**

- Modify `src/offensive_solidity_agents/config/agents.yaml` to define your agents
- Modify `src/offensive_solidity_agents/config/tasks.yaml` to define your tasks
- Modify `src/offensive_solidity_agents/crew.py` to add your own logic, tools and specific args
- Modify `src/offensive_solidity_agents/main.py` to add custom inputs for your agents and tasks (see the sketch below)
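
For instance, a minimal `main.py` could read the bundled `contract.sol` and pass it, together with the Solidity version, as the `{code}` and `{solidity_version}` inputs referenced in `config/tasks.yaml`. The sketch below is only an illustration: the class name `OffensiveSolidityAgentsCrew` and the exact paths are assumptions, since `crew.py` is not reproduced in this commit view.

```python
# Hypothetical sketch of src/offensive_solidity_agents/main.py (names are assumptions).
from pathlib import Path

from offensive_solidity_agents.crew import OffensiveSolidityAgentsCrew


def run():
    # Feed the sample contract and its Solidity version to the crew as task inputs.
    contract_path = Path(__file__).parent / "contract.sol"
    inputs = {
        "code": contract_path.read_text(),
        "solidity_version": "0.4.25",  # matches the pragma in contract.sol
    }
    OffensiveSolidityAgentsCrew().crew().kickoff(inputs=inputs)
```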

## Running the Project

To kickstart your crew of AI agents and begin task execution, run this from the root folder of your project:

```bash
$ crewai run
```

This command initializes the offensive-solidity-agents Crew, assembling the agents and assigning them tasks as defined in your configuration.

This example, unmodified, will create a `report.md` file in the root folder with the output of research on LLMs.

## Understanding Your Crew

The offensive-solidity-agents Crew is composed of multiple AI agents, each with unique roles, goals, and tools. These agents collaborate on a series of tasks, defined in `config/tasks.yaml`, leveraging their collective skills to achieve complex objectives. The `config/agents.yaml` file outlines the capabilities and configurations of each agent in your crew.
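
In a crewAI project of this shape, `crew.py` typically wires those YAML entries to `Agent` and `Task` objects through the `@CrewBase` project decorators. The sketch below shows one agent/task pair purely as an illustration; since `crew.py` is not reproduced in this commit view, class and method names are assumptions.

```python
# Hypothetical sketch of src/offensive_solidity_agents/crew.py (illustration only).
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task


@CrewBase
class OffensiveSolidityAgentsCrew:
    """Crew assembled from config/agents.yaml and config/tasks.yaml."""

    agents_config = "config/agents.yaml"
    tasks_config = "config/tasks.yaml"

    @agent
    def smart_contract_auditor(self) -> Agent:
        # role/goal/backstory come from the matching key in agents.yaml
        return Agent(config=self.agents_config["smart_contract_auditor"], verbose=True)

    @task
    def smart_contract_audit_task(self) -> Task:
        # description/expected_output/agent come from tasks.yaml
        return Task(config=self.tasks_config["smart_contract_audit_task"])

    # ...remaining agents and tasks follow the same pattern...

    @crew
    def crew(self) -> Crew:
        # Gathers every @agent and @task above and runs the tasks sequentially.
        return Crew(agents=self.agents, tasks=self.tasks, process=Process.sequential, verbose=True)
```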

## Support

For support, questions, or feedback regarding the OffensiveSolidityAgents Crew or crewAI:
- Visit our [documentation](https://docs.crewai.com)
- Reach out to us through our [GitHub repository](https://github.com/joaomdmoura/crewai)
- [Join our Discord](https://discord.com/invite/X4JWnZnxPb)
- [Chat with our docs](https://chatg.pt/DWjSBZn)

Let's create wonders together with the power and simplicity of crewAI.
6,620 changes: 6,620 additions & 0 deletions poetry.lock

Large diffs are not rendered by default.

21 changes: 21 additions & 0 deletions pyproject.toml
@@ -0,0 +1,21 @@
[tool.poetry]
name = "offensive_solidity_agents"
version = "0.1.0"
description = "offensive-solidity-agents using crewAI"
authors = ["Your Name <[email protected]>"]

[tool.poetry.dependencies]
python = ">=3.10,<=3.13"
crewai = { extras = ["tools"], version = ">=0.67.1,<1.0.0" }


[tool.poetry.scripts]
offensive_solidity_agents = "offensive_solidity_agents.main:run"
run_crew = "offensive_solidity_agents.main:run"
train = "offensive_solidity_agents.main:train"
replay = "offensive_solidity_agents.main:replay"
test = "offensive_solidity_agents.main:test"

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
88 changes: 88 additions & 0 deletions report.md
@@ -0,0 +1,88 @@
# Report on Advancements and Trends in AI LLMs for 2024

This report explores the latest developments, trends, and implications of AI Large Language Models (LLMs) as we head into 2024. The report is segmented into key topics to provide a clear and insightful overview of the current state and future directions of AI LLMs.

## 1. Transformers Evolution

The architecture of transformers has seen significant evolution, leading to the development of novel architectures such as Linformer and Performer. These new models aim to improve computational efficiency and scalability.

- **Linformer**: Implements a linear approximation method that reduces the complexity of self-attention mechanisms while maintaining performance on tasks involving large sequences (see the sketch after this list).
- **Performer**: Uses kernel-based attention mechanisms to allow for processing of long sequences efficiently, thus expanding the applicability of transformers to broader datasets and real-time applications.
- **Implications**: These advancements are crucial for making AI LLMs more accessible for diverse applications across industries, reducing the computational costs associated with training large models.
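
To ground the efficiency claim, the toy sketch below contrasts standard self-attention, whose score matrix is n x n, with a Linformer-style variant that projects keys and values down to a fixed length k. The projection matrices are random here purely for illustration, whereas Linformer learns them; this is not the reference implementation.

```python
# Toy contrast of quadratic self-attention vs. a Linformer-style linear variant (illustration only).
import torch

n, d, k = 1024, 64, 128                       # sequence length, head dim, projected length (k << n)
q = torch.randn(n, d)
key = torch.randn(n, d)
v = torch.randn(n, d)

# Standard self-attention: the score matrix is (n, n).
scores = q @ key.T / d ** 0.5                 # (n, n)
out_full = torch.softmax(scores, dim=-1) @ v  # (n, d)

# Linformer-style: project keys and values along the sequence axis down to length k.
E = torch.randn(k, n) / n ** 0.5              # learned projections in the paper; random here
F = torch.randn(k, n) / n ** 0.5
scores_lin = q @ (E @ key).T / d ** 0.5       # (n, k) -- grows linearly with n
out_lin = torch.softmax(scores_lin, dim=-1) @ (F @ v)  # (n, d)

print(out_full.shape, out_lin.shape)          # both torch.Size([1024, 64])
```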

## 2. Multimodal Models

In 2024, advancements in multimodal AI LLMs have gained traction, with important systems like OpenAI's DALL-E and Google’s MUM leading the way.

- **DALL-E**: Enhanced capabilities for generating images based on textual descriptions, enabling enriched creative processes and visualization tools.
- **MUM**: Designed to understand and generate content across multiple modalities (text, images, videos), thus refining the interaction quality across platforms.
- **Implications**: Multimodal models have the potential to transform user experiences by integrating various forms of data, providing richer and more contextualized interactions.

## 3. Fine-Tuning Techniques

The development of innovative fine-tuning techniques, notably few-shot and zero-shot learning, has revolutionized how AI models personalize their outputs; a minimal prompt sketch follows the list below.

- **Few-Shot Learning**: Allows models to learn from a limited set of examples, making it easier to adapt LLMs to specific requirements.
- **Zero-Shot Learning**: Facilitates the application of models without prior task-specific training, which streamlines deployment in new domains.
- **Implications**: These techniques empower businesses to employ AI LLMs more flexibly, adapting them to niches while minimizing the need for extensive training datasets and resources.
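
As a concrete, invented example of the difference, the snippet below builds a zero-shot prompt and a few-shot prompt for the same sentiment-classification task; only the prompt construction is shown, independent of any particular model or API, and the review texts and labels are made up.

```python
# Illustrative zero-shot vs. few-shot prompts for the same classification task.
task = "Classify the sentiment of the review as positive or negative."

# Zero-shot: the model receives only the instruction and the new input.
zero_shot_prompt = f"{task}\n\nReview: The battery died after two days.\nSentiment:"

# Few-shot: a handful of labeled examples precede the new input.
examples = [
    ("Absolutely loved the build quality.", "positive"),
    ("The screen cracked within a week.", "negative"),
]
demos = "\n".join(f"Review: {text}\nSentiment: {label}" for text, label in examples)
few_shot_prompt = f"{task}\n\n{demos}\n\nReview: The battery died after two days.\nSentiment:"

print(zero_shot_prompt)
print("---")
print(few_shot_prompt)
```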

## 4. Ethical AI

The focus on ethical AI has intensified, emphasizing the need for strategies to mitigate bias and create transparent systems.

- **Bias Mitigation**: Developing methodologies that identify and reduce biases in model outputs, which is crucial to ensure fairness in AI applications.
- **Transparency**: Enhancing the interpretability of models helps stakeholders understand how decisions are made, cultivating trust in AI systems.
- **Implications**: A commitment to ethical AI is pivotal to prevent harm and ensure the responsible use of powerful technologies in society, contributing to better compliance with regulatory requirements.

## 5. Energy Efficiency

Efforts toward energy-efficient model training and inference have become a priority in 2024, addressing concerns about the environmental impact of AI technologies.

- **Optimized Training Regimens**: Employing techniques like mixed-precision training and sparsity to minimize energy consumption (see the sketch after this list).
- **Inference Optimization**: Streamlining inference processes to reduce energy usage without compromising performance.
- **Implications**: These advancements align with global sustainability goals and are crucial for the long-term viability of AI technology.
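
As one common realization of mixed-precision training, the minimal PyTorch sketch below wraps the forward pass in an autocast context and scales the loss before backpropagation; the model, batch, and hyperparameters are placeholders, not a recommendation.

```python
# Minimal mixed-precision training step in PyTorch (placeholders for model and data).
import torch
from torch import nn

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(512, 10).to(device)              # stand-in for a real model
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=device == "cuda")

inputs = torch.randn(32, 512, device=device)       # stand-in batch
targets = torch.randint(0, 10, (32,), device=device)

optimizer.zero_grad()
with torch.autocast(device_type=device, enabled=device == "cuda"):
    # Forward pass runs in reduced precision where safe, cutting memory and energy use.
    loss = nn.functional.cross_entropy(model(inputs), targets)
scaler.scale(loss).backward()                      # scale the loss to avoid fp16 underflow
scaler.step(optimizer)
scaler.update()
```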

## 6. Regulatory Landscape

As AI technologies proliferate, the regulatory landscape has begun to incorporate ethical frameworks and regulations governing AI usage.

- **Frameworks for Governance**: Policymakers and stakeholders are collaborating to establish guidelines that ensure responsible AI deployment.
- **Compliance Mechanisms**: Introducing standards enables organizations to align with ethical practices, promoting accountability and responsibility in AI usage.
- **Implications**: A robust regulatory framework will not only foster trust among users but also help mitigate risks associated with misuse and bias.

## 7. Language Diversity

Research and development in language-agnostic and multilingual models are expanding capabilities in 2024.

- **Language-Agnostic Models**: Models capable of processing multiple languages with equal proficiency, ensuring inclusivity and accessibility.
- **Multilingual Applications**: These models facilitate communication across diverse linguistic backgrounds, fostering global interactions.
- **Implications**: These advancements will pave the way for broader global engagement with AI technologies, reducing language barriers and enhancing cultural inclusivity.

## 8. Real-world Applications

AI LLMs are being integrated into various sectors such as legal tech, customer service automation, and content creation, enhancing efficiency and innovation.

- **Legal Tech**: Streamlining document analysis and case research, allowing professionals to focus on complex legal issues rather than mundane tasks.
- **Customer Service Automation**: Utilizing AI LLMs to provide personalized support solutions, improving customer satisfaction and operational effectiveness.
- **Content Creation**: Empowering creators with tools to generate high-quality content rapidly and at scale.
- **Implications**: The practical adoption of AI LLMs showcases their ability to improve productivity while potentially transforming job roles across industries.

## 9. Collaborative Research

Collaborative research efforts are increasingly reported across academic studies, aiming to unify expertise and resources in AI LLM development.

- **Interdisciplinary Approaches**: Combining insights from different fields facilitates comprehensive research and innovation.
- **Shared Resources**: Collaborative platforms are becoming prevalent, allowing for the pooling of datasets and methodologies.
- **Implications**: Such synergy enhances research outputs, leading to more robust model designs and addressing complex challenges faced in the AI landscape.

## 10. Future Trends

Looking ahead, predictions on the future of AI LLMs focus on enhancing human-AI collaboration, enabling models to work more seamlessly alongside humans.

- **Enhanced Interactivity**: Future models will focus on understanding nuanced human behaviors, improving collaboration efforts across various domains.
- **Personalized User Experiences**: Advances in personalization will make interaction with AI more intuitive and context-aware.
- **Implications**: These trends accentuate the importance of human-centric design in AI, promoting a more symbiotic relationship between technology and its users.

## Conclusion

The year 2024 promises notable advancements in the field of AI LLMs, from architectural innovations to ethical considerations and real-world applications. As these models continue to evolve, their transformative power will reshape industries, societies, and individual experiences. It is crucial to remain cognizant of the ethical and practical implications to harness their benefits responsibly.
Empty file.
112 changes: 112 additions & 0 deletions src/offensive_solidity_agents/config/agents.yaml
@@ -0,0 +1,112 @@
# researcher:
#   role: >
#     {topic} Senior Data Researcher
#   goal: >
#     Uncover cutting-edge developments in {topic}
#   backstory: >
#     You're a seasoned researcher with a knack for uncovering the latest
#     developments in {topic}. Known for your ability to find the most relevant
#     information and present it in a clear and concise manner.

# reporting_analyst:
#   role: >
#     {topic} Reporting Analyst
#   goal: >
#     Create detailed reports based on {topic} data analysis and research findings
#   backstory: >
#     You're a meticulous analyst with a keen eye for detail. You're known for
#     your ability to turn complex data into clear and concise reports, making
#     it easy for others to understand and act on the information you provide.

smart_contract_researcher:
  role: >
    Smart Contract Researcher
  goal: >
    Conduct in-depth research on smart contract vulnerabilities given the year is 2024
  backstory: >
    You're a seasoned smart contract researcher with a keen eye for spotting
    vulnerabilities in Solidity smart contracts. You're known for your ability
    to identify new and emerging threats in the smart contract space and provide actionable
    recommendations for identifying them in code bases.
static_code_analysis_agent:
  role: >
    Static Code Analysis Agent
  goal: >
    Analyze the codebase for potential vulnerabilities
  backstory: >
    You're a seasoned code analysis expert with a keen eye for spotting
    vulnerabilities in code. You're known for your ability to identify
    potential security risks and provide actionable recommendations for
    improving code quality and security.
dynamic_code_analysis_agent:
  role: >
    Dynamic Code Analysis Agent
  goal: >
    Analyze the codebase for potential vulnerabilities
  backstory: >
    You're a seasoned code analysis expert with a keen eye for spotting
    vulnerabilities in code. You're known for your ability to identify
    potential security risks and provide actionable recommendations for
    improving code quality and security.
detector_agent:
  role: >
    Detector Agent
  goal: >
    Detect and report any suspicious observations in the static and dynamic analysis
  backstory: >
    You're a seasoned detector agent with a keen eye for spotting
    suspicious activities in code. You're known for your ability to identify
    potential security risks and provide actionable recommendations for
    improving code quality and security.
smart_contract_auditor:
  role: >
    Smart Contract Auditor
  goal: >
    Audit smart contracts for potential vulnerabilities
  backstory: >
    You're a seasoned smart contract auditor with a keen eye for spotting
    vulnerabilities in smart contracts in Solidity. You're known for your ability to identify
    potential security risks and provide actionable recommendations for
    improving smart contract security.
smart_contract_audit_decider:
  role: >
    Smart Contract Audit Decider
  goal: >
    Decide on the best course of action based on the smart contract audit findings
  backstory: >
    You're a seasoned smart contract audit decider with a keen eye for spotting
    vulnerabilities in smart contracts in Solidity. You're known for your ability to identify
    potential security risks and provide actionable recommendations for
    improving smart contract security. You have the ability to make informed decisions and
    prioritize actions based on the audit findings. You can describe them in a structured way
    so that a code writer can test them live.
tests_writer:
  role: >
    Test Writer
  goal: >
    Write tests for the smart contract based on the audit findings
  backstory: >
    You're a seasoned test writer with a keen eye for spotting
    vulnerabilities in smart contracts in Solidity. You're known for your ability to identify
    security vulnerabilities and write tests to confirm their existence. You have the ability to
    write comprehensive tests that cover all aspects of the smart contract functionality.
documenting_agent:
  role: >
    Documenting Agent
  goal: >
    Document the audit findings and test cases
  backstory: >
    You're a seasoned documenter with a keen eye for spotting
    vulnerabilities in smart contracts in Solidity. You're known for your ability to identify
    potential security risks and provide actionable recommendations for
    improving smart contract security. You have the ability to document the audit findings
    and test cases in a clear and concise manner, making it easy for others to understand
    the security risks and how to mitigate them.
82 changes: 82 additions & 0 deletions src/offensive_solidity_agents/config/tasks.yaml
@@ -0,0 +1,82 @@
# research_task:
#   description: >
#     Conduct a thorough research about {topic}
#     Make sure you find any interesting and relevant information given
#     the current year is 2024.
#   expected_output: >
#     A list with 10 bullet points of the most relevant information about {topic} and ideally
#     with an example.
#   agent: researcher










static_code_analysis_task:
  description: >
    Conduct a static analysis of the codebase for potential vulnerabilities.
    This is the Solidity smart contract code {code}
  expected_output: >
    A list of potential vulnerabilities found in the codebase analyzed statically
  agent: static_code_analysis_agent

smart_contract_research_task:
  description: >
    Conduct research on smart contract vulnerabilities given the year is 2024
  expected_output: >
    A list of potential vulnerabilities found in smart contracts with an explanation of each,
    taking into account the Solidity version used: {solidity_version}
  agent: smart_contract_researcher

dynamic_code_analysis_task:
  description: >
    Conduct a dynamic analysis of the codebase for potential vulnerabilities.
    This is the Solidity smart contract code {code}
  expected_output: >
    A list of potential vulnerabilities found in the codebase analyzed dynamically
  agent: dynamic_code_analysis_agent

detection_task:
  description: >
    Detect and report any suspicious observations in the static and dynamic analysis
  expected_output: >
    A list of suspicious observations found in the static and dynamic analysis
  agent: detector_agent

smart_contract_audit_task:
  description: >
    Audit smart contracts for potential vulnerabilities. This is the smart contract code {code}.
    Take the Solidity version {solidity_version} into account
  expected_output: >
    A list of potential vulnerabilities found in the smart contracts with an explanation of each
  agent: smart_contract_auditor

smart_contract_audit_decision_task:
  description: >
    Decide on the audit findings and provide actionable recommendations
  expected_output: >
    A list of actionable recommendations based on the audit findings
  agent: smart_contract_audit_decider

tests_writer_task:
  description: >
    Write tests for the smart contracts to confirm that the vulnerabilities exist. Please
    test the smart contract code {code} with your tests, using the Solidity version {solidity_version}
  expected_output: >
    A list of tests written for the smart contracts which can be put directly into a compilation
    process
  agent: tests_writer

documentation_task:
  description: >
    Write detailed documentation about the audit findings
  expected_output: >
    A fully fledged report with the main findings, each with a full section of information.
    Formatted as markdown without '```'. Further, include the test suite from the test writer so it can
    be pasted directly into a compilation process.
  agent: documenting_agent
33 changes: 33 additions & 0 deletions src/offensive_solidity_agents/contract.sol
@@ -0,0 +1,33 @@
pragma solidity ^0.4.25;

contract Wallet {
    uint[] private bonusCodes;
    address private owner;

    constructor() public {
        bonusCodes = new uint[](0);
        owner = msg.sender;
    }

    function () public payable {
    }

    function PushBonusCode(uint c) public {
        bonusCodes.push(c);
    }

    function PopBonusCode() public {
        require(0 <= bonusCodes.length); // always true for an unsigned length
        bonusCodes.length--; // underflows when the array is empty (no overflow checks in Solidity 0.4)
    }

    function UpdateBonusCodeAt(uint idx, uint c) public {
        require(idx < bonusCodes.length);
        bonusCodes[idx] = c;
    }

    function Destroy() public {
        require(msg.sender == owner);
        selfdestruct(msg.sender);
    }
}
