Commit

Budget allocation (#1)
* handling multiple charts for budget allocation

* moving to gymnasium

* improvements

* fixed pygame and improved example

* tests

* working on the example

* example.ipynb

* requirements.txt updated

* requirements.txt updated

* requirements.txt updated

* requirements.txt updated

* black format

* requirements and pylint fix

* pylint.yml updated

* pylint fix

* pylint fix

* pylint tests fix

* pylint.yml updated
damiano1996 authored Sep 28, 2023
1 parent 6ceb545 commit 9d6f9ee
Showing 22 changed files with 2,104 additions and 790 deletions.
21 changes: 13 additions & 8 deletions .github/workflows/pylint.yml
@@ -4,21 +4,26 @@ on: [ push ]

 jobs:
   build:
+
     runs-on: ubuntu-latest
-    strategy:
-      matrix:
-        python-version: [ "3.8", "3.9", "3.10" ]
+
     steps:
-      - uses: actions/checkout@v3
-      - name: Set up Python ${{ matrix.python-version }}
-        uses: actions/setup-python@v3
+      - uses: actions/checkout@v4
+      - name: Set up Python 3.x
+        uses: actions/setup-python@v4
         with:
-          python-version: ${{ matrix.python-version }}
+          # Semantic version range syntax or exact version of a Python version
+          python-version: '3.x'
+          # Optional - x64 or x86 architecture, defaults to x64
+          architecture: 'x64'
       - name: Install dependencies
         run: |
           python -m pip install --upgrade pip
           pip install pylint
           if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Analysing the code with pylint
         run: |
-          pylint --exit-zero $(git ls-files '*.py')
+          pylint --fail-under=9 $(git ls-files 'gym_trading/*.py')
+      - name: Run Tests
+        run: |
+          python -m unittest
2 changes: 2 additions & 0 deletions .gitignore
@@ -27,3 +27,5 @@ requirements.txt
# Miscellaneous
.DS_Store
Thumbs.db

*.csv
62 changes: 43 additions & 19 deletions README.md
@@ -1,16 +1,36 @@
-# Trading Gym
-The Trading Gym is a gym environment for simulating and testing trading strategies using historical price data.
-It is built upon the OpenAI Gym framework and provides a customizable environment for developing and evaluating trading algorithms.
+# Trading Gym: A Reinforcement Learning Environment for Trading and Budget Allocation

-Review this [jupyter notebook](examples/example.ipynb) to learn more about how to use the library.
+The Trading Gym is a versatile Python library that offers a comprehensive environment for simulating and testing trading strategies, as well as performing budget allocation across a portfolio of assets. Built on the foundation of the OpenAI Gym framework, it provides researchers and traders with a powerful toolkit to develop and evaluate trading algorithms.

-## Features
-- Integration with OpenAI Gym: The Trading Gym extends the functionality of OpenAI Gym to provide a trading-specific environment for reinforcement learning and algorithmic trading research.
-- Customizable Data Loader: Load historical price data from various sources and formats, such as CSV files, API calls, or databases, using the flexible Data Loader interface.
-- Exchange Simulation: Simulate trading actions, order execution, and portfolio management with the Exchange component. It provides an interface to interact with the market and simulate trading decisions.
-- Rendering Options: Visualize price data, trading actions, and portfolio performance using different rendering options, such as plotting, logging, or custom renderers.
-- Reward Calculation: Define custom reward functions to evaluate the performance of trading strategies based on specific criteria, such as profit and loss, risk-adjusted returns, or other metrics.
-- Observation Window: Define the number of previous price points to include in the observation space, allowing agents to capture historical trends and patterns.
+## Key Features
+
+### Integration with OpenAI Gym
+- The Trading Gym seamlessly integrates with OpenAI Gym, enhancing its capabilities to cater specifically to reinforcement learning and algorithmic trading research.
+
+### Customizable Data Loader
+- Load historical price data from a variety of sources and formats, including CSV files, API calls, or databases, using the flexible Data Loader interface. This feature enables you to work with real-world data or synthetic data tailored to your needs.
+
+### Exchange Simulation
+- Simulate trading actions, order execution, and portfolio management using the Exchange component. This interface allows you to interact with the market, execute trades, and evaluate trading decisions within a controlled environment.
+
+### Rendering Options
+- Visualize price data, trading actions, and portfolio performance through diverse rendering options. You can choose from various visualization methods, including plotting, logging, or even implement custom renderers to suit your visualization requirements.
+
+### Reward Calculation
+- Define and implement custom reward functions to evaluate the performance of your trading strategies. You can tailor these functions to measure various criteria, such as profit and loss, risk-adjusted returns, or other specific metrics relevant to your trading objectives.
+
+### Budget Allocation
+- In addition to trading, the Trading Gym extends its utility to budget allocation. It allows you to allocate funds across a set of assets, making it suitable for a broader range of financial optimization tasks beyond pure trading strategies.
+
+Whether you're a researcher exploring reinforcement learning in finance or a trader looking to develop and test your trading strategies, the Trading Gym offers a versatile and adaptable environment to meet your needs. To dive deeper into its functionalities and see practical examples, refer to the [jupyter notebook](examples/example.ipynb) provided in the repository.
+
+## Action Representation for Asset Allocation
+
+In the context of the Trading Gym, actions provided by the agent are represented as vectors. Each vector signifies a budget allocation strategy, where each value in the vector corresponds to the budget allocated to a specific asset. The size of the vector aligns with the number of assets under consideration.
+
+This action representation accommodates a wide spectrum of asset allocation scenarios, ranging from the allocation of the entire budget to specific assets to not allocating any budget to certain assets. In essence, it encompasses both traditional trading actions of buying and selling, where the agent decides how much capital to allocate to each asset, and cases where assets are excluded from the investment portfolio by allocating zero budget to them.
+
+This flexibility in action representation enables the Trading Gym to handle various asset allocation and trading strategies, making it a versatile tool for experimenting with and evaluating different financial decision-making approaches.

 ## Installation
 To install the Trading Gym, follow these steps:
@@ -60,22 +80,21 @@ The following code snippet demonstrates a basic usage example of the Trading Gym

 ```python
 # Import necessary packages
-import gym
-import numpy as np
+import gymnasium as gym

-from gym_trading.envs.data_loader import ListDataLoader
+from gym_trading.envs.data_loader import ListAssetChartDataLoader
 from gym_trading.envs.exchange import BaseExchange
-from gym_trading.envs.renderer import PlotRenderer
+from gym_trading.envs.renderer import PyGamePlotRenderer
 from gym_trading.envs.rewards import ProfitRewarder

 # Create the Trading Gym environment
 env = gym.make(
     'gym_trading:trading-v0',
-    data_loader=ListDataLoader(...),
+    data_loader=ListAssetChartDataLoader(...),
     exchange=BaseExchange(...),
     rewarder=ProfitRewarder(),
-    renderer=PlotRenderer(),
-    observation_window_size=10
+    renderer=PyGamePlotRenderer(),
+    final_report_plot=False
 )

 # Reset the environment and obtain the initial observation
@@ -85,7 +104,7 @@ observation = env.reset()[0]
 done = False
 while not done:
     # Choose a random action
-    action = np.random.randint(0, env.action_space.n)
+    action = env.action_space.sample()  # Sample a random action from the action space

     # Perform the action and receive the next observation and reward
     observation, reward, done, truncated, _ = env.step(action)
@@ -94,8 +113,13 @@ while not done:

 # Render the final state of the environment
 env.render()
+
+# Close the environment
+env.close()
 ```

+![examples/images/fig_01.png](examples/images/fig_01.png)
+
 For more details on the Trading Gym API review the [jupyter notebook example](examples/example.ipynb).

 ## License
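
The "Action Representation for Asset Allocation" section of the README above can be made concrete with a small sketch. It relies only on the `BudgetAllocationSpace` class introduced in this commit (see `gym_trading/envs/action_space.py` below); the three-asset portfolio and the specific weights are illustrative assumptions, not values taken from the repository.

```python
# Illustrative sketch of the action format: a vector of per-asset budget weights.
import numpy as np

from gym_trading.envs.action_space import BudgetAllocationSpace

# Hypothetical portfolio of three assets (the asset count is an arbitrary choice).
space = BudgetAllocationSpace(num_assets=3)

# Allocate 50% of the budget to asset 0, 30% to asset 1, 20% to asset 2.
action = np.array([0.5, 0.3, 0.2], dtype=np.float32)
assert np.isclose(action.sum(), 1.0)

# Excluding an asset is simply a zero entry; going all-in on one asset is also valid.
all_in_first = np.array([1.0, 0.0, 0.0], dtype=np.float32)

# Random exploration: samples drawn from the space are normalized to sum to 1.
random_action = space.sample()
print(random_action, random_action.sum())
```

Passing such a vector to `env.step(...)` is what the README usage example above does implicitly via `env.action_space.sample()`.
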
3 changes: 3 additions & 0 deletions examples/.gitignore
@@ -0,0 +1,3 @@
/models/
/logs/
/crypto_datasets_*
1,261 changes: 974 additions & 287 deletions examples/example.ipynb

Large diffs are not rendered by default.

Binary file added examples/images/fig_01.png
6 changes: 3 additions & 3 deletions gym_trading/__init__.py
@@ -2,9 +2,9 @@
 Registering the environment.
 """

-from gym.envs.registration import register
+from gymnasium.envs.registration import register

 register(
-    id='trading-v0',
-    entry_point='gym_trading.envs:TradingEnv',
+    id="trading-v0",
+    entry_point="gym_trading.envs:TradingEnv",
 )
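
As a quick illustration of what this registration enables (not part of the commit): once `gym_trading` is imported, the id can be looked up through the standard gymnasium registry. This is a minimal sketch assuming the package is installed; it only inspects the spec and does not construct the environment, since the loader/exchange/rewarder arguments are not shown here.

```python
import gymnasium as gym

import gym_trading  # importing the package runs the register() call above

# The registered spec can be inspected without instantiating the environment.
spec = gym.spec("trading-v0")
print(spec.id, spec.entry_point)  # -> trading-v0 gym_trading.envs:TradingEnv

# The README passes the id as "gym_trading:trading-v0"; the "module:" prefix tells
# gymnasium to import gym_trading before resolving the id, so the explicit import
# above becomes unnecessary when calling gym.make that way.
```
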
52 changes: 52 additions & 0 deletions gym_trading/envs/action_space.py
@@ -0,0 +1,52 @@
from typing import Any

import numpy as np
from gymnasium.spaces import Box
from numpy._typing import NDArray


class BudgetAllocationSpace(Box):
    """
    Custom Gym space for budget allocation.

    This class defines a custom Gym space for representing budget allocation. It inherits from the Box space and enforces
    that the allocation vector is within the range [0, 1] and sums up to 1.

    Parameters:
        num_assets (int): The number of assets in the allocation.

    Example usage:
        space = BudgetAllocationSpace(num_assets=3)
        action = space.sample()
    """

    def __init__(self, num_assets):
        """
        Initialize the BudgetAllocationSpace.

        Args:
            num_assets (int): The number of assets in the allocation.
        """
        super().__init__(
            low=np.zeros(num_assets, dtype=np.float32),
            high=np.ones(num_assets, dtype=np.float32),
            shape=(num_assets,),
        )

    def sample(self, mask: None = None) -> NDArray[Any]:
        """
        Generate a normalized random sample within the defined space.

        This method generates a random sample within the defined space, typically used for generating initial action
        values in reinforcement learning tasks. The generated sample is then normalized so that the sum of its components
        equals 1.

        Args:
            mask: An optional mask that can be applied to restrict the sampling.

        Returns:
            NDArray[Any]: A normalized random sample within the space.
        """
        sample = super().sample(mask)
        normalized_sample = sample / np.sum(sample)
        return normalized_sample
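
A short usage sketch for the space defined above (the asset count and seed are arbitrary choices for illustration): every sample stays inside the inherited Box bounds and, because of the normalization in `sample`, always describes a full allocation of the budget.

```python
import numpy as np

from gym_trading.envs.action_space import BudgetAllocationSpace

space = BudgetAllocationSpace(num_assets=4)
space.seed(42)  # arbitrary seed, only for reproducibility

action = space.sample()
print(action, float(np.sum(action)))

# The weights are non-negative, bounded by 1, and sum to 1 (up to rounding).
assert np.all((action >= 0.0) & (action <= 1.0))
assert np.isclose(np.sum(action), 1.0)
```
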

0 comments on commit 9d6f9ee
