The Trading Gym is a versatile Python library that offers a comprehensive environment for simulating and testing trading strategies, as well as performing budget allocation across a portfolio of assets. Built on the foundation of the OpenAI Gym framework, it provides researchers and traders with a powerful toolkit to develop and evaluate trading algorithms.
- The Trading Gym integrates seamlessly with OpenAI Gym, extending it to cater specifically to reinforcement learning and algorithmic trading research.
- Load historical price data from a variety of sources and formats, including CSV files, API calls, or databases, using the flexible Data Loader interface. This feature enables you to work with real-world data or synthetic data tailored to your needs.
- Simulate trading actions, order execution, and portfolio management using the Exchange component. This interface allows you to interact with the market, execute trades, and evaluate trading decisions within a controlled environment.
- Visualize price data, trading actions, and portfolio performance through diverse rendering options, including plotting, logging, or custom renderers tailored to your visualization requirements.
- Define and implement custom reward functions to evaluate the performance of your trading strategies. You can tailor these functions to measure various criteria, such as profit and loss, risk-adjusted returns, or other specific metrics relevant to your trading objectives.
- In addition to trading, the Trading Gym extends its utility to budget allocation. It allows you to allocate funds across a set of assets, making it suitable for a broader range of financial optimization tasks beyond pure trading strategies.
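As a concrete illustration of the custom-reward idea above, here is a minimal, library-independent sketch of two reward functions computed from a portfolio value history. Note that these are generic examples for illustration only; the function names and the assumption that a rewarder sees the portfolio's value history are not part of the Trading Gym's actual `Rewarder` interface.

```python
import numpy as np

def profit_reward(portfolio_values):
    """Reward as the change in portfolio value over the last step."""
    return portfolio_values[-1] - portfolio_values[-2]

def sharpe_reward(portfolio_values, eps=1e-8):
    """Reward as a Sharpe-like ratio of the step returns observed so far."""
    values = np.asarray(portfolio_values, dtype=float)
    returns = np.diff(values) / values[:-1]  # per-step fractional returns
    return float(returns.mean() / (returns.std() + eps))

# Hypothetical value history of a portfolio over four steps
history = [100.0, 102.0, 101.0, 104.0]
print(profit_reward(history))  # 3.0 (last step gained 3 units)
print(sharpe_reward(history))  # positive: mean return dominates volatility
```

The profit variant rewards raw gains; the Sharpe-like variant penalizes volatile strategies even when they are profitable, which is often what "risk-adjusted" means in practice.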
Whether you're a researcher exploring reinforcement learning in finance or a trader looking to develop and test trading strategies, the Trading Gym offers a versatile and adaptable environment to meet your needs. To dive deeper into its functionalities and see practical examples, refer to the Jupyter notebook provided in the repository.
In the context of the Trading Gym, actions provided by the agent are represented as vectors. Each vector signifies a budget allocation strategy, where each value in the vector corresponds to the budget allocated to a specific asset. The size of the vector aligns with the number of assets under consideration.
This action representation covers a wide spectrum of allocation scenarios: the agent may concentrate the entire budget in a single asset, spread it across several, or exclude assets from the portfolio entirely by allocating zero budget to them. In effect, it subsumes the traditional trading actions of buying and selling, since at every step the agent decides how much capital each asset receives.
This flexibility in action representation enables the Trading Gym to handle various asset allocation and trading strategies, making it a versatile tool for experimenting with and evaluating different financial decision-making approaches.
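The allocation-vector idea can be illustrated with plain NumPy. This is a generic sketch, independent of the library's actual action space; the variable names and the normalize-to-one convention are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_assets = 4

# Draw non-negative raw weights and normalize so the full budget is allocated.
raw = rng.random(n_assets)
allocation = raw / raw.sum()

# Excluding an asset: zero its weight and renormalize the rest.
allocation[1] = 0.0
allocation = allocation / allocation.sum()

print(allocation)        # asset 1 gets nothing; the rest share the budget
print(allocation.sum())  # weights still sum to 1.0
```

Whether the environment expects weights to sum to exactly 1 (fully invested) or at most 1 (cash allowed) depends on how the action space is configured; check the notebook example for the convention used.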
To install the Trading Gym, follow these steps:

```shell
git clone https://github.com/damiano1996/gym-trading.git
cd gym-trading
python3 -m venv venv

# Activate the virtual environment:
venv\Scripts\activate       # Windows
source venv/bin/activate    # Linux/macOS

pip install -r requirements.txt
```
The following code snippet demonstrates a basic usage example of the Trading Gym:

```python
# Import necessary packages
import gymnasium as gym

from gym_trading.envs.data_loader import ListAssetChartDataLoader
from gym_trading.envs.exchange import BaseExchange
from gym_trading.envs.renderer import PyGamePlotRenderer
from gym_trading.envs.rewards import ProfitRewarder

# Create the Trading Gym environment
env = gym.make(
    'gym_trading:trading-v0',
    data_loader=ListAssetChartDataLoader(...),
    exchange=BaseExchange(...),
    rewarder=ProfitRewarder(),
    renderer=PyGamePlotRenderer(),
    final_report_plot=False
)

# Reset the environment and obtain the initial observation
observation, info = env.reset()

# Simulate a trading session
terminated = truncated = False
while not (terminated or truncated):
    # Sample a random action from the action space
    action = env.action_space.sample()

    # Perform the action and receive the next observation and reward
    observation, reward, terminated, truncated, info = env.step(action)

    # Custom logic and analysis can be performed here

# Render the final state of the environment
env.render()

# Close the environment
env.close()
```
For more details on the Trading Gym API, review the Jupyter notebook example.
The Trading Gym is released under the MIT License. Feel free to use, modify, and distribute the code as permitted by the license.
The Trading Gym was inspired by the OpenAI Gym and aims to provide a specialized environment for trading research and algorithmic trading development.