Reinforcement Learning for Optimal Control of Reactive QR Queues in High-Frequency Trading
This project aims to apply reinforcement learning techniques to optimize the control of reactive QR (Quick Response) queues in a high-frequency trading context. It combines advanced concepts of machine learning, queueing theory, and optimization to improve resource management and performance of trading systems.
The project relies on the following Python libraries:
- NumPy: used for numerical computation and data processing.
- Scikit-learn: used for machine learning algorithms and model evaluation.
Its main objectives are to:
- Model reactive QR queues in a high-frequency trading environment
- Implement reinforcement learning algorithms for optimal control
- Evaluate and optimize system performance in terms of execution time and resource utilization
- Visualize results and performance metrics
The project is organized into several key components:
- model.py: Implementation of the QR queue model
- rl_agent.py: Reinforcement learning agent
- environment.py: Simulation environment for agent-model interaction
- visualization.py: Results visualization module
- main.py: Main script for running experiments
To install and launch the project:
- Clone the repository:
  git clone https://github.com/janisaiad/HFT_QR_RL.git
  cd HFT_QR_RL
- Grant execution rights to the launch script:
  chmod +x launch.sh
- Launch the environment:
  ./launch.sh
The Makefile provides the following targets:
- To set up the environment and install dependencies:
  make setup
- To run the main script:
  make run
- To download necessary data:
  make data
- To generate visualizations:
  make visualize
- To clean the environment and generated files:
  make clean
- To display help information:
  make help
- To check the license:
  make check-license
- To show the README content:
  make show-readme
You can also use Poetry directly:
- To run the main script:
  poetry run python main.py
- To download necessary data:
  poetry run python data/script.py
- To generate visualizations:
  poetry run python data/visualization.py
- To list installed packages and project information:
  poetry show
- To update dependencies:
  poetry update
- To add a new dependency:
  poetry add <package_name>
Results will be generated in the databento/ folder.
model.py: This file contains the implementation of the reactive QR queue model. It simulates the behavior of trading orders and their processing in a multi-queue system.
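As a rough illustration of what such a model can look like, here is a minimal sketch of a multi-queue system whose arrival intensities react to the current queue sizes. All names (QRQueueModel, base_arrival_rate, service_rate) are assumptions for this sketch, not the project's actual API; see model.py for the real implementation.

```python
# Hypothetical sketch of a reactive multi-queue order model.
import numpy as np

class QRQueueModel:
    """Multi-queue system whose arrival intensities react to queue sizes."""

    def __init__(self, n_queues: int = 3, base_arrival_rate: float = 1.0,
                 service_rate: float = 1.2, seed: int = 0):
        self.n_queues = n_queues
        self.base_arrival_rate = base_arrival_rate
        self.service_rate = service_rate
        self.rng = np.random.default_rng(seed)
        self.queue_sizes = np.zeros(n_queues, dtype=int)

    def step(self, dt: float = 0.1) -> np.ndarray:
        # Arrival intensity reacts to the current queue size:
        # fuller queues attract fewer new orders.
        arrival_rates = self.base_arrival_rate / (1.0 + self.queue_sizes)
        arrivals = self.rng.poisson(arrival_rates * dt)
        services = self.rng.poisson(self.service_rate * dt, self.n_queues)
        self.queue_sizes = np.maximum(self.queue_sizes + arrivals - services, 0)
        return self.queue_sizes.copy()
```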
rl_agent.py: Implements the reinforcement learning agent using algorithms such as Q-Learning or Deep Q-Network (DQN) to learn the optimal control policy.
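For reference, a minimal tabular Q-Learning agent could look like the sketch below. The repository may instead use DQN or a different interface; method names here are illustrative assumptions.

```python
# Minimal tabular Q-learning sketch with epsilon-greedy exploration.
import numpy as np

class QLearningAgent:
    def __init__(self, n_states: int, n_actions: int, alpha: float = 0.1,
                 gamma: float = 0.99, epsilon: float = 0.1, seed: int = 0):
        self.q = np.zeros((n_states, n_actions))
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.rng = np.random.default_rng(seed)

    def act(self, state: int) -> int:
        # Epsilon-greedy action selection over the Q-table.
        if self.rng.random() < self.epsilon:
            return int(self.rng.integers(self.q.shape[1]))
        return int(np.argmax(self.q[state]))

    def update(self, state: int, action: int, reward: float, next_state: int):
        # Standard Q-learning temporal-difference update.
        td_target = reward + self.gamma * np.max(self.q[next_state])
        self.q[state, action] += self.alpha * (td_target - self.q[state, action])
```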
environment.py: Defines the simulation environment in which the agent interacts with the queue model. It manages states, actions, and rewards.
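A gym-style reset/step interface is one common way to structure this. The sketch below wraps the hypothetical queue model from above: the state is a discretized encoding of queue sizes, the action routes extra service capacity to one queue, and the reward penalizes total backlog. All of these modeling choices are assumptions for illustration.

```python
# Illustrative gym-style environment wrapping the queue model sketch.
import numpy as np

class QueueEnv:
    def __init__(self, model, max_queue: int = 10):
        self.model = model
        self.max_queue = max_queue

    def _state(self) -> int:
        # Encode clipped queue sizes as a single discrete index.
        clipped = np.minimum(self.model.queue_sizes, self.max_queue)
        return int(sum(s * (self.max_queue + 1) ** i for i, s in enumerate(clipped)))

    def reset(self) -> int:
        self.model.queue_sizes[:] = 0
        return self._state()

    def step(self, action: int):
        # Action: drain one extra order from the chosen queue.
        self.model.queue_sizes[action] = max(self.model.queue_sizes[action] - 1, 0)
        sizes = self.model.step()
        reward = -float(sizes.sum())  # penalize total backlog
        return self._state(), reward, False, {}
```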
visualization.py: Module responsible for generating graphs and visualizations to analyze system performance and learning results.
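A typical plot here is a learning-convergence curve. The sketch below assumes matplotlib is available (it is not listed among the libraries above, so treat that as an assumption), and the output path is only an example.

```python
# Sketch of a learning-convergence plot; assumes matplotlib is installed.
import numpy as np
import matplotlib.pyplot as plt

def plot_convergence(episode_rewards, window: int = 50,
                     out_path: str = "convergence.png"):
    rewards = np.asarray(episode_rewards, dtype=float)
    # A moving average smooths the noisy per-episode rewards.
    kernel = np.ones(window) / window
    smoothed = np.convolve(rewards, kernel, mode="valid")
    plt.plot(rewards, alpha=0.3, label="episode reward")
    plt.plot(np.arange(window - 1, len(rewards)), smoothed,
             label=f"{window}-episode mean")
    plt.xlabel("Episode")
    plt.ylabel("Total reward")
    plt.legend()
    plt.savefig(out_path)
    plt.close()
```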
main.py: Main script that orchestrates the entire process, from initialization to running experiments and generating results.
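Tying the sketches above together, an end-to-end training loop could look like this. The real main.py may differ in structure and hyperparameters; this only shows how the pieces interact.

```python
# Hypothetical orchestration loop; assumes the QRQueueModel, QueueEnv,
# QLearningAgent, and plot_convergence sketches above are defined/importable.
def train(n_episodes: int = 500, steps_per_episode: int = 200):
    model = QRQueueModel(n_queues=3)
    env = QueueEnv(model)
    n_states = (env.max_queue + 1) ** model.n_queues
    agent = QLearningAgent(n_states=n_states, n_actions=model.n_queues)
    episode_rewards = []
    for _ in range(n_episodes):
        state, total = env.reset(), 0.0
        for _ in range(steps_per_episode):
            action = agent.act(state)
            next_state, reward, done, _ = env.step(action)
            agent.update(state, action, reward, next_state)
            state, total = next_state, total + reward
        episode_rewards.append(total)
    return episode_rewards

if __name__ == "__main__":
    rewards = train()
    plot_convergence(rewards)
```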
Experiment results will be stored in the databento/ folder. They will include:
- Learning convergence graphs
- System performance metrics (average response time, resource utilization, etc.; see the sketch after this list)
- Visualizations of learned policies
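The performance metrics could be computed from simulation logs along these lines. Field names (arrival_times, completion_times, busy_time) are assumptions for the sketch, not the project's actual log format.

```python
# Hypothetical computation of the reported metrics from simulation logs.
import numpy as np

def summarize(arrival_times, completion_times, busy_time, horizon, n_servers):
    # Average response time: completion minus arrival, per order.
    response_times = np.asarray(completion_times) - np.asarray(arrival_times)
    # Utilization: fraction of total available server-time spent busy.
    utilization = busy_time / (horizon * n_servers)
    return {"avg_response_time": float(response_times.mean()),
            "utilization": float(utilization)}
```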
Contributions to this project are welcome. Please follow these steps to contribute:
- Fork the project
- Create a development branch for your feature (git checkout -b dev/NewFeature)
- Make your changes and commit them (git commit -m 'Add NewFeature')
- Push to the branch (git push origin dev/NewFeature)
- Open a Pull Request to the dev branch
- Always use the dev branch for ongoing development.
- Ensure your commits are clear and descriptive.
- Before submitting a Pull Request, make sure your code is well-commented and follows the project's style conventions.
- For technical discussions or questions, contact [email protected].
This project is licensed under the MIT License. See the LICENSE file for more details.
Project Link: https://github.com/janisaiad/HFT_QR_RL
For any questions or suggestions, please contact: [email protected]