Department of Informatics, Blockchain and Distributed Ledger Technologies

Deep Reinforcement Learning in High-Volatility Cryptocurrency Markets

Level: MA
Responsible Person: Mostafa Chegeni
Keywords: Cryptocurrency trading, Machine Learning, (Deep) Reinforcement Learning, Portfolio Management


This Master's thesis explores the use of Deep Reinforcement Learning (DRL) for cryptocurrency trading, a market characterized by high volatility and unpredictability. The core objective is to develop a model capable of making informed buying and selling decisions in the cryptocurrency market. The approach integrates a comprehensive set of market signals, including historical market data (open, high, low, close, volume; OHLCV), news-headline sentiment, social-media sentiment, and other relevant financial indicators.
The thesis constructs the trading environment from these diverse data sources, defining the environment state through recent price movements, technical indicators, and sentiment-analysis results. The action space covers the top ten cryptocurrencies by market capitalization, including Bitcoin and Ethereum, allowing the model to decide on asset allocation and trading actions.
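To make the environment design concrete, the state and action handling described above can be sketched as follows. This is a minimal illustration, not the thesis implementation: the ticker list, window size, indicator count, and the softmax mapping from raw actions to portfolio weights are all assumptions made for the example.

```python
import numpy as np

# Illustrative tickers only; the actual top-ten list changes over time.
ASSETS = ["BTC", "ETH", "USDT", "BNB", "SOL", "XRP", "USDC", "ADA", "DOGE", "TRX"]

def build_state(ohlcv_window, indicators, sentiment):
    """Flatten a window of recent OHLCV bars, per-asset technical
    indicators, and sentiment scores into one observation vector."""
    return np.concatenate([
        ohlcv_window.ravel(),  # shape: (k bars, 5 OHLCV features, n assets)
        indicators.ravel(),    # e.g. RSI and MACD per asset
        sentiment.ravel(),     # aggregated news / social-media sentiment
    ])

def to_weights(action):
    """Map a raw action vector to long-only portfolio weights that sum
    to 1, via a numerically stable softmax."""
    exp = np.exp(action - action.max())
    return exp / exp.sum()

# Example: a window of 3 bars, 2 indicators and 1 sentiment score per asset.
k, n = 3, len(ASSETS)
state = build_state(np.ones((k, 5, n)), np.zeros((2, n)), np.zeros(n))
weights = to_weights(np.random.randn(n))
```

With this layout the observation is a fixed-length vector (here 3·5·10 + 2·10 + 10 = 180 entries), which suits standard DRL policy networks; the softmax keeps the allocation fully invested and long-only by construction.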
A key aspect of the research is the emphasis on risk management in the trading strategy. The reward function is designed to maximize portfolio value while minimizing risk, using metrics such as the Sharpe ratio. The thesis evaluates several DRL algorithms, both value-based and actor-critic, to determine the most effective approach for cryptocurrency trading.
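One simple way to turn the Sharpe ratio into a reward signal is to compute it over a rolling window of recent portfolio returns. The sketch below assumes this windowed form and a zero risk-free rate; the thesis may well use a different variant (e.g. the differential Sharpe ratio).

```python
import numpy as np

def sharpe_reward(returns, risk_free=0.0, eps=1e-8):
    """Sharpe ratio over a window of recent portfolio returns, used as a
    risk-adjusted reward: a high mean return is only rewarded if it is
    not bought with high volatility."""
    excess = np.asarray(returns) - risk_free
    return excess.mean() / (excess.std() + eps)

# Same mean return (1% per period), different volatility:
# the steadier stream earns the higher reward.
steady = sharpe_reward([0.010, 0.011, 0.009, 0.010])
volatile = sharpe_reward([0.040, -0.020, 0.040, -0.020])
```

The `eps` term guards against division by zero in flat windows; penalizing volatility in the denominator is what pushes the agent toward risk-managed rather than purely return-maximizing behavior.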
The research builds upon existing studies and frameworks in the field. It references a collaborative multi-agent scheme [1], strategies for minimizing execution costs on cryptocurrency exchanges [2], and a multi-modal approach combining different data sources [3]. It also draws on research that integrates technical analysis with market-trend forecasts [4]. The effectiveness of Twin Delayed DDPG (TD3) in a continuous action space is examined as well [5]. A comprehensive summary of RL-based quantitative trading models is provided in [6].
We leverage the FinRL framework [7] for initial tests and comparisons, evaluating the DRL model proposed in the thesis against established strategies documented in FinRL [8].
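FinRL ships its own environments and baseline strategies; as a framework-agnostic illustration of the comparison logic only, the sketch below compounds the returns of any weight sequence against a simple equal-weight baseline. The function name, the synthetic return data, and the per-period rebalancing are assumptions made for the example, not FinRL's API.

```python
import numpy as np

def cumulative_return(weights_over_time, asset_returns):
    """Compound the per-period portfolio returns implied by a sequence of
    weight vectors (one row per period, one column per asset)."""
    period_returns = (weights_over_time * asset_returns).sum(axis=1)
    return float(np.prod(1.0 + period_returns) - 1.0)

# Equal-weight baseline over 10 assets and 4 periods of synthetic returns.
# Rebalanced every period for simplicity; a true buy-and-hold baseline
# would let the weights drift with prices.
T, n = 4, 10
rng = np.random.default_rng(0)
rets = rng.normal(0.001, 0.02, size=(T, n))
baseline = cumulative_return(np.full((T, n), 1.0 / n), rets)
```

Feeding the learned policy's weight sequence through the same function puts the DRL agent and the documented baseline strategies on an identical footing for backtest comparison.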
In essence, this thesis represents an in-depth investigation into the application of DRL in the dynamic and complex world of cryptocurrency trading, combining theoretical research with practical applications and risk management strategies.

References:

[1] Kumlungmak, K. and Vateekul, P., 2023. Multi-agent deep reinforcement learning with progressive negative reward for cryptocurrency trading. IEEE Access.
[2] Schnaubelt, M., 2022. Deep reinforcement learning for the optimal placement of cryptocurrency limit orders. European Journal of Operational Research, 296(3), pp.993-1006.
[3] Avramelou, L., Nousi, P., Passalis, N. and Tefas, A., 2024. Deep reinforcement learning for financial trading using multi-modal features. Expert Systems with Applications, 238, p.121849.
[4] Kochliaridis, V., Kouloumpris, E. and Vlahavas, I., 2022, June. TraderNet-CR: cryptocurrency trading with deep reinforcement learning. In IFIP International Conference on Artificial Intelligence Applications and Innovations (pp. 304-315). Cham: Springer International Publishing.
[5] Majidi, N., Shamsi, M. and Marvasti, F., 2024. Algorithmic trading using continuous action space deep reinforcement learning. Expert Systems with Applications, 235, p.121245. 
[6] Sun, S., Wang, R. and An, B., 2023. Reinforcement learning for quantitative trading. ACM Transactions on Intelligent Systems and Technology, 14(3), pp.1-29.
[7] Liu, X.Y., Yang, H., Gao, J. and Wang, C.D., 2021, November. FinRL: Deep reinforcement learning framework to automate trading in quantitative finance. In Proceedings of the Second ACM International Conference on AI in Finance (pp. 1-9).
[8] FinRL documentation: https://finrl.readthedocs.io/en/latest/