Forecasting Stock Price Movements and Stock Trading Automation Using Deep Learning and Reinforcement Learning
Abstract
Artificial intelligence algorithms and big-data analysis approaches are becoming increasingly significant in a variety of application fields, including stock market trading and automation. Few studies, however, have concentrated on predicting the future directional change in stock prices, particularly using powerful machine learning methods such as deep recurrent neural networks (DRNNs). For algorithmic trading and investment management, a forecasting system that accurately anticipates future changes in a stock price is critical. To address this, we propose a hybrid deep learning model combining a self-attention mechanism, dense layers, and a stacked bidirectional long short-term memory neural network.

Many scholars have used technical analysis for financial forecasting with great success. When computing the technical indicators, a time-horizon parameter must be specified as the input window size. In this work, the input window size is set equal to the prediction horizon (time step), because the stock price's behavior over a prediction horizon may, to some extent, mirror its behavior over the preceding period of the same length. A stacked bidirectional long short-term memory network is proposed to extract temporal features from sequential stock data, together with a self-attention mechanism that directs the network to place more weight on important temporal information. Following the model evaluation methods, experiments demonstrate that the proposed model's trading strategy outperforms the buy-and-hold strategy, and that the model also outperforms other state-of-the-art learning algorithms on the evaluation metrics.

The majority of trades are now entirely automated, and algorithmic stock trading has become a standard in today's financial market. In many difficult games, such as Go and chess, Reinforcement Learning (RL) agents have proven to be formidable opponents.
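The self-attention weighting described above can be illustrated with a minimal numpy sketch: each time step of a hidden-state sequence (as a BiLSTM would produce) receives a score, the scores are normalized with a softmax into attention weights, and the weighted sum forms a context vector. The function names, score vector, and shapes here are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def self_attention_pool(h, w):
    """Pool a hidden-state sequence h (T x d) into one context vector.

    w (d,) is a learned scoring vector (hypothetical here); each time
    step's score is its dot product with w, and the softmax of the
    scores gives how much weight that step contributes.
    """
    scores = h @ w            # (T,) unnormalized importance per time step
    alphas = softmax(scores)  # attention weights, sum to 1
    return alphas @ h, alphas # context vector (d,) and the weights

# toy example: 5 time steps of 3-dimensional hidden states
rng = np.random.default_rng(0)
h = rng.standard_normal((5, 3))
w = rng.standard_normal(3)
context, alphas = self_attention_pool(h, w)
print(context.shape, round(float(alphas.sum()), 6))
```

Steps whose hidden states align with the scoring vector dominate the context vector, which is the "place more weight on important temporal information" behavior the abstract describes.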
The historical prices and movements of the stock market can be seen as a complicated, chaotic, and imperfect environment in which we aim to maximize return while minimizing risk. Three deep reinforcement learning agents are trained, and an ensemble trading strategy is obtained, using policy-gradient-based algorithms: Deep Deterministic Policy Gradient (DDPG), Proximal Policy Optimization (PPO), and Soft Actor-Critic (SAC). To improve the robustness and accuracy of the representation of stock market conditions, a Long Short-Term Memory (LSTM) network is proposed to extract important and informative features from raw financial data and technical indicators. The ensemble approach combines the best elements of the three techniques, allowing it to adapt to changing market conditions with ease. A load-on-demand strategy is used for processing extremely large datasets, preventing excessive memory usage when training networks with a continuous action space. The algorithms are tested on the 30 Dow Jones equities, and the trading agent's performance is assessed against the Dow Jones Industrial Average index and the classical minimum-variance portfolio allocation approach. In terms of risk-adjusted return as measured by the Sortino ratio, the proposed deep ensemble method outperforms both baselines and a state-of-the-art deep reinforcement learning algorithm.
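The Sortino ratio used for evaluation penalizes only downside volatility, unlike the Sharpe ratio. A minimal sketch of the per-period (unannualized) computation, assuming a target return of zero and simple returns — the exact conventions in the thesis (target rate, annualization) may differ:

```python
import numpy as np

def sortino_ratio(returns, target=0.0):
    """Mean excess return over a target, divided by downside deviation.

    Only returns below the target contribute to the risk term, so
    upside volatility is not penalized.
    """
    returns = np.asarray(returns, dtype=float)
    excess = returns.mean() - target
    downside = np.minimum(returns - target, 0.0)    # clip gains to zero
    downside_dev = np.sqrt(np.mean(downside ** 2))  # RMS of shortfalls
    return excess / downside_dev

# toy per-period returns: mostly small gains with two small losses
r = [0.01, 0.02, -0.01, 0.015, -0.005, 0.02]
print(round(sortino_ratio(r), 3))  # prints 1.826
```

Because only the two negative returns enter the denominator, a strategy with frequent small gains and rare, shallow losses scores well, which is why the ratio is a natural yardstick for comparing trading agents against buy-and-hold style baselines.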