Summary: Master's thesis === National Chiao Tung University === Institute of Information Management === 107 (2018) === Portfolio selection and portfolio optimization are two basic and important issues in financial trading. An effective system is proposed, and two techniques have been applied to solve these two problems within it. First, for portfolio selection, the market neutral strategy is a common trading strategy: it holds long and short positions at the same time and profits from the spread between them. To find stocks with price-increasing and price-decreasing potential, we believe that deep ranking is well suited to capturing such relative relationships. A deep learning network is introduced to solve the stock ranking problem, and portfolios are built from the ranking results according to our trading strategy.
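The core idea of ranking-based selection can be illustrated with a minimal sketch (our simplification, not the thesis network): a model assigns each stock a score, and a pairwise hinge loss penalizes every pair of stocks whose scores disagree with the ordering of their future returns. The function name and toy numbers below are hypothetical.

```python
import numpy as np

def pairwise_hinge_loss(scores, future_returns, margin=1.0):
    """Sum hinge penalties over all stock pairs ordered by future return.

    For every pair where stock i outperforms stock j, the loss pushes
    score[i] to exceed score[j] by at least `margin`.
    """
    loss = 0.0
    for i in range(len(scores)):
        for j in range(len(scores)):
            if future_returns[i] > future_returns[j]:
                loss += max(0.0, margin - (scores[i] - scores[j]))
    return loss

# Toy check: scores that already agree with the return ordering incur no loss,
# while fully reversed scores are penalized on every ordered pair.
returns = np.array([0.03, 0.01, -0.02])
good_scores = np.array([3.0, 2.0, 1.0])
bad_scores = np.array([1.0, 2.0, 3.0])
print(pairwise_hinge_loss(good_scores, returns))  # 0.0
print(pairwise_hinge_loss(bad_scores, returns))   # 7.0
```

Minimizing such a loss over historical data trains the scorer to order stocks by expected return, which is exactly the relative relationship the market neutral strategy needs: the top of the ranking supplies long candidates, the bottom supplies shorts.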
Portfolio optimization is the process of finding a good set of portfolio weights. After the stocks are selected, how the portfolio weights are allocated among them also affects the performance of the portfolio. Reinforcement learning is well suited to this kind of problem: it trains a model based on rewards, which closely resembles investors' behavior of pursuing good portfolios. A reinforcement learning framework is implemented to train an agent that decides the weight of each stock in the portfolio and balances the portfolio's risks and rewards.
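The reward-driven weight allocation can be sketched in a deliberately simplified setting (our assumption, not the thesis implementation): a linear policy maps stock features to softmax portfolio weights, the reward is the realized portfolio return, and the policy parameters are updated by gradient ascent on that reward. All feature and return data below are synthetic, and `true_coef` is a hypothetical hidden return-generating vector.

```python
import numpy as np

rng = np.random.default_rng(42)
n_stocks, n_features = 5, 3
true_coef = np.array([0.5, -0.2, 0.1])   # hidden return drivers (synthetic)
theta = np.zeros(n_features)             # policy parameters, start neutral

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for step in range(200):                                  # one-step episodes
    X = rng.normal(size=(n_stocks, n_features))          # stock features
    returns = X @ true_coef + 0.01 * rng.normal(size=n_stocks)
    weights = softmax(X @ theta)                         # weights sum to 1
    reward = weights @ returns                           # portfolio return
    # Exact gradient of the reward for these deterministic softmax weights:
    #   d reward / d theta = X^T (diag(w) - w w^T) returns
    grad = X.T @ ((np.diag(weights) - np.outer(weights, weights)) @ returns)
    theta += 0.1 * grad                                  # ascend the reward

print(theta)  # tilts toward features that predict high returns
```

The learned `theta` overweights stocks whose features predict high returns, which is the same reward-seeking behavior the agent in the proposed framework exhibits, albeit with a far richer state and a risk-adjusted reward.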
In our experiments, a dataset of S&P 500 index component stocks from 2015, 2016, and 2017 is used to evaluate the proposed system. In our framework, the ranking model predicts the ranking of nearly 500 stocks based on their future returns. Following the market neutral strategy, we take the top-ranked and bottom-ranked groups of stocks, use the reinforcement learning agent to determine the portfolio weight of each stock, and finally put the portfolios into the market to measure their performance. The experimental results show that the proposed system improves the Sharpe ratio by about 4% to 12% compared with the performance of the market, and outperforms the other approaches.
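For reference, the evaluation metric can be computed as follows: a minimal sketch of the annualized Sharpe ratio from a series of daily portfolio returns, assuming 252 trading days per year and a risk-free rate of zero (the thesis may use different conventions). The sample return series is made up for illustration.

```python
import numpy as np

def sharpe_ratio(daily_returns, periods_per_year=252):
    """Annualized Sharpe ratio: mean excess return over its volatility.

    Assumes a zero risk-free rate; uses the sample standard deviation.
    """
    daily_returns = np.asarray(daily_returns, dtype=float)
    return (np.sqrt(periods_per_year)
            * daily_returns.mean() / daily_returns.std(ddof=1))

# Illustrative daily returns of a hypothetical portfolio.
returns = np.array([0.001, -0.002, 0.003, 0.000, 0.002])
print(sharpe_ratio(returns))
```

Because the metric divides mean return by its volatility, a strategy can raise its Sharpe ratio either by earning more or by taking less risk, which is why it is a natural target for an agent that must balance the two.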