  JOURNAL DETAIL



Risk-Sensitive Portfolio Management by Using C51 Algorithm


Paper Type 
Contributed Paper
Title 
Risk-Sensitive Portfolio Management by Using C51 Algorithm
Author 
Thammasorn Harnpadungkij, Warasinee Chaisangmongkon and Phond Phunchongharn
Email 
phond.p@mail.kmutt.ac.th
Abstract

     Financial trading has become one of the most popular applications of reinforcement learning in recent years. An important challenge is that investment is a multi-objective problem: professional investors do not act solely on expected profit but also carefully consider the potential risk of a given investment. To handle this challenge, previous studies have explored various risk-sensitive rewards, for example, the Sharpe ratio computed over a fixed window of previous returns. This work proposes a new approach to the profit-to-risk tradeoff by applying distributional reinforcement learning to build a risk-aware policy instead of a simple risk-based reward function. Our new policy, termed C51-Sharpe, selects the action with the highest Sharpe ratio computed from the probability mass function of the return distribution. This produces a significantly higher Sharpe ratio and lower maximum drawdown, without sacrificing profit, compared to the C51 algorithm with a purely profit-based policy. Moreover, it outperforms other benchmarks, such as a Deep Q-Network (DQN) with a Sharpe-ratio reward function. Besides the policy, we also studied the effect of using double networks and the choice of exploration strategy with our approach to identify the optimal training configuration. We find that the epsilon-greedy policy is the most suitable exploration strategy for C51-Sharpe and that the use of a double network has no significant impact on performance. Our study provides statistical evidence of the efficiency of a risk-sensitive policy implemented with distributional reinforcement learning algorithms and an optimized training process.
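The C51-Sharpe action rule described in the abstract can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: the function name, the array shapes, and the small variance guard are hypothetical. C51 outputs, for each action, a probability mass over a fixed set of return atoms; the policy computes each action's Sharpe ratio (mean over standard deviation) from that categorical distribution and picks the best action.

```python
import numpy as np

def c51_sharpe_action(probs, atoms):
    """Select the action with the highest Sharpe ratio computed from
    each action's categorical return distribution (C51-style output).

    probs : (n_actions, n_atoms) probability mass over return atoms
    atoms : (n_atoms,) fixed support of the return distribution
    """
    mean = probs @ atoms                     # expected return per action
    var = probs @ (atoms ** 2) - mean ** 2   # Var[Z] = E[Z^2] - (E[Z])^2
    std = np.sqrt(np.maximum(var, 1e-12))    # guard against zero variance
    sharpe = mean / std
    return int(np.argmax(sharpe))

# Example: action 0 has higher expected return but also higher spread;
# the Sharpe-based rule trades the two off rather than taking raw profit.
probs = np.array([[0.1, 0.1, 0.8],   # action 0: mean 0.7, std ~0.64
                  [0.0, 0.5, 0.5]])  # action 1: mean 0.5, std 0.5
atoms = np.array([-1.0, 0.0, 1.0])
print(c51_sharpe_action(probs, atoms))
```

In contrast, a purely profit-based C51 policy would replace `sharpe` with `mean` in the `argmax`; the paper's comparison is between these two selection rules on the same learned distribution.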

Start & End Page 
1458 - 1482
Received Date 
2022-03-04
Revised Date 
2022-06-27
Accepted Date 
2022-06-27
Keyword 
algorithmic trading, reinforcement learning, deep neural network
Volume 
Vol.49 No.5 (September 2022)
DOI 
https://doi.org/10.12982/CMJS.2022.094

Chiang Mai Journal of Science

Faculty of Science, Chiang Mai University
239 Huaykaew Road, Tumbol Suthep, Amphur Muang, Chiang Mai 50200 THAILAND