PST: Improving Quantitative Trading via Program Sketch-based Tuning

9 Oct 2023 · Zhiming Li, Junzhe Jiang, Yushi Cao, Aixin Cui, Bozhi Wu, Bo Li, Yang Liu, Dongning Sun

Deep reinforcement learning (DRL) has revolutionized quantitative finance by achieving strong performance without requiring significant human expert knowledge. Despite these achievements, we observe that current state-of-the-art DRL models are still ineffective at identifying market trends, causing them to miss good trading opportunities or suffer large drawdowns during market crashes. To tackle this limitation, a natural idea is to embed human expert knowledge about market trends. However, such knowledge is abstract and hard to quantify. In this paper, we propose a universal neuro-symbolic tuning framework, called program sketch-based tuning (PST). Specifically, PST first introduces a novel symbolic program sketch to embed abstract human expert knowledge of market trends. We then use the program sketch to tune a trained DRL policy according to the prevailing market trend. Finally, to optimize this neuro-symbolic framework, we propose a novel hybrid optimization method. Extensive evaluations on two popular quantitative trading tasks demonstrate that PST can significantly enhance the performance of previous state-of-the-art DRL strategies while remaining extremely lightweight.
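To make the idea concrete, below is a minimal illustrative sketch of how a symbolic program sketch could adjust a trained DRL policy's output based on a detected market trend. The trend detector (a moving-average crossover), the function names (`detect_trend`, `sketch_tuned_action`), and the tunable parameters `theta` are all hypothetical stand-ins for illustration only, not the paper's actual program sketch or its hybrid optimization procedure.

```python
import numpy as np

def detect_trend(prices, short_win=10, long_win=50):
    """Classify the recent market trend from a price series.

    A simple moving-average crossover stands in for the expert-defined
    trend indicator; returns 'up', 'down', or 'flat'.
    """
    short_ma = np.mean(prices[-short_win:])
    long_ma = np.mean(prices[-long_win:])
    if short_ma > long_ma * 1.01:
        return "up"
    if short_ma < long_ma * 0.99:
        return "down"
    return "flat"

def sketch_tuned_action(policy_action, prices, theta):
    """Program sketch with learnable 'holes' (theta) that rescales the
    DRL policy's proposed position according to the detected trend.
    """
    trend = detect_trend(prices)
    if trend == "up":
        return policy_action * theta["boost"]   # lean into uptrends
    if trend == "down":
        return policy_action * theta["dampen"]  # cut exposure in downtrends
    return policy_action                        # leave the policy untouched

# Example: tune a trained policy's output at one time step.
theta = {"boost": 1.2, "dampen": 0.3}             # holes filled by the optimizer
prices = np.cumsum(np.random.randn(200)) + 100.0  # synthetic price history
policy_action = 0.5                               # e.g., fraction of capital held long
print(sketch_tuned_action(policy_action, prices, theta))
```

In this reading, the DRL policy remains frozen while only the sketch's small set of parameters is tuned, which is consistent with the paper's claim that PST is extremely lightweight.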
