Search Results for author: Li Xia

Found 15 papers, 2 papers with code

Fundamental Analysis based Neural Network for Stock Movement Prediction

no code implementations • CCL 2022 • Zheng Yangjia, Li Xia, Ma Junteng, Chen Yuan

In practice, the news and prices of a stock on a given day are normally influenced by different days with different weights, and news and prices can also influence each other.
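
One generic way to realize "different days with different weights" is a day-level attention mechanism; the formulation below is an illustrative sketch with assumed symbols (h_i denoting the representation of day i), not the paper's exact model:

    \alpha_{t,i} = \frac{\exp\big(\mathrm{score}(h_t, h_i)\big)}{\sum_{j}\exp\big(\mathrm{score}(h_t, h_j)\big)}, \qquad c_t = \sum_{i} \alpha_{t,i}\, h_i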

Global Algorithms for Mean-Variance Optimization in Markov Decision Processes

no code implementations • 27 Feb 2023 • Li Xia, Shuai Ma

In this paper, we propose a new approach to find the globally optimal policy for combined metrics of steady-state mean and variance in an infinite-horizon undiscounted MDP.

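For orientation, the steady-state mean and variance of rewards under a policy π in an undiscounted MDP are usually defined as long-run averages, and the combined metric weighs them with a trade-off coefficient β; the notation below is a standard formulation, not necessarily the paper's exact one:

    \mu_\pi = \lim_{T\to\infty}\frac{1}{T}\,\mathbb{E}_\pi\Big[\sum_{t=0}^{T-1} r_t\Big], \qquad
    \sigma^2_\pi = \lim_{T\to\infty}\frac{1}{T}\,\mathbb{E}_\pi\Big[\sum_{t=0}^{T-1}\big(r_t-\mu_\pi\big)^2\Big], \qquad
    \eta_\pi = \mu_\pi - \beta\,\sigma^2_\pi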

Optimal Systemic Risk Bailout: A PGO Approach Based on Neural Network

no code implementations • 10 Dec 2022 • Shuhua Xiao, Jiali Ma, Li Xia, Shushang Zhu

In this paper, we regard the optimal bailout (capital injection) problem as a black-box optimization problem, where the black box is characterized as a fixed-point system that follows the E-N framework for measuring the systemic risk of the financial system.

Management
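
The fixed-point system referenced here is, in the usual reading, the Eisenberg-Noe clearing-vector equation; a standard statement, with an assumed bailout vector b added to external assets, is:

    p^{*} = \min\big(\bar{p},\; \Pi^{\top} p^{*} + e + b\big)

where \bar{p} collects total nominal liabilities, \Pi is the relative-liability matrix, and e the external asset values. A natural formulation (assumed here, not taken from the paper) is to choose b under a budget constraint so as to minimize the resulting systemic losses.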

Risk-Sensitive Markov Decision Processes with Long-Run CVaR Criterion

no code implementations • 17 Oct 2022 • Li Xia, Peter W. Glynn

CVaR (Conditional Value at Risk) is a risk metric widely used in finance.

Management
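
For reference, CVaR at level α of a loss variable X admits the standard Rockafellar-Uryasev representation (the conditional-expectation form holds for continuous loss distributions):

    \mathrm{CVaR}_{\alpha}(X) = \min_{c\in\mathbb{R}}\Big\{ c + \tfrac{1}{1-\alpha}\,\mathbb{E}\big[(X-c)^{+}\big] \Big\} = \mathbb{E}\big[\,X \mid X \ge \mathrm{VaR}_{\alpha}(X)\,\big]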

Distributionally Robust Offline Reinforcement Learning with Linear Function Approximation

no code implementations • 14 Sep 2022 • Xiaoteng Ma, Zhipeng Liang, Jose Blanchet, Mingwen Liu, Li Xia, Jiheng Zhang, Qianchuan Zhao, Zhengyuan Zhou

Among the reasons hindering reinforcement learning (RL) applications to real-world problems, two factors are critical: limited data and the mismatch between the testing environment (real environment in which the policy is deployed) and the training environment (e.g., a simulator).

Offline RL reinforcement-learning +1
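
A common way to encode this train/test mismatch is a robust Bellman backup that plans against the worst transition model in an uncertainty set; the generic form below uses assumed notation (\mathcal{P}(s,a) for the ambiguity set) and is offered as context rather than the paper's exact operator:

    (\mathcal{T}^{\mathrm{rob}} V)(s) = \max_{a}\Big\{ r(s,a) + \gamma \inf_{P \in \mathcal{P}(s,a)} \mathbb{E}_{s' \sim P}\big[ V(s') \big] \Big\}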

Mean-Semivariance Policy Optimization via Risk-Averse Reinforcement Learning

no code implementations • 15 Jun 2022 • Xiaoteng Ma, Shuai Ma, Li Xia, Qianchuan Zhao

Keeping risk under control is often more crucial than maximizing expected rewards in real-world decision-making situations, such as finance, robotics, autonomous driving, etc.

Autonomous Driving Continuous Control +3
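
Semivariance penalizes only returns that fall below their mean; a standard definition, with symbols assumed here rather than taken from the paper, is:

    \mathrm{SV}(R) = \mathbb{E}\Big[\big(\min\{R - \mathbb{E}[R],\,0\}\big)^{2}\Big], \qquad \text{with the mean-semivariance objective } \ \mathbb{E}[R] - \lambda\,\mathrm{SV}(R)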

A unified algorithm framework for mean-variance optimization in discounted Markov decision processes

no code implementations • 15 Jan 2022 • Shuai Ma, Xiaoteng Ma, Li Xia

To deal with this unorthodox problem, we introduce a pseudo mean to transform the otherwise intractable MDP into a standard MDP with a redefined reward function, and we derive a discounted mean-variance performance difference formula.

Bilevel Optimization Management
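
One plausible reading of the pseudo-mean trick (a sketch, not the paper's exact construction): fixing a scalar y makes the quadratic variance term an ordinary expectation, so the inner problem becomes a standard discounted MDP with a redefined reward such as

    \tilde{r}(s,a) = r(s,a) - \lambda\,\big(r(s,a) - y\big)^{2}

with y then updated in an outer loop, which is consistent with the Bilevel Optimization tag above.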

Average-Reward Reinforcement Learning with Trust Region Methods

no code implementations • 7 Jun 2021 • Xiaoteng Ma, Xiaohang Tang, Li Xia, Jun Yang, Qianchuan Zhao

Our work provides a unified trust-region framework covering both the discounted and average criteria, which may extend the trust-region approach to reinforcement learning beyond discounted objectives.

Continuous Control reinforcement-learning +1
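
For context, the average-reward analogue of the trust-region surrogate rests on the performance difference identity below, valid for ergodic MDPs; the notation is standard and assumed here (d_{\pi'} is the stationary state distribution, A_\pi the average-reward advantage built from the bias function):

    \eta(\pi') - \eta(\pi) = \mathbb{E}_{s \sim d_{\pi'}}\,\mathbb{E}_{a \sim \pi'}\big[ A_{\pi}(s,a) \big]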

Risk-Sensitive Markov Decision Processes with Combined Metrics of Mean and Variance

no code implementations • 9 Aug 2020 • Li Xia

This paper investigates the optimization problem of an infinite-horizon discrete-time Markov decision process (MDP) with a long-run average metric that considers both the mean and the variance of rewards.

Fairness

SOAC: The Soft Option Actor-Critic Architecture

no code implementations • 25 Jun 2020 • Chenghao Li, Xiaoteng Ma, Chongjie Zhang, Jun Yang, Li Xia, Qianchuan Zhao

In these tasks, our approach learns a diverse set of options, each of which has a strongly coherent state-action space.

Transfer Learning

Wasserstein Distance guided Adversarial Imitation Learning with Reward Shape Exploration

1 code implementation • 5 Jun 2020 • Ming Zhang, Yawei Wang, Xiaoteng Ma, Li Xia, Jun Yang, Zhiheng Li, Xiu Li

Generative adversarial imitation learning (GAIL) provides an adversarial learning framework for imitating an expert policy from demonstrations in high-dimensional continuous tasks.

Continuous Control Imitation Learning
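
A minimal PyTorch-style sketch of the adversarial piece, assuming a Wasserstein-style critic over state-action pairs; the network sizes, reward shape, and function names below are illustrative and are not taken from the paper's repository:

    import torch
    import torch.nn as nn

    class Critic(nn.Module):
        # Scores (state, action) pairs; higher means "more expert-like".
        def __init__(self, obs_dim, act_dim, hidden=64):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(obs_dim + act_dim, hidden), nn.Tanh(),
                nn.Linear(hidden, 1),
            )

        def forward(self, obs, act):
            return self.net(torch.cat([obs, act], dim=-1))

    def critic_loss(critic, expert_obs, expert_act, agent_obs, agent_act):
        # Wasserstein-style objective: push expert scores up, agent scores down.
        # In practice a Lipschitz constraint (e.g. a gradient penalty) is also
        # needed; it is omitted here for brevity.
        return -(critic(expert_obs, expert_act).mean()
                 - critic(agent_obs, agent_act).mean())

    def imitation_reward(critic, obs, act):
        # One possible reward shape fed to the policy optimizer; the paper
        # explores several reward shapes, which this sketch does not enumerate.
        with torch.no_grad():
            return critic(obs, act).squeeze(-1)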

DSAC: Distributional Soft Actor Critic for Risk-Sensitive Reinforcement Learning

no code implementations • 30 Apr 2020 • Xiaoteng Ma, Li Xia, Zhengyuan Zhou, Jun Yang, Qianchuan Zhao

In this paper, we present a new reinforcement learning (RL) algorithm called Distributional Soft Actor Critic (DSAC), which exploits the distributional information of accumulated rewards to achieve better performance.

Continuous Control reinforcement-learning +1
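
Distributional critics of this kind are commonly trained with a quantile-regression Huber loss on the distributional Bellman target; the form below is the standard quantile-regression loss, offered as context rather than the paper's exact objective (Z^{(i)}_\theta denotes the i-th quantile estimate of the return distribution):

    \delta_{ij} = r + \gamma\, Z^{(j)}_{\theta'}(s', a') - Z^{(i)}_{\theta}(s, a), \qquad
    \rho^{\kappa}_{\tau}(u) = \big|\tau - \mathbf{1}\{u<0\}\big|\,\frac{\ell_{\kappa}(u)}{\kappa}, \qquad
    \mathcal{L}(\theta) = \frac{1}{N'}\sum_{i}\sum_{j} \rho^{\kappa}_{\tau_i}\big(\delta_{ij}\big)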
