Search Results for author: Ruiyang Xu

Found 10 papers, 2 papers with code

Optimizing Long-term Value for Auction-Based Recommender Systems via On-Policy Reinforcement Learning

no code implementations • 23 May 2023 • Ruiyang Xu, Jalaj Bhandari, Dmytro Korenkevych, Fan Liu, Yuchen He, Alex Nikulkov, Zheqing Zhu

Auction-based recommender systems are prevalent in online advertising platforms, but they are typically optimized to allocate recommendation slots based on immediate expected return metrics, neglecting the downstream effects of recommendations on user behavior.
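To make the contrast concrete, here is a minimal, hypothetical sketch of allocating a slot by immediate expected return versus a discounted long-term value estimate. All names and numbers are illustrative, not the paper's system:

```python
def immediate_allocation(candidates):
    # Pick the item with the highest immediate expected return (e.g., bid * CTR).
    return max(candidates, key=lambda c: c["immediate_return"])

def long_term_allocation(candidates, gamma=0.9):
    # Score each item by immediate return plus a discounted estimate of
    # downstream user value (e.g., retention-driven future returns).
    def score(c):
        return c["immediate_return"] + gamma * c["future_value_estimate"]
    return max(candidates, key=score)

candidates = [
    {"id": "ad_a", "immediate_return": 1.0, "future_value_estimate": 0.1},
    {"id": "ad_b", "immediate_return": 0.8, "future_value_estimate": 0.9},
]

print(immediate_allocation(candidates)["id"])  # ad_a: highest immediate return
print(long_term_allocation(candidates)["id"])  # ad_b: 0.8 + 0.9*0.9 = 1.61 beats 1.09
```

The two policies disagree on purpose: the myopic allocator picks the item with the larger immediate return, while the long-term allocator prefers the item whose discounted downstream value outweighs that gap.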

Recommendation Systems • reinforcement-learning

AMOM: Adaptive Masking over Masking for Conditional Masked Language Model

1 code implementation • 13 Mar 2023 • Yisheng Xiao, Ruiyang Xu, Lijun Wu, Juntao Li, Tao Qin, Tie-Yan Liu, Min Zhang

Experiments on 3 different tasks (neural machine translation, summarization, and code generation) with 15 datasets in total confirm that our proposed simple method achieves significant performance improvement over the strong CMLM model.

Code Generation • Language Modelling +2

Online Sparse Streaming Feature Selection Using Adapted Classification

no code implementations • 25 Feb 2023 • Ruiyang Xu, Di Wu, Xin Luo

Traditional feature selection methods need to know the feature space before learning, so online streaming feature selection (OSFS) has been proposed to process streaming features on the fly.
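As a rough illustration of processing streaming features on the fly, the sketch below keeps an arriving feature only if its relevance to the label clears a threshold. The relevance measure here is plain Pearson correlation, an assumption for illustration, not the paper's adapted-classification criterion:

```python
def pearson(xs, ys):
    # Plain Pearson correlation coefficient between two equal-length sequences.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def osfs_stream(feature_stream, labels, threshold=0.5):
    # Features arrive one at a time; each is accepted or discarded
    # immediately, without seeing the full feature space in advance.
    selected = []
    for name, values in feature_stream:
        if abs(pearson(values, labels)) >= threshold:
            selected.append(name)
    return selected

labels = [0, 0, 1, 1]
stream = [
    ("f1", [1, 2, 9, 10]),  # strongly correlated with the label
    ("f2", [5, 1, 4, 2]),   # uncorrelated noise
]
print(osfs_stream(stream, labels))  # ['f1']
```

The key property the sketch preserves is the online one: a decision is made per feature as it streams in, rather than after the whole feature space is known.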

Classification • Feature Correlation +1

A Validation Tool for Designing Reinforcement Learning Environments

no code implementations • 10 Dec 2021 • Ruiyang Xu, Zhengxing Chen

Reinforcement learning (RL) has gained increasing attention in academia and the tech industry, with launches of a variety of impactful applications and products.

Offline RL • reinforcement-learning +2

Semantic Parsing Natural Language into Relational Algebra

no code implementations • 25 Jun 2021 • Ruiyang Xu, Ayush Singh

Natural language interfaces to databases (NLIDB) have been researched extensively over the past decades.

Semantic Parsing

Dual Monte Carlo Tree Search

no code implementations • 21 Mar 2021 • Prashank Kadam, Ruiyang Xu, Karl Lieberherr

This technique is applicable to any MCTS-based algorithm to reduce the number of updates to the tree.
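For context, the per-simulation tree update that such a technique would economize looks roughly like the vanilla MCTS backup below. This is an illustrative sketch of standard MCTS bookkeeping, not the paper's dual-tree algorithm:

```python
import math

class Node:
    def __init__(self):
        self.visits = 0
        self.value_sum = 0.0
        self.children = {}

    def ucb1(self, parent_visits, c=1.4):
        # Standard UCB1 selection score; unvisited nodes are tried first.
        if self.visits == 0:
            return float("inf")
        exploit = self.value_sum / self.visits
        explore = c * math.sqrt(math.log(parent_visits) / self.visits)
        return exploit + explore

def backpropagate(path, reward):
    # One statistics update per node on the path, per simulation --
    # the per-simulation cost that update-reduction techniques target.
    for node in path:
        node.visits += 1
        node.value_sum += reward

root = Node()
child = Node()
root.children["a"] = child
backpropagate([root, child], 1.0)
print(root.visits, child.visits)  # 1 1
```

Since every simulation touches every node on its selected path, trimming the number of these updates directly reduces the cost of each MCTS iteration.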

Solving QSAT problems with neural MCTS

no code implementations • 17 Jan 2021 • Ruiyang Xu, Karl Lieberherr

After training, an off-the-shelf QSAT solver is used to evaluate the performance of the algorithm.

Board Games

First-Order Problem Solving through Neural MCTS based Reinforcement Learning

no code implementations • 11 Jan 2021 • Ruiyang Xu, Prashank Kadam, Karl Lieberherr

We propose a general framework, Persephone, to map the FOL description of a combinatorial problem to a semantic game so that it can be solved through a neural MCTS based reinforcement learning algorithm.
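As background, a semantic game in the Hintikka sense pits a Verifier against a Falsifier over the formula's quantifiers: the Falsifier moves on universal quantifiers, the Verifier on existential ones, and the formula is true iff the Verifier has a winning strategy. The toy sketch below plays such a game for the formula "for all x there exists y with x + y = 4" over the domain {0, 1, 2, 3, 4}; the domain and predicate are illustrative assumptions, not Persephone's API:

```python
DOMAIN = range(5)
TARGET = 4

def predicate(x, y):
    # Atomic formula at the leaves of the game tree.
    return x + y == TARGET

def verifier_wins(x):
    # Existential quantifier: the Verifier searches for a witness y
    # answering the Falsifier's choice of x.
    return any(predicate(x, y) for y in DOMAIN)

def formula_true():
    # Universal quantifier: the formula holds iff the Verifier wins
    # against every possible Falsifier move.
    return all(verifier_wins(x) for x in DOMAIN)

print(formula_true())  # True: y = 4 - x is always in the domain
```

In the brute-force sketch the quantifiers are resolved by exhaustive search; the framework's point is that the same game can instead be played by a learned neural MCTS agent when the search space is too large to enumerate.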

reinforcement-learning • Reinforcement Learning (RL)

Learning Self-Game-Play Agents for Combinatorial Optimization Problems

no code implementations • 8 Mar 2019 • Ruiyang Xu, Karl Lieberherr

Recent progress in reinforcement learning (RL) using self-game-play has shown remarkable performance on several board games (e.g., Chess and Go) as well as video games (e.g., Atari games and Dota2).

Atari Games • Board Games +2
