no code implementations • 17 Apr 2023 • Xiaowen Shi, Ze Wang, Yuanying Cai, Xiaoxu Wu, Fan Yang, Guogang Liao, Yongkang Wang, Xingxing Wang, Dong Wang
Two types of data are employed to train a reinforcement learning (RL) model for position allocation: strategy data and random data.
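The snippet distinguishes strategy (logged-policy) data from random (exploration) data. A minimal sketch of mixing the two sources into one training batch, assuming a fixed mixing ratio and these helper names — the paper does not specify the scheme here:

```python
import random

def sample_training_batch(strategy_data, random_data, batch_size, random_ratio=0.2):
    """Mix logged-policy ("strategy") transitions with uniformly explored
    ("random") transitions into one RL training batch.

    The 80/20 split and function name are illustrative assumptions, not
    taken from the paper.
    """
    n_random = int(batch_size * random_ratio)
    n_strategy = batch_size - n_random
    batch = (random.sample(strategy_data, n_strategy)
             + random.sample(random_data, n_random))
    random.shuffle(batch)  # avoid source-ordered batches
    return batch
```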
1 code implementation • 6 Feb 2023 • Xiaowen Shi, Fan Yang, Ze Wang, Xiaoxu Wu, Muzhi Guan, Guogang Liao, Yongkang Wang, Xingxing Wang, Dong Wang
We then design a novel omnidirectional attention mechanism in OCPM to capture the context information in the permutation.
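The exact parameterization of the omnidirectional attention is not given in this snippet; as a rough stand-in, an unmasked (bidirectional) self-attention pass conditions each item's representation on every other item in the permutation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def permutation_self_attention(item_embs):
    """Unmasked self-attention over an ordered item list, so every item
    attends to all positions in the permutation. A plain, untrained sketch
    standing in for OCPM's omnidirectional attention, whose details the
    snippet does not specify.

    item_embs: (n_items, d) array of item embeddings in display order.
    """
    d = item_embs.shape[1]
    q = k = v = item_embs           # sketch: identity projections
    scores = q @ k.T / np.sqrt(d)   # (n, n) pairwise interaction scores
    weights = softmax(scores, axis=-1)
    return weights @ v              # context-aware item representations
```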
no code implementations • 20 May 2022 • Guogang Liao, Xuejian Li, Ze Wang, Fan Yang, Muzhi Guan, Bingqi Zhu, Yongkang Wang, Xingxing Wang, Dong Wang
Although VCG-based multi-slot auctions (e.g., VCG, WVCG) make it theoretically possible to model global externalities (e.g., the order and positions of ads), they fail to balance revenue and social welfare efficiently.
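For reference, the classic position-auction form of VCG allocates slots in bid order and charges each winner the externality it imposes on the others. The sketch below uses only slot-level click-through rates, which is precisely why global externalities like ad order and mutual influence are hard to capture in this form:

```python
def vcg_multislot(bids, ctrs):
    """Textbook VCG for a position auction (a sketch, not the paper's
    mechanism). bids: per-click bids; ctrs: slot click-through rates in
    descending order. Each winner pays the welfare others lose because
    of its presence.
    """
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    k = len(ctrs)
    winners = order[:k]
    payments = {}
    for rank, i in enumerate(winners):
        # Welfare others would get from slots rank..k-1 if i were absent
        # (each lower bidder moves up one slot).
        lost = sum(bids[order[j + 1]] * ctrs[j]
                   for j in range(rank, k) if j + 1 < len(order))
        # Welfare others actually get from slots rank+1..k-1 with i present.
        kept = sum(bids[order[j]] * ctrs[j] for j in range(rank + 1, k))
        payments[i] = lost - kept
    return winners, payments
```

With bids (10, 8, 5) and CTRs (1.0, 0.5), the top bidder pays 8·(1.0−0.5) + 5·0.5 = 6.5 and the second pays 5·0.5 = 2.5, the standard VCG position-auction payments.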
no code implementations • 2 Apr 2022 • Ze Wang, Guogang Liao, Xiaowen Shi, Xiaoxu Wu, Chuheng Zhang, Bingqi Zhu, Yongkang Wang, Xingxing Wang, Dong Wang
Ads allocation, which involves allocating ads and organic items to the limited slots in a feed with the purpose of maximizing platform revenue, has become a research hotspot.
no code implementations • 2 Apr 2022 • Ze Wang, Guogang Liao, Xiaowen Shi, Xiaoxu Wu, Chuheng Zhang, Yongkang Wang, Xingxing Wang, Dong Wang
With the recent prevalence of reinforcement learning (RL), there has been tremendous interest in utilizing RL for ads allocation in recommendation platforms (e.g., e-commerce and news feed sites).
no code implementations • 1 Apr 2022 • Guogang Liao, Xiaowen Shi, Ze Wang, Xiaoxu Wu, Chuheng Zhang, Yongkang Wang, Xingxing Wang, Dong Wang
A mixed list of ads and organic items is usually displayed in the feed, and how to allocate the limited slots to maximize overall revenue is a key problem.
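As a simplified baseline for the allocation problem described here (not the paper's learned policy), one can greedily merge two ranked queues — ads scored by expected revenue and organic items scored by expected engagement — into the slot sequence, subject to an ad-count cap; the scores and the cap are illustrative assumptions:

```python
def allocate_feed(ads, organics, n_slots, max_ads):
    """Greedy merge of two score-sorted queues into a feed of n_slots.

    ads / organics: lists of (item_id, score), sorted by score descending.
    max_ads caps how many ad slots may be used. A hypothetical baseline
    sketch, not the method from the paper.
    """
    feed, ai, oi, n_ads = [], 0, 0, 0
    for _ in range(n_slots):
        take_ad = (ai < len(ads) and n_ads < max_ads and
                   (oi >= len(organics) or ads[ai][1] > organics[oi][1]))
        if take_ad:
            feed.append(ads[ai][0]); ai += 1; n_ads += 1
        elif oi < len(organics):
            feed.append(organics[oi][0]); oi += 1
        elif ai < len(ads) and n_ads < max_ads:
            feed.append(ads[ai][0]); ai += 1; n_ads += 1
        else:
            break  # both queues exhausted
    return feed
```

A greedy merge like this ignores interactions between adjacent items, which is exactly the gap the RL formulations above aim to close.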
1 code implementation • 9 Sep 2021 • Guogang Liao, Ze Wang, Xiaoxu Wu, Xiaowen Shi, Chuheng Zhang, Yongkang Wang, Xingxing Wang, Dong Wang
Our model results in higher revenue and better user experience than state-of-the-art baselines in offline experiments.