Search Results for author: Lanqing Li

Found 20 papers, 8 papers with code

An Autonomous Large Language Model Agent for Chemical Literature Data Mining

no code implementations • 20 Feb 2024 • Kexin Chen, Hanqun Cao, Junyou Li, Yuyang Du, Menghao Guo, Xin Zeng, Lanqing Li, Jiezhong Qiu, Pheng-Ann Heng, Guangyong Chen

The proposed approach marks a significant advancement in automating chemical literature extraction and demonstrates the potential for AI to revolutionize data management and utilization in chemistry.

Drug Discovery • Language Modelling • +2

Towards an Information Theoretic Framework of Context-Based Offline Meta-Reinforcement Learning

no code implementations • 4 Feb 2024 • Lanqing Li, Hai Zhang, Xinyu Zhang, Shatong Zhu, Junqiao Zhao, Pheng-Ann Heng

A marriage of offline RL and meta-RL, offline meta-reinforcement learning (OMRL) has shown great promise in enabling RL agents to multi-task and adapt quickly while acquiring knowledge safely.

Meta Reinforcement Learning • Offline RL

MolKD: Distilling Cross-Modal Knowledge in Chemical Reactions for Molecular Property Prediction

no code implementations • 3 May 2023 • Liang Zeng, Lanqing Li, Jian Li

This paper proposes to incorporate chemical domain knowledge, specifically knowledge of chemical reactions, into the learning of effective molecular representations.

Drug Discovery • Molecular Property Prediction • +1

Reweighted Mixup for Subpopulation Shift

no code implementations • 9 Apr 2023 • Zongbo Han, Zhipeng Liang, Fan Yang, Liu Liu, Lanqing Li, Yatao Bian, Peilin Zhao, Qinghua Hu, Bingzhe Wu, Changqing Zhang, Jianhua Yao

Subpopulation shift exists widely in many real-world applications; it refers to settings where the training and test distributions contain the same subpopulation groups but in different proportions.

Fairness • Generalization Bounds

Deploying Offline Reinforcement Learning with Human Feedback

no code implementations • 13 Mar 2023 • Ziniu Li, Ke Xu, Liu Liu, Lanqing Li, Deheng Ye, Peilin Zhao

To address this issue, we propose an alternative framework that involves a human supervising the RL models and providing additional feedback in the online deployment phase.

Decision Making • Model Selection • +3

On the Pitfall of Mixup for Uncertainty Calibration

1 code implementation • CVPR 2023 • Deng-Bao Wang, Lanqing Li, Peilin Zhao, Pheng-Ann Heng, Min-Ling Zhang

It has recently been found that models trained with mixup also perform well on uncertainty calibration.
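
As context for this entry, mixup trains on convex combinations of input pairs and their labels. Below is a minimal PyTorch-style sketch of vanilla mixup (the function name mixup_batch and the one-hot label format are illustrative, not from the paper):

```python
import torch

def mixup_batch(x, y_onehot, alpha=0.2):
    """Return convex combinations of a batch with a shuffled copy of itself."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]                # mix inputs
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]  # mix soft labels
    return x_mix, y_mix
```

The paper's contribution concerns how such training interacts with calibration, which this generic sketch does not reproduce.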

Class-Conditional Sharpness-Aware Minimization for Deep Long-Tailed Recognition

1 code implementation • CVPR 2023 • Zhipeng Zhou, Lanqing Li, Peilin Zhao, Pheng-Ann Heng, Wei Gong

It is widely acknowledged that deep learning models with flatter minima in their loss landscapes tend to generalize better (a sketch of sharpness-aware minimization follows below).

Long-tail Learning
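
The flat-minima motivation above is what sharpness-aware minimization (SAM) operationalizes. Here is a minimal PyTorch sketch of one vanilla SAM update; the paper's class-conditional variant is not shown, and sam_step is an illustrative name:

```python
import torch

def sam_step(model, loss_fn, x, y, optimizer, rho=0.05):
    """One SAM update: ascend to a nearby worst-case point in weight space,
    then descend using the gradient measured there."""
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()
    grads = [p.grad.detach().clone() for p in model.parameters()]
    norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
    eps = [rho * g / (norm + 1e-12) for g in grads]
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.add_(e)                      # climb toward the sharp direction
    optimizer.zero_grad()
    loss_fn(model(x), y).backward()        # gradient at the perturbed weights
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            p.sub_(e)                      # restore the original weights
    optimizer.step()                       # descend with the SAM gradient
```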

Handling Missing Data via Max-Entropy Regularized Graph Autoencoder

no code implementations • 30 Nov 2022 • Ziqi Gao, Yifan Niu, Jiashun Cheng, Jianheng Tang, Tingyang Xu, Peilin Zhao, Lanqing Li, Fugee Tsung, Jia Li

In this work, we present a regularized graph autoencoder for graph attribute imputation, named MEGAE, which aims at mitigating the spectral concentration problem by maximizing the graph spectral entropy (see the sketch below).

Attribute Imputation
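
To make the regularizer above concrete: graph spectral entropy can be read as the Shannon entropy of the Laplacian spectrum normalized into a distribution. A small NumPy sketch, assuming a dense symmetric adjacency matrix (the paper's exact definition and normalization may differ):

```python
import numpy as np

def graph_spectral_entropy(adj):
    """Entropy of the graph Laplacian's eigenvalue distribution; a
    max-entropy regularizer would push this quantity up to counteract
    spectral concentration."""
    lap = np.diag(adj.sum(axis=1)) - adj   # unnormalized Laplacian
    eig = np.linalg.eigvalsh(lap)
    p = eig / eig.sum()                    # eigenvalues as probabilities
    p = p[p > 0]                           # drop numerical zeros
    return float(-(p * np.log(p)).sum())
```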

UMIX: Improving Importance Weighting for Subpopulation Shift via Uncertainty-Aware Mixup

1 code implementation • 19 Sep 2022 • Zongbo Han, Zhipeng Liang, Fan Yang, Liu Liu, Lanqing Li, Yatao Bian, Peilin Zhao, Bingzhe Wu, Changqing Zhang, Jianhua Yao

Importance reweighting is a standard way to handle the subpopulation shift issue by imposing constant or adaptive sampling weights on each sample in the training dataset (see the sketch below).

Generalization Bounds
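
One plausible reading of how such weights enter training, as a hedged PyTorch sketch: apply the per-sample weights to the loss on a mixup-ed batch. Here weights stands in for UMIX's uncertainty-derived importance weights, and umix_loss is an illustrative name:

```python
import torch
import torch.nn.functional as F

def umix_loss(logits, y_mix, weights):
    """Cross-entropy against mixup-ed soft targets, reweighted per sample
    so that under-represented subpopulations contribute more."""
    logp = F.log_softmax(logits, dim=-1)
    per_sample = -(y_mix * logp).sum(dim=-1)  # soft-label cross-entropy
    return (weights * per_sample).mean()
```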

ImGCL: Revisiting Graph Contrastive Learning on Imbalanced Node Classification

no code implementations • 23 May 2022 • Liang Zeng, Lanqing Li, Ziqi Gao, Peilin Zhao, Jian Li

Motivated by this observation, we propose a principled GCL framework for Imbalanced node classification (ImGCL), which automatically and adaptively balances the representations learned from GCL without labels.

Classification • Contrastive Learning • +2

Robust Imitation Learning from Corrupted Demonstrations

no code implementations • 29 Jan 2022 • Liu Liu, Ziyang Tang, Lanqing Li, Dijun Luo

We consider offline imitation learning from corrupted demonstrations, where a constant fraction of the data can be noise or even arbitrary outliers (see the sketch below).

Continuous Control • Imitation Learning
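
For intuition, one generic way to guard behavior cloning against a corrupted fraction of demonstrations is to trim the highest-loss samples in each batch. The sketch below illustrates that idea only; it is not necessarily the estimator used in the paper, and trimmed_bc_loss is an illustrative name:

```python
import torch

def trimmed_bc_loss(policy, states, actions, corrupt_frac=0.1):
    """Behavior-cloning loss that discards the highest-loss fraction of
    the batch, limiting the influence of outlier demonstrations."""
    per_sample = ((policy(states) - actions) ** 2).mean(dim=-1)
    keep = max(1, int((1 - corrupt_frac) * per_sample.numel()))
    kept, _ = torch.topk(per_sample, keep, largest=False)  # lowest losses
    return kept.mean()
```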

Value Penalized Q-Learning for Recommender Systems

no code implementations • 15 Oct 2021 • Chengqian Gao, Ke Xu, Kuangqi Zhou, Lanqing Li, Xueqian Wang, Bo Yuan, Peilin Zhao

To alleviate the action distribution shift problem when extracting an RL policy from static trajectories, we propose Value Penalized Q-learning (VPQ), an uncertainty-based offline RL algorithm (sketched below).

Offline RL • Q-Learning • +2
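
A minimal sketch of the uncertainty-based pessimism described above, assuming a Q-ensemble whose disagreement penalizes the bootstrapped target (the paper's exact penalty form may differ; all names here are illustrative):

```python
import torch

def vpq_target(q_ensemble, next_s, next_a, reward, done, gamma=0.99, beta=1.0):
    """Pessimistic TD target: subtract the ensemble's standard deviation
    from the mean bootstrapped value, discouraging the policy from
    out-of-distribution actions with uncertain value estimates."""
    qs = torch.stack([q(next_s, next_a) for q in q_ensemble])  # [E, B]
    mean, std = qs.mean(dim=0), qs.std(dim=0)
    return reward + gamma * (1.0 - done) * (mean - beta * std)
```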

Local Augmentation for Graph Neural Networks

1 code implementation • 8 Sep 2021 • Songtao Liu, Rex Ying, Hanze Dong, Lanqing Li, Tingyang Xu, Yu Rong, Peilin Zhao, Junzhou Huang, Dinghao Wu

To address this, we propose a simple and efficient data augmentation strategy, local augmentation, which learns the distribution of the neighbors' node features conditioned on the central node's feature and enhances the GNN's expressive power with generated features (see the sketch below).

Open-Ended Question Answering
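
To illustrate the conditional-generation step described above: the paper learns a conditional generative model over neighbor features given the central node. The toy PyTorch module below substitutes a simple conditional Gaussian for that model; NeighborGenerator is an illustrative stand-in, not the paper's architecture:

```python
import torch
import torch.nn as nn

class NeighborGenerator(nn.Module):
    """Predicts a Gaussian over a neighbor's features conditioned on the
    central node's features, then samples synthetic neighbor features."""
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 2 * dim))

    def forward(self, center_feat):
        mu, log_sigma = self.net(center_feat).chunk(2, dim=-1)
        return mu + log_sigma.exp() * torch.randn_like(mu)  # reparameterized sample
```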

iGrow: A Smart Agriculture Solution to Autonomous Greenhouse Control

1 code implementation • 6 Jul 2021 • Xiaoyan Cao, Yao Yao, Lanqing Li, Wanpeng Zhang, Zhicheng An, Zhong Zhang, Li Xiao, Shihui Guo, Xiaoyu Cao, Meihong Wu, Dijun Luo

However, the optimal control of autonomous greenhouses is challenging: it requires decision-making based on high-dimensional sensory data, and scaling up production is limited by the scarcity of labor capable of handling this task.

Cloud Computing • Decision Making

Provably Improved Context-Based Offline Meta-RL with Attention and Contrastive Learning

no code implementations • 22 Feb 2021 • Lanqing Li, Yuanhao Huang, Mingzhe Chen, Siteng Luo, Dijun Luo, Junzhou Huang

Meta-learning for offline reinforcement learning (OMRL) is an understudied problem with tremendous potential impact, as it could enable RL algorithms to be deployed in many real-world applications.

Contrastive Learning • Meta-Learning • +3

FOCAL: Efficient Fully-Offline Meta-Reinforcement Learning via Distance Metric Learning and Behavior Regularization

1 code implementation • ICLR 2021 • Lanqing Li, Rui Yang, Dijun Luo

In this work, we enforce behavior regularization on the learned policy as a general approach to offline RL, combined with a deterministic context encoder for efficient task inference (a sketch of the metric-learning component follows below).

Meta Reinforcement Learning • Metric Learning • +3
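
To make the metric-learning component concrete, here is a hedged PyTorch sketch of a contrastive distance loss over context embeddings: same-task pairs are pulled together, different-task pairs pushed apart. FOCAL's published objective uses a related but not identical form, and focal_metric_loss is an illustrative name:

```python
import torch

def focal_metric_loss(z, task_ids, margin=1.0):
    """Contrastive distance-metric loss on context embeddings z [N, D]:
    small distances within a task, at least `margin` across tasks."""
    d = torch.cdist(z, z)                                  # pairwise distances
    same = task_ids.unsqueeze(0) == task_ids.unsqueeze(1)  # [N, N] task mask
    eye = torch.eye(len(z), dtype=torch.bool, device=z.device)
    pos = d[same & ~eye].pow(2).mean()                     # pull same-task pairs
    neg = (margin - d[~same]).clamp(min=0).pow(2).mean()   # push cross-task pairs
    return pos + neg
```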
