no code implementations • CRAC (ACL) 2021 • Liming Wang, Shengyu Feng, Xudong Lin, Manling Li, Heng Ji, Shih-Fu Chang
Event coreference resolution is critical for understanding events in the growing volume of online news that spans multiple modalities, including text, video, and speech.
no code implementations • 19 Jan 2023 • Shengyu Feng, Hanghang Tong
Advances in deep learning have enabled machine learning methods to outperform humans in various areas, but it remains a major challenge for a well-trained model to adapt quickly to a new task.
1 code implementation • 15 Aug 2022 • Shengyu Feng, Baoyu Jing, Yada Zhu, Hanghang Tong
In this work, we introduce an adversarial graph view for data augmentation and propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL), that extracts informative contrastive samples within reasonable constraints.
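As a rough illustration of the adversarial-view idea (a minimal sketch, not the released ARIEL implementation; the encoder interface, hyperparameter values, and the `contrastive_loss` placeholder are assumptions), one can perturb node features by projected gradient ascent on a contrastive objective:

```python
import torch

def adversarial_view(x, edge_index, encoder, contrastive_loss,
                     eps=0.05, steps=3, alpha=0.02):
    """Sketch of an adversarial graph view: perturb node features to
    maximize a contrastive loss, keeping the perturbation inside an
    L-infinity ball of radius eps (PGD-style ascent)."""
    delta = torch.zeros_like(x, requires_grad=True)
    z_anchor = encoder(x, edge_index).detach()  # fixed anchor embeddings
    for _ in range(steps):
        z_adv = encoder(x + delta, edge_index)
        loss = contrastive_loss(z_anchor, z_adv)  # objective to maximize
        grad, = torch.autograd.grad(loss, delta)
        with torch.no_grad():
            delta += alpha * grad.sign()          # ascent step
            delta.clamp_(-eps, eps)               # project onto the budget
    return (x + delta).detach()
```

The resulting view would then be contrasted against the standard augmented views under any contrastive objective (e.g., an InfoNCE-style loss).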
1 code implementation • 14 Feb 2022 • Shengyu Feng, Baoyu Jing, Yada Zhu, Hanghang Tong
Contrastive learning is an effective unsupervised method for graph representation learning.
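For context, graph contrastive methods commonly train with an NT-Xent/InfoNCE-style loss between node embeddings from two augmented views; the sketch below is a generic form of such a loss, not necessarily this paper's exact objective:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    """Generic NT-Xent-style loss between two views' node embeddings.
    z1, z2: [num_nodes, dim]; row i of each tensor is the same node."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau  # cosine similarities scaled by temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    # each node's positive is its counterpart in the other view;
    # all other nodes in the batch act as negatives
    return 0.5 * (F.cross_entropy(sim, labels) +
                  F.cross_entropy(sim.t(), labels))
```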
1 code implementation • 18 Dec 2021 • Shengyu Feng, Subarna Tripathi, Hesham Mostafa, Marcel Nassar, Somdeb Majumdar
Dynamic scene graph generation from a video is challenging due to the temporal dynamics of the scene and the inherent temporal fluctuations of predictions.
no code implementations • 8 Sep 2021 • Baoyu Jing, Shengyu Feng, Yuejia Xiang, Xi Chen, Yu Chen, Hanghang Tong
X-GOAL comprises two components: the GOAL framework, which learns node embeddings for each homogeneous graph layer, and an alignment regularization, which jointly models the layers by aligning their layer-specific node embeddings.
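A hedged guess at what an alignment regularization of this kind could look like (illustrative only; the paper's exact formulation may differ): penalize disagreement between a node's embeddings across layers.

```python
import torch
import torch.nn.functional as F

def alignment_regularization(layer_embeddings):
    """Illustrative alignment term: encourage each node's embeddings
    from different homogeneous layers to agree. layer_embeddings is a
    list of [num_nodes, dim] tensors, rows aligned by node id."""
    z = torch.stack([F.normalize(e, dim=1) for e in layer_embeddings])  # [L, N, d]
    center = z.mean(dim=0)  # per-node centroid across layers
    # mean squared deviation of each layer's embedding from the centroid
    return ((z - center) ** 2).sum(dim=2).mean()
```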
no code implementations • ICLR 2021 • Yijie Guo, Shengyu Feng, Nicolas Le Roux, Ed Chi, Honglak Lee, Minmin Chen
Many real-world applications of reinforcement learning (RL) require the agent to learn from a fixed set of trajectories, without collecting new interactions.
no code implementations • NeurIPS 2020 • Yijie Guo, Jongwook Choi, Marcin Moczulski, Shengyu Feng, Samy Bengio, Mohammad Norouzi, Honglak Lee
Reinforcement learning with sparse rewards is challenging because an agent rarely obtains non-zero rewards; as a result, gradient-based optimization of parameterized policies can be incremental and slow.