Search Results for author: Shengyu Feng

Found 8 papers, 3 papers with code

Coreference by Appearance: Visually Grounded Event Coreference Resolution

no code implementations • CRAC (ACL) 2021 • Liming Wang, Shengyu Feng, Xudong Lin, Manling Li, Heng Ji, Shih-Fu Chang

Event coreference resolution is critical to understand events in the growing number of online news with multiple modalities including text, video, speech, etc.

coreference-resolution, Event Coreference Resolution, +2

Concept Discovery for Fast Adaptation

no code implementations • 19 Jan 2023 • Shengyu Feng, Hanghang Tong

The advances in deep learning have enabled machine learning methods to outperform human beings in various areas, but it remains a great challenge for a well-trained model to quickly adapt to a new task.

Few-Shot Learning

ARIEL: Adversarial Graph Contrastive Learning

1 code implementation • 15 Aug 2022 • Shengyu Feng, Baoyu Jing, Yada Zhu, Hanghang Tong

In this work, by introducing an adversarial graph view for data augmentation, we propose a simple but effective method, Adversarial Graph Contrastive Learning (ARIEL), to extract informative contrastive samples within reasonable constraints.

Contrastive Learning, Data Augmentation, +1
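Below is a minimal, hypothetical sketch of the adversarial-view idea described in the ARIEL entry above: node features are perturbed within a small epsilon bound in the direction that increases a contrastive loss, and the resulting harder view is contrasted against the clean one. The one-layer encoder, InfoNCE loss, step sizes, and toy graph are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

class TinyGCN(torch.nn.Module):
    """One-layer graph encoder: mean aggregation over neighbors + linear map."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin((adj @ x) / deg))

def infonce(z1, z2, tau=0.5):
    """Symmetric InfoNCE: the same node in the two views is the positive pair."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.t(), labels))

def adversarial_view(encoder, x, adj, z_anchor, eps=0.05, steps=3):
    """Perturb node features within an eps-ball so as to *increase* the
    contrastive loss, producing a harder augmented view (PGD-style)."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = infonce(encoder(x + delta, adj), z_anchor)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + eps * grad.sign()).clamp(-eps, eps).detach().requires_grad_(True)
    return (x + delta).detach()

# Toy usage: a random graph with 8 nodes and 16-dimensional features.
torch.manual_seed(0)
x = torch.randn(8, 16)
adj = (torch.rand(8, 8) > 0.7).float()
adj = ((adj + adj.t() + torch.eye(8)) > 0).float()   # symmetrize, add self-loops

encoder = TinyGCN(16, 32)
z_clean = encoder(x, adj)                                   # clean view
x_adv = adversarial_view(encoder, x, adj, z_clean.detach()) # adversarial augmentation
z_adv = encoder(x_adv, adj)                                 # adversarial view
loss = infonce(z_clean, z_adv)                              # train on the harder pair
loss.backward()
```

The epsilon clamp is what keeps the adversarial view "within reasonable constraints", in the sense the abstract above uses the phrase.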

Exploiting Long-Term Dependencies for Generating Dynamic Scene Graphs

1 code implementation • 18 Dec 2021 • Shengyu Feng, Subarna Tripathi, Hesham Mostafa, Marcel Nassar, Somdeb Majumdar

Dynamic scene graph generation from a video is challenging due to the temporal dynamics of the scene and the inherent temporal fluctuations of predictions.

Graph Generation, Object, +3

X-GOAL: Multiplex Heterogeneous Graph Prototypical Contrastive Learning

no code implementations • 8 Sep 2021 • Baoyu Jing, Shengyu Feng, Yuejia Xiang, Xi Chen, Yu Chen, Hanghang Tong

X-GOAL is comprised of two components: the GOAL framework, which learns node embeddings for each homogeneous graph layer, and an alignment regularization, which jointly models different layers by aligning layer-specific node embeddings.

Contrastive Learning, Graph Learning, +2
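As a rough illustration of the alignment regularization described in the X-GOAL entry above, the sketch below penalizes disagreement between each node's layer-specific embeddings across the layers of a multiplex graph. The pairwise cosine form and toy data are assumptions for illustration, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def alignment_regularization(layer_embeddings):
    """layer_embeddings: list of (num_nodes, dim) tensors, one per graph layer.
    Penalizes disagreement between every pair of layers for the same node."""
    loss, num_pairs = 0.0, 0
    for i in range(len(layer_embeddings)):
        for j in range(i + 1, len(layer_embeddings)):
            zi = F.normalize(layer_embeddings[i], dim=1)
            zj = F.normalize(layer_embeddings[j], dim=1)
            loss = loss + (1.0 - (zi * zj).sum(dim=1)).mean()   # 1 - cosine similarity
            num_pairs += 1
    return loss / max(num_pairs, 1)

# Toy usage: 3 layers of a multiplex graph, 10 nodes, 32-dim embeddings per layer.
torch.manual_seed(0)
layer_embeddings = [torch.randn(10, 32, requires_grad=True) for _ in range(3)]
reg = alignment_regularization(layer_embeddings)
reg.backward()
```

In a full pipeline, such a term would typically be combined with the per-layer embedding objectives so that the layers are modeled jointly rather than independently.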

Batch Reinforcement Learning Through Continuation Method

no code implementations • ICLR 2021 • Yijie Guo, Shengyu Feng, Nicolas Le Roux, Ed Chi, Honglak Lee, Minmin Chen

Many real-world applications of reinforcement learning (RL) require the agent to learn from a fixed set of trajectories, without collecting new interactions.

reinforcement-learning, Reinforcement Learning (RL)

Memory Based Trajectory-conditioned Policies for Learning from Sparse Rewards

no code implementations • NeurIPS 2020 • Yijie Guo, Jongwook Choi, Marcin Moczulski, Shengyu Feng, Samy Bengio, Mohammad Norouzi, Honglak Lee

Reinforcement learning with sparse rewards is challenging because an agent can rarely obtain non-zero rewards and hence, gradient-based optimization of parameterized policies can be incremental and slow.

Efficient Exploration, Imitation Learning, +1
