Search Results for author: Zeyu Feng

Found 7 papers, 2 with code

Generating, Reconstructing, and Representing Discrete and Continuous Data: Generalized Diffusion with Learnable Encoding-Decoding

no code implementations29 Feb 2024 Guangyi Liu, Yu Wang, Zeyu Feng, Qiyu Wu, Liping Tang, Yuan Gao, Zhen Li, Shuguang Cui, Julian McAuley, Eric P. Xing, Zichao Yang, Zhiting Hu

The vast applications of deep generative models are anchored in three core capabilities -- generating new instances, reconstructing inputs, and learning compact representations -- across various data types, such as discrete text/protein sequences and continuous images.

Denoising

Synslator: An Interactive Machine Translation Tool with Online Learning

no code implementations8 Oct 2023 Jiayi Wang, Ke Wang, Fengming Zhou, Chengyu Wang, Zhiyong Fu, Zeyu Feng, Yu Zhao, Yuqi Zhang

Interactive machine translation (IMT) has emerged as a progression of the computer-aided translation paradigm, where the machine translation system and the human translator collaborate to produce high-quality translations.

Language Modelling Machine Translation +1

Safety-Constrained Policy Transfer with Successor Features

no code implementations10 Nov 2022 Zeyu Feng, BoWen Zhang, Jianxin Bi, Harold Soh

In this work, we focus on the problem of safe policy transfer in reinforcement learning: we seek to leverage existing policies when learning a new task with specified constraints.
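Successor features make this kind of transfer tractable by factoring the action-value function as Q(s, a) = ψ(s, a)ᵀ w, where ψ summarizes expected discounted feature occupancies under a policy and w encodes the reward weights of a task; re-evaluating the same ψ under a new w repurposes an existing policy for a new task. A minimal NumPy sketch of that evaluation step (the ψ table and weight vectors here are illustrative toy values, not from the paper, and the paper's safety-constraint handling is not shown):

```python
import numpy as np

# Illustrative successor features psi(s, a): expected discounted sums of
# 3 reward features under a fixed policy, for 2 states x 2 actions.
psi = np.array([
    [[1.0, 0.5, 0.0], [0.2, 1.0, 0.3]],   # state 0, actions 0 and 1
    [[0.0, 0.8, 1.0], [1.0, 0.1, 0.4]],   # state 1, actions 0 and 1
])

def q_values(psi, w):
    """Q(s, a) = psi(s, a) . w -- the same policy evaluated under task weights w."""
    return psi @ w

w_task_a = np.array([1.0, 0.0, 0.0])   # task A rewards only feature 0
w_task_b = np.array([0.0, 0.0, 1.0])   # task B rewards only feature 2

q_a = q_values(psi, w_task_a)
q_b = q_values(psi, w_task_b)

print(q_a.argmax(axis=1))  # greedy actions per state for task A
print(q_b.argmax(axis=1))  # greedy actions per state for task B
```

Because only w changes between tasks, the learned ψ is reused wholesale; the greedy action in each state can flip when the task weights flip, without any new policy learning.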

Open-Set Hypothesis Transfer with Semantic Consistency

no code implementations1 Oct 2020 Zeyu Feng, Chang Xu, Dacheng Tao

Unsupervised open-set domain adaptation (UODA) is a realistic problem where unlabeled target data contain unknown classes.

Domain Adaptation

Self-Supervised Representation Learning From Multi-Domain Data

no code implementations ICCV 2019 Zeyu Feng, Chang Xu, Dacheng Tao

In contrast to previous self-supervised learning methods, our approach learns from multiple domains, which has the benefit of decreasing the built-in bias of individual domains, as well as leveraging information and enabling knowledge transfer across multiple domains.

Representation Learning Self-Supervised Learning +1

Self-Supervised Representation Learning by Rotation Feature Decoupling

1 code implementation CVPR 2019 Zeyu Feng, Chang Xu, Dacheng Tao

The method incorporates rotation invariance, one of many well-studied properties of visual representations, into the feature learning framework; this property is rarely appreciated or exploited by previous deep convolutional neural network based self-supervised representation learning methods.

Representation Learning Self-Supervised Learning
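Rotation-based self-supervision builds its training signal by rotating each image by 0/90/180/270 degrees and asking a network to predict which rotation was applied. The sketch below shows only that standard pretext-label construction in NumPy as a minimal illustration; it does not implement the paper's feature-decoupling contribution, and the toy "images" are arbitrary arrays:

```python
import numpy as np

def make_rotation_batch(images):
    """Standard rotation pretext task: each image is copied at
    0/90/180/270 degrees, labeled by its rotation index (0..3)."""
    rotated, labels = [], []
    for img in images:
        for k in range(4):
            rotated.append(np.rot90(img, k))
            labels.append(k)
    return np.stack(rotated), np.array(labels)

# Tiny illustrative "images": two grayscale 4x4 arrays.
batch = np.arange(32, dtype=float).reshape(2, 4, 4)
x, y = make_rotation_batch(batch)
print(x.shape)   # 4 rotated copies per input image
print(y[:4])     # rotation labels for the first image's copies
```

A classifier trained on (x, y) pairs like these must attend to object orientation cues, which is what makes the resulting features useful, and also what motivates decoupling rotation-related from rotation-unrelated information as the paper proposes.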
