Search Results for author: Yubei Chen

Found 27 papers, 16 papers with code

Trajectory Regularization Enhances Self-Supervised Geometric Representation

no code implementations 22 Mar 2024 Jiayun Wang, Stella X. Yu, Yubei Chen

To address this gap, we introduce a new pose-estimation benchmark for assessing SSL geometric representations, which demands training without semantic or pose labels and achieving proficiency in both semantic and geometric downstream tasks.

Pose Estimation Representation Learning +1

Gen4Gen: Generative Data Pipeline for Generative Multi-Concept Composition

1 code implementation 23 Feb 2024 Chun-Hsiao Yeh, Ta-Ying Cheng, He-Yen Hsieh, Chuan-En Lin, Yi Ma, Andrew Markham, Niki Trigoni, H. T. Kung, Yubei Chen

First, current personalization techniques fail to reliably extend to multiple concepts -- we hypothesize this to be due to the mismatch between complex scenes and simple text descriptions in the pre-training dataset (e.g., LAION).

Image Generation

URLOST: Unsupervised Representation Learning without Stationarity or Topology

no code implementations 6 Oct 2023 Zeyu Yun, Juexiao Zhang, Bruno Olshausen, Yann LeCun, Yubei Chen

Unsupervised representation learning has seen tremendous progress but is constrained by its reliance on data modality-specific stationarity and topology, a limitation not found in biological intelligence systems.

Representation Learning

Unsupervised Feature Learning with Emergent Data-Driven Prototypicality

no code implementations 4 Jul 2023 Yunhui Guo, Youren Zhang, Yubei Chen, Stella X. Yu

With our feature mapper simply trained to spread out training instances in hyperbolic space, we observe that images move closer to the origin with congealing, validating our idea of unsupervised prototypicality discovery.

Metric Learning
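
A minimal sketch of the hyperbolic-distance intuition described in this entry: map features into the Poincaré ball and read distance to the origin as an (inverse) prototypicality score. The exponential map and curvature convention below are standard choices, not taken from the paper.

```python
import numpy as np

def exp_map_origin(v, eps=1e-9):
    """Map Euclidean feature vectors into the Poincare ball (curvature -1)
    via the exponential map at the origin."""
    norm = np.linalg.norm(v, axis=-1, keepdims=True) + eps
    return np.tanh(norm) * v / norm

def poincare_norm(x, eps=1e-9):
    """Hyperbolic distance from the origin of the unit Poincare ball.
    Under the congealing intuition above, points closer to the origin
    would be read as more prototypical."""
    r = np.clip(np.linalg.norm(x, axis=-1), 0.0, 1.0 - eps)
    return 2.0 * np.arctanh(r)

# toy usage: rank random "features" by prototypicality
feats = np.random.randn(5, 16) * np.linspace(0.1, 2.0, 5)[:, None]
scores = poincare_norm(exp_map_origin(feats))
print(np.argsort(scores))  # most prototypical (closest to origin) first
```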

Variance-Covariance Regularization Improves Representation Learning

no code implementations 23 Jun 2023 Jiachen Zhu, Katrina Evtimova, Yubei Chen, Ravid Shwartz-Ziv, Yann LeCun

In summary, VCReg offers a universally applicable regularization framework that significantly advances transfer learning and highlights the connection between gradient starvation, neural collapse, and feature transferability.

Long-tail Learning Representation Learning +2
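
The listing does not spell out the regularizer itself; below is a hedged sketch of a generic variance-covariance penalty in the spirit of the title. The paper's exact VCReg formulation, weighting, and where in the network it is applied may differ.

```python
import torch

def variance_covariance_penalty(z, var_target=1.0, eps=1e-4):
    """Illustrative variance-covariance penalty on a feature batch z of
    shape (N, D): a hinge keeping per-dimension std above a target, plus
    a penalty on off-diagonal covariance entries (sketch only)."""
    n, d = z.shape
    z = z - z.mean(dim=0)

    std = torch.sqrt(z.var(dim=0) + eps)
    var_loss = torch.relu(var_target - std).mean()

    cov = (z.T @ z) / (n - 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = off_diag.pow(2).sum() / d

    return var_loss, cov_loss

# usage: add lambda_v * var_loss + lambda_c * cov_loss to the task loss
var_loss, cov_loss = variance_covariance_penalty(torch.randn(256, 128))
```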

EMP-SSL: Towards Self-Supervised Learning in One Training Epoch

2 code implementations 8 Apr 2023 Shengbang Tong, Yubei Chen, Yi Ma, Yann LeCun

Recently, self-supervised learning (SSL) has achieved tremendous success in learning image representation.

Quantization Self-Supervised Learning

Unsupervised Learning of Structured Representations via Closed-Loop Transcription

1 code implementation 30 Oct 2022 Shengbang Tong, Xili Dai, Yubei Chen, Mingyang Li, Zengyi Li, Brent Yi, Yann LeCun, Yi Ma

This paper proposes an unsupervised method for learning a unified representation that serves both discriminative and generative purposes.

Simple Emergent Action Representations from Multi-Task Policy Training

no code implementations 18 Oct 2022 Pu Hua, Yubei Chen, Huazhe Xu

The low-level sensory and motor signals in deep reinforcement learning, which exist in high-dimensional spaces such as image observations or motor torques, are inherently challenging to understand or utilize directly for downstream tasks.

Minimalistic Unsupervised Learning with the Sparse Manifold Transform

no code implementations 30 Sep 2022 Yubei Chen, Zeyu Yun, Yi Ma, Bruno Olshausen, Yann LeCun

Though there remains a small performance gap between our simple constructive model and SOTA methods, the evidence points to this as a promising direction for achieving a principled and white-box approach to unsupervised learning.

Self-Supervised Learning Sparse Representation-based Classification +3

Joint Embedding Self-Supervised Learning in the Kernel Regime

no code implementations 29 Sep 2022 Bobak T. Kiani, Randall Balestriero, Yubei Chen, Seth Lloyd, Yann LeCun

The fundamental goal of self-supervised learning (SSL) is to produce useful representations of data without access to any labels for classifying the data.

Self-Supervised Learning

On the duality between contrastive and non-contrastive self-supervised learning

no code implementations 3 Jun 2022 Quentin Garrido, Yubei Chen, Adrien Bardes, Laurent Najman, Yann LeCun

Recent approaches in self-supervised learning of image representations can be categorized into different families of methods and, in particular, can be divided into contrastive and non-contrastive approaches.

Self-Supervised Learning

Neural Manifold Clustering and Embedding

1 code implementation 24 Jan 2022 Zengyi Li, Yubei Chen, Yann LeCun, Friedrich T. Sommer

We argue that achieving manifold clustering with neural networks requires two essential ingredients: a domain-specific constraint that ensures the identification of the manifolds, and a learning algorithm for embedding each manifold to a linear subspace in the feature space.

Clustering Data Augmentation +2
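
As one concrete reading of "embedding each manifold to a linear subspace", the sketch below uses a coding-rate-reduction style objective with soft cluster assignments; whether this matches the paper's exact objective is an assumption.

```python
import torch

def coding_rate(Z, eps_sq=0.5):
    """R(Z) = 1/2 logdet(I + d/(n*eps^2) Z^T Z) for Z of shape (n, d)."""
    n, d = Z.shape
    I = torch.eye(d, device=Z.device)
    return 0.5 * torch.logdet(I + (d / (n * eps_sq)) * Z.T @ Z)

def coding_rate_reduction(Z, probs, eps_sq=0.5):
    """Rate reduction R(Z) - sum_j (n_j/n) R(Z_j) with soft cluster
    assignment probabilities `probs` of shape (n, k). Maximizing this
    spreads the whole representation out while compressing each (soft)
    cluster toward a low-dimensional linear subspace."""
    n, d = Z.shape
    I = torch.eye(d, device=Z.device)
    compact = 0.0
    for j in range(probs.shape[1]):
        pj = probs[:, j]
        nj = pj.sum() + 1e-8
        cov = (Z * pj.unsqueeze(1)).T @ Z
        compact = compact + (nj / (2 * n)) * torch.logdet(I + (d / (nj * eps_sq)) * cov)
    return coding_rate(Z, eps_sq) - compact

# toy usage: maximize this objective during training
Z = torch.nn.functional.normalize(torch.randn(128, 32), dim=1)
probs = torch.softmax(torch.randn(128, 4), dim=1)
objective = coding_rate_reduction(Z, probs)
```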

Decoupled Contrastive Learning

4 code implementations 13 Oct 2021 Chun-Hsiao Yeh, Cheng-Yao Hong, Yen-Chi Hsu, Tyng-Luh Liu, Yubei Chen, Yann LeCun

Further, DCL can be combined with the SOTA contrastive learning method, NNCLR, to achieve 72.3% ImageNet-1K top-1 accuracy with 512 batch size in 400 epochs, which represents a new SOTA in contrastive learning.

Contrastive Learning Self-Supervised Learning
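
The snippet above only reports results; the sketch below shows the decoupling idea commonly associated with DCL, namely removing the positive pair from the InfoNCE denominator. Treat the exact formulation (set of negatives, temperature) as an assumption rather than the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def decoupled_contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE-style loss where the positive similarity is removed from
    the denominator ("decoupled"). z1, z2: (N, D) embeddings of two
    augmented views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.shape[0]

    logits = z1 @ z2.T / temperature          # cross-view similarities (N, N)
    pos = torch.diag(logits)                  # positive pairs

    # negatives: other samples from view 2 plus other anchors from view 1
    intra = z1 @ z1.T / temperature
    neg_mask = ~torch.eye(n, dtype=torch.bool, device=z1.device)
    neg = torch.cat([logits[neg_mask].view(n, n - 1),
                     intra[neg_mask].view(n, n - 1)], dim=1)

    # decoupled: the denominator sums over negatives only
    return (-pos + torch.logsumexp(neg, dim=1)).mean()

loss = decoupled_contrastive_loss(torch.randn(256, 128), torch.randn(256, 128))
```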

Compact and Optimal Deep Learning with Recurrent Parameter Generators

1 code implementation 15 Jul 2021 Jiayun Wang, Yubei Chen, Stella X. Yu, Brian Cheung, Yann LeCun

We propose a drastically different approach to compact and optimal deep learning: we decouple the degrees of freedom (DoF) from the actual number of parameters of a model and optimize a small DoF with predefined random linear constraints for a large model of arbitrary architecture, in one-stage end-to-end learning.

Ranked #97 on Image Classification on ObjectNet (using extra training data)

Image Classification Model Compression
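
A minimal sketch of the decoupling idea described above, where each layer's weights are generated from a small shared parameter bank through a fixed random index-and-sign map. This is one simple instance of a "predefined random linear constraint"; the paper's actual recurrent parameter generator may differ.

```python
import torch
import torch.nn as nn

class RandomlyGeneratedLinear(nn.Module):
    """Linear layer whose weight matrix is produced from a small shared
    parameter bank via a fixed random index/sign map; only `bank` is
    trained, so the DoF is decoupled from the layer's parameter count."""
    def __init__(self, bank, in_features, out_features):
        super().__init__()
        self.bank = bank  # shared nn.Parameter of shape (dof,)
        numel = in_features * out_features
        self.register_buffer("idx", torch.randint(0, bank.numel(), (numel,)))
        self.register_buffer("sign", torch.randint(0, 2, (numel,)).float() * 2 - 1)
        self.shape = (out_features, in_features)

    def forward(self, x):
        w = (self.bank[self.idx] * self.sign).view(self.shape)
        return x @ w.T

# a shared bank of 4096 degrees of freedom drives a much larger network
bank = nn.Parameter(torch.randn(4096) * 0.01)
net = nn.Sequential(RandomlyGeneratedLinear(bank, 784, 512), nn.ReLU(),
                    RandomlyGeneratedLinear(bank, 512, 10))
```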

Disentangling images with Lie group transformations and sparse coding

1 code implementation 11 Dec 2020 Ho Yin Chau, Frank Qiu, Yubei Chen, Bruno Olshausen

Discrete spatial patterns and their continuous transformations are two important regularities contained in natural signals.

A Neural Network MCMC sampler that maximizes Proposal Entropy

1 code implementation 7 Oct 2020 Zengyi Li, Yubei Chen, Friedrich T. Sommer

However, in the continuous case, unfavorable geometry of the target distribution can greatly limit the efficiency of MCMC methods.

RG-Flow: A hierarchical and explainable flow model based on renormalization group and sparse prior

1 code implementation 30 Sep 2020 Hong-Ye Hu, Dian Wu, Yi-Zhuang You, Bruno Olshausen, Yubei Chen

In this work, we incorporate the key ideas of renormalization group (RG) and sparse prior distribution to design a hierarchical flow-based generative model, RG-Flow, which can separate information at different scales of images and extract disentangled representations at each scale.

Disentanglement Image Inpainting +2

3D Shape Reconstruction from Free-Hand Sketches

1 code implementation 17 Jun 2020 Jiayun Wang, Jierui Lin, Qian Yu, Runtao Liu, Yubei Chen, Stella X. Yu

Additionally, we propose a sketch standardization module to handle different sketch distortions and styles.

3D Reconstruction 3D Shape Reconstruction

Orthogonal Convolutional Neural Networks

1 code implementation CVPR 2020 Jiayun Wang, Yubei Chen, Rudrasis Chakraborty, Stella X. Yu

We develop an efficient approach to impose filter orthogonality on a convolutional layer based on the doubly block-Toeplitz matrix representation of the convolutional kernel instead of using the common kernel orthogonality approach, which we show is only necessary but not sufficient for ensuring orthogonal convolutions.

Image Classification Image Retrieval
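
A rough sketch of a self-convolution orthogonality penalty in the spirit of the abstract: convolve the kernel with itself and push the result toward an identity-like target. The exact padding, stride handling, and normalization in the paper's doubly block-Toeplitz formulation may differ.

```python
import torch
import torch.nn.functional as F

def conv_orthogonality_penalty(kernel, stride=1):
    """Penalize deviation of a conv layer from orthogonality by convolving
    the kernel with itself and comparing to a delta at the center for
    matching output channels. kernel: (out_c, in_c, k, k). Sketch only."""
    out_c, in_c, k, _ = kernel.shape
    self_conv = F.conv2d(kernel, kernel, stride=stride, padding=k - 1)
    target = torch.zeros_like(self_conv)
    center = self_conv.shape[-1] // 2
    idx = torch.arange(out_c)
    target[idx, idx, center, center] = 1.0
    return (self_conv - target).pow(2).sum()

# usage: add lambda_orth * conv_orthogonality_penalty(conv.weight) to the loss
penalty = conv_orthogonality_penalty(torch.randn(64, 32, 3, 3))
```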

Learning Energy-Based Models in High-Dimensional Spaces with Multi-scale Denoising Score Matching

2 code implementations 17 Oct 2019 Zengyi Li, Yubei Chen, Friedrich T. Sommer

Recently, Song and Ermon (2019) have shown that a generative model trained by denoising score matching accomplishes excellent sample synthesis when trained with data samples corrupted with multiple levels of noise.

Denoising Image Inpainting +1
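
A minimal sketch of denoising score matching averaged over several noise levels, the setup the snippet above builds on; the sigma^2 weighting and conditioning convention follow common practice in this line of work and may differ in detail from the paper.

```python
import torch

def multiscale_dsm_loss(score_net, x, sigmas):
    """Denoising score matching over several noise scales. score_net(x_noisy,
    sigma) is assumed to predict the score of the noise-smoothed density;
    the regression target is the score of the Gaussian corruption."""
    losses = []
    for sigma in sigmas:
        noise = torch.randn_like(x) * sigma
        x_noisy = x + noise
        target = -noise / sigma**2
        pred = score_net(x_noisy, sigma)
        losses.append((sigma**2) * ((pred - target) ** 2).sum(dim=1).mean())
    return torch.stack(losses).mean()

# toy usage with a trivial stand-in "score network"
toy_net = lambda x, sigma: -x / (1.0 + sigma**2)
loss = multiscale_dsm_loss(toy_net, torch.randn(8, 2), sigmas=[0.1, 0.5, 1.0])
```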

Word Embedding Visualization Via Dictionary Learning

1 code implementation 9 Oct 2019 Juexiao Zhang, Yubei Chen, Brian Cheung, Bruno A. Olshausen

Word embedding techniques based on co-occurrence statistics have proved to be very useful in extracting the semantic and syntactic representation of words as low-dimensional continuous vectors.

Dictionary Learning
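
One simple way to realize sparse dictionary learning over pretrained word vectors is sketched below with scikit-learn; the dictionary size, sparsity penalty, and the word vectors themselves are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

# stand-in for pretrained word vectors (e.g., rows of a GloVe matrix)
rng = np.random.default_rng(0)
word_vectors = rng.normal(size=(500, 50))

# learn an overcomplete dictionary and a sparse code for each word
dico = DictionaryLearning(n_components=100, alpha=1.0,
                          transform_algorithm="lasso_lars", random_state=0)
codes = dico.fit_transform(word_vectors)      # (n_words, n_components), sparse

# the few active dictionary elements per word can then be inspected/visualized
top_factors = np.argsort(-np.abs(codes[0]))[:5]
```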

Superposition of many models into one

1 code implementation NeurIPS 2019 Brian Cheung, Alex Terekhov, Yubei Chen, Pulkit Agrawal, Bruno Olshausen

We present a method for storing multiple models within a single set of parameters.
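
A toy sketch of one way to store several task-specific linear maps in a single weight matrix, using fixed random ±1 context vectors per task; this is my reading of the binary-superposition variant and should not be taken as the paper's exact method.

```python
import torch
import torch.nn as nn

class SuperposedLinear(nn.Module):
    """One weight matrix shared across tasks; each task gets a fixed
    random +-1 context vector that rotates the input before the shared
    weights, so per-task models coexist in the same parameters with
    little interference."""
    def __init__(self, in_features, out_features, n_tasks):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.register_buffer(
            "contexts", torch.randint(0, 2, (n_tasks, in_features)).float() * 2 - 1)

    def forward(self, x, task_id):
        return (x * self.contexts[task_id]) @ self.weight.T

layer = SuperposedLinear(64, 10, n_tasks=5)
y = layer(torch.randn(8, 64), task_id=2)
```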

The Sparse Manifold Transform

no code implementations NeurIPS 2018 Yubei Chen, Dylan M. Paiton, Bruno A. Olshausen

We present a signal representation framework called the sparse manifold transform that combines key ideas from sparse coding, manifold learning, and slow feature analysis.

Self-Supervised Learning
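
A toy sketch combining the three ingredients named above: sparse-code each frame of a temporal signal, then learn a linear embedding of the coefficients that varies slowly in time. The dictionary, solver, and the way slowness is imposed here are illustrative; the paper's actual construction may differ.

```python
import numpy as np
from sklearn.decomposition import SparseCoder

# toy data: a smooth trajectory of 32-dimensional "frames"
rng = np.random.default_rng(0)
dictionary = rng.normal(size=(128, 32))
dictionary /= np.linalg.norm(dictionary, axis=1, keepdims=True)
t = np.linspace(0, 1, 200)[:, None]
frames = np.sin(2 * np.pi * (t + np.arange(32) / 32.0))

# step 1: sparse-code each frame with respect to the dictionary
coder = SparseCoder(dictionary=dictionary, transform_algorithm="lasso_lars",
                    transform_alpha=0.1)
alpha = coder.transform(frames)                 # (T, 128) sparse coefficients

# step 2: learn a linear map P so that P @ alpha varies slowly in time,
# here via the smallest eigenvectors of the temporal-difference covariance
diff = np.diff(alpha, axis=0)
eigvals, eigvecs = np.linalg.eigh(diff.T @ diff)
P = eigvecs[:, :16].T                           # 16 slowest directions
embedded = alpha @ P.T                          # (T, 16) slow embedding
```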
