no code implementations • 22 Mar 2024 • Jiayun Wang, Stella X. Yu, Yubei Chen
To address this gap, we introduce a new pose-estimation benchmark for assessing SSL geometric representations, which demands training without semantic or pose labels and achieving proficiency in both semantic and geometric downstream tasks.
1 code implementation • 23 Feb 2024 • Chun-Hsiao Yeh, Ta-Ying Cheng, He-Yen Hsieh, Chuan-En Lin, Yi Ma, Andrew Markham, Niki Trigoni, H. T. Kung, Yubei Chen
First, current personalization techniques fail to reliably extend to multiple concepts -- we hypothesize this to be due to the mismatch between complex scenes and simple text descriptions in the pre-training dataset (e.g., LAION).
no code implementations • 6 Oct 2023 • Zeyu Yun, Juexiao Zhang, Bruno Olshausen, Yann LeCun, Yubei Chen
Unsupervised representation learning has seen tremendous progress but is constrained by its reliance on data modality-specific stationarity and topology, a limitation not found in biological intelligence systems.
no code implementations • 4 Jul 2023 • Yunhui Guo, Youren Zhang, Yubei Chen, Stella X. Yu
With our feature mapper simply trained to spread out training instances in hyperbolic space, we observe that images move closer to the origin with congealing, validating our idea of unsupervised prototypicality discovery.
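To make the geometric intuition concrete, here is a minimal sketch (not the paper's pipeline) of scoring prototypicality by hyperbolic distance from the origin of the Poincaré ball with curvature -1, where d(0, x) = 2 artanh(||x||); smaller distances mark more prototypical instances.

```python
import numpy as np

def poincare_norm(x, eps=1e-9):
    """Hyperbolic distance from the origin in the Poincare ball
    (curvature -1): d(0, x) = 2 * artanh(||x||)."""
    r = np.clip(np.linalg.norm(x, axis=-1), 0.0, 1.0 - eps)
    return 2.0 * np.arctanh(r)

# Embeddings closer to the origin score as more prototypical.
z = np.array([[0.05, 0.02],   # near the origin -> prototypical
              [0.70, 0.60]])  # near the boundary -> atypical
print(poincare_norm(z))
```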
no code implementations • 23 Jun 2023 • Jiachen Zhu, Katrina Evtimova, Yubei Chen, Ravid Shwartz-Ziv, Yann LeCun
In summary, VCReg offers a universally applicable regularization framework that significantly advances transfer learning and highlights the connection between gradient starvation, neural collapse, and feature transferability.
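As a rough illustration, the variance-covariance regularizer can be sketched with VICReg-style penalties: a hinge that keeps each feature dimension's standard deviation above a target, plus a penalty on off-diagonal covariance. This is a minimal sketch under those assumptions, not the paper's exact recipe for applying VCReg to intermediate layers.

```python
import torch

def vc_reg(z, var_target=1.0, eps=1e-4):
    """Variance-covariance regularization on a batch of features z (N, D):
    encourage per-dimension std >= var_target and decorrelate dimensions."""
    z = z - z.mean(dim=0)
    std = torch.sqrt(z.var(dim=0) + eps)
    var_loss = torch.relu(var_target - std).mean()   # hinge on the std
    cov = (z.T @ z) / (z.shape[0] - 1)               # (D, D) covariance
    off_diag = cov - torch.diag(torch.diag(cov))
    cov_loss = (off_diag ** 2).sum() / z.shape[1]
    return var_loss, cov_loss

z = torch.randn(256, 128, requires_grad=True)
var_loss, cov_loss = vc_reg(z)
(25.0 * var_loss + 1.0 * cov_loss).backward()  # add to the task loss
```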
2 code implementations • 8 Apr 2023 • Shengbang Tong, Yubei Chen, Yi Ma, Yann LeCun
Recently, self-supervised learning (SSL) has achieved tremendous success in learning image representations.
1 code implementation • 30 Oct 2022 • Shengbang Tong, Xili Dai, Yubei Chen, Mingyang Li, Zengyi Li, Brent Yi, Yann LeCun, Yi Ma
This paper proposes an unsupervised method for learning a unified representation that serves both discriminative and generative purposes.
no code implementations • 18 Oct 2022 • Pu Hua, Yubei Chen, Huazhe Xu
The low-level sensory and motor signals in deep reinforcement learning, which exist in high-dimensional spaces such as image observations or motor torques, are inherently challenging to understand or utilize directly for downstream tasks.
no code implementations • 30 Sep 2022 • Yubei Chen, Zeyu Yun, Yi Ma, Bruno Olshausen, Yann LeCun
Though there remains a small performance gap between our simple constructive model and SOTA methods, the evidence points to this as a promising direction for achieving a principled and white-box approach to unsupervised learning.
Ranked #1 on Unsupervised MNIST
Self-Supervised Learning · Sparse Representation-based Classification · +3
no code implementations • 29 Sep 2022 • Bobak T. Kiani, Randall Balestriero, Yubei Chen, Seth Lloyd, Yann LeCun
The fundamental goal of self-supervised learning (SSL) is to produce useful representations of data without access to any labels for classifying the data.
no code implementations • 17 Jun 2022 • Yubei Chen, Adrien Bardes, Zengyi Li, Yann LeCun
Even with a 32x32 patch representation, BagSSL achieves 62% top-1 linear probing accuracy on ImageNet.
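The bag-of-patches idea can be sketched as follows: embed fixed-size patches independently and average their embeddings into one image-level vector. The `encoder` below is a hypothetical stand-in for a patch-level SSL encoder, not the paper's model.

```python
import torch

def bag_of_patches(image, encoder, patch=32):
    """Represent an image as the mean of its patch embeddings.
    image: (C, H, W); encoder maps (N, C, patch, patch) -> (N, D)."""
    patches = image.unfold(1, patch, patch).unfold(2, patch, patch)
    patches = patches.permute(1, 2, 0, 3, 4).reshape(-1, image.shape[0], patch, patch)
    return encoder(patches).mean(dim=0)  # (D,) image-level embedding

# Hypothetical stand-in; any patch-level SSL encoder would go here.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
rep = bag_of_patches(torch.randn(3, 224, 224), encoder)
```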
no code implementations • 3 Jun 2022 • Quentin Garrido, Yubei Chen, Adrien Bardes, Laurent Najman, Yann LeCun
Recent approaches in self-supervised learning of image representations can be categorized into different families of methods and, in particular, can be divided into contrastive and non-contrastive approaches.
1 code implementation • 24 Jan 2022 • Zengyi Li, Yubei Chen, Yann LeCun, Friedrich T. Sommer
We argue that achieving manifold clustering with neural networks requires two essential ingredients: a domain-specific constraint that ensures the identification of the manifolds, and a learning algorithm for embedding each manifold to a linear subspace in the feature space.
4 code implementations • 13 Oct 2021 • Chun-Hsiao Yeh, Cheng-Yao Hong, Yen-Chi Hsu, Tyng-Luh Liu, Yubei Chen, Yann LeCun
Further, DCL can be combined with the SOTA contrastive learning method NNCLR to achieve 72.3% ImageNet-1K top-1 accuracy with a batch size of 512 in 400 epochs, which represents a new SOTA in contrastive learning.
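The core change in decoupled contrastive learning is removing the positive pair from the InfoNCE denominator. Below is a minimal single-direction sketch (the full loss is symmetrized over both views); it is an illustration of the decoupling idea, not the reference implementation.

```python
import torch
import torch.nn.functional as F

def dcl_loss(z1, z2, tau=0.1):
    """Decoupled contrastive loss: InfoNCE with the positive pair
    removed from the denominator. z1, z2: (N, D) embeddings of two views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    n = z1.shape[0]
    sim = z1 @ torch.cat([z1, z2]).T / tau           # (N, 2N) similarities
    pos = sim[torch.arange(n), n + torch.arange(n)]  # z1_i vs z2_i
    mask = torch.ones_like(sim, dtype=torch.bool)
    mask[torch.arange(n), torch.arange(n)] = False      # drop self-similarity
    mask[torch.arange(n), n + torch.arange(n)] = False  # decouple: drop positive
    neg = torch.logsumexp(sim.masked_fill(~mask, -1e9), dim=1)
    return (-pos + neg).mean()

loss = dcl_loss(torch.randn(512, 128), torch.randn(512, 128))
```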
1 code implementation • CVPR 2022 • Yunhui Guo, Xudong Wang, Yubei Chen, Stella X. Yu
Hyperbolic space can naturally embed hierarchies, unlike Euclidean space.
1 code implementation • 15 Jul 2021 • Jiayun Wang, Yubei Chen, Stella X. Yu, Brian Cheung, Yann LeCun
We propose a drastically different approach to compact and optimal deep learning: we decouple the degrees of freedom (DoF) from the actual number of parameters of a model, and optimize a small DoF with predefined random linear constraints for a large model of arbitrary architecture, in one-stage end-to-end learning.
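One way to picture the decoupling is an intrinsic-dimension-style construction: the full weight tensor is generated from a small trainable vector through a fixed random linear map, so only the small vector is optimized. This is a hedged sketch of that idea, not the paper's exact constraint design.

```python
import torch

class RandomSubspaceLinear(torch.nn.Module):
    """Linear layer whose full weight is generated from a small trainable
    DoF vector through a fixed random projection: W = reshape(P @ w_small)."""
    def __init__(self, in_dim, out_dim, dof=64):
        super().__init__()
        n = in_dim * out_dim
        self.register_buffer("P", torch.randn(n, dof) / dof ** 0.5)  # fixed
        self.w_small = torch.nn.Parameter(torch.zeros(dof))          # trained
        self.bias = torch.nn.Parameter(torch.zeros(out_dim))
        self.shape = (out_dim, in_dim)

    def forward(self, x):
        W = (self.P @ self.w_small).view(self.shape)
        return x @ W.T + self.bias

layer = RandomSubspaceLinear(784, 10, dof=64)  # 64 trainable DoF (+10 bias)
out = layer(torch.randn(32, 784))
```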
Ranked #97 on Image Classification on ObjectNet (using extra training data)
1 code implementation • NAACL (DeeLIO) 2021 • Zeyu Yun, Yubei Chen, Bruno A. Olshausen, Yann LeCun
Transformer networks have revolutionized NLP representation learning since they were introduced.
1 code implementation • 11 Dec 2020 • Ho Yin Chau, Frank Qiu, Yubei Chen, Bruno Olshausen
Discrete spatial patterns and their continuous transformations are two important regularities contained in natural signals.
1 code implementation • 7 Oct 2020 • Zengyi Li, Yubei Chen, Friedrich T. Sommer
However, in the continuous case, unfavorable geometry of the target distribution can greatly limit the efficiency of MCMC methods.
1 code implementation • 30 Sep 2020 • Hong-Ye Hu, Dian Wu, Yi-Zhuang You, Bruno Olshausen, Yubei Chen
In this work, we incorporate the key ideas of renormalization group (RG) and sparse prior distribution to design a hierarchical flow-based generative model, RG-Flow, which can separate information at different scales of images and extract disentangled representations at each scale.
1 code implementation • 17 Jun 2020 • Jiayun Wang, Jierui Lin, Qian Yu, Runtao Liu, Yubei Chen, Stella X. Yu
Additionally, we propose a sketch standardization module to handle different sketch distortions and styles.
1 code implementation • CVPR 2020 • Jiayun Wang, Yubei Chen, Rudrasis Chakraborty, Stella X. Yu
We develop an efficient approach to imposing filter orthogonality on a convolutional layer based on the doubly block-Toeplitz matrix representation of the convolutional kernel, instead of the common kernel-orthogonality approach, which we show is necessary but not sufficient for ensuring orthogonal convolutions.
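For stride-1 convolutions, orthogonality of the doubly block-Toeplitz operator can be encouraged by penalizing the kernel's self-convolution for deviating from an identity (a delta at the center for matching output channels). A minimal sketch of such a penalty:

```python
import torch
import torch.nn.functional as F

def orth_conv_penalty(weight):
    """Orthogonal-convolution penalty (stride 1): convolving the kernel
    with itself should give a delta at the center, i.e. Conv(K, K) = I.
    weight: (out_ch, in_ch, k, k)."""
    out_ch, _, k, _ = weight.shape
    self_conv = F.conv2d(weight, weight, padding=k - 1)  # (out, out, 2k-1, 2k-1)
    target = torch.zeros_like(self_conv)
    target[torch.arange(out_ch), torch.arange(out_ch), k - 1, k - 1] = 1.0
    return ((self_conv - target) ** 2).sum()

conv = torch.nn.Conv2d(16, 32, 3)
penalty = orth_conv_penalty(conv.weight)  # add to the task loss
```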
2 code implementations • 17 Oct 2019 • Zengyi Li, Yubei Chen, Friedrich T. Sommer
Recently, Song and Ermon (2019) have shown that a generative model trained by denoising score matching accomplishes excellent sample synthesis when trained with data samples corrupted with multiple levels of noise.
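A minimal sketch of multi-noise-level denoising score matching in the spirit of Song and Ermon (2019): perturb each sample with a randomly chosen noise level and regress the score network onto -(x_tilde - x) / sigma^2, weighting each level by sigma^2. The tiny `score_net` below is a hypothetical stand-in, not a real score architecture.

```python
import torch

def dsm_loss(score_net, x, sigmas):
    """Multi-noise-level denoising score matching: the network should
    predict the score of the noised data, -(x_tilde - x) / sigma^2."""
    idx = torch.randint(len(sigmas), (x.shape[0],))
    sigma = sigmas[idx].view(-1, 1)                  # per-sample noise level
    x_tilde = x + sigma * torch.randn_like(x)
    target = -(x_tilde - x) / sigma ** 2
    pred = score_net(x_tilde, sigma)
    # Weight each level by sigma^2 so all scales contribute comparably.
    return ((sigma ** 2) * (pred - target) ** 2).sum(dim=1).mean()

net = torch.nn.Linear(785, 784)  # toy stand-in for a real score network
score_net = lambda x, s: net(torch.cat([x, s], dim=1))
sigmas = torch.exp(torch.linspace(0, -5, 10))  # geometric noise schedule
loss = dsm_loss(score_net, torch.randn(32, 784), sigmas)
```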
1 code implementation • 9 Oct 2019 • Juexiao Zhang, Yubei Chen, Brian Cheung, Bruno A. Olshausen
Word embedding techniques based on co-occurrence statistics have proved very useful for extracting semantic and syntactic representations of words as low-dimensional continuous vectors.
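For background, the classic co-occurrence-based recipe (a positive PMI matrix followed by truncated SVD) looks like the following; this illustrates the family of methods the abstract refers to, not the paper's own analysis.

```python
import numpy as np

def ppmi_embeddings(cooc, dim=2, eps=1e-12):
    """Classic co-occurrence embedding: positive PMI matrix + truncated SVD.
    cooc: (V, V) word-word co-occurrence counts."""
    total = cooc.sum()
    p_ij = cooc / total
    p_i = cooc.sum(axis=1, keepdims=True) / total
    pmi = np.log((p_ij + eps) / (p_i @ p_i.T + eps))
    ppmi = np.maximum(pmi, 0.0)
    U, S, _ = np.linalg.svd(ppmi)
    return U[:, :dim] * np.sqrt(S[:dim])  # (V, dim) word vectors

cooc = np.array([[0, 8, 1], [8, 0, 1], [1, 1, 0]], dtype=float)
vecs = ppmi_embeddings(cooc)  # words 0 and 1 end up close together
```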
no code implementations • 25 Sep 2019 • Zengyi Li, Yubei Chen, Friedrich T. Sommer
Energy-based models output unnormalized log-probability values given data samples.
1 code implementation • NeurIPS 2019 • Brian Cheung, Alex Terekhov, Yubei Chen, Pulkit Agrawal, Bruno Olshausen
We present a method for storing multiple models within a single set of parameters.
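One variant of such parameter superposition binds each task's input with a fixed random +/-1 context key before a shared weight matrix, so the other tasks' models interfere only as near-orthogonal noise. A minimal sketch under that assumption:

```python
import torch

class SuperposedLinear(torch.nn.Module):
    """Store several linear models in one weight matrix by binding each
    task's input with a fixed random +/-1 context key. Other tasks'
    models then appear only as near-orthogonal noise."""
    def __init__(self, in_dim, out_dim, n_tasks):
        super().__init__()
        self.W = torch.nn.Parameter(torch.randn(out_dim, in_dim) * 0.01)
        keys = torch.randint(0, 2, (n_tasks, in_dim)).float() * 2 - 1
        self.register_buffer("keys", keys)  # fixed random context per task

    def forward(self, x, task):
        return (self.keys[task] * x) @ self.W.T  # bind input, then project

layer = SuperposedLinear(784, 10, n_tasks=5)
out = layer(torch.randn(32, 784), task=3)
```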
no code implementations • NeurIPS 2018 • Yubei Chen, Dylan M. Paiton, Bruno A. Olshausen
We present a signal representation framework called the sparse manifold transform that combines key ideas from sparse coding, manifold learning, and slow feature analysis.
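The slow-feature ingredient of the transform can be illustrated in isolation: find a linear projection of the (sparse) codes whose outputs vary slowly in time, via a generalized eigenproblem on the code covariance and the covariance of temporal differences. This sketch shows only that SFA-style step, not the full sparse manifold transform.

```python
import numpy as np
from scipy.linalg import eigh

def slow_projection(A, dim):
    """SFA-style spectral step: find a linear map P whose outputs change
    slowly over time, minimizing ||P a_{t+1} - P a_t||^2 with unit-variance
    outputs. A: (T, D) sequence of codes."""
    A = A - A.mean(axis=0)
    C = A.T @ A / len(A)                # code covariance
    dA = np.diff(A, axis=0)
    Cdot = dA.T @ dA / len(dA)          # covariance of temporal differences
    # Generalized eigenproblem Cdot p = lam * C p; slowest = smallest lam.
    evals, evecs = eigh(Cdot, C)
    return evecs[:, :dim].T             # (dim, D) projection P

# Toy demo: a random walk has slowly varying cumulative directions.
P = slow_projection(np.cumsum(np.random.randn(500, 20), axis=0), dim=3)
```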