2 code implementations • 26 Apr 2024 • Wei Cui, Rasa Hosseinzadeh, Junwei Ma, Tongzi Wu, Yi Sui, Keyvan Golestan
Contrastive learning is a model pre-training technique that first creates similar views of the original data, and then encourages the data and its corresponding views to be close in the embedding space.
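As a minimal illustration (not necessarily this paper's exact objective), an InfoNCE-style contrastive loss can be sketched in PyTorch as follows; the batch size, embedding dimension, and temperature are placeholder choices:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Minimal InfoNCE: each embedding in z1 should be closest to its
    augmented counterpart in z2 among all candidates in the batch."""
    z1 = F.normalize(z1, dim=1)  # project embeddings to the unit sphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                    # (B, B) similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return F.cross_entropy(logits, targets)

# Usage: z1, z2 are encoder outputs for two views of the same batch.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce_loss(z1, z2)
```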
1 code implementation • 24 Jan 2024 • Jesse C. Cresswell, Yi Sui, Bhargava Kumar, Noël Vouitsis
In response to everyday queries, humans explicitly signal uncertainty and offer alternative answers when they are unsure.
no code implementations • 13 Oct 2023 • Haoqian Chen, Jian Liu, Minghe Li, Kaiwen Jiang, Ziheng Xu, Rencheng Sun, Yi Sui
In addition, there are few publicly available labeled datasets of equirectangular images, which presents a challenge for standard CNN models to process equirectangular images effectively.
1 code implementation • 11 Oct 2023 • Yi Sui, Tongzi Wu, Jesse C. Cresswell, Ga Wu, George Stein, Xiao Shi Huang, Xiaochen Zhang, Maksims Volkovs
Self-supervised representation learning (SSRL) has advanced considerably by exploiting the transformation-invariance assumption under artificially designed data augmentations.
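The invariance assumption is usually instantiated by encoding two randomly augmented views of each input and penalizing their disagreement. A minimal sketch, assuming per-image tensors and a recent torchvision (the augmentation choices and loss are illustrative, not this paper's):

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

# Illustrative augmentations; the invariance assumption says the learned
# representation should not change under these transformations.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4),
])

def invariance_loss(encoder, images):
    """Encode two augmented views of each image and penalize disagreement."""
    v1 = torch.stack([augment(img) for img in images])
    v2 = torch.stack([augment(img) for img in images])
    z1, z2 = encoder(v1), encoder(v2)
    return 1 - F.cosine_similarity(z1, z2, dim=1).mean()
```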
2 code implementations • NeurIPS 2023 • George Stein, Jesse C. Cresswell, Rasa Hosseinzadeh, Yi Sui, Brendan Leigh Ross, Valentin Villecroze, Zhaoyan Liu, Anthony L. Caterini, J. Eric T. Taylor, Gabriel Loaiza-Ganem
Comparing human judgments against 17 modern metrics for evaluating the overall performance, fidelity, diversity, rarity, and memorization of generative models, we find that the state-of-the-art perceptual realism of diffusion models is not reflected in commonly reported metrics such as FID.
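For reference, FID fits a Gaussian to real and generated features (typically extracted by an Inception network, omitted here) and computes the Fréchet distance between the two fits; a minimal NumPy/SciPy sketch:

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats, gen_feats):
    """FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 (S_r S_g)^{1/2}),
    where (mu, S) are the mean and covariance of each feature set."""
    mu_r, mu_g = real_feats.mean(0), gen_feats.mean(0)
    s_r = np.cov(real_feats, rowvar=False)
    s_g = np.cov(gen_feats, rowvar=False)
    covmean = sqrtm(s_r @ s_g)
    if np.iscomplexobj(covmean):  # discard tiny imaginary parts from numerics
        covmean = covmean.real
    return float(np.sum((mu_r - mu_g) ** 2) + np.trace(s_r + s_g - 2 * covmean))
```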
no code implementations • 12 Oct 2022 • Yi Sui, Junfeng Wen, Yenson Lau, Brendan Leigh Ross, Jesse C. Cresswell
In the traditional federated learning setting, a central server coordinates a network of clients to train one global model.
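In this setting the server typically aggregates local updates with federated averaging (FedAvg); a minimal sketch of the aggregation step, with illustrative names rather than this paper's code:

```python
import torch

def fedavg(client_states, client_sizes):
    """Average client state_dicts, weighted by each client's number of
    training examples, to produce the next global model (FedAvg)."""
    total = sum(client_sizes)
    return {
        key: sum(state[key] * (n / total)
                 for state, n in zip(client_states, client_sizes))
        for key in client_states[0]
    }

# Each round: clients train locally, send back state_dicts, then
# global_model.load_state_dict(fedavg(states, sizes))
```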
1 code implementation • NeurIPS 2021 • Yi Sui, Ga Wu, Scott Sanner
Explaining the influence of training data on deep neural network predictions is a critical tool for debugging models through data curation.
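One common family of influence estimators (a TracIn-style gradient-similarity proxy; not necessarily this paper's method) scores a training example by the dot product between its loss gradient and the test example's loss gradient:

```python
import torch

def influence_score(model, loss_fn, train_batch, test_batch):
    """Gradient-similarity proxy for influence: how aligned is the training
    example's loss gradient with the test example's loss gradient?"""
    params = [p for p in model.parameters() if p.requires_grad]

    def flat_grad(x, y):
        grads = torch.autograd.grad(loss_fn(model(x), y), params)
        return torch.cat([g.flatten() for g in grads])

    return torch.dot(flat_grad(*train_batch), flat_grad(*test_batch)).item()
```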
1 code implementation • 5 Oct 2021 • Yi Sui, Ga Wu, Scott Sanner
We additionally introduce a novel Frobenius norm-based contrastive learning objective to improve latent representational generalization. Empirically, we validate MAPSED on two publicly accessible urban crime datasets for spatiotemporal sparse event prediction, where it outperforms both classical and state-of-the-art deep learning models.
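As a rough sketch of what a Frobenius norm-based contrastive objective can look like (the exact MAPSED formulation may differ), latent matrices for positive pairs are pulled together while negatives are pushed beyond a margin:

```python
import torch

def frobenius_contrastive(z_anchor, z_pos, z_neg, margin=1.0):
    """Hinge-style contrastive loss using Frobenius distances between
    batched latent matrices of shape (B, M, N)."""
    d_pos = torch.linalg.matrix_norm(z_anchor - z_pos, ord='fro')
    d_neg = torch.linalg.matrix_norm(z_anchor - z_neg, ord='fro')
    return torch.clamp(d_pos - d_neg + margin, min=0).mean()
```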