Search Results for author: Yun Cheng

Found 12 papers, 10 papers with code

Multimodal Learning Without Labeled Multimodal Data: Guarantees and Applications

1 code implementation • 7 Jun 2023 Paul Pu Liang, Chun Kai Ling, Yun Cheng, Alex Obolenskiy, Yudong Liu, Rohan Pandey, Alex Wilf, Louis-Philippe Morency, Ruslan Salakhutdinov

We propose two lower bounds based on the amount of shared information between modalities and the disagreement between separately trained unimodal classifiers, and derive an upper bound through connections to approximate algorithms for min-entropy couplings.
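One ingredient of these bounds, the disagreement between separately trained unimodal classifiers, is simple to estimate from predictions alone. The following is a toy sketch of that quantity, not the authors' implementation; the modality names and prediction values are hypothetical.

```python
import numpy as np

def disagreement(preds_a, preds_b):
    """Fraction of samples on which two unimodal classifiers disagree."""
    preds_a, preds_b = np.asarray(preds_a), np.asarray(preds_b)
    return float(np.mean(preds_a != preds_b))

# Hypothetical label predictions from two separately trained unimodal classifiers
text_preds  = [0, 1, 1, 0, 1, 0]
image_preds = [0, 1, 0, 0, 1, 1]
print(disagreement(text_preds, image_preds))  # disagree on 2 of 6 samples → 0.333...
```

Intuitively, low disagreement suggests the modalities carry largely shared task-relevant information, which is what the lower bounds exploit.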

Self-Supervised Learning

Multimodal Fusion Interactions: A Study of Human and Automatic Quantification

1 code implementation • 7 Jun 2023 Paul Pu Liang, Yun Cheng, Ruslan Salakhutdinov, Louis-Philippe Morency

In order to perform multimodal fusion of heterogeneous signals, we need to understand their interactions: how each modality individually provides information useful for a task and how this information changes in the presence of other modalities.

Counterfactual

CamDiff: Camouflage Image Augmentation via Diffusion Model

1 code implementation • 11 Apr 2023 Xue-Jing Luo, Shuo Wang, Zongwei Wu, Christos Sakaridis, Yun Cheng, Deng-Ping Fan, Luc van Gool

Specifically, we leverage the latent diffusion model to synthesize salient objects in camouflaged scenes, while using the zero-shot image classification ability of the Contrastive Language-Image Pre-training (CLIP) model to prevent synthesis failures and ensure the synthesized object aligns with the input prompt.

Image Augmentation • Image Classification +3

Quantifying & Modeling Multimodal Interactions: An Information Decomposition Framework

1 code implementation NeurIPS 2023 Paul Pu Liang, Yun Cheng, Xiang Fan, Chun Kai Ling, Suzanne Nie, Richard Chen, Zihao Deng, Nicholas Allen, Randy Auerbach, Faisal Mahmood, Ruslan Salakhutdinov, Louis-Philippe Morency

The recent explosion of interest in multimodal applications has resulted in a wide selection of datasets and methods for representing and integrating information from different modalities.

Model Selection

MultiBench: Multiscale Benchmarks for Multimodal Representation Learning

2 code implementations • 15 Jul 2021 Paul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle A. Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency

In order to accelerate progress towards understudied modalities and tasks while ensuring real-world robustness, we release MultiBench, a systematic and unified large-scale benchmark spanning 15 datasets, 10 modalities, 20 prediction tasks, and 6 research areas.

Representation Learning

AKG: Automatic Kernel Generation for Neural Processing Units using Polyhedral Transformations

1 code implementation • Proceedings of the 42nd ACM SIGPLAN International Conference on Programming Language Design and Implementation 2021 Jie Zhao, Bojie Li, Wang Nie, Zhen Geng, Renwei Zhang, Xiong Gao, Bin Cheng, Chen Wu, Yun Cheng, Zheng Li, Peng Di, Kun Zhang, Xuefeng Jin

Existing tensor compilers have proven their effectiveness in deploying deep neural networks on general-purpose hardware like CPU and GPU, but optimizing for neural processing units (NPUs) is still challenging due to the heterogeneous compute units and complicated memory hierarchy.

Code Generation • Management +1

Optical manipulation of electronic dimensionality in a quantum material

no code implementations • 21 Jan 2021 Shaofeng Duan, Yun Cheng, Wei Xia, Yuanyuan Yang, Fengfeng Qi, Tianwei Tang, Yanfeng Guo, Dong Qian, Dao Xiang, Jie Zhang, Wentao Zhang

Exotic phenomena can be achieved in quantum materials by confining electronic states into two dimensions.

Strongly Correlated Electrons • Materials Science • Superconductivity

Interpretable and Transferable Models to Understand the Impact of Lockdown Measures on Local Air Quality

1 code implementation • 19 Nov 2020 Johanna Einsiedler, Yun Cheng, Franz Papst, Olga Saukh

In this work, we estimate pollution reduction over the lockdown period by using the measurements from ground air pollution monitoring stations, training a long-term prediction model and comparing its predictions to measured values over the lockdown month. We show that our models achieve state-of-the-art performance on the data from air pollution measurement stations in Switzerland and in China, and estimate changes of up to -15.8% / +34.4% in NO2 / PM10 in Zurich, and -35.3% / -3.5% and -42.4% / -34.7% in NO2 / PM2.5 in Beijing and Wuhan, respectively.
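The reported percentages follow from comparing measured pollution against the model's counterfactual prediction. A minimal sketch of that comparison is below; the function name and the NO2 values are hypothetical, not taken from the paper's data.

```python
def relative_change(predicted, measured):
    """Percent change of measured pollution relative to the model's
    counterfactual (no-lockdown) prediction for the same period."""
    return (measured - predicted) / predicted * 100.0

# Hypothetical monthly mean NO2 concentrations (µg/m³)
print(round(relative_change(predicted=30.0, measured=25.26), 1))  # -15.8
```

A negative value indicates that measured pollution fell below what the model predicted for a lockdown-free month.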

Transfer Learning

Adaptive Loss-aware Quantization for Multi-bit Networks

1 code implementation • CVPR 2020 Zhongnan Qu, Zimu Zhou, Yun Cheng, Lothar Thiele

We investigate the compression of deep neural networks by quantizing their weights and activations into multiple binary bases, known as multi-bit networks (MBNs), which accelerate the inference and reduce the storage for the deployment on low-resource mobile and embedded platforms.
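To illustrate what a multi-bit network representation looks like, here is a sketch of greedy residual binarization, a common baseline scheme for approximating a weight tensor as a sum of scaled binary bases. This is an assumed illustration, not the loss-aware algorithm (ALQ) proposed in the paper.

```python
import numpy as np

def multibit_quantize(w, num_bits):
    """Approximate w as sum_i alpha_i * b_i with b_i in {-1, +1}:
    repeatedly binarize the remaining residual (greedy baseline)."""
    residual = w.astype(float).copy()
    alphas, bases = [], []
    for _ in range(num_bits):
        b = np.where(residual >= 0, 1.0, -1.0)   # binary base
        alpha = np.mean(np.abs(residual))        # its optimal scale in L2 sense
        alphas.append(alpha)
        bases.append(b)
        residual -= alpha * b
    approx = sum(a * b for a, b in zip(alphas, bases))
    return approx, alphas

rng = np.random.default_rng(0)
w = rng.normal(size=1000)
for k in (1, 2, 3):
    approx, _ = multibit_quantize(w, k)
    print(k, float(np.mean((w - approx) ** 2)))  # reconstruction error shrinks with more bits
```

Each extra bit adds one binary base, trading storage for reconstruction accuracy; loss-aware methods like the paper's instead choose the bases to minimize the network's loss rather than the weight reconstruction error.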

Quantization
