3 code implementations • ICLR 2021 • Michael Zhang, Karan Sapra, Sanja Fidler, Serena Yeung, Jose M. Alvarez
While federated learning traditionally aims to train a single global model across decentralized local datasets, one model may not always be ideal for all participating clients.
no code implementations • ECCV 2020 • Arun Mallya, Ting-Chun Wang, Karan Sapra, Ming-Yu Liu
This is because they lack knowledge of the 3D world being rendered and generate each frame only based on the past few frames.
no code implementations • 14 Jul 2020 • Guilin Liu, Rohan Taori, Ting-Chun Wang, Zhiding Yu, Shiqiu Liu, Fitsum A. Reda, Karan Sapra, Andrew Tao, Bryan Catanzaro
Specifically, we directly treat the whole encoded feature map of the input texture as transposed convolution filters and the features' self-similarity map, which captures the auto-correlation information, as input to the transposed convolution.
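The mechanics of that sentence can be sketched in a few lines. This is a toy illustration with made-up shapes, not the paper's architecture: correlating the encoded feature map with itself yields a self-similarity map, and a transposed convolution using the same features as filters expands the texture.

```python
import torch
import torch.nn.functional as F

# Toy encoded feature map of an input texture (shapes are our own choice).
feat = torch.randn(1, 8, 16, 16)        # (N, C, H, W)

# Self-similarity (auto-correlation) map: slide the feature map over a
# zero-padded copy of itself by treating it as a (1, C, H, W) conv filter.
sim = F.conv2d(feat, feat, padding=8)   # -> (1, 1, 17, 17)

# Transposed convolution with the same features as filters: each similarity
# peak "stamps" a copy of the feature map into a larger output.
out = F.conv_transpose2d(sim, feat)     # -> (1, 8, 32, 32)
```

The key property is that both the filters and the input to the transposed convolution are derived from the texture itself, so no texture-specific weights need to be learned.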
8 code implementations • 21 May 2020 • Andrew Tao, Karan Sapra, Bryan Catanzaro
Multi-scale inference is commonly used to improve the results of semantic segmentation.
Ranked #6 on Semantic Segmentation on Cityscapes val (using extra training data)
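The multi-scale inference baseline that this paper builds on is simple to state: run the network at several input resolutions, resize the class probabilities back to the original size, and average them. A minimal sketch (the model, class count, and scales below are our own assumptions, and this is the plain averaging baseline, not the paper's learned attention combination):

```python
import torch
import torch.nn.functional as F

def multiscale_inference(model, image, scales=(0.5, 1.0, 2.0)):
    """Average class probabilities predicted at several input scales."""
    n, c, h, w = image.shape
    avg = None
    for s in scales:
        scaled = F.interpolate(image, scale_factor=s, mode='bilinear',
                               align_corners=False)
        probs = model(scaled).softmax(dim=1)          # per-pixel class probs
        probs = F.interpolate(probs, size=(h, w), mode='bilinear',
                              align_corners=False)    # back to input size
        avg = probs if avg is None else avg + probs
    return avg / len(scales)

# Toy stand-in for a segmentation network (assumption: 19 classes, as in
# Cityscapes), just to exercise the function.
model = torch.nn.Conv2d(3, 19, kernel_size=3, padding=1)
image = torch.randn(1, 3, 64, 64)
probs = multiscale_inference(model, image)            # (1, 19, 64, 64)
```

Averaging probabilities keeps the output a valid distribution per pixel; the paper's contribution is replacing this uniform average with learned, scale-dependent attention weights.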
no code implementations • CVPR 2020 • Aysegul Dundar, Karan Sapra, Guilin Liu, Andrew Tao, Bryan Catanzaro
Conditional image synthesis for generating photorealistic images serves various applications, ranging from content editing to content generation.
5 code implementations • CVPR 2019 • Yi Zhu, Karan Sapra, Fitsum A. Reda, Kevin J. Shih, Shawn Newsam, Andrew Tao, Bryan Catanzaro
In this paper, we present a video prediction-based methodology to scale up training sets by synthesizing new training samples in order to improve the accuracy of semantic segmentation networks.
Ranked #2 on Semantic Segmentation on KITTI Semantic Segmentation (using extra training data)
4 code implementations • 28 Nov 2018 • Guilin Liu, Kevin J. Shih, Ting-Chun Wang, Fitsum A. Reda, Karan Sapra, Zhiding Yu, Andrew Tao, Bryan Catanzaro
In this paper, we present a simple yet effective padding scheme that can be used as a drop-in module for existing convolutional neural networks.
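The drop-in idea can be sketched as follows. This is our own minimal reconstruction, not the paper's released implementation: convolve with ordinary zero padding, then re-weight each output by the ratio of the full window size to the number of in-image pixels under that window, so border outputs are not attenuated by the padded zeros.

```python
import torch
import torch.nn.functional as F

def partial_conv2d_padding(x, weight, bias=None, padding=1):
    """Zero-pad, convolve, then rescale border outputs by the ratio of
    window size to in-image pixel count (a sketch; names are ours)."""
    n, c, h, w = x.shape
    kh, kw = weight.shape[2:]
    # Count in-image pixels under each window position by convolving an
    # all-ones image with an all-ones kernel.
    ones = torch.ones(1, 1, h, w)
    window = torch.ones(1, 1, kh, kw)
    valid = F.conv2d(ones, window, padding=padding)   # valid-pixel counts
    ratio = (kh * kw) / valid                         # > 1 near borders
    out = F.conv2d(x, weight, padding=padding) * ratio
    if bias is not None:                              # bias added after scaling
        out = out + bias.view(1, -1, 1, 1)
    return out

# Hypothetical demo values.
x = torch.randn(1, 3, 8, 8)
w = torch.randn(4, 3, 3, 3)
y = partial_conv2d_padding(x, w)   # same (1, 4, 8, 8) shape as zero padding
```

Interior outputs are untouched (the ratio is 1 there), which is what makes the scheme usable as a drop-in replacement for zero padding in existing networks.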