no code implementations • ICON 2020 • Nitin Bansal, Ajit Kumar
For example, during the development of a Punjabi-to-Urdu MT system, many issues were identified while preparing lexical resources for both languages.
no code implementations • 25 Oct 2022 • Zhiqi Zhang, Nitin Bansal, Changjiang Cai, Pan Ji, Qingan Yan, Xiangyu Xu, Yi Xu
To this end, we propose CLIP-Flow, a semi-supervised iterative pseudo-labeling framework to transfer the pretraining knowledge to the target real domain.
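The general shape of semi-supervised iterative pseudo-labeling can be sketched as follows. This is a minimal illustration, not the CLIP-Flow implementation: it uses a hypothetical nearest-centroid classifier and a margin-based confidence threshold as stand-ins for the paper's flow model and labeling criterion.

```python
import numpy as np

def fit_centroids(X, y, n_classes):
    """Toy 'model': one centroid per class (stand-in for a real network)."""
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def predict_with_confidence(centroids, X):
    """Predict the nearest centroid; confidence is the margin between
    the two nearest centroids."""
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    ordered = np.sort(d, axis=1)
    conf = ordered[:, 1] - ordered[:, 0]
    return d.argmin(axis=1), conf

def iterative_pseudo_labeling(X_lab, y_lab, X_unlab, n_classes,
                              rounds=3, margin=0.5):
    """Each round: train on labeled + pseudo-labeled data, then
    pseudo-label unlabeled samples the model is confident about."""
    X_train, y_train = X_lab.copy(), y_lab.copy()
    for _ in range(rounds):
        centroids = fit_centroids(X_train, y_train, n_classes)
        pred, conf = predict_with_confidence(centroids, X_unlab)
        keep = conf > margin  # confidence threshold for accepting pseudo-labels
        X_train = np.vstack([X_lab, X_unlab[keep]])
        y_train = np.concatenate([y_lab, pred[keep]])
    return fit_centroids(X_train, y_train, n_classes)
```

The key design choice this sketch shares with pseudo-labeling methods in general is that only high-confidence predictions are promoted to training labels each round, so early mistakes are less likely to be amplified.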
no code implementations • 21 Jun 2022 • Nitin Bansal, Pan Ji, Junsong Yuan, Yi Xu
The multi-task learning (MTL) paradigm focuses on jointly learning two or more tasks, aiming for significant improvements in a model's generalizability, performance, and training/inference memory footprint.
no code implementations • 5 May 2022 • Qingan Yan, Pan Ji, Nitin Bansal, Yuxin Ma, Yuan Tian, Yi Xu
In this paper, we deal with the problem of monocular depth estimation for fisheye cameras in a self-supervised manner.
no code implementations • CVPR 2022 • Jiachen Liu, Pan Ji, Nitin Bansal, Changjiang Cai, Qingan Yan, Xiaolei Huang, Yi Xu
The semantic plane detection branch is based on a single-view plane detection framework but with differences.
1 code implementation • NeurIPS 2018 • Nitin Bansal, Xiaohan Chen, Zhangyang Wang
This paper seeks to answer the question: as the (near-) orthogonality of weights is found to be a favorable property for training deep convolutional neural networks, how can we enforce it in more effective and easy-to-use ways?
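One simple way to encourage near-orthogonality is to add a soft penalty on the Gram matrix of the weights to the training loss. The sketch below is a minimal NumPy illustration of such a regularizer, ||W^T W − I||_F²; the paper studies several regularizer variants, and the authors' exact formulation is in the linked code.

```python
import numpy as np

def soft_orthogonality_penalty(W):
    """Soft orthogonality regularizer ||W^T W - I||_F^2.

    W: (fan_in, fan_out) weight matrix. The penalty is zero exactly
    when the columns of W are orthonormal, so adding it to the loss
    pushes weights toward (near-)orthogonality."""
    gram = W.T @ W
    eye = np.eye(W.shape[1])
    return float(np.sum((gram - eye) ** 2))
```

In practice the penalty would be scaled by a small coefficient and added to the task loss, so gradient descent trades off task accuracy against orthogonality of the weight columns.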