no code implementations • ICLR 2019 • Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, Caiming Xiong
During structure learning, the model optimizes for the best structure for the current task.
no code implementations • 3 Feb 2024 • Xilai Li, Xiaosong Li, Haishu Tan
Infrared and visible image fusion has emerged as a prominent research area in computer vision.
no code implementations • 3 Feb 2024 • Xilai Li, Wuyang Liu, Xiaosong Li, Haishu Tan
To bridge this research gap, we propose an all-weather MMIF model.
1 code implementation • 16 Jan 2024 • Xilai Li, Xiaosong Li, Haishu Tan, Jinyang Li
Existing multi-focus image fusion (MFIF) methods often fail to preserve the uncertain transition region and detect small focus areas within large defocused regions accurately.
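A common MFIF baseline that the snippet's failure modes (lost transition regions, missed small focus areas) are measured against is per-pixel selection by a local focus measure. A minimal sketch, assuming local variance as the sharpness measure; the function names and window size are illustrative, not the paper's method:

```python
import numpy as np

def focus_measure(img, k=3):
    # Local variance in a k x k window as a simple sharpness/focus measure.
    pad = k // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.empty_like(img, dtype=float)
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].var()
    return out

def fuse_multifocus(img_a, img_b, k=3):
    # Pick, per pixel, the source image that is locally sharper.
    mask = focus_measure(img_a, k) >= focus_measure(img_b, k)
    return np.where(mask, img_a, img_b)
```

This hard per-pixel decision is exactly where uncertain transition regions cause artifacts, which motivates the boundary-aware treatment the abstract alludes to.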
1 code implementation • 3 Nov 2023 • Xilai Li, Xiaosong Li, Tao Ye, Xiaoqi Cheng, Wuyang Liu, Haishu Tan
However, the fusion of multiple visible images with different focal regions and infrared images is an unprecedented challenge in real MMIF applications.
no code implementations • 13 Jun 2023 • Goeric Huybrechts, Srikanth Ronanki, Xilai Li, Hadis Nosrati, Sravan Bodapati, Katrin Kirchhoff
To address this issue, we propose the integration of a novel dynamic contextual carry-over mechanism in a state-of-the-art (SOTA) unified ASR system.
Automatic Speech Recognition (ASR) +1
no code implementations • 11 May 2023 • Jinglun Cai, Monica Sunkara, Xilai Li, Anshu Bhatia, Xiao Pan, Sravan Bodapati
Masked Language Models (MLMs) have proven to be effective for second-pass rescoring in Automatic Speech Recognition (ASR) systems.
Automatic Speech Recognition (ASR) +4
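MLM rescoring of this kind is usually done with a pseudo-log-likelihood: each token of an n-best hypothesis is masked in turn and scored by the masked LM, and the summed score is interpolated with the first-pass score. A minimal sketch, assuming a hypothetical `mlm_logprob(masked_tokens, position, token)` scorer interface (the real systems use a BERT-style model; the interpolation weight `lam` is illustrative):

```python
def pseudo_log_likelihood(tokens, mlm_logprob):
    # Mask each token in turn and sum the masked-LM log-probability
    # of the original token at that position.
    score = 0.0
    for i, tok in enumerate(tokens):
        masked = tokens[:i] + ["[MASK]"] + tokens[i + 1:]
        score += mlm_logprob(masked, i, tok)
    return score

def rescore_nbest(hypotheses, mlm_logprob, am_scores, lam=0.5):
    # Interpolate first-pass scores with the MLM score and re-rank.
    rescored = [
        (lam * am + (1 - lam) * pseudo_log_likelihood(h, mlm_logprob), h)
        for h, am in zip(hypotheses, am_scores)
    ]
    return max(rescored)[0:2][1]
```

Each hypothesis of length n costs n masked-LM forward passes, which is why second-pass MLM rescoring is typically restricted to a short n-best list.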
no code implementations • 18 Apr 2023 • Xilai Li, Goeric Huybrechts, Srikanth Ronanki, Jeff Farris, Sravan Bodapati
Overall, our proposed model reduces the degradation of the streaming mode over the non-streaming full-contextual model from 41.7% and 45.7% to 16.7% and 26.2% on the LibriSpeech test-clean and test-other datasets respectively, while improving by a relative 15.5% WER over the previous state-of-the-art unified model.
2 code implementations • ECCV 2020 • Xilai Li, Wei Sun, Tianfu Wu
In state-of-the-art deep neural networks, both feature normalization and feature attention have become ubiquitous.
Ranked #71 on Instance Segmentation on COCO minival
no code implementations • 31 Mar 2019 • Xilai Li, Yingbo Zhou, Tianfu Wu, Richard Socher, Caiming Xiong
Addressing catastrophic forgetting is one of the key challenges in continual learning where machine learning systems are trained with sequential or streaming tasks.
4 code implementations • CVPR 2019 • Xilai Li, Xi Song, Tianfu Wu
This paper presents deep compositional grammatical architectures which harness the best of two worlds: grammar models and DNNs.
1 code implementation • 14 Nov 2017 • Tianfu Wu, Wei Sun, Xilai Li, Xi Song, Bo Li
We focus on weakly-supervised extractive rationale generation, that is, learning to unfold latent discriminative part configurations of object instances automatically and simultaneously during detection, without using any supervision for part configurations.