Search Results for author: Dan Su

Found 74 papers, 26 papers with code

Dimsum @LaySumm 20

1 code implementation • EMNLP (sdp) 2020 • Tiezheng Yu, Dan Su, Wenliang Dai, Pascale Fung

Lay summarization aims to generate lay summaries of scientific papers automatically.

Lay Summarization Sentence

MM-LLMs: Recent Advances in MultiModal Large Language Models

no code implementations • 24 Jan 2024 • Duzhen Zhang, Yahan Yu, Chenxing Li, Jiahua Dong, Dan Su, Chenhui Chu, Dong Yu

In the past year, MultiModal Large Language Models (MM-LLMs) have undergone substantial advancements, augmenting off-the-shelf LLMs to support MM inputs or outputs via cost-effective training strategies.

Decision Making

DurIAN-E: Duration Informed Attention Network For Expressive Text-to-Speech Synthesis

no code implementations • 22 Sep 2023 • Yu Gu, Yianrao Bian, Guangzhi Lei, Chao Weng, Dan Su

This paper introduces an improved duration informed attention neural network (DurIAN-E) for expressive and high-fidelity text-to-speech (TTS) synthesis.

Denoising Speech Synthesis +1
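
The "duration informed" part of the DurIAN family replaces soft attention alignment with explicit phoneme durations: encoder states are expanded to frame level before decoding. A minimal sketch of that length-regulation step, with illustrative names and toy values (not code from the paper):

```python
import numpy as np

def length_regulate(phoneme_states: np.ndarray, durations: np.ndarray) -> np.ndarray:
    """Expand phoneme-level states to frame level by repeating each
    state durations[i] times (durations are in frames)."""
    # np.repeat repeats row i of phoneme_states durations[i] times
    return np.repeat(phoneme_states, durations, axis=0)

# Toy example: 3 phonemes with 4-dim encodings and durations of 2/1/3 frames.
states = np.arange(12, dtype=np.float32).reshape(3, 4)
frames = length_regulate(states, np.array([2, 1, 3]))
print(frames.shape)  # (6, 4): the decoder now sees a frame-level sequence
```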

Text-Only Domain Adaptation for End-to-End Speech Recognition through Down-Sampling Acoustic Representation

no code implementations • 4 Sep 2023 • Jiaxu Zhu, Weinan Tong, Yaoxun Xu, Changhe Song, Zhiyong Wu, Zhao You, Dan Su, Dong Yu, Helen Meng

Mapping two modalities, speech and text, into a shared representation space is an active research direction for using text-only data to improve end-to-end automatic speech recognition (ASR) performance in new domains.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2
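
To give a feel for the down-sampling idea in the title, here is a hedged sketch that shortens frame-level acoustic features toward text-like lengths with plain average pooling; the paper's actual down-sampling scheme may differ:

```python
import numpy as np

def downsample(acoustic: np.ndarray, factor: int) -> np.ndarray:
    """Average-pool frame-level features (T, D) by `factor` along time,
    shortening the acoustic sequence toward text-token lengths."""
    T, D = acoustic.shape
    T_trim = (T // factor) * factor              # drop the ragged tail
    return acoustic[:T_trim].reshape(-1, factor, D).mean(axis=1)

feats = np.random.randn(100, 80).astype(np.float32)  # 100 frames, 80-dim
print(downsample(feats, 4).shape)                    # (25, 80)
```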

Model Debiasing via Gradient-based Explanation on Representation

no code implementations • 20 May 2023 • Jindi Zhang, Luning Wang, Dan Su, Yongxiang Huang, Caleb Chen Cao, Lei Chen

Machine learning systems can produce results that are biased against certain demographic groups, an issue known as the fairness problem.

Disentanglement Fairness

Learn What NOT to Learn: Towards Generative Safety in Chatbots

no code implementations • 21 Apr 2023 • Leila Khalatbari, Yejin Bang, Dan Su, Willy Chung, Saeed Ghadimi, Hossein Sameti, Pascale Fung

Our approach differs from the standard contrastive learning framework in that it automatically obtains positive and negative signals from the safe and unsafe language distributions that have been learned beforehand.

Contrastive Learning
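
As a rough illustration of contrasting safe and unsafe signals, the sketch below uses a generic InfoNCE-style loss with safe representations as positives and unsafe ones as negatives; this is an assumption-laden stand-in, not the paper's actual objective:

```python
import torch
import torch.nn.functional as F

def contrastive_safety_loss(anchor, safe, unsafe, tau=0.1):
    """anchor/safe: (B, D); unsafe: (B, K, D). Pull the anchor toward the
    safe representation and push it away from unsafe ones."""
    anchor = F.normalize(anchor, dim=-1)
    pos = (anchor * F.normalize(safe, dim=-1)).sum(-1, keepdim=True) / tau        # (B, 1)
    neg = torch.einsum("bd,bkd->bk", anchor, F.normalize(unsafe, dim=-1)) / tau   # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    # the positive is always at index 0 of the logits
    return F.cross_entropy(logits, torch.zeros(len(logits), dtype=torch.long))

loss = contrastive_safety_loss(torch.randn(8, 64), torch.randn(8, 64), torch.randn(8, 16, 64))
```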

TriNet: stabilizing self-supervised learning from complete or slow collapse on ASR

no code implementations • 12 Dec 2022 • Lixin Cao, Jun Wang, Ben Yang, Dan Su, Dong Yu

Self-supervised learning (SSL) models confront challenges of abrupt informational collapse or slow dimensional collapse.

Self-Supervised Learning

Generative Long-form Question Answering: Relevance, Faithfulness and Succinctness

no code implementations • 15 Nov 2022 • Dan Su

We pioneered the research direction to improve the answer quality in terms of 1) query-relevance, 2) answer faithfulness, and 3) answer succinctness.

Long Form Question Answering

Plausible May Not Be Faithful: Probing Object Hallucination in Vision-Language Pre-training

1 code implementation • 14 Oct 2022 • Wenliang Dai, Zihan Liu, Ziwei Ji, Dan Su, Pascale Fung

Large-scale vision-language pre-trained (VLP) models are prone to hallucinate non-existent visual objects when generating text based on visual information.

Hallucination Image Augmentation +3

Cross-Age Speaker Verification: Learning Age-Invariant Speaker Embeddings

1 code implementation • 13 Jul 2022 • Xiaoyi Qin, Na Li, Chao Weng, Dan Su, Ming Li

In this paper, we mine cross-age test sets based on the VoxCeleb dataset and propose our age-invariant speaker representation (AISR) learning method.

Age Estimation Speaker Verification

End-to-End Voice Conversion with Information Perturbation

no code implementations • 15 Jun 2022 • Qicong Xie, Shan Yang, Yi Lei, Lei Xie, Dan Su

The ideal goal of voice conversion is to convert the source speaker's speech to sound naturally like the target speaker while maintaining the linguistic content and the prosody of the source speech.

Voice Conversion

Towards Answering Open-ended Ethical Quandary Questions

no code implementations • 12 May 2022 • Yejin Bang, Nayeon Lee, Tiezheng Yu, Leila Khalatbari, Yan Xu, Samuel Cahyawijaya, Dan Su, Bryan Wilie, Romain Barraud, Elham J. Barezi, Andrea Madotto, Hayden Kee, Pascale Fung

We explore the current capability of LLMs to provide an answer with a deliberative exchange of different perspectives on an ethical quandary, in the manner of Socratic philosophy, instead of providing a closed answer like an oracle.

Few-Shot Learning Generative Question Answering +2

FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis

2 code implementations • 21 Apr 2022 • Rongjie Huang, Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, Zhou Zhao

Also, FastDiff enables sampling 58x faster than real time on a V100 GPU, making diffusion models practically applicable to speech synthesis deployment for the first time.

Ranked #7 on Text-To-Speech Synthesis on LJSpeech (using extra training data)

Denoising Speech Synthesis +2
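
The "58x faster than real-time" figure is the inverse of the real-time factor (RTF): synthesis wall-clock time divided by the duration of the generated audio. A small sketch of how RTF is measured for any vocoder; `dummy_vocoder` is a hypothetical stand-in:

```python
import time
import numpy as np

def real_time_factor(vocoder, mel, sample_rate=22050):
    """RTF = wall-clock synthesis time / duration of the generated audio.
    RTF < 1 means faster than real time; 1/RTF gives the 'x faster' figure."""
    start = time.perf_counter()
    audio = vocoder(mel)                       # any mel -> waveform callable
    elapsed = time.perf_counter() - start
    return elapsed / (len(audio) / sample_rate)

# Hypothetical stand-in vocoder: upsamples each mel frame by a 256-sample hop.
dummy_vocoder = lambda mel: np.zeros(mel.shape[0] * 256)
rtf = real_time_factor(dummy_vocoder, np.zeros((400, 80)))
print(f"RTF = {rtf:.5f}  ->  {1 / rtf:.0f}x faster than real time")
```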

BDDM: Bilateral Denoising Diffusion Models for Fast and High-Quality Speech Synthesis

1 code implementation • ICLR 2022 • Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu

We propose a new bilateral denoising diffusion model (BDDM) that parameterizes both the forward and reverse processes with a schedule network and a score network, which can be trained with a novel bilateral modeling objective.

Image Generation Speech Synthesis

VCVTS: Multi-speaker Video-to-Speech synthesis via cross-modal knowledge transfer from voice conversion

no code implementations • 18 Feb 2022 • Disong Wang, Shan Yang, Dan Su, Xunying Liu, Dong Yu, Helen Meng

Though significant progress has been made on speaker-dependent Video-to-Speech (VTS) synthesis, little attention has been devoted to multi-speaker VTS that can map silent video to speech while allowing flexible control of speaker identity, all in a single system.

Quantization Speech Synthesis +2

QA4QG: Using Question Answering to Constrain Multi-Hop Question Generation

1 code implementation • 14 Feb 2022 • Dan Su, Peng Xu, Pascale Fung

Multi-hop question generation (MQG) aims to generate complex questions which require reasoning over multiple pieces of information of the input passage.

Multi-hop Question Answering Question Answering +2

Survey of Hallucination in Natural Language Generation

no code implementations • 8 Feb 2022 • Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Yejin Bang, Delong Chen, Ho Shu Chan, Wenliang Dai, Andrea Madotto, Pascale Fung

This advancement has made NLG more fluent and coherent, which in turn has improved downstream tasks such as abstractive summarization, dialogue generation and data-to-text generation.

Abstractive Text Summarization Data-to-Text Generation +4

The CUHK-TENCENT speaker diarization system for the ICASSP 2022 multi-channel multi-party meeting transcription challenge

no code implementations • 4 Feb 2022 • Naijun Zheng, Na Li, Xixin Wu, Lingwei Meng, Jiawen Kang, Haibin Wu, Chao Weng, Dan Su, Helen Meng

This paper describes our speaker diarization system submitted to the Multi-channel Multi-party Meeting Transcription (M2MeT) challenge, where Mandarin meeting data were recorded in multi-channel format for diarization and automatic speech recognition (ASR) tasks.

Action Detection Activity Detection +6

DiffGAN-TTS: High-Fidelity and Efficient Text-to-Speech with Denoising Diffusion GANs

2 code implementations • 28 Jan 2022 • Songxiang Liu, Dan Su, Dong Yu

Denoising diffusion probabilistic models (DDPMs) are expressive generative models that have been used to solve a variety of speech synthesis problems.

Denoising Speech Synthesis
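
For readers new to DDPMs: the forward process corrupts data with Gaussian noise on a fixed schedule, and the model learns the reverse. The sketch below shows the standard closed-form forward marginal q(x_t | x_0) that such models train against; it is textbook DDPM math, not code from DiffGAN-TTS:

```python
import numpy as np

# Standard DDPM forward process: x_t = sqrt(abar_t) * x0 + sqrt(1 - abar_t) * eps
T = 1000
betas = np.linspace(1e-4, 0.02, T)          # common linear noise schedule
alphas_bar = np.cumprod(1.0 - betas)

def q_sample(x0, t, rng=np.random.default_rng(0)):
    """Sample x_t directly from x_0 using the closed-form marginal."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps, eps

x0 = np.random.randn(80)                    # e.g. one mel-spectrogram frame
x_t, eps = q_sample(x0, t=500)
# A denoiser is trained to predict `eps` from (x_t, t); DiffGAN-TTS pairs this
# setup with an adversarially trained reverse model that needs only a few steps.
```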

SpeechMoE2: Mixture-of-Experts Model with Improved Routing

no code implementations • 23 Nov 2021 • Zhao You, Shulin Feng, Dan Su, Dong Yu

Mixture-of-experts based acoustic models with dynamic routing mechanisms have shown promising results for speech recognition.

Computational Efficiency speech-recognition +1
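
Dynamic routing in MoE acoustic models typically means a small gate network picks an expert per frame. Below is a generic top-1 router sketch; the module and its sizes are illustrative, not SpeechMoE2's improved routing:

```python
import torch
import torch.nn as nn

class Top1Router(nn.Module):
    """Route each frame to the expert with the highest gate score."""
    def __init__(self, dim=256, n_experts=4):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts)
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(n_experts))

    def forward(self, x):                          # x: (T, dim) frame sequence
        probs = self.gate(x).softmax(dim=-1)       # (T, n_experts) routing weights
        idx = probs.argmax(dim=-1)                 # chosen expert per frame
        out = torch.stack([self.experts[int(i)](f) for f, i in zip(x, idx)])
        # Scale by the winning gate probability so the router receives gradient.
        return out * probs.gather(1, idx[:, None])

y = Top1Router()(torch.randn(10, 256))
```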

Meta-Voice: Fast few-shot style transfer for expressive voice cloning using meta learning

no code implementations • 14 Nov 2021 • Songxiang Liu, Dan Su, Dong Yu

The task of few-shot style transfer for voice cloning in text-to-speech (TTS) synthesis aims at transferring the speaking style of an arbitrary source speaker to a target speaker's voice using a very limited amount of neutral data.

Disentanglement Meta-Learning +2

SynCLR: A Synthesis Framework for Contrastive Learning of out-of-domain Speech Representations

no code implementations • 29 Sep 2021 • Rongjie Huang, Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu, Zhou Zhao, Yi Ren

Learning generalizable speech representations for unseen samples in different domains has been a challenge of ever-increasing importance.

Contrastive Learning Data Augmentation +4

Referee: Towards reference-free cross-speaker style transfer with low-quality data for expressive speech synthesis

no code implementations • 8 Sep 2021 • Songxiang Liu, Shan Yang, Dan Su, Dong Yu

The S2W model is trained with high-quality target data, which is adopted to effectively aggregate style descriptors and generate high-fidelity speech in the target speaker's voice.

Expressive Speech Synthesis Sentence +1

AppQ: Warm-starting App Recommendation Based on View Graphs

no code implementations • 8 Sep 2021 • Dan Su, Jiqiang Liu, Sencun Zhu, Xiaoyang Wang, Wei Wang, Xiangliang Zhang

In this work, we propose AppQ, a novel app quality grading and recommendation system that extracts inborn features of apps based on app source code.

Recommendation Systems

Bilateral Denoising Diffusion Models

no code implementations • 26 Aug 2021 • Max W. Y. Lam, Jun Wang, Rongjie Huang, Dan Su, Dong Yu

In this paper, we propose novel bilateral denoising diffusion models (BDDMs), which take significantly fewer steps to generate high-quality samples.

Denoising Scheduling

Glow-WaveGAN: Learning Speech Representations from GAN-based Variational Auto-Encoder For High Fidelity Flow-based Speech Synthesis

no code implementations • 21 Jun 2021 • Jian Cong, Shan Yang, Lei Xie, Dan Su

The current two-stage TTS framework typically integrates an acoustic model with a vocoder -- the acoustic model predicts a low-resolution intermediate representation such as the Mel-spectrum, while the vocoder generates the waveform from that intermediate representation.

Speech Synthesis
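
The two-stage framework in the snippet decomposes TTS into an acoustic model and a vocoder. A skeletal sketch of the pipeline with placeholder models, just to fix the interfaces:

```python
import numpy as np

def acoustic_model(text: str) -> np.ndarray:
    """Placeholder stage 1: text -> low-resolution intermediate (e.g. mel)."""
    return np.zeros((len(text) * 5, 80))     # ~5 frames per character, 80 mel bins

def vocoder(mel: np.ndarray, hop=256) -> np.ndarray:
    """Placeholder stage 2: mel-spectrum -> waveform samples."""
    return np.zeros(mel.shape[0] * hop)

# Stage 1 predicts the intermediate; stage 2 renders audio from it.
# Glow-WaveGAN's point is to learn this intermediate space jointly (via a
# GAN-based VAE) instead of fixing it to the Mel-spectrum.
wav = vocoder(acoustic_model("hello world"))
print(wav.shape)
```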

Controllable Context-aware Conversational Speech Synthesis

no code implementations • 21 Jun 2021 • Jian Cong, Shan Yang, Na Hu, Guangzhi Li, Lei Xie, Dan Su

Specifically, we use explicit labels to represent two typical spontaneous behaviors, filled pauses and prolongations, in the acoustic model, and develop a neural-network-based predictor to predict the occurrences of the two behaviors from text.

Speech Synthesis

GigaSpeech: An Evolving, Multi-domain ASR Corpus with 10,000 Hours of Transcribed Audio

2 code implementations • 13 Jun 2021 • Guoguo Chen, Shuzhou Chai, Guanbo Wang, Jiayu Du, Wei-Qiang Zhang, Chao Weng, Dan Su, Daniel Povey, Jan Trmal, Junbo Zhang, Mingjie Jin, Sanjeev Khudanpur, Shinji Watanabe, Shuaijiang Zhao, Wei Zou, Xiangang Li, Xuchen Yao, Yongqing Wang, Yujun Wang, Zhao You, Zhiyong Yan

This paper introduces GigaSpeech, an evolving, multi-domain English speech recognition corpus with 10,000 hours of high-quality labeled audio suitable for supervised training, and 40,000 hours of total audio suitable for semi-supervised and unsupervised training.

Sentence speech-recognition +1

Enhancing Speaking Styles in Conversational Text-to-Speech Synthesis with Graph-based Multi-modal Context Modeling

2 code implementations • 11 Jun 2021 • Jingbei Li, Yi Meng, Chenyi Li, Zhiyong Wu, Helen Meng, Chao Weng, Dan Su

However, state-of-the-art context modeling methods in conversational TTS only model the textual information in context with a recurrent neural network (RNN).

Speech Synthesis Text-To-Speech Synthesis

Raw Waveform Encoder with Multi-Scale Globally Attentive Locally Recurrent Networks for End-to-End Speech Recognition

no code implementations • 8 Jun 2021 • Max W. Y. Lam, Jun Wang, Chao Weng, Dan Su, Dong Yu

End-to-end speech recognition generally uses hand-engineered acoustic features as input and excludes the feature extraction module from its joint optimization.

speech-recognition Speech Recognition

DiffSVC: A Diffusion Probabilistic Model for Singing Voice Conversion

no code implementations • 28 May 2021 • Songxiang Liu, Yuewen Cao, Dan Su, Helen Meng

Singing voice conversion (SVC) is a promising technique that can enrich human-computer interaction by endowing a computer with the ability to produce high-fidelity and expressive singing voices.

Denoising Voice Conversion +1

Retrieval-Free Knowledge-Grounded Dialogue Response Generation with Adapters

1 code implementation • dialdoc (ACL) 2022 • Yan Xu, Etsuko Ishii, Samuel Cahyawijaya, Zihan Liu, Genta Indra Winata, Andrea Madotto, Dan Su, Pascale Fung

This paper proposes KnowExpert, a framework to bypass the explicit retrieval process and inject knowledge into the pre-trained language models with lightweight adapters and adapt to the knowledge-grounded dialogue task.

Response Generation Retrieval
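
Lightweight adapters of the kind KnowExpert relies on are usually small residual bottleneck layers inserted into a frozen pre-trained LM. A standard adapter sketch (generic; the dimensions are assumptions, not the paper's exact module):

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Residual bottleneck adapter: down-project, nonlinearity, up-project.
    Only these few parameters are trained; the host LM stays frozen."""
    def __init__(self, dim=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, hidden):
        # residual connection keeps the frozen LM's representation intact
        return hidden + self.up(torch.relu(self.down(hidden)))

h = torch.randn(2, 16, 768)        # (batch, seq, hidden) from a frozen LM layer
print(Adapter()(h).shape)          # unchanged shape, knowledge injected residually
```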

Latency-Controlled Neural Architecture Search for Streaming Speech Recognition

no code implementations • 8 May 2021 • Liqiang He, Shulin Feng, Dan Su, Dong Yu

Extensive experiments show that: 1) based on the proposed neural architecture, neural networks with a medium latency of 550 ms and a low latency of 190 ms can be learned in the vanilla and revised operation spaces, respectively.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

TeCANet: Temporal-Contextual Attention Network for Environment-Aware Speech Dereverberation

no code implementations • 31 Mar 2021 • Helin Wang, Bo Wu, LianWu Chen, Meng Yu, Jianwei Yu, Yong Xu, Shi-Xiong Zhang, Chao Weng, Dan Su, Dong Yu

In this paper, we explore an effective way to leverage contextual information to improve speech dereverberation performance in real-world reverberant environments.

Room Impulse Response (RIR) Speech Dereverberation

Tune-In: Training Under Negative Environments with Interference for Attention Networks Simulating Cocktail Party Effect

no code implementations • 2 Mar 2021 • Jun Wang, Max W. Y. Lam, Dan Su, Dong Yu

We study the cocktail party problem and propose a novel attention network called Tune-In, an abbreviation for training under negative environments with interference.

Speaker Verification Speech Separation

Sandglasset: A Light Multi-Granularity Self-attentive Network For Time-Domain Speech Separation

2 code implementations • 1 Mar 2021 • Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu

One of the leading single-channel speech separation (SS) models is based on a TasNet with a dual-path segmentation technique, where the size of each segment remains unchanged throughout all layers.

Computational Efficiency Speech Separation
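
For context, the dual-path segmentation the snippet refers to chops a long feature sequence into fixed, overlapping segments so that one model can run within segments and another across them; Sandglasset's point is to vary this granularity across blocks. A sketch of the segmentation step with an illustrative segment size:

```python
import numpy as np

def segment(x: np.ndarray, seg: int) -> np.ndarray:
    """(T, D) -> (n_segments, seg, D) with 50% overlap, zero-padded."""
    T, D = x.shape
    hop = seg // 2
    n = int(np.ceil(max(T - seg, 0) / hop)) + 1
    pad = (n - 1) * hop + seg - T
    x = np.pad(x, ((0, pad), (0, 0)))
    return np.stack([x[i * hop : i * hop + seg] for i in range(n)])

chunks = segment(np.random.randn(1000, 64), seg=128)
print(chunks.shape)  # (15, 128, 64): intra-segment processing runs inside each
                     # chunk; inter-segment processing runs across chunks
```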

Contrastive Separative Coding for Self-supervised Representation Learning

no code implementations • 1 Mar 2021 • Jun Wang, Max W. Y. Lam, Dan Su, Dong Yu

To extract robust deep representations from long sequential modeling of speech data, we propose a self-supervised learning approach, namely Contrastive Separative Coding (CSC).

Representation Learning Self-Supervised Learning +1

VARA-TTS: Non-Autoregressive Text-to-Speech Synthesis based on Very Deep VAE with Residual Attention

no code implementations • 12 Feb 2021 • Peng Liu, Yuewen Cao, Songxiang Liu, Na Hu, Guangzhi Li, Chao Weng, Dan Su

This paper proposes VARA-TTS, a non-autoregressive (non-AR) text-to-speech (TTS) model using a very deep Variational Autoencoder (VDVAE) with a Residual Attention mechanism, which refines the textual-to-acoustic alignment layer by layer.

Speech Synthesis Text-To-Speech Synthesis

Effective Low-Cost Time-Domain Audio Separation Using Globally Attentive Locally Recurrent Networks

2 code implementations • 13 Jan 2021 • Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu

Recent research on the time-domain audio separation networks (TasNets) has brought great success to speech separation.

Speech Separation

Phonetic Posteriorgrams based Many-to-Many Singing Voice Conversion via Adversarial Training

1 code implementation • 3 Dec 2020 • Haohan Guo, Heng Lu, Na Hu, Chunlei Zhang, Shan Yang, Lei Xie, Dan Su, Dong Yu

In order to make timbre conversion more stable and controllable, speaker embedding is further decomposed to the weighted sum of a group of trainable vectors representing different timbre clusters.

Audio Generation Disentanglement +1
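
The decomposition described above, a speaker embedding as a weighted sum of trainable timbre-cluster vectors, can be written in a few lines; the sketch below is a hedged illustration with assumed sizes:

```python
import torch
import torch.nn as nn

class TimbreBasis(nn.Module):
    """Represent a speaker as softmax weights over K trainable timbre vectors."""
    def __init__(self, n_clusters=10, dim=256):
        super().__init__()
        self.basis = nn.Parameter(torch.randn(n_clusters, dim))

    def forward(self, logits):                 # (B, n_clusters) per speaker
        w = logits.softmax(dim=-1)
        # interpolating the weights morphs timbre in a stable, controllable way
        return w @ self.basis                  # (B, dim) speaker embedding

emb = TimbreBasis()(torch.randn(4, 10))
print(emb.shape)                               # (4, 256)
```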

FastSVC: Fast Cross-Domain Singing Voice Conversion with Feature-wise Linear Modulation

2 code implementations • 11 Nov 2020 • Songxiang Liu, Yuewen Cao, Na Hu, Dan Su, Helen Meng

This paper presents FastSVC, a light-weight cross-domain singing voice conversion (SVC) system, which can achieve high conversion performance, with inference speed 4x faster than real-time on CPUs.

Voice Conversion
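
Feature-wise linear modulation (FiLM), named in the title, conditions one feature stream on another through a learned per-channel scale and shift. The canonical FiLM layer looks like this (generic FiLM, not FastSVC's exact layers):

```python
import torch
import torch.nn as nn

class FiLM(nn.Module):
    """out = gamma(cond) * x + beta(cond), applied per channel."""
    def __init__(self, channels=128, cond_dim=256):
        super().__init__()
        self.to_scale_shift = nn.Linear(cond_dim, 2 * channels)

    def forward(self, x, cond):                 # x: (B, T, C), cond: (B, cond_dim)
        gamma, beta = self.to_scale_shift(cond).chunk(2, dim=-1)
        return gamma.unsqueeze(1) * x + beta.unsqueeze(1)

y = FiLM()(torch.randn(2, 100, 128), torch.randn(2, 256))
```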

Non-Autoregressive Transformer ASR with CTC-Enhanced Decoder Input

no code implementations • 28 Oct 2020 • Xingchen Song, Zhiyong Wu, Yiheng Huang, Chao Weng, Dan Su, Helen Meng

Non-autoregressive (NAR) transformer models have achieved a significant inference speedup, but at the cost of inferior accuracy compared to autoregressive (AR) models, in automatic speech recognition (ASR).

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

Replay and Synthetic Speech Detection with Res2net Architecture

2 code implementations • 28 Oct 2020 • Xu Li, Na Li, Chao Weng, Xunying Liu, Dan Su, Dong Yu, Helen Meng

This multiple scaling mechanism significantly improves the countermeasure's generalizability to unseen spoofing attacks.

Feature Engineering Synthetic Speech Detection
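
The "multiple scaling mechanism" is Res2Net's hierarchical channel split: groups of channels are convolved one after another, each receiving the previous group's output, which mixes receptive-field scales inside a single block. A condensed 1-D sketch (simplified relative to the full Res2Net block):

```python
import torch
import torch.nn as nn

class Res2Block(nn.Module):
    """Split channels into `scale` groups; each group (after the first)
    is convolved together with the previous group's output."""
    def __init__(self, channels=64, scale=4):
        super().__init__()
        self.width = channels // scale
        self.convs = nn.ModuleList(
            nn.Conv1d(self.width, self.width, 3, padding=1) for _ in range(scale - 1))

    def forward(self, x):                         # x: (B, C, T)
        xs = torch.split(x, self.width, dim=1)
        out, prev = [xs[0]], xs[0]                # first group passes through
        for conv, xi in zip(self.convs, xs[1:]):
            prev = conv(xi + prev)                # hierarchical residual connection
            out.append(prev)
        return torch.cat(out, dim=1)

y = Res2Block()(torch.randn(2, 64, 200))
```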

Multi-hop Question Generation with Graph Convolutional Network

1 code implementation • Findings of the Association for Computational Linguistics 2020 • Dan Su, Yan Xu, Wenliang Dai, Ziwei Ji, Tiezheng Yu, Pascale Fung

Multi-hop Question Generation (QG) aims to generate answer-related questions by aggregating and reasoning over multiple pieces of scattered evidence from different paragraphs.

Question Generation Question-Generation +1

Learned Transferable Architectures Can Surpass Hand-Designed Architectures for Large Scale Speech Recognition

no code implementations • 25 Aug 2020 • Liqiang He, Dan Su, Dong Yu

Extensive experiments show that: (i) the architecture searched on the small proxy dataset can be transferred to the large dataset for speech recognition tasks.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

Speaker Independent and Multilingual/Mixlingual Speech-Driven Talking Head Generation Using Phonetic Posteriorgrams

no code implementations • 20 Jun 2020 • Huirong Huang, Zhiyong Wu, Shiyin Kang, Dongyang Dai, Jia Jia, Tianxiao Fu, Deyi Tuo, Guangzhi Lei, Peng Liu, Dan Su, Dong Yu, Helen Meng

Recent approaches mainly have the following limitations: 1) most speaker-independent methods need handcrafted features that are time-consuming to design or unreliable; 2) there is no convincing method to support multilingual or mixlingual speech as input.

Talking Head Generation

Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification

no code implementations • 11 Jun 2020 • Xu Li, Na Li, Jinghua Zhong, Xixin Wu, Xunying Liu, Dan Su, Dong Yu, Helen Meng

Orthogonal to prior approaches, this work proposes to defend ASV systems against adversarial attacks with a separate detection network, rather than augmenting adversarial data into ASV training.

Binary Classification Data Augmentation +1

CAiRE-COVID: A Question Answering and Query-focused Multi-Document Summarization System for COVID-19 Scholarly Information Management

1 code implementation • EMNLP (NLP-COVID19) 2020 • Dan Su, Yan Xu, Tiezheng Yu, Farhad Bin Siddique, Elham J. Barezi, Pascale Fung

We present CAiRE-COVID, a real-time question answering (QA) and multi-document summarization system, which won one of the 10 tasks in the Kaggle COVID-19 Open Research Dataset Challenge, judged by medical experts.

Document Summarization Information Retrieval +3

Enhancing End-to-End Multi-channel Speech Separation via Spatial Feature Learning

no code implementations • 9 Mar 2020 • Rongzhi Gu, Shi-Xiong Zhang, Lian-Wu Chen, Yong Xu, Meng Yu, Dan Su, Yuexian Zou, Dong Yu

Hand-crafted spatial features (e.g., inter-channel phase difference, IPD) play a fundamental role in recent deep-learning-based multi-channel speech separation (MCSS) methods.

Speech Separation
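
The inter-channel phase difference (IPD) named in the snippet is computed directly from the STFTs of two microphone channels; a minimal numpy sketch:

```python
import numpy as np

def ipd(stft_ch1: np.ndarray, stft_ch2: np.ndarray) -> np.ndarray:
    """Inter-channel phase difference between two complex STFTs (F, T).
    Wrapping into (-pi, pi] keeps the feature well-behaved."""
    phase_diff = np.angle(stft_ch1) - np.angle(stft_ch2)
    return np.angle(np.exp(1j * phase_diff))    # wrap to (-pi, pi]

f, t = 257, 100
s1 = np.random.randn(f, t) + 1j * np.random.randn(f, t)
s2 = np.random.randn(f, t) + 1j * np.random.randn(f, t)
print(ipd(s1, s2).shape)  # (257, 100): one IPD value per time-frequency bin
```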

Generalizing Question Answering System with Pre-trained Language Model Fine-tuning

no code implementations • WS 2019 • Dan Su, Yan Xu, Genta Indra Winata, Peng Xu, Hyeondey Kim, Zihan Liu, Pascale Fung

With a large number of datasets being released and new techniques being proposed, question answering (QA) systems have witnessed great breakthroughs in reading comprehension (RC) tasks.

Language Modelling Multi-Task Learning +2

Mixup-breakdown: a consistency training method for improving generalization of speech separation models

no code implementations • 28 Oct 2019 • Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu

Deep-learning based speech separation models confront a poor-generalization problem: even state-of-the-art models can abruptly fail when evaluated under mismatched conditions.

Speech Separation
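
Mixup-Breakdown's consistency idea builds on mixup-style interpolation: a teacher separates unlabeled mixtures, the estimated sources are remixed into new inputs, and the student must stay consistent on them. A sketch of only the interpolation core, simplified relative to the full moving-average-teacher procedure:

```python
import numpy as np

def mixup(a: np.ndarray, b: np.ndarray, lam: float) -> np.ndarray:
    """Interpolate two signals to create an intermediate training input."""
    return lam * a + (1.0 - lam) * b

rng = np.random.default_rng(0)
# Pretend these are a teacher model's separated source estimates (1 s @ 16 kHz).
est_src1, est_src2 = rng.standard_normal(16000), rng.standard_normal(16000)
lam = rng.beta(0.5, 0.5)                      # mixup coefficient
new_mixture = mixup(est_src1, est_src2, lam)
# Consistency loss: the student separating `new_mixture` should recover the
# interpolated teacher estimates, regularizing mismatched-condition behavior.
```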

DFSMN-SAN with Persistent Memory Model for Automatic Speech Recognition

no code implementations • 28 Oct 2019 • Zhao You, Dan Su, Jie Chen, Chao Weng, Dong Yu

Self-attention networks (SAN) have been introduced into automatic speech recognition (ASR) and have achieved state-of-the-art performance owing to their superior ability to capture long-term dependencies.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

A Random Gossip BMUF Process for Neural Language Modeling

no code implementations • 19 Sep 2019 • Yiheng Huang, Jinchuan Tian, Lei Han, Guangsen Wang, Xingcheng Song, Dan Su, Dong Yu

One important challenge of training an NNLM is to balance scaling the learning process with handling big data.

Language Modelling speech-recognition +1

DurIAN: Duration Informed Attention Network For Multimodal Synthesis

4 code implementations • 4 Sep 2019 • Chengzhu Yu, Heng Lu, Na Hu, Meng Yu, Chao Weng, Kun Xu, Peng Liu, Deyi Tuo, Shiyin Kang, Guangzhi Lei, Dan Su, Dong Yu

In this paper, we present a generic and robust multimodal synthesis system that produces highly natural speech and facial expression simultaneously.

Speech Synthesis

Phrase-Level Class based Language Model for Mandarin Smart Speaker Query Recognition

no code implementations • 2 Sep 2019 • Yiheng Huang, Liqiang He, Lei Han, Guangsen Wang, Dan Su

In this work, we propose to train pruned language models for the word classes to replace the slots in the root n-gram.

Language Modelling
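
In a class-based n-gram of this kind, a slot word's probability factorizes as P(w | h) = P(class(w) | h) * P(w | class(w)), with the second factor served by the pruned per-class LMs the snippet mentions. A toy illustration with made-up numbers:

```python
# Toy class-based n-gram factorization: P(w | h) = P(c | h) * P(w | c)
p_class_given_hist = {("play", "<SONG>"): 0.6}       # root n-gram over class tokens
p_word_given_class = {("<SONG>", "hey jude"): 0.01}  # per-class (pruned) LM

def p_word(history: str, cls: str, word: str) -> float:
    return p_class_given_hist[(history, cls)] * p_word_given_class[(cls, word)]

print(p_word("play", "<SONG>", "hey jude"))  # 0.006
```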

Maximizing Mutual Information for Tacotron

2 code implementations • 30 Aug 2019 • Peng Liu, Xixin Wu, Shiyin Kang, Guangzhi Li, Dan Su, Dong Yu

End-to-end speech synthesis methods already achieve close-to-human quality performance.

Attribute Speech Synthesis

Teach an all-rounder with experts in different domains

no code implementations • 9 Jul 2019 • Zhao You, Dan Su, Dong Yu

First, for each domain, a teacher model (domain-dependent model) is trained by fine-tuning a multi-condition model with a domain-specific subset.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1
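
The teacher-student transfer in such multi-domain setups is typically a knowledge-distillation loss against temperature-softened teacher outputs; the sketch below shows the generic form (the paper's exact loss and weighting may differ):

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions; the T*T factor keeps gradient scale comparable."""
    s = F.log_softmax(student_logits / T, dim=-1)
    t = F.softmax(teacher_logits / T, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * T * T

# One domain-dependent teacher per utterance domain; the single student
# (the "all-rounder") learns from whichever teacher matches the domain.
loss = distill_loss(torch.randn(8, 500), torch.randn(8, 500))
```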

Learning discriminative features in sequence training without requiring framewise labelled data

no code implementations • 16 May 2019 • Jun Wang, Dan Su, Jie Chen, Shulin Feng, Dongpeng Ma, Na Li, Dong Yu

We propose a novel method that simultaneously models sequence discriminative training and feature discriminative learning within a single network architecture, so that it can learn discriminative deep features in sequence training, obviating the need for presegmented training data.

End-to-End Multi-Channel Speech Separation

no code implementations • 15 May 2019 • Rongzhi Gu, Jian Wu, Shi-Xiong Zhang, Lian-Wu Chen, Yong Xu, Meng Yu, Dan Su, Yuexian Zou, Dong Yu

This paper extends the previous approach and proposes a new end-to-end model for multi-channel speech separation.

Speech Separation

RGB-D Salient Object Detection Based on Discriminative Cross-modal Transfer Learning

no code implementations • 1 Mar 2017 • Hao Chen, Y. F. Li, Dan Su

In the proposed approach, we leverage the auxiliary data from the source modality effectively by training the RGB saliency detection network to obtain the task-specific pre-understanding layers for the target modality.

object-detection RGB-D Salient Object Detection +4
