1 code implementation • Findings (ACL) 2022 • Dawei Li, Yanran Li, Jiayi Zhang, Ke Li, Chen Wei, Jianwei Cui, Bin Wang
Existing commonsense knowledge bases often organize tuples in an isolated manner, which is deficient for commonsense conversational models to plan the next steps.
no code implementations • 5 May 2024 • Yuzhen Mao, Martin Ester, Ke Li
One limitation of existing Transformer-based models is that they cannot handle very long sequences as input since their self-attention operations exhibit quadratic time and space complexity.
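The quadratic cost referred to here comes from the (n, n) attention score matrix. A minimal numpy sketch of naive self-attention, for illustration only (not the paper's method; identity Q/K/V projections are an assumption for brevity):

```python
import numpy as np

def self_attention(X):
    """Naive single-head self-attention. The (n, n) score matrix is what
    makes time and memory grow quadratically in sequence length n.
    Identity Q/K/V projections are used purely for illustration."""
    n, d = X.shape
    scores = X @ X.T / np.sqrt(d)                    # (n, n): the quadratic part
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ X

X = np.random.default_rng(0).standard_normal((128, 16))
out = self_attention(X)
print(out.shape)  # (128, 16); the intermediate score matrix was (128, 128)
```

Doubling the sequence length quadruples the size of the score matrix, which is exactly what long-sequence methods try to avoid.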
no code implementations • 23 Apr 2024 • Wensheng Pan, Timin Gao, Yan Zhang, Runze Hu, Xiawu Zheng, Enwei Zhang, Yuting Gao, Yutao Liu, Yunhang Shen, Ke Li, Shengchuan Zhang, Liujuan Cao, Rongrong Ji
Image Quality Assessment (IQA) models benefit significantly from semantic information, which allows them to treat different types of objects distinctly.
no code implementations • 22 Apr 2024 • Ke Li
In the first part, we present a comprehensive survey of the development of MOEA/D from its origin to the current state-of-the-art approaches.
no code implementations • 22 Apr 2024 • Mingyu Huang, Ke Li
This paper presents the second part of the two-part survey series on decomposition-based evolutionary multi-objective optimization where we mainly focus on discussing the literature related to multi-objective evolutionary algorithms based on decomposition (MOEA/D).
1 code implementation • 8 Apr 2024 • Zhengde Zhang, Yiyu Zhang, Haodong Yao, Jianwen Luo, Rui Zhao, Bo Huang, Jiameng Zhao, Yipu Liao, Ke Li, Lina Zhao, Jun Cao, Fazhi Qi, Changzheng Yuan
To address this challenge, a sophisticated large language model system named Xiwu has been developed, allowing you to switch between the most advanced foundation models and quickly teach the model domain knowledge.
1 code implementation • 31 Mar 2024 • Wenxuan Huang, Yunhang Shen, Jiao Xie, Baochang Zhang, Gaoqi He, Ke Li, Xing Sun, Shaohui Lin
The remarkable performance of Vision Transformers (ViTs) typically requires an extremely large training cost.
no code implementations • 20 Mar 2024 • Chengzhe Feng, Yanan Sun, Ke Li, Pan Zhou, Jiancheng Lv, Aojun Lu
We conduct GenAP on three popular code intelligence PLMs with three canonical code intelligence tasks including defect prediction, code summarization, and code translation.
1 code implementation • 10 Mar 2024 • Yuncheng Yang, Chuyan Zhang, Zuopeng Yang, Yuting Gao, Yulei Qin, Ke Li, Xing Sun, Jie Yang, Yun Gu
Prompt learning is effective for fine-tuning foundation models to improve their generalization across a variety of downstream tasks.
no code implementations • 8 Mar 2024 • Dingkang Yang, Dongling Xiao, Ke Li, Yuzheng Wang, Zhaoyu Chen, Jinjie Wei, Lihua Zhang
Multimodal intention understanding (MIU) is an indispensable component of human expression analysis (e.g., sentiment or humor) from heterogeneous modalities, including visual postures, linguistic contents, and acoustic behaviors.
no code implementations • 8 Mar 2024 • Dingkang Yang, Mingcheng Li, Dongling Xiao, Yang Liu, Kun Yang, Zhaoyu Chen, Yuzheng Wang, Peng Zhai, Ke Li, Lihua Zhang
In the inference phase, given a factual multimodal input, MCIS imagines two counterfactual scenarios to purify and mitigate these biases.
1 code implementation • 27 Feb 2024 • Xiao Cui, Yulei Qin, Yuting Gao, Enwei Zhang, Zihan Xu, Tong Wu, Ke Li, Xing Sun, Wengang Zhou, Houqiang Li
We propose the Sinkhorn Knowledge Distillation (SinKD) that exploits the Sinkhorn distance to ensure a nuanced and precise assessment of the disparity between teacher and student distributions.
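For readers unfamiliar with the Sinkhorn distance, a common entropy-regularized optimal-transport formulation can be sketched as follows (an illustrative sketch with toy data; the exact SinKD objective in the paper may differ):

```python
import numpy as np

def sinkhorn_distance(p, q, C, eps=0.1, n_iter=200):
    """Entropy-regularized transport cost between histograms p and q with
    ground cost matrix C, via Sinkhorn fixed-point iterations."""
    K = np.exp(-C / eps)                  # Gibbs kernel
    u = np.ones_like(p)
    for _ in range(n_iter):               # alternate row/column scaling
        v = q / (K.T @ u)
        u = p / (K @ v)
    P = np.diag(u) @ K @ np.diag(v)       # transport plan
    return float((P * C).sum())

# Toy teacher/student distributions over 4 classes
teacher = np.array([0.7, 0.1, 0.1, 0.1])
student = np.array([0.25, 0.25, 0.25, 0.25])
C = 1.0 - np.eye(4)                       # 0 on the diagonal, 1 elsewhere
d = sinkhorn_distance(teacher, student, C)
print(round(d, 3))
```

Identical distributions yield a near-zero cost, so minimizing this distance pulls the student toward the teacher.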
no code implementations • 20 Feb 2024 • Xiaotian Zou, Yongkang Chen, Ke Li
To address this question, we conducted experiments on a stable GPT version, gpt-3.5-turbo-0613, to generate jailbreak prompts with varying system messages: short, long, and none.
no code implementations • 15 Feb 2024 • Ke Li, Fan Li
Real-world black-box optimization often involves time-consuming or costly experiments and simulations.
no code implementations • 22 Jan 2024 • Xudong Li, Jingyuan Zheng, Runze Hu, Yan Zhang, Ke Li, Yunhang Shen, Xiawu Zheng, Yutao Liu, Shengchuan Zhang, Pingyang Dai, Rongrong Ji
Blind Image Quality Assessment (BIQA) aims to evaluate image quality in line with human perception, without reference benchmarks.
no code implementations • 16 Jan 2024 • Kiyohiro Nakayama, Mikaela Angelina Uy, Yang You, Ke Li, Leonidas Guibas
We introduce ProvNeRF, a model that enriches a traditional NeRF representation by incorporating per-point provenance, modeling likely source locations for each point.
no code implementations • 4 Jan 2024 • Ke Li, Han Guo
The learned preference information is used to progressively guide policy optimization towards policies of interest.
no code implementations • 2 Jan 2024 • Shuang Li, Ke Li, Wei Li, Ming Yang
Constrained multi-objective optimization problems (CMOPs) pervade real-world applications in science, engineering, and design.
no code implementations • 20 Dec 2023 • Shichong Peng, Alireza Moazeni, Ke Li
We assess the validity of these models' outputs as solutions to the inverse problems and conduct a thorough analysis of the reliability of the models' estimates of uncertainty over the solution.
no code implementations • 19 Dec 2023 • Jianghang Lin, Yunhang Shen, Bingquan Wang, Shaohui Lin, Ke Li, Liujuan Cao
Despite weakly supervised object detection (WSOD) being a promising step toward evading strong instance-level annotations, its capability is confined to closed-set categories within a single training dataset.
2 code implementations • 19 Dec 2023 • Chaoyou Fu, Renrui Zhang, Zihan Wang, Yubo Huang, Zhengye Zhang, Longtian Qiu, Gaoxiang Ye, Yunhang Shen, Mengdan Zhang, Peixian Chen, Sirui Zhao, Shaohui Lin, Deqiang Jiang, Di Yin, Peng Gao, Ke Li, Hongsheng Li, Xing Sun
They endow Large Language Models (LLMs) with powerful capabilities in visual understanding, enabling them to tackle diverse multi-modal tasks.
1 code implementation • 13 Dec 2023 • Yunchen Li, Zhou Yu, Gaoqi He, Yunhang Shen, Ke Li, Xing Sun, Shaohui Lin
On the other hand, the model unconditionally learns the probability distribution of the data $p(X)$ and generates samples that conform to this distribution.
no code implementations • 11 Dec 2023 • Tao Chen, Enwei Zhang, Yuting Gao, Ke Li, Xing Sun, Yan Zhang, Hui Li
Although In-Context Learning (ICL) brings remarkable performance gains to Large Language Models (LLMs), the improvements remain lower than fine-tuning on downstream tasks.
no code implementations • 11 Dec 2023 • Xudong Li, Timin Gao, Xiawu Zheng, Runze Hu, Jingyuan Zheng, Yunhang Shen, Ke Li, Yutao Liu, Pingyang Dai, Yan Zhang, Rongrong Ji
The current state-of-the-art No-Reference Image Quality Assessment (NR-IQA) methods typically use feature extraction in upstream backbone networks, which assumes that all extracted features are relevant.
1 code implementation • 6 Dec 2023 • Shengbo Wang, Ke Li
We endeavor to design an efficient and provable method for expensive POCOPs under the framework of constrained Bayesian optimization.
no code implementations • 5 Dec 2023 • Mingyu Huang, Ke Li
However, due to the black-box nature of combinatorial optimization, it is far from trivial to infer such similarity in real-world scenarios.
2 code implementations • 4 Dec 2023 • Yunhang Shen, Chaoyou Fu, Peixian Chen, Mengdan Zhang, Ke Li, Xing Sun, Yunsheng Wu, Shaohui Lin, Rongrong Ji
However, predominant paradigms, driven by casting instance-level tasks as an object-word alignment, bring heavy cross-modality interaction, which is not effective in prompting object detection and visual grounding.
1 code implementation • 4 Dec 2023 • Jinguo Cheng, Ke Li, Yuxuan Liang, Lijun Sun, Junchi Yan, Yuankai Wu
To address this challenge, we present the Super-Multivariate Urban Mobility Transformer (SUMformer), which utilizes a specially designed attention mechanism to calculate temporal and cross-variable correlations and reduce computational costs stemming from a large number of time series.
no code implementations • 1 Dec 2023 • Xudong Li, Jingyuan Zheng, Xiawu Zheng, Runze Hu, Enwei Zhang, Yuting Gao, Yunhang Shen, Ke Li, Yutao Liu, Pingyang Dai, Yan Zhang, Rongrong Ji
Concretely, by innovatively introducing a novel feature distillation method in IQA, we propose a new framework to learn comparative knowledge from non-aligned reference images.
no code implementations • 23 Nov 2023 • Tian Huang, Ke Li
Finally, we deploy our method in a practical problem, specifically in protein structure prediction (PSP).
no code implementations • 23 Nov 2023 • Mingyu Huang, Ke Li
Despite the recent success of a plethora of hyperparameter optimization (HPO) methods for machine learning (ML) models, the intricate interplay between model hyperparameters (HPs) and predictive losses (a.k.a. fitness), which is a key prerequisite for understanding HPO, remains notably underexplored in our community.
1 code implementation • 20 Nov 2023 • Tong Wu, Yulei Qin, Enwei Zhang, Zihan Xu, Yuting Gao, Ke Li, Xing Sun
However, existing embedding models for text retrieval usually have three non-negligible limitations.
no code implementations • 12 Nov 2023 • Yassir Fathullah, Chunyang Wu, Egor Lakomkin, Ke Li, Junteng Jia, Yuan Shangguan, Jay Mahadeokar, Ozlem Kalinli, Christian Fuegen, Mike Seltzer
In this work, we extend the instruction-tuned Llama-2 model with end-to-end general-purpose speech processing and reasoning abilities while maintaining the wide range of original LLM capabilities, without using any carefully curated paired data.
no code implementations • NeurIPS 2023 • Mikaela Angelina Uy, Kiyohiro Nakayama, Guandao Yang, Rahul Krishna Thomas, Leonidas Guibas, Ke Li
Volume rendering requires evaluating an integral along each ray, which is numerically approximated with a finite sum that corresponds to the exact integral along the ray under piecewise constant volume density.
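The finite sum mentioned here is the standard quadrature used in NeRF-style volume rendering; a minimal sketch of the per-sample weights, assuming piecewise-constant density along the ray:

```python
import numpy as np

def render_weights(sigma, delta):
    """Quadrature weights for the volume rendering integral under
    piecewise-constant density: w_i = T_i * (1 - exp(-sigma_i * delta_i)),
    with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j)."""
    alpha = 1.0 - np.exp(-sigma * delta)    # opacity of each ray segment
    T = np.exp(-np.concatenate([[0.0], np.cumsum(sigma * delta)[:-1]]))
    return T * alpha

sigma = np.array([0.1, 0.5, 2.0, 0.3])   # densities at samples along a ray
delta = np.full(4, 0.25)                 # segment lengths
w = render_weights(sigma, delta)
print(w.sum())  # at most 1; the remainder is light passing through the ray
```

The rendered color is then the weight-sum of the per-sample colors, which is exactly the approximation the abstract says corresponds to the exact integral under piecewise-constant density.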
1 code implementation • 26 Oct 2023 • Heng Yang, Ke Li
Instruction-based language modeling has received significant attention in pretrained language models.
1 code implementation • 24 Oct 2023 • Shukang Yin, Chaoyou Fu, Sirui Zhao, Tong Xu, Hao Wang, Dianbo Sui, Yunhang Shen, Ke Li, Xing Sun, Enhong Chen
Hallucination is a big shadow hanging over the rapidly evolving Multimodal Large Language Models (MLLMs), referring to the phenomenon that the generated text is inconsistent with the image content.
no code implementations • 19 Oct 2023 • Huan Zhang, Jinliang Ding, Liang Feng, Kay Chen Tan, Ke Li
Although data-driven evolutionary optimization and Bayesian optimization (BO) approaches have shown promise in solving expensive optimization problems in static environments, attempts to develop such approaches in dynamic environments remain largely unexplored.
no code implementations • 22 Sep 2023 • Jiamin Xie, Ke Li, Jinxi Guo, Andros Tjandra, Yuan Shangguan, Leda Sari, Chunyang Wu, Junteng Jia, Jay Mahadeokar, Ozlem Kalinli
In this work, we propose the use of an adaptive masking approach in two scenarios for pruning a multilingual ASR model efficiently, each resulting in sparse monolingual models or a sparse multilingual model (named Dynamic ASR Pathways).
Automatic Speech Recognition (ASR) +2
1 code implementation • ICCV 2023 • Jiang-Tian Zhai, Xialei Liu, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng
Moreover, MAEs can reliably reconstruct original input images from randomly selected patches, which we use to store exemplars from past tasks more efficiently for CIL.
1 code implementation • ICCV 2023 • Junkai Xu, Liang Peng, Haoran Cheng, Hao Li, Wei Qian, Ke Li, Wenxiao Wang, Deng Cai
To the best of our knowledge, this work is the first to introduce volume rendering for M3D, and demonstrates the potential of implicit reconstruction for image-based 3D perception.
1 code implementation • 24 Jul 2023 • Wolfgang Boettcher, Lukas Hoyer, Ozan Unal, Ke Li, Dengxin Dai
While using a single model, our method yields significantly better results than a non-adaptive baseline trained on different LiDAR patterns.
no code implementations • 21 Jul 2023 • Yassir Fathullah, Chunyang Wu, Egor Lakomkin, Junteng Jia, Yuan Shangguan, Ke Li, Jinxi Guo, Wenhan Xiong, Jay Mahadeokar, Ozlem Kalinli, Christian Fuegen, Mike Seltzer
Furthermore, we perform ablation studies to investigate whether the LLM can be completely frozen during training to maintain its original capabilities, scaling up the audio encoder, and increasing the audio encoder striding to generate fewer embeddings.
Abstractive Text Summarization Automatic Speech Recognition +3
1 code implementation • NeurIPS 2023 • Yanshu Zhang, Shichong Peng, Alireza Moazeni, Ke Li
PAPR effectively learns point cloud positions to represent the correct scene geometry, even when the initialization drastically differs from the target geometry.
no code implementations • 3 Jul 2023 • Shengbo Wang, Ke Li, Yin Yang, Yuting Cao, TingWen Huang, Shiping Wen
Specifically, with the help of CBF method, we learn the inherent and external uncertainties by a unified adaptive Bayesian linear regression (ABLR) model, which consists of a forward neural network (NN) and a Bayesian output layer.
1 code implementation • 29 Jun 2023 • Hongjie Cai, Nan Song, Zengzhi Wang, Qiming Xie, Qiankun Zhao, Ke Li, Siwei Wu, Shijie Liu, Jianfei Yu, Rui Xia
Aspect-based sentiment analysis (ABSA) is a long-standing research interest in the field of opinion mining, and in recent years, researchers have gradually shifted their focus from simple ABSA subtasks to end-to-end multi-element ABSA tasks.
1 code implementation • 23 Jun 2023 • Shukang Yin, Chaoyou Fu, Sirui Zhao, Ke Li, Xing Sun, Tong Xu, Enhong Chen
Recently, Multimodal Large Language Models (MLLMs), represented by GPT-4V, have become a new rising research hotspot; they use powerful Large Language Models (LLMs) as a brain to perform multimodal tasks.
3 code implementations • 23 Jun 2023 • Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Yunsheng Wu, Rongrong Ji
Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image.
no code implementations • NeurIPS 2023 • Bohan Zhou, Ke Li, Jiechuan Jiang, Zongqing Lu
Learning from visual observation (LfVO), aiming at recovering policies from only visual observation data, is promising yet a challenging problem.
1 code implementation • NeurIPS 2023 • Yifan Xu, Mengdan Zhang, Chaoyou Fu, Peixian Chen, Xiaoshan Yang, Ke Li, Changsheng Xu
To address the learning inertia problem brought by the frozen detector, a vision conditioned masked language prediction strategy is proposed.
Ranked #1 on Few-Shot Object Detection on ODinW-35
no code implementations • 24 May 2023 • Sid Wang, John Nguyen, Ke Li, Carole-Jean Wu
However, fine-tuning all pre-trained model parameters becomes impractical as the model size and number of tasks increase.
no code implementations • 17 May 2023 • Kuiliang Gao, Anzhu Yu, Xiong You, Wenyue Guo, Ke Li, Ningbo Huang
Firstly, a multi-branch segmentation network is built to learn an expert for each source RSI.
no code implementations • 10 May 2023 • Yong Qing, Ke Li, Peng-Fei Zhou, Shi-Ju Ran
In this work, we propose a general compression scheme that significantly reduces the variational parameters of NNs by encoding them into a deep automatically differentiable tensor network (ADTN) that contains exponentially fewer free parameters.
no code implementations • 6 May 2023 • Heng Yang, Ke Li
Recent studies have revealed the vulnerability of pre-trained language models to adversarial attacks.
no code implementations • ICCV 2023 • Kiyohiro Nakayama, Mikaela Angelina Uy, Jiahui Huang, Shi-Min Hu, Ke Li, Leonidas J Guibas
We propose a factorization that models independent part style and part configuration distributions and presents a novel cross-diffusion network that enables us to generate coherent and plausible shapes under our proposed factorization.
no code implementations • CVPR 2023 • Zhiyu Qu, Yulia Gryaditskaya, Ke Li, Kaiyue Pang, Tao Xiang, Yi-Zhe Song
Following this, we design a simple explainability-friendly sketch encoder that accommodates the intrinsic properties of strokes: shape, location, and order.
Explainable Artificial Intelligence (XAI) +1
no code implementations • 12 Apr 2023 • Alexia Jolicoeur-Martineau, Kilian Fatras, Ke Li, Tal Kachman
Diffusion Models (DMs) are powerful generative models that add Gaussian noise to the data and learn to remove it.
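The forward noising process described here can be sketched with the standard DDPM closed form (a generic illustration with an assumed linear beta schedule, not necessarily the exact setup studied in the paper):

```python
import numpy as np

def add_noise(x0, t, alpha_bar, rng):
    """Forward diffusion step in closed form:
    x_t = sqrt(abar_t) * x_0 + sqrt(1 - abar_t) * eps, eps ~ N(0, I)."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * eps
    return xt, eps

T = 100
betas = np.linspace(1e-4, 0.02, T)        # assumed linear noise schedule
alpha_bar = np.cumprod(1.0 - betas)       # cumulative signal retention
rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))          # stand-in for a data sample
xt, eps = add_noise(x0, t=99, alpha_bar=alpha_bar, rng=rng)
print(xt.shape)
```

The model is then trained to predict `eps` from `xt` and `t`, i.e., to "remove" the Gaussian noise that was added.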
no code implementations • 10 Apr 2023 • Yu Wang, Shuhui Bu, Lin Chen, Yifei Dong, Kun Li, Xuefeng Cao, Ke Li
First, the point cloud is divided into small patches, and a matching patch set is selected based on global descriptors and spatial distribution, which constitutes the coarse matching process.
1 code implementation • 6 Apr 2023 • Xin Zhang, Chen Liu, Degang Yang, Tingting Song, Yichen Ye, Ke Li, Yingze Song
In this paper, we propose a new perspective on the effectiveness of spatial attention, which is that the spatial attention mechanism essentially solves the problem of convolutional kernel parameter sharing.
no code implementations • 30 Mar 2023 • Yuting Gao, Jinfeng Liu, Zihan Xu, Tong Wu, Enwei Zhang, Wei Liu, Jie Yang, Ke Li, Xing Sun
During the preceding biennium, vision-language pre-training has achieved noteworthy success on several downstream tasks.
no code implementations • CVPR 2023 • Mikaela Angelina Uy, Ricardo Martin-Brualla, Leonidas Guibas, Ke Li
To address this issue, we introduce SCADE, a novel technique that improves NeRF reconstruction quality on sparse, unconstrained input views for in-the-wild indoor scenes.
1 code implementation • 19 Mar 2023 • Ziluo Ding, Hao Luo, Ke Li, Junpeng Yue, Tiejun Huang, Zongqing Lu
One of the essential missions in the AI research community is to build an autonomous embodied agent that can attain high-level performance across a wide spectrum of tasks.
1 code implementation • 14 Feb 2023 • Meifang Zeng, Ke Li, Bingchuan Jiang, Liujuan Cao, Hui Li
With the idea of Cross-system Attack, we design a Practical Cross-system Shilling Attack (PC-Attack) framework that requires little information about the victim RS model and the target RS data for conducting attacks.
1 code implementation • 28 Jan 2023 • Ryoji Tanabe, Ke Li
Some quality indicators have been proposed for benchmarking preference-based evolutionary multi-objective optimization algorithms using a reference point.
1 code implementation • CVPR 2023 • Ke Li, Kaiyue Pang, Yi-Zhe Song
This lack of sketch data has imposed on the community a few "peculiar" design choices -- the most representative of them all is perhaps the coerced utilisation of photo-based pre-training (i.e., no sketch) for many core tasks that otherwise dictate specific sketch understanding.
no code implementations • CVPR 2023 • Phoenix Neale Williams, Ke Li
However, existing methods often struggle to simultaneously minimize the number of modified pixels and the size of the modifications, often requiring a large number of queries and assuming unrestricted access to the targeted DNN.
1 code implementation • 26 Dec 2022 • Xingxing Xie, Gong Cheng, Qingyang Li, Shicheng Miao, Ke Li, Junwei Han
Current mainstream object detection methods for large aerial images usually divide large images into patches and then exhaustively detect the objects of interest on all patches, no matter whether there exist objects or not.
1 code implementation • 16 Dec 2022 • Xialei Liu, Jiang-Tian Zhai, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng
EFCIL is of interest because it mitigates concerns about privacy and long-term storage of data, while at the same time alleviating the problem of catastrophic forgetting in incremental learning.
1 code implementation • CVPR 2023 • Yuqi Lin, Minghao Chen, Wenxiao Wang, Boxi Wu, Ke Li, Binbin Lin, Haifeng Liu, Xiaofei He
To efficiently generate high-quality segmentation masks from CLIP, we propose a novel WSSS framework called CLIP-ES.
Ranked #12 on Weakly-Supervised Semantic Segmentation on COCO 2014 val
no code implementations • 15 Dec 2022 • Ke Li, Jay Mahadeokar, Jinxi Guo, Yangyang Shi, Gil Keren, Ozlem Kalinli, Michael L. Seltzer, Duc Le
Experiments on Librispeech and in-house data show relative WER reductions (WERRs) from 3% to 5% with a slight increase in model size and negligible extra token emission latency compared with fast-slow encoder based transducer.
Automatic Speech Recognition (ASR) +2
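The relative WER reductions (WERRs) quoted above are computed as a percentage of the baseline WER; a trivial sketch:

```python
def relative_werr(baseline_wer, new_wer):
    """Relative word error rate reduction (WERR), in percent:
    100 * (baseline - new) / baseline."""
    return 100.0 * (baseline_wer - new_wer) / baseline_wer

# Going from 10.0% to 9.5% WER is a 5% *relative* reduction,
# even though the absolute drop is only 0.5 points.
print(relative_werr(10.0, 9.5))  # 5.0
```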
1 code implementation • 25 Nov 2022 • Shichong Peng, Alireza Moazeni, Ke Li
A persistent challenge in conditional image synthesis has been to generate diverse output images from the same input image despite only one output image being observed per input image.
1 code implementation • 24 Nov 2022 • Ke Li, Tim Rolff, Susanne Schmidt, Reinhard Bacher, Simone Frintrop, Wim Leemans, Frank Steinicke
In this paper, we present and evaluate a NeRF-based framework that is capable of rendering scenes in immersive VR, allowing users to freely move their heads to explore complex real-world scenes.
no code implementations • 5 Nov 2022 • Ke Li, Renzhi Chen, Xin Yao
Many real-world problems are usually computationally costly and the objective functions evolve over time.
no code implementations • 31 Oct 2022 • Suyoun Kim, Ke Li, Lucas Kabela, Rongqing Huang, Jiedan Zhu, Ozlem Kalinli, Duc Le
In this work, we present our Joint Audio/Text training method for Transformer Rescorer, to leverage unpaired text-only data which is relatively cheaper than paired audio-text data.
1 code implementation • 30 Oct 2022 • Kiarash Zahirnia, Oliver Schulte, Parmis Naddaf, Ke Li
We utilize the micro-macro objective to improve graph generation with a GraphVAE, a well-established model based on graph-level latent variables, that provides fast training and generation time for medium-sized graphs.
1 code implementation • 6 Oct 2022 • Heng Yang, Ke Li
Our experimental results on three classification tasks and nine public datasets show that BootAug addresses the performance drop problem and outperforms state-of-the-art text augmentation methods.
1 code implementation • 1 Oct 2022 • Xialei Liu, Yu-Song Hu, Xu-Sheng Cao, Andrew D. Bagdanov, Ke Li, Ming-Ming Cheng
However, conventional CIL methods consider a balanced distribution for each new task, which ignores the prevalence of long-tailed distributions in the real world.
no code implementations • 9 Sep 2022 • Ke Li, Cameron Baird, Dan Lin
With the advances in deep learning, speaker verification has achieved very high accuracy and is gaining popularity as a biometric authentication option in many everyday scenarios, especially the growing market of web services.
1 code implementation • 27 Aug 2022 • Hong Yang, Gongrui Nan, Mingbao Lin, Fei Chao, Yunhang Shen, Ke Li, Rongrong Ji
Finally, the LSA modules are further developed to fully use the prior information in non-shadow regions to cleanse the shadow regions.
2 code implementations • 2 Aug 2022 • Heng Yang, Chen Zhang, Ke Li
The advancement of aspect-based sentiment analysis (ABSA) has urged the lack of a user-friendly framework that can largely lower the difficulty of reproducing state-of-the-art ABSA performance, especially for beginners.
Aspect-Based Sentiment Analysis (ABSA) +5
2 code implementations • 22 Jun 2022 • Peixian Chen, Kekai Sheng, Mengdan Zhang, Mingbao Lin, Yunhang Shen, Shaohui Lin, Bo Ren, Ke Li
Open-vocabulary object detection (OVD) aims to scale up vocabulary size to detect objects of novel categories beyond the training vocabulary.
Ranked #12 on Open Vocabulary Object Detection on LVIS v1.0
1 code implementation • 14 Jun 2022 • Yuxin Zhang, Mingbao Lin, Zhihang Lin, Yiting Luo, Ke Li, Fei Chao, Yongjian Wu, Rongrong Ji
In this paper, we show that the N:M learning can be naturally characterized as a combinatorial problem which searches for the best combination candidate within a finite collection.
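A magnitude-based N:M (here 2:4) pruning pass illustrates the structure of this search space: each group of M consecutive weights keeps exactly N nonzeros, and the combinatorial search described above looks for the best of the C(M, N) keep-patterns per group. A sketch of one such candidate (magnitude selection is an illustrative baseline, not the paper's learned search):

```python
import numpy as np

def nm_prune(w, n=2, m=4):
    """Keep the n largest-magnitude weights in each consecutive group of m,
    zeroing the rest -- one candidate combination per group."""
    w = w.reshape(-1, m).copy()
    drop = np.argsort(np.abs(w), axis=1)[:, :-n]   # indices to zero out
    np.put_along_axis(w, drop, 0.0, axis=1)
    return w.reshape(-1)

weights = np.array([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.1, 0.6])
sparse = nm_prune(weights)
print(sparse)  # exactly two nonzeros survive in each group of four
```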
2 code implementations • 14 Jun 2022 • Peixian Chen, Mengdan Zhang, Yunhang Shen, Kekai Sheng, Yuting Gao, Xing Sun, Ke Li, Chunhua Shen
A natural usage of ViTs in detection is to replace the CNN-based backbone with a transformer-based backbone, which is straightforward and effective, with the price of bringing considerable computation burden for inference.
1 code implementation • 12 Jun 2022 • Ke Li, Heng Yang, Willem Visser
In this paper, we propose DaNuoYi, an automatic injection testing tool that simultaneously generates test inputs for multiple types of injection attacks on a WAF.
1 code implementation • 2 Jun 2022 • Nan Wang, Shaohui Lin, Xiaoxiao Li, Ke Li, Yunhang Shen, Yue Gao, Lizhuang Ma
U-Nets have achieved tremendous success in medical image segmentation.
no code implementations • 28 May 2022 • Shuang Li, Ke Li, Wei Li
Constraint violation has been a building block to design evolutionary multi-objective optimization algorithms for solving constrained multi-objective optimization problems.
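A common way to aggregate constraint violation into a single scalar, as used when designing such algorithms, can be sketched as follows (the exact definition and tolerance handling vary across algorithms):

```python
import numpy as np

def constraint_violation(x, ineq, eq, tol=1e-4):
    """Overall constraint violation CV(x): sum of how far each inequality
    g_i(x) <= 0 and each equality h_j(x) = 0 is from being satisfied;
    CV(x) == 0 means x is feasible."""
    cv = sum(max(0.0, g(x)) for g in ineq)
    cv += sum(max(0.0, abs(h(x)) - tol) for h in eq)
    return cv

ineq = [lambda x: x[0] + x[1] - 1.0]   # feasible iff x0 + x1 <= 1
eq = [lambda x: x[0] - x[1]]           # feasible iff x0 == x1 (within tol)
print(constraint_violation(np.array([0.5, 0.5]), ineq, eq))  # 0.0 (feasible)
print(constraint_violation(np.array([1.0, 1.0]), ineq, eq))  # 1.0
```

Algorithms then compare solutions by CV first (prefer feasible or less-violating ones) and by objective values second.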
no code implementations • 28 May 2022 • Renzhi Chen, Ke Li
Data-driven evolutionary multi-objective optimization (EMO) has been recognized as an effective approach for multi-objective optimization problems with expensive objective functions.
no code implementations • 29 Apr 2022 • Yuting Gao, Jinfeng Liu, Zihan Xu, Jun Zhang, Ke Li, Rongrong Ji, Chunhua Shen
Large-scale vision-language pre-training has achieved promising results on downstream tasks.
no code implementations • 6 Apr 2022 • Ke Li, Guiyu Lai, Xin Yao
Bearing this in mind, this paper develops a framework for designing preference-based EMO algorithms to find SOI in an interactive manner.
no code implementations • 29 Mar 2022 • Jay Mahadeokar, Yangyang Shi, Ke Li, Duc Le, Jiedan Zhu, Vikas Chandra, Ozlem Kalinli, Michael L Seltzer
Streaming ASR with strict latency constraints is required in many speech recognition applications.
1 code implementation • CVPR 2022 • Qinqin Zhou, Kekai Sheng, Xiawu Zheng, Ke Li, Xing Sun, Yonghong Tian, Jie Chen, Rongrong Ji
Recently, Vision Transformer (ViT) has achieved remarkable success in several computer vision tasks.
1 code implementation • 21 Mar 2022 • Bohong Chen, Mingbao Lin, Kekai Sheng, Mengdan Zhang, Peixian Chen, Ke Li, Liujuan Cao, Rongrong Ji
To that effect, we construct an Edge-to-PSNR lookup table that maps the edge score of an image patch to the PSNR performance for each subnet, together with a set of computation costs for the subnets.
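A binned lookup table of this kind can be sketched as follows (the bin count, binning scheme, and synthetic calibration data are illustrative assumptions, not the paper's exact construction):

```python
import numpy as np

def build_lut(edge_scores, psnrs, n_bins=4):
    """Bin calibration patches by edge score and store the mean PSNR per
    bin -- a simplified Edge-to-PSNR lookup table."""
    edges = np.linspace(edge_scores.min(), edge_scores.max(), n_bins + 1)
    bins = np.clip(np.digitize(edge_scores, edges[1:-1]), 0, n_bins - 1)
    lut = np.array([psnrs[bins == b].mean() for b in range(n_bins)])
    return edges, lut

def lookup(edges, lut, score):
    """Predict PSNR for a new patch from its edge score."""
    b = int(np.clip(np.digitize(score, edges[1:-1]), 0, len(lut) - 1))
    return lut[b]

# Synthetic calibration data: patches with more edges restore to lower PSNR.
edge_scores = np.linspace(0.0, 1.0, 20)
psnrs = 40.0 - 10.0 * edge_scores
edges, lut = build_lut(edge_scores, psnrs)
print(lut)  # mean PSNR per edge-score bin, decreasing with edge score
```

At inference, a patch's edge score indexes straight into the table, giving a predicted PSNR at negligible cost.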
1 code implementation • 8 Mar 2022 • Mengzhao Chen, Mingbao Lin, Ke Li, Yunhang Shen, Yongjian Wu, Fei Chao, Rongrong Ji
Our proposed CF-ViT is motivated by two important observations in modern ViT models: (1) The coarse-grained patch splitting can locate informative regions of an input image.
1 code implementation • 8 Mar 2022 • Yunshan Zhong, Mingbao Lin, Xunchao Li, Ke Li, Yunhang Shen, Fei Chao, Yongjian Wu, Rongrong Ji
However, these methods suffer from severe performance degradation when quantizing the SR models to ultra-low precision (e.g., 2-bit and 3-bit) with the low-cost layer-wise quantizer.
no code implementations • 7 Mar 2022 • Jiangjiao Xu, Ke Li
One of the critical challenges of time series renewable energy forecasting is the lack of historical data to train an adequate predictive model.
no code implementations • 7 Mar 2022 • Phoenix Williams, Ke Li
To evaluate the effectiveness of our proposed method, we attack three state-of-the-art image classification models trained on the CIFAR-10 dataset in a targeted manner.
1 code implementation • NeurIPS 2021 • Kuan-Chieh Wang, Yan Fu, Ke Li, Ashish Khisti, Richard Zemel, Alireza Makhzani
In this work, we provide a probabilistic interpretation of model inversion attacks, and formulate a variational objective that accounts for both diversity and accuracy.
1 code implementation • 11 Jan 2022 • Niclas Vödisch, Ozan Unal, Ke Li, Luc van Gool, Dengxin Dai
In this work, we take a new route to learn to optimize the LiDAR beam configuration for a given application.
no code implementations • 5 Jan 2022 • Mingyu Huang, Peili Mao, Ke Li
Modern software systems are often highly configurable to tailor varied requirements from diverse stakeholders.
1 code implementation • 1 Nov 2021 • Fanxu Meng, Hao Cheng, Jiaxin Zhuang, Ke Li, Xing Sun
In this paper, we aim to remedy this problem and propose to remove the residual connection in a vanilla ResNet equivalently by a reserving and merging (RM) operation on ResBlock.
1 code implementation • 16 Oct 2021 • Heng Yang, Ke Li
Aspect sentiment coherency is an intriguing yet underexplored topic in the field of aspect-based sentiment classification.
Adversarial Defense Aspect-Based Sentiment Analysis (ABSA) +2
no code implementations • 7 Oct 2021 • Yangyang Shi, Chunyang Wu, Dilin Wang, Alex Xiao, Jay Mahadeokar, Xiaohui Zhang, Chunxi Liu, Ke Li, Yuan Shangguan, Varun Nagaraja, Ozlem Kalinli, Mike Seltzer
This paper improves the streaming transformer transducer for speech recognition by using non-causal convolution.
1 code implementation • 5 Oct 2021 • Gong Cheng, Jiabao Wang, Ke Li, Xingxing Xie, Chunbo Lang, Yanqing Yao, Junwei Han
Nowadays, oriented detectors mostly use horizontal boxes as an intermediary to derive oriented boxes from them.
no code implementations • 29 Sep 2021 • Canyu Le, Zhiyuan Tang, Ke Li, Jiandong Yang
On top of this dataset, we propose a two-stage framework to perform chapter localization and chapter title generation.
no code implementations • 29 Sep 2021 • Haiyan Wu, Yuting Gao, Ke Li, Yinqi Zhang, Shaohui Lin, Yuan Xie, Xing Sun
These findings motivate us to introduce a self-supervised teaching assistant (SSTA) besides the commonly used supervised teacher to improve the performance of transformers.
no code implementations • 29 Sep 2021 • Shichong Peng, Seyed Alireza Moazenipourasil, Ke Li
We consider problems where multiple predictions can be considered correct, but only one of them is given as supervision.
no code implementations • 28 Sep 2021 • Zhe Liu, Ke Li, Shreyan Bakshi, Fuchun Peng
Speech model adaptation is crucial to handle the discrepancy between server-side proxy training data and actual data received on local devices of users.
Automatic Speech Recognition (ASR) +3
no code implementations • NeurIPS Workshop DLDE 2021 • Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, Ioannis Mitliagkas
Score-based (denoising diffusion) generative models have recently gained a lot of success in generating realistic and diverse data.
no code implementations • 23 Sep 2021 • Ke Li, Yun Yang, Naveen N. Narisetty
This new lower bound unifies existing regret bound results that have different dependencies on T due to the use of different values of the margin parameter $\alpha$ implied by their assumptions.
no code implementations • 12 Sep 2021 • Ke Li, Renzhi Chen
Data-driven evolutionary optimization can be used to search for a set of non-dominated trade-off solutions, where the expensive objective functions are approximated as a surrogate model.
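The surrogate-assisted loop described here can be sketched in a few lines (a deliberately minimal stand-in: a quadratic surrogate and a grid search replace the Gaussian-process or RBF surrogates and evolutionary search used in practice):

```python
import numpy as np

def expensive_f(x):
    """Stand-in for a costly experiment or simulation."""
    return (x - 0.3) ** 2

# 1. Evaluate the expensive function at a few initial samples.
X = np.array([0.0, 0.5, 1.0])
y = expensive_f(X)

# 2. Fit a cheap surrogate model to those evaluations.
coeffs = np.polyfit(X, y, deg=2)         # quadratic surrogate

# 3. Optimize the surrogate (not the expensive function) to pick the
#    next candidate to evaluate for real.
grid = np.linspace(0.0, 1.0, 1001)
candidate = grid[np.argmin(np.polyval(coeffs, grid))]
print(candidate)  # close to the true optimum at 0.3
```

In a real data-driven EMO loop, steps 1-3 repeat, with each expensive evaluation refining the surrogate.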
1 code implementation • 9 Sep 2021 • Yunshan Zhong, Mingbao Lin, Mengzhao Chen, Ke Li, Yunhang Shen, Fei Chao, Yongjian Wu, Rongrong Ji
While post-training quantization is popular largely because it avoids access to the original complete training dataset, its poor performance also stems from the scarcity of images.
no code implementations • 21 Aug 2021 • Ke Li
Decomposition has been the mainstream approach in the classic mathematical programming for multi-objective optimization and multi-criterion decision-making.
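One classic decomposition function is the weighted Tchebycheff scalarization, which converts a multi-objective problem into a family of single-objective subproblems, one per weight vector (a textbook formulation, as used e.g. in MOEA/D):

```python
import numpy as np

def tchebycheff(f, weight, z_star):
    """Weighted Tchebycheff scalarization:
    g(x | w, z*) = max_i  w_i * |f_i(x) - z*_i|,
    where z* is the ideal point. Minimizing g over different weight
    vectors traces out different parts of the Pareto front."""
    return np.max(weight * np.abs(f - z_star))

f = np.array([0.6, 0.2])          # objective vector of a candidate solution
z_star = np.array([0.0, 0.0])     # ideal point
print(tchebycheff(f, np.array([0.5, 0.5]), z_star))  # 0.3
print(tchebycheff(f, np.array([0.1, 0.9]), z_star))  # 0.18
```

Each weight vector defines one subproblem; solving many of them in parallel, with neighboring subproblems sharing information, is the core idea behind MOEA/D.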
1 code implementation • ICCV 2021 • Binghui Chen, Zhaoyi Yan, Ke Li, Pengyu Li, Biao Wang, WangMeng Zuo, Lei Zhang
In crowd counting, due to the problem of laborious labelling, it is perceived as intractable to collect a new large-scale dataset with plentiful images of large diversity in density, scene, etc.
no code implementations • 18 Aug 2021 • Siyuan Ren, Bin Guo, Longbing Cao, Ke Li, Jiaqi Liu, Zhiwen Yu
To address these issues, we propose DeepExpress, a deep-learning based express delivery sequence prediction model, which extends the classic seq2seq framework to learn the complex coupling between sequences and features.
1 code implementation • 3 Aug 2021 • Yifan Xu, Zhijie Zhang, Mengdan Zhang, Kekai Sheng, Ke Li, WeiMing Dong, Liqing Zhang, Changsheng Xu, Xing Sun
Vision transformers (ViTs) have recently received explosive popularity, but the huge computational cost is still a severe issue.
Ranked #11 on Efficient ViTs on ImageNet-1K (with DeiT-T)
no code implementations • 30 Jun 2021 • Himanshu Arora, Saurabh Mishra, Shichong Peng, Ke Li, Ali Mahdavi-Amiri
Shape completion is the problem of completing partial input shapes such as partial scans.
no code implementations • 29 Jun 2021 • Kiarash Zahirnia, Ankita Sakhuja, Oliver Schulte, Parmis Nadaf, Ke Li, Xia Hu
Our experiments demonstrate a significant improvement in the realism of the generated graph structures, typically by 1-2 orders of magnitude on graph structure metrics, compared to leading graph VAE and GAN models.
no code implementations • 16 Jun 2021 • Shichong Peng, Alireza Moazeni, Ke Li
Deep generative models such as GANs have driven impressive advances in conditional image synthesis in recent years.
no code implementations • 7 Jun 2021 • Daniel Rebain, Ke Li, Vincent Sitzmann, Soroosh Yazdani, Kwang Moo Yi, Andrea Tagliasacchi
Implicit representations of geometry, such as occupancy fields or signed distance fields (SDF), have recently regained popularity for encoding 3D solid shapes in a functional form.
1 code implementation • 28 May 2021 • Alexia Jolicoeur-Martineau, Ke Li, Rémi Piché-Taillefer, Tal Kachman, Ioannis Mitliagkas
For high-resolution images, our method leads to significantly higher quality samples than all other methods tested.
Ranked #8 on Image Generation on CIFAR-10 (Inception score metric)
no code implementations • 19 May 2021 • Sascha Hornauer, Ke Li, Stella X. Yu, Shabnam Ghaffarzadegan, Liu Ren
Recent progress in network-based audio event classification has shown the benefit of pre-training models on visual data such as ImageNet.
no code implementations • 11 May 2021 • Yanran Li, Ke Li, Hongke Ning, Xiaoqiang Xia, Yalong Guo, Chen Wei, Jianwei Cui, Bin Wang
Existing emotion-aware conversational models usually focus on controlling the response contents to align with a specific emotion class, whereas empathy is the ability to understand and share the feelings and experiences of others.
1 code implementation • 3 May 2021 • Jie Hu, Liujuan Cao, Yao Lu, Shengchuan Zhang, Yan Wang, Ke Li, Feiyue Huang, Ling Shao, Rongrong Ji
However, such an upgrade is not applicable to instance segmentation, due to its significantly higher output dimensions compared to object detection.
Ranked #21 on Instance Segmentation on COCO test-dev
2 code implementations • 19 Apr 2021 • Yuting Gao, Jia-Xin Zhuang, Shaohui Lin, Hao Cheng, Xing Sun, Ke Li, Chunhua Shen
Specifically, we find that the final embedding obtained by mainstream SSL methods contains the most fruitful information, and propose to distill this final embedding to maximally transmit the teacher's knowledge to a lightweight model by constraining the student's last embedding to be consistent with the teacher's.
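The embedding-consistency constraint described above can be sketched as a simple loss on normalized features (a minimal illustration, not the paper's exact objective; the function name and setup are hypothetical):

```python
import numpy as np

def embedding_distill_loss(student_emb, teacher_emb):
    """Illustrative final-embedding distillation loss: penalize
    dissimilarity between the student's and teacher's
    L2-normalized final embeddings (1 - cosine similarity)."""
    s = student_emb / np.linalg.norm(student_emb, axis=1, keepdims=True)
    t = teacher_emb / np.linalg.norm(teacher_emb, axis=1, keepdims=True)
    # average over the batch; identical embeddings give zero loss
    return float(np.mean(1.0 - np.sum(s * t, axis=1)))

e = np.random.default_rng(0).normal(size=(4, 8))
print(embedding_distill_loss(e, e))  # ≈ 0.0 (up to float rounding)
```

In practice the teacher's embedding would be detached from the gradient so that only the lightweight student is updated toward it.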
2 code implementations • CVPR 2021 • Ke Li, Shijie Wang, Xiang Zhang, Yifan Xu, Weijian Xu, Zhuowen Tu
Here we utilize the encoder-decoder structure in Transformers to perform regression-based person and keypoint detection that is general-purpose and requires less heuristic design compared with the existing approaches.
2 code implementations • 3 Apr 2021 • Junbo Zhang, Zhiwen Zhang, Yongqing Wang, Zhiyong Yan, Qiong Song, YuKai Huang, Ke Li, Daniel Povey, Yujun Wang
This paper introduces a new open-source speech corpus named "speechocean762" designed for pronunciation assessment use, consisting of 5000 English utterances from 250 non-native speakers, where half of the speakers are children.
Ranked #7 on Phone-level pronunciation scoring on speechocean762
no code implementations • 25 Mar 2021 • Kekai Sheng, Ke Li, Xiawu Zheng, Jian Liang, WeiMing Dong, Feiyue Huang, Rongrong Ji, Xing Sun
However, considering that the configuration of attention, i.e., the type and the position of the attention module, affects performance significantly, it is more general to optimize the attention configuration automatically so that it is specialized for an arbitrary UDA scenario.
Ranked #1 on Partial Domain Adaptation on Office-Home
no code implementations • 10 Mar 2021 • Jiangjiao Xu, Ke Li, Mohammad Abusara
The proposed model consists of three layers: a smart grid layer, an independent system operator (ISO) layer, and a power grid layer.
no code implementations • 10 Mar 2021 • Xinyu Shan, Ke Li
Constrained multi-objective optimization problems (CMOPs) are ubiquitous in real-world engineering optimization scenarios.
1 code implementation • 8 Mar 2021 • Ke Li, Daniel Povey, Sanjeev Khudanpur
This paper proposes a parallel computation strategy and a posterior-based lattice expansion algorithm for efficient lattice rescoring with neural language models (LMs) for automatic speech recognition.
Automatic Speech Recognition (ASR) +1
1 code implementation • 25 Feb 2021 • Jing Dong, Ke Li, Shuai Li, Baoxiang Wang
Strategic behavior against sequential learning methods, such as "click framing" in real recommendation systems, has been widely observed.
no code implementations • 23 Feb 2021 • BESIII Collaboration, M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, R. Aliberti, A. Amoroso, M. R. An, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, Z. J Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, S. X. Du, Y. L. Fan, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, J. H. Feng, M. Fritsch, C. D. Fu, Y. Gao, Y. G. Gao, I. Garzia, P. T. Ge, C. Geng, E. M. Gersabeck, A Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, T. T. Han, W. Y. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, C. H. Heinz, T. Held, Y. K. Heng, C. Herold, M. Himmelreich, T. Holtmann, G. Y. Hou, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, N Hüsken, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, Y. Y. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, M. Q. Jing, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, Z. H. Lei, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. 
Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, J. S. Li, Ke Li, L. K. Li, Lei LI, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Xiaoyu Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. J. Liu, C. X. Liu, D. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. L. Liu, J. Y. Liu, K. Liu, K. Y. Liu, L. Liu, M. H. Liu, P. L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, W. M. Liu, X. Liu, Y. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. X. Ma, X. Y. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, R. Poling, V. Prasad, H. Qi, H. R. Qi, K. H. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. P. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, H. S. Sang, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, M. Scodeggio, D. C. Shan, W. Shan, X. Y. Shan, J. F. Shangguan, M. Shao, C. P. Shen, H. F. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, K. X. Su, P. P. Su, F. F. Sui, G. X. Sun, H. K. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, X Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. 
Tang, G. Y. Tang, J. Tang, J. X. Teng, V. Thoren, W. H. Tian, Y. T. Tian, I. Uman, B. Wang, C. W. Wang, D. Y. Wang, H. J. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Y. Y. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, G. F. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, S. L. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, L. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, A. Q. Zhang, B. X. Zhang, Guangyi Zhang, H. Zhang, H. H. Zhang, H. Y. Zhang, J. J. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, L. M. Zhang, L. Q. Zhang, Lei Zhang, S. Zhang, S. F. Zhang, Shulei Zhang, X. D. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, X. Y. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, T. J. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou
Constraining our measurement to the Standard Model expectation of lepton universality ($R=9.75$), we find the more precise results $\cal B(D_s^+\to \tau^+\nu_\tau) = (5.22\pm0.10\pm0.14)\times10^{-2}$ and $A_{\it CP}(\tau^\pm\nu_\tau) = (-0.1\pm1.9\pm1.0)\%$.
High Energy Physics - Experiment
no code implementations • 8 Feb 2021 • M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, R. Aliberti, A. Amoroso, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J Biernat, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, Z. J Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, S. X. Du, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Y. Gao, Y. G. Gao, I. Garzia, E. M. Gersabeck, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, T. T. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, C. H. Heinz, T. Held, Y. K. Heng, C. Herold, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, N. Hüsken, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, Z. H. Lei, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, Ke Li, L. K. Li, Lei LI, P. L. 
Li, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. J. Liu, C. X. Liu, D. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Liu, K. Y. Liu, L. Liu, M. H. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, W. M. Liu, X. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. X. Ma, X. Y. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, Q. A. Malik, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, A. Pitka, R. Poling, V. Prasad, H. Qi, H. R. Qi, K. H. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, H. S. Sang, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, M. Scodeggio, D. C. Shan, W. Shan, X. Y. Shan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, K. X. Su, F. F. Sui, G. X. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, X Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, J. X. Teng, V. Thoren, I. Uman, B. Wang, C. W. Wang, D. Y. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. 
Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, G. F. Xu, J. J. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, S. L. Yang, Y. H. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, L. Yuan, W. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, Guangyi Zhang, H. Zhang, H. H. Zhang, H. Y. Zhang, J. J. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, Lei Zhang, S. Zhang, S. F. Zhang, X. D. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou
Based on $14.7~\textrm{fb}^{-1}$ of $e^+e^-$ annihilation data collected with the BESIII detector at the BEPCII collider at 17 different center-of-mass energies between $3.7730~\textrm{GeV}$ and $4.5995~\textrm{GeV}$, Born cross sections of the two processes $e^+e^- \to p\bar{p}\eta$ and $e^+e^- \to p\bar{p}\omega$ are measured for the first time.
High Energy Physics - Experiment
no code implementations • 19 Jan 2021 • Huixiang Luo, Hao Cheng, Fanxu Meng, Yuting Gao, Ke Li, Mengdan Zhang, Xing Sun
Pseudo-labeling (PL) and Data Augmentation-based Consistency Training (DACT) are two approaches widely used in Semi-Supervised Learning (SSL) methods.
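The pseudo-labeling half of this pairing can be sketched in a few lines (a generic PL sketch under assumed conventions, not the paper's specific method): keep only unlabeled samples the model is confident about, and treat the argmax class as their label.

```python
import numpy as np

def pseudo_label(probs, threshold=0.95):
    """Minimal pseudo-labeling: select unlabeled samples whose max
    predicted probability exceeds `threshold`, and assign the
    argmax class as the pseudo-label."""
    conf = probs.max(axis=1)
    keep = conf >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

probs = np.array([[0.97, 0.03],   # confident -> kept, label 0
                  [0.60, 0.40],   # uncertain -> discarded
                  [0.02, 0.98]])  # confident -> kept, label 1
idx, labels = pseudo_label(probs)
print(idx.tolist(), labels.tolist())  # → [0, 2] [0, 1]
```

Consistency training, by contrast, skips the hard-label step and instead penalizes disagreement between predictions on differently augmented views of the same sample.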
2 code implementations • 19 Jan 2021 • Ke Li, Dengxin Dai, Ender Konukoglu, Luc van Gool
With these contributions, our method is able to learn from heterogeneous datasets and lift the requirement for having a large amount of HD HSI training samples.
no code implementations • 29 Dec 2020 • BESIII Collaboration, M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, R. Aliberti, A. Amoroso, M. R. An, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, Z. J Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, X. Dong, S. X. Du, Y. L. Fan, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, J. H. Feng, M. Fritsch, C. D. Fu, Y. Gao, Y. G. Gao, I. Garzia, P. T. Ge, C. Geng, E. M. Gersabeck, A Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, T. T. Han, W. Y. Han, X. Q. Hao, F. A. Harris, N Hüsken, K. L. He, F. H. Heinsius, C. H. Heinz, T. Held, Y. K. Heng, C. Herold, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, Y. Y. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, Z. H. Lei, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. 
Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, J. S. Li, Ke Li, L. K. Li, Lei LI, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Xiaoyu Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. J. Liu, C. X. Liu, D. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. L. Liu, J. Y. Liu, K. Liu, K. Y. Liu, Ke Liu, L. Liu, M. H. Liu, P. L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, W. M. Liu, X. Liu, Y. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. X. Ma, X. Y. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, Q. A. Malik, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, R. Poling, V. Prasad, H. Qi, H. R. Qi, K. H. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. P. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, H. S. Sang, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, M. Scodeggio, D. C. Shan, W. Shan, X. Y. Shan, J. F. Shangguan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, K. X. Su, P. P. Su, F. F. Sui, G. X. Sun, H. K. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, X Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, J. 
X. Teng, V. Thoren, W. H. Tian, Y. T. Tian, I. Uman, B. Wang, C. W. Wang, D. Y. Wang, H. J. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Y. Y. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, G. F. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, S. L. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, L. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, Guangyi Zhang, H. Zhang, H. H. Zhang, H. Y. Zhang, J. J. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, L. M. Zhang, L. Q. Zhang, Lei Zhang, S. Zhang, S. F. Zhang, Shulei Zhang, X. D. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, X. Y. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, T. J. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou
During the 2016-17 and 2018-19 running periods, the BESIII experiment collected 7.5~fb$^{-1}$ of $e^+e^-$ collision data at center-of-mass energies ranging from 4.13 to 4.44 GeV.
High Energy Physics - Experiment
1 code implementation • 10 Dec 2020 • Enwei Zhang, Xinyang Jiang, Hao Cheng, AnCong Wu, Fufu Yu, Ke Li, Xiaowei Guo, Feng Zheng, Wei-Shi Zheng, Xing Sun
Current training objectives of existing person Re-IDentification (ReID) models only ensure that the loss decreases on the selected training batch, with no regard to performance on samples outside the batch.
no code implementations • 4 Dec 2020 • BESIII Collaboration, M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, A. Amoroso, Q. An, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, J. V. Bennett, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J Biernat, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, J. P. Dai, X. C. Dai, A. Dbeyssi, R. E. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, S. X. Du, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Y. Fu, X. L. Gao, Y. Gao, Y. G. Gao, I. Garzia, E. M. Gersabeck, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, S. Han, T. T. Han, T. Z. Han, X. Q. Hao, F. A. Harris, N. Hüsken, K. L. He, F. H. Heinsius, T. Held, Y. K. Heng, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, T. Hussain, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, H. B. Jiang, X. S. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, A. Lavania, L. Lavezzi, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, Ke Li, L. K. 
Li, Lei LI, P. L. Li, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. Liao, L. Z. Liao, J. Libby, C. X. Lin, B. Liu, B. J. Liu, C. X. Liu, D. Liu, D. Y. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Liu, K. Y. Liu, Ke Liu, L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, X. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, Y. F. Long, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. N. Ma, X. X. Ma, X. Y. Ma, Y. M. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, A. Pitka, R. Poling, V. Prasad, H. Qi, H. R. Qi, M. Qi, T. Y. Qi, S. Qian, W. B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, D. C. Shan, W. Shan, X. Y. Shan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, Q. Q. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, F. F. Sui, G. X. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, V. Thoren, I. Uman, B. Wang, B. L. Wang, C. W. Wang, D. Y. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. Wang, X. L. 
Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Y. J. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, X. A. Xiong, G. F. Xu, J. J. Xu, Q. J. Xu, W. Xu, X. P. Xu, Y. C. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, R. X. Yang, S. L. Yang, Y. H. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, W. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, Guangyi Zhang, H. H. Zhang, H. Y. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, Lei Zhang, S. Zhang, S. F. Zhang, T. J. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, W. J. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou
We search for the process $e^{+}e^{-}\rightarrow \pi ^{+}\pi ^{-} \chi_{cJ}$ ($J=0, 1, 2$) and for a charged charmonium-like state in the $\pi ^{\pm} \chi_{cJ}$ subsystem.
High Energy Physics - Experiment
no code implementations • 26 Nov 2020 • Ke Li, Shichong Peng, Kailas Vodrahalli, Jitendra Malik
In continual learning, new categories may be introduced over time, and an ideal learning system should perform well on both the original categories and the new categories.
no code implementations • CVPR 2021 • Daniel Rebain, Wei Jiang, Soroosh Yazdani, Ke Li, Kwang Moo Yi, Andrea Tagliasacchi
Moreover, we show that a Voronoi spatial decomposition is preferable for this purpose, as it is provably compatible with the Painter's Algorithm for efficient and GPU-friendly rendering.
no code implementations • 7 Nov 2020 • Jinming Liu, Ke Li, Baolin Song, Li Zhao
On the other hand, some deep-learning-based methods also fail to achieve high accuracy due to problems such as database imbalance.
Micro-Expression Recognition +1
no code implementations • 3 Nov 2020 • Shichong Peng, Ke Li
This setting differs from both the regression and class-conditional generative modelling settings: in the former, there is a unique observed output for each input, which is provided as supervision; in the latter, there are many observed outputs for each input, and many are provided as supervision.
no code implementations • 3 Nov 2020 • Yunhe Feng, Daniel Saelid, Ke Li, Ruoyuan Gao, Chirag Shah
The results showed that our runs performed below par on the re-ranking task, but above average on retrieval.
1 code implementation • NeurIPS 2020 • Fanxu Meng, Hao Cheng, Ke Li, Huixiang Luo, Xiaowei Guo, Guangming Lu, Xing Sun
Through extensive experiments, we demonstrate that SWP is more effective than previous FP-based methods and achieves the state-of-the-art pruning ratio on the CIFAR-10 and ImageNet datasets without an obvious accuracy drop.
2 code implementations • CVPR 2021 • Jinpeng Wang, Yuting Gao, Ke Li, Yiqi Lin, Andy J. Ma, Hao Cheng, Pai Peng, Feiyue Huang, Rongrong Ji, Xing Sun
Then we force the model to pull the feature of the distracting video and the feature of the original video closer, so that the model is explicitly restricted to resist the background influence, focusing more on the motion changes.
3 code implementations • 12 Sep 2020 • Jinpeng Wang, Yuting Gao, Ke Li, Jianguo Hu, Xinyang Jiang, Xiaowei Guo, Rongrong Ji, Xing Sun
Specifically, we construct a positive clip and a negative clip for each video.
no code implementations • 5 Aug 2020 • Ruizhe Huang, Ke Li, Ashish Arora, Dan Povey, Sanjeev Khudanpur
This paper presents an efficient algorithm for n-gram language model adaptation under the minimum discrimination information (MDI) principle, where an out-of-domain language model is adapted to satisfy the constraints of marginal probabilities of the in-domain data.
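As a rough illustration of the MDI-style adaptation idea (a simplified unigram sketch, not the paper's n-gram algorithm; the scaling exponent `beta` is a hypothetical knob), out-of-domain probabilities can be scaled toward in-domain marginals and renormalized:

```python
# Sketch: adapt an out-of-domain unigram LM toward in-domain marginals.
# The factor (p_in / p_out) ** beta is a common approximation to MDI-style
# scaling; exact constrained optimization is more involved.
def mdi_adapt_unigram(p_out, p_in, beta=0.5):
    """Scale out-of-domain probabilities toward in-domain marginals, then renormalize."""
    scaled = {w: p * (p_in.get(w, p) / p) ** beta for w, p in p_out.items()}
    z = sum(scaled.values())
    return {w: v / z for w, v in scaled.items()}

p_out = {"the": 0.5, "cat": 0.3, "quark": 0.2}   # out-of-domain unigrams
p_in  = {"the": 0.4, "cat": 0.1, "quark": 0.5}   # in-domain marginals
p_adapt = mdi_adapt_unigram(p_out, p_in)
# probability mass shifts toward in-domain words while staying normalized
```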
no code implementations • 7 Jul 2020 • M. Ablikim, M. N. Achasov, P. Adlarson, S. Ahmed, M. Albrecht, A. Amoroso, Q. An, Anita, X. H. Bai, Y. Bai, O. Bakina, R. Baldini Ferroli, I. Balossino, Y. Ban, K. Begzsuren, J. V. Bennett, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, J Biernat, J. Bloms, A. Bortone, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, N. Cao, S. A. Cetin, J. F. Chang, W. L. Chang, G. Chelkov, D. Y. Chen, G. Chen, H. S. Chen, M. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, W. S. Cheng, G. Cibinetto, F. Cossio, X. F. Cui, H. L. Dai, J. P. Dai, X. C. Dai, A. Dbeyssi, R. B. de Boer, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. De Mori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, S. X. Du, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Y. Fu, X. L. Gao, Y. Gao, Y. G. Gao, I. Garzia, E. M. Gersabeck, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, S. Gu, Y. T. Gu, C. Y Guan, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, S. Han, T. T. Han, T. Z. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, T. Held, Y. K. Heng, M. Himmelreich, T. Holtmann, Y. R. Hou, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, L. Q. Huang, X. T. Huang, Y. P. Huang, Z. Huang, N. Huesken, T. Hussain, W. Ikegami Andersson, W. Imoehl, M. Irshad, S. Jaeger, S. Janchiv, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, H. B. Jiang, X. S. Jiang, X. Y. Jiang, J. B. Jiao, Z. Jiao, S. Jin, Y. Jin, T. Johansson, N. Kalantar-Nayestanaki, X. S. Kang, R. Kappert, M. Kavatsyuk, B. C. Ke, I. K. Keshk, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. G. Kurth, W. Kühn, J. J. Lane, J. S. Lange, P. Larin, L. Lavezzi, H. Leithoff, M. Lellmann, T. Lenz, C. Li, C. H. Li, Cheng Li, D. M. Li, F. Li, G. Li, H. Li, H. B. Li, H. J. Li, J. L. Li, J. Q. Li, Ke Li, L. K. Li, Lei LI, P. 
L. Li, P. R. Li, S. Y. Li, W. D. Li, W. G. Li, X. H. Li, X. L. Li, Z. Y. Li, H. Liang, Y. F. Liang, Y. T. Liang, L. Z. Liao, J. Libby, C. X. Lin, B. Liu, B. J. Liu, C. X. Liu, D. Liu, D. Y. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Liu, K. Y. Liu, Ke Liu, L. Liu, Q. Liu, S. B. Liu, Shuai Liu, T. Liu, X. Liu, Y. B. Liu, Z. A. Liu, Z. Q. Liu, Y. F. Long, X. C. Lou, F. X. Lu, H. J. Lu, J. D. Lu, J. G. Lu, X. L. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, R. Q. Ma, R. T. Ma, X. N. Ma, X. X. Ma, X. Y. Ma, Y. M. Ma, F. E. Maas, M. Maggiora, S. Maldaner, S. Malde, Q. A. Malik, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, N. Yu. Muchnoi, H. Muramatsu, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Olsen, Q. Ouyang, S. Pacetti, X. Pan, Y. Pan, A. Pathak, P. Patteri, M. Pelizaeus, H. P. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, A. Pitka, R. Poling, V. Prasad, H. Qi, H. R. Qi, M. Qi, T. Y. Qi, S. Qian, W. -B. Qian, Z. Qian, C. F. Qiao, L. Q. Qin, X. P. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, K. Ravindran, C. F. Redmer, A. Rivetti, V. Rodin, M. Rolo, G. Rong, Ch. Rosner, M. Rump, A. Sarantsev, Y. Schelhaas, C. Schnier, K. Schoenning, D. C. Shan, W. Shan, X. Y. Shan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. C. Shi, R. S. Shi, X. Shi, X. D Shi, J. J. Song, Q. Q. Song, W. M. Song, Y. X. Song, S. Sosio, S. Spataro, F. F. Sui, G. X. Sun, J. F. Sun, L. Sun, S. S. Sun, T. Sun, W. Y. Sun, Y. J. Sun, Y. K. Sun, Y. Z. Sun, Z. T. Sun, Y. H. Tan, Y. X. Tan, C. J. Tang, G. Y. Tang, J. Tang, V. Thoren, I. Uman, B. Wang, B. L. Wang, C. W. Wang, D. Y. Wang, H. P. Wang, K. Wang, L. L. Wang, M. Wang, M. Z. Wang, Meng Wang, W. H. Wang, W. P. Wang, X. Wang, X. F. 
Wang, X. L. Wang, Y. Wang, Y. D. Wang, Y. F. Wang, Y. Q. Wang, Z. Wang, Z. Y. Wang, Ziyi Wang, Zongyuan Wang, D. H. Wei, P. Weidenkaff, F. Weidner, S. P. Wen, D. J. White, U. Wiedner, G. Wilkinson, M. Wolke, L. Wollenberg, J. F. Wu, L. H. Wu, L. J. Wu, X. Wu, Z. Wu, L. Xia, H. Xiao, S. Y. Xiao, Y. J. Xiao, Z. J. Xiao, X. H. Xie, Y. G. Xie, Y. H. Xie, T. Y. Xing, X. A. Xiong, G. F. Xu, J. J. Xu, Q. J. Xu, W. Xu, X. P. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Xu Yan, H. J. Yang, H. X. Yang, L. Yang, R. X. Yang, S. L. Yang, Y. H. Yang, Y. X. Yang, Yifan Yang, Zhi Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, G. Yu, J. S. Yu, T. Yu, C. Z. Yuan, W. Yuan, X. Q. Yuan, Y. Yuan, Z. Y. Yuan, C. X. Yue, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, Guangyi Zhang, H. H. Zhang, H. Y. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, Jianyu Zhang, Jiawei Zhang, L. Zhang, Lei Zhang, S. Zhang, S. F. Zhang, T. J. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yan Zhang, Yao Zhang, Yi Zhang, Z. H. Zhang, Z. Y. Zhang, G. Zhao, J. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, Y. B. Zhao, Y. X. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, Y. Zheng, Y. H. Zheng, B. Zhong, C. Zhong, L. P. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. H. Zhu, W. J. Zhu, X. L. Zhu, Y. C. Zhu, Z. A. Zhu, B. S. Zou, J. H. Zou
We present an analysis of the process $\psi(3686) \to \Omega^- \bar{\Omega}^+$ ($\Omega^-\to K^-\Lambda$, $\bar{\Omega}^+\to K^+\bar{\Lambda}$, $\Lambda\to p\pi^-$, $\bar{\Lambda}\to \bar{p}\pi^+$) based on a data set of $448\times 10^6$ $\psi(3686)$ decays collected with the BESIII detector at the BEPCII electron-positron collider.
High Energy Physics - Experiment
1 code implementation • 23 May 2020 • Ke Li, Haifeng Nie, Huifu Gao, Xin Yao
Knee points, characterised as their smallest trade-off loss at all objectives, are attractive to decision makers in multi-criterion decision-making.
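One common knee-identification heuristic (shown here as an assumed sketch, not necessarily the paper's method) picks the point on a bi-objective front farthest from the line joining the two extreme solutions:

```python
# Knee-point heuristic on a bi-objective (minimization) front:
# the knee is the point with maximum perpendicular distance to the
# line through the two extreme solutions.
def knee_point(front):
    front = sorted(front)                      # sort by first objective
    (x1, y1), (x2, y2) = front[0], front[-1]   # extreme points
    def dist(p):                               # unnormalized distance to the extreme line
        x0, y0 = p
        return abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
    return max(front, key=dist)

# on this convex front, (2, 2) has the smallest trade-off loss at both objectives
front = [(0.0, 10.0), (2.0, 2.0), (10.0, 0.0)]
```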
1 code implementation • 26 Apr 2020 • Hao Cheng, Fanxu Meng, Ke Li, Yuting Gao, Guangming Lu, Xing Sun, Rongrong Ji
To gain a universal improvement on both valid and invalid filters, we compensate grafting with distillation (Cultivation) to overcome the drawback of grafting.
no code implementations • 22 Apr 2020 • Lei Sun, Ke Li
In particular, each arm of our bandit learning model represents a reproduction operator and is assigned with a prior reward distribution.
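The arm-per-operator idea can be sketched with Thompson sampling over Beta priors (a minimal assumption-laden illustration; the operator names, the Beta prior, and the binary reward are made up, not taken from the paper):

```python
# Sketch: bandit-based reproduction-operator selection.
# Each operator is an arm with a Beta(1, 1) prior over its success rate;
# Thompson sampling draws from each posterior and picks the best arm.
import random

class OperatorBandit:
    def __init__(self, operators):
        self.params = {op: [1.0, 1.0] for op in operators}  # Beta(a, b) per arm

    def select(self):
        return max(self.params, key=lambda op: random.betavariate(*self.params[op]))

    def update(self, op, success):
        a, b = self.params[op]
        self.params[op] = [a + success, b + (1 - success)]  # posterior update

bandit = OperatorBandit(["sbx_crossover", "de_mutation"])   # hypothetical operators
op = bandit.select()
bandit.update(op, success=1)   # reward: did the offspring survive selection?
```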
no code implementations • 15 Apr 2020 • Geoffrey Pruvost, Bilel Derbel, Arnaud Liefooghe, Ke Li, Qingfu Zhang
This paper intends to understand and to improve the working principle of decomposition-based multi-objective evolutionary algorithms.
2 code implementations • 7 Apr 2020 • Ke Li, Shichong Peng, Tianhao Zhang, Jitendra Malik
Many tasks in computer vision and graphics fall within the framework of conditional image synthesis.
1 code implementation • ECCV 2020 • Ning Yu, Ke Li, Peng Zhou, Jitendra Malik, Larry Davis, Mario Fritz
Generative Adversarial Networks (GANs) have brought about rapid progress towards generating photorealistic images.
1 code implementation • CVPR 2020 • Difei Gao, Ke Li, Ruiping Wang, Shiguang Shan, Xilin Chen
Then, we introduce three aggregators which guide the message passing from one graph to another to utilize the contexts in various modalities, so as to refine the features of nodes.
1 code implementation • ICCV 2021 • Jie Hu, Liujuan Cao, Qixiang Ye, Tong Tong, Shengchuan Zhang, Ke Li, Feiyue Huang, Rongrong Ji, Ling Shao
Based on the experimental results, we present three new findings that provide fresh insights into the inner logic of DNNs.
no code implementations • 8 Feb 2020 • Xiaoran Ruan, Ke Li, Bilel Derbel, Arnaud Liefooghe
The effectiveness of our proposed algorithm is validated on benchmark problems with 10, 20, and 50 variables, in comparison with three state-of-the-art SAEAs.
1 code implementation • 8 Feb 2020 • Ke Li, Zilin Xiang, Tao Chen, Shuo Wang, Kay Chen Tan
Given a tight computational budget, it is more cost-effective to focus on optimizing the parameter configuration of transfer learning algorithms. (3) The research on CPDP is far from mature, where it is "not difficult" to find a better alternative by making a combination of existing transfer learning and classification techniques.
no code implementations • 30 Jan 2020 • Joseph Billingsley, Ke Li, Wang Miao, Geyong Min, Nektarios Georgalas
The ever-increasing demand for computing resources has led to the creation of hyperscale datacentres with tens of thousands of servers.
1 code implementation • 22 Jan 2020 • Tao Chen, Miqing Li, Ke Li, Kalyanmoy Deb
In this paper, we provide the first systematic and comprehensive survey exclusively on SBSE for SASs, covering papers in 27 venues from 7 repositories, which eventually leads to several key statistics from the most notable 74 primary studies in this particular field of research.
2 code implementations • CVPR 2020 • Fanxu Meng, Hao Cheng, Ke Li, Zhixin Xu, Rongrong Ji, Xing Sun, Guangming Lu
To better perform the grafting process, we develop an entropy-based criterion to measure the information of filters and an adaptive weighting strategy for balancing the grafted information among networks.
1 code implementation • 3 Dec 2019 • Fengxiang Yang, Ke Li, Zhun Zhong, Zhiming Luo, Xing Sun, Hao Cheng, Xiaowei Guo, Feiyue Huang, Rongrong Ji, Shaozi Li
This procedure encourages that the selected training samples can be both clean and miscellaneous, and that the two models can promote each other iteratively.
Ranked #9 on Unsupervised Domain Adaptation on Market to Duke
1 code implementation • NeurIPS 2019 • Ke Li, Tianhao Zhang, Jitendra Malik
Work on adversarial examples has shown that neural nets are surprisingly sensitive to adversarially chosen changes of small magnitude.
no code implementations • 7 Nov 2019 • Jingwen Fu, Licheng Zong, Yinbing Li, Ke Li, Bingqian Yang, Xibei Liu
Object detection for robot guidance is a crucial mission for autonomous robots, and has attracted extensive attention from researchers.
no code implementations • 30 Sep 2019 • Ke Li, Min-Hui Liao, Kalyanmoy Deb, Geyong Min, Xin Yao
The ultimate goal of multi-objective optimisation is to help a decision maker (DM) identify solution(s) of interest (SOI) achieving satisfactory trade-offs among multiple conflicting criteria.
no code implementations • 24 Sep 2019 • Ke Li, Kejun Tang, Tianfan Wu, Qifeng Liao
A state-of-the-art deep domain decomposition method (D3M) based on the variational principle is proposed for partial differential equations (PDEs).
1 code implementation • 31 Aug 2019 • Ke Li, Gang Wan, Gong Cheng, Liqiu Meng, Junwei Han
However, the current survey of datasets and deep learning-based methods for object detection in optical remote sensing images is not adequate.
no code implementations • 6 Aug 2019 • Ran Wang, Suhe Ye, Ke Li, Sam Kwong
Classifier chain (CC) is a multi-label learning approach that constructs a sequence of binary classifiers according to a label order.
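The general CC idea (not the paper's specific contribution) is that each binary classifier in the chain sees the input features plus the predictions for all earlier labels. A dependency-free sketch, using a deliberately toy threshold "classifier":

```python
# Toy base learner: predicts 1 when the feature sum reaches the smallest
# feature sum seen among positive training examples.
class ThresholdClf:
    def fit(self, X, y):
        pos = [sum(x) for x, t in zip(X, y) if t == 1]
        self.cut = min(pos) if pos else float("inf")
        return self
    def predict(self, X):
        return [1 if sum(x) >= self.cut else 0 for x in X]

class ClassifierChain:
    def fit(self, X, Y):
        self.clfs = []
        Xa = [list(x) for x in X]
        for j in range(len(Y[0])):
            clf = ThresholdClf().fit(Xa, [row[j] for row in Y])
            self.clfs.append(clf)
            for x, p in zip(Xa, clf.predict(Xa)):
                x.append(p)                      # augment features with label j's prediction
        return self
    def predict(self, X):
        Xa = [list(x) for x in X]
        out = [[] for _ in X]
        for clf in self.clfs:
            for x, row, p in zip(Xa, out, clf.predict(Xa)):
                x.append(p)
                row.append(p)
        return out

X = [[0, 0], [1, 1], [2, 2]]
Y = [[0, 0], [1, 0], [1, 1]]                     # two labels per instance
cc = ClassifierChain().fit(X, Y)
```

Note how the label order matters: later classifiers condition on earlier predictions, which is exactly the sensitivity the CC literature studies.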
no code implementations • 6 Aug 2019 • Rongrong Ji, Ke Li, Yan Wang, Xiaoshuai Sun, Feng Guo, Xiaowei Guo, Yongjian Wu, Feiyue Huang, Jiebo Luo
In this paper, we address the problem of monocular depth estimation when only a limited number of training image-depth pairs are available.
no code implementations • 3 Jun 2019 • M. Ablikim, M. N. Achasov, S. Ahmed, M. Albrecht, M. Alekseev, A. Amoroso, F. F. An, Q. An, Y. Bai, O. Bakina, R. Baldini Ferroli, Y. Ban, K. Begzsuren, D. W. Bennett, J. V. Bennett, N. Berger, M. Bertani, D. Bettoni, F. Bianchi, E. Boger, I. Boyko, R. A. Briere, H. Cai, X. Cai, A. Calcaterra, G. F. Cao, S. A. Cetin, J. Chai, J. F. Chang, W. L. Chang, G. Chelkov, G. Chen, H. S. Chen, J. C. Chen, M. L. Chen, P. L. Chen, S. J. Chen, X. R. Chen, Y. B. Chen, W. Cheng, X. K. Chu, G. Cibinetto, F. Cossio, H. L. Dai, J. P. Dai, A. Dbeyssi, D. Dedovich, Z. Y. Deng, A. Denig, I. Denysenko, M. Destefanis, F. DeMori, Y. Ding, C. Dong, J. Dong, L. Y. Dong, M. Y. Dong, Z. L. Dou, S. X. Du, P. F. Duan, J. Fang, S. S. Fang, Y. Fang, R. Farinelli, L. Fava, F. Feldbauer, G. Felici, C. Q. Feng, M. Fritsch, C. D. Fu, Q. Gao, X. L. Gao, Y. Gao, Y. G. Gao, Z. Gao, B. Garillon, I. Garzia, A. Gilman, K. Goetzen, L. Gong, W. X. Gong, W. Gradl, M. Greco, L. M. Gu, M. H. Gu, Y. T. Gu, A. Q. Guo, L. B. Guo, R. P. Guo, Y. P. Guo, A. Guskov, Z. Haddadi, S. Han, X. Q. Hao, F. A. Harris, K. L. He, F. H. Heinsius, T. Held, Y. K. Heng, Z. L. Hou, H. M. Hu, J. F. Hu, T. Hu, Y. Hu, G. S. Huang, J. S. Huang, X. T. Huang, X. Z. Huang, Z. L. Huang, T. Hussain, W. Ikegami Andersson, M. Irshad, Q. Ji, Q. P. Ji, X. B. Ji, X. L. Ji, H. L. Jiang, X. S. Jiang, X. Y. Jiang, J. B. Jiao, Z. Jiao, D. P. Jin, S. Jin, Y. Jin, T. Johansson, A. Julin, N. Kalantar-Nayestanaki, X. S. Kang, M. Kavatsyuk, B. C. Ke, I. K. Keshk, T. Khan, A. Khoukaz, P. Kiese, R. Kiuchi, R. Kliemt, L. Koch, O. B. Kolcu, B. Kopf, M. Kuemmel, M. Kuessner, A. Kupsc, M. Kurth, W. Kühn, J. S. Lange, P. Larin, L. Lavezzi, S. Leiber, H. Leithoff, C. Li, Cheng Li, D. M. Li, F. Li, F. Y. Li, G. Li, H. B. Li, H. J. Li, J. C. Li, J. W. Li, K. J. Li, Kang Li, Ke Li, Lei LI, P. L. Li, P. R. Li, Q. Y. Li, T. Li, W. D. Li, W. G. Li, X. L. Li, X. N. Li, X. Q. Li, Z. B. Li, H. Liang, Y. F. Liang, Y. T. Liang, G. R. 
Liao, L. Z. Liao, J. Libby, C. X. Lin, D. X. Lin, B. Liu, B. J. Liu, C. X. Liu, D. Liu, D. Y. Liu, F. H. Liu, Fang Liu, Feng Liu, H. B. Liu, H. L. Liu, H. M. Liu, Huanhuan Liu, Huihui Liu, J. B. Liu, J. Y. Liu, K. Y. Liu, Ke Liu, L. D. Liu, Q. Liu, S. B. Liu, X. Liu, Y. B. Liu, Z. A. Liu, Zhiqing Liu, Y. F. Long, X. C. Lou, H. J. Lu, J. G. Lu, Y. Lu, Y. P. Lu, C. L. Luo, M. X. Luo, P. W. Luo, T. Luo, X. L. Luo, S. Lusso, X. R. Lyu, F. C. Ma, H. L. Ma, L. L. Ma, M. M. Ma, Q. M. Ma, X. N. Ma, X. Y. Ma, Y. M. Ma, F. E. Maas, M. Maggiora, S. Maldaner, Q. A. Malik, A. Mangoni, Y. J. Mao, Z. P. Mao, S. Marcello, Z. X. Meng, J. G. Messchendorp, G. Mezzadri, J. Min, T. J. Min, R. E. Mitchell, X. H. Mo, Y. J. Mo, C. Morales Morales, N. Yu. Muchnoi, H. Muramatsu, A. Mustafa, S. Nakhoul, Y. Nefedov, F. Nerling, I. B. Nikolaev, Z. Ning, S. Nisar, S. L. Niu, X. Y. Niu, S. L. Olsen, Q. Ouyang, S. Pacetti, Y. Pan, M. Papenbrock, P. Patteri, M. Pelizaeus, J. Pellegrino, H. P. Peng, Z. Y. Peng, K. Peters, J. Pettersson, J. L. Ping, R. G. Ping, A. Pitka, R. Poling, V. Prasad, H. R. Qi, M. Qi, T. Y. Qi, S. Qian, C. F. Qiao, N. Qin, X. S. Qin, Z. H. Qin, J. F. Qiu, S. Q. Qu, K. H. Rashid, C. F. Redmer, M. Richter, M. Ripka, A. Rivetti, M. Rolo, G. Rong, Ch. Rosner, A. Sarantsev, M. Savrié, K. Schoenning, W. Shan, X. Y. Shan, M. Shao, C. P. Shen, P. X. Shen, X. Y. Shen, H. Y. Sheng, X. Shi, J. J. Song, W. M. Song, X. Y. Song, S. Sosio, C. Sowa, S. Spataro, F. F. Sui, G. X. Sun, J. F. Sun, L. Sun, S. S. Sun, X. H. Sun, Y. J. Sun, Y. K Sun, Y. Z. Sun, Z. J. Sun, Z. T. Sun, Y. T Tan, C. J. Tang, G. Y. Tang, X. Tang, M. Tiemens, B. Tsednee, I. Uman, B. Wang, B. L. Wang, C. W. Wang, D. Wang, D. Y. Wang, Dan Wang, H. H. Wang, K. Wang, L. L. Wang, L. S. Wang, M. Wang, Meng Wang, P. Wang, P. L. Wang, W. P. Wang, X. F. Wang, Y. Wang, Y. F. Wang, Z. Wang, Z. G. Wang, Z. Y. Wang, Zongyuan Wang, T. Weber, D. H. Wei, P. Weidenkaff, S. P. Wen, U. Wiedner, M. Wolke, L. H. Wu, L. J. Wu, Z. Wu, L. 
Xia, X. Xia, Y. Xia, D. Xiao, Y. J. Xiao, Z. J. Xiao, Y. G. Xie, Y. H. Xie, X. A. Xiong, Q. L. Xiu, G. F. Xu, J. J. Xu, L. Xu, Q. J. Xu, X. P. Xu, F. Yan, L. Yan, W. B. Yan, W. C. Yan, Y. H. Yan, H. J. Yang, H. X. Yang, L. Yang, R. X. Yang, S. L. Yang, Y. H. Yang, Y. X. Yang, Yifan Yang, Z. Q. Yang, M. Ye, M. H. Ye, J. H. Yin, Z. Y. You, B. X. Yu, C. X. Yu, J. S. Yu, C. Z. Yuan, Y. Yuan, A. Yuncu, A. A. Zafar, Y. Zeng, B. X. Zhang, B. Y. Zhang, C. C. Zhang, D. H. Zhang, H. H. Zhang, H. Y. Zhang, J. Zhang, J. L. Zhang, J. Q. Zhang, J. W. Zhang, J. Y. Zhang, J. Z. Zhang, K. Zhang, L. Zhang, S. F. Zhang, T. J. Zhang, X. Y. Zhang, Y. Zhang, Y. H. Zhang, Y. T. Zhang, Yang Zhang, Yao Zhang, Yu Zhang, Z. H. Zhang, Z. P. Zhang, Z. Y. Zhang, G. Zhao, J. W. Zhao, J. Y. Zhao, J. Z. Zhao, Lei Zhao, Ling Zhao, M. G. Zhao, Q. Zhao, S. J. Zhao, T. C. Zhao, Y. B. Zhao, Z. G. Zhao, A. Zhemchugov, B. Zheng, J. P. Zheng, W. J. Zheng, Y. H. Zheng, B. Zhong, L. Zhou, Q. Zhou, X. Zhou, X. K. Zhou, X. R. Zhou, X. Y. Zhou, Xiaoyu Zhou, Xu Zhou, A. N. Zhu, J. Zhu, K. Zhu, K. J. Zhu, S. Zhu, S. H. Zhu, X. L. Zhu, Y. C. Zhu, Y. S. Zhu, Z. A. Zhu, J. Zhuang, B. S. Zou, J. H. Zou
We study $e^{+}e^{-}$ collisions with a $\pi^{+}\pi^{-}\pi^{0}\eta_{c}$ final state using data samples collected with the BESIII detector at center-of-mass energies $\sqrt{s}=4.226$, $4.258$, $4.358$, $4.416$, and $4.600$ GeV.
High Energy Physics - Experiment
no code implementations • ICLR 2019 • Ke Li, Jitendra Malik
Extensive work on compressed sensing has yielded a rich collection of sparse recovery algorithms, each making different tradeoffs between recovery condition and computational efficiency.
no code implementations • 5 Mar 2019 • Huiru Gao, Haifeng Nie, Ke Li
Visualisation is an effective way to facilitate the analysis and understanding of multivariate data.
no code implementations • 30 Jan 2019 • Ke Li, Zilin Xiang, Kay Chen Tan
Perhaps surprisingly, it is possible to build a cheap-to-evaluate surrogate that models the algorithm's empirical performance as a function of its parameters.
no code implementations • 24 Jan 2019 • Jianqiao Wangni, Ke Li, Jianbo Shi, Jitendra Malik
Recently, researchers have proposed various low-precision gradient compression schemes for efficient communication in large-scale distributed optimization.
no code implementations • 4 Jan 2019 • Ke Li, Jinyu Li, Yong Zhao, Kshitiz Kumar, Yifan Gong
We propose two approaches for speaker adaptation in end-to-end (E2E) automatic speech recognition systems.
no code implementations • 30 Nov 2018 • Kailas Vodrahalli, Ke Li, Jitendra Malik
Modern computer vision algorithms often rely on very large training datasets.
1 code implementation • ICCV 2019 • Ke Li, Tianhao Zhang, Jitendra Malik
Most existing methods for conditional image synthesis are only able to generate a single plausible image for any given input, or at best a fixed number of plausible images.
no code implementations • 29 Nov 2018 • Ke Li, Jitendra Malik
Generative adversarial nets (GANs) have generated a lot of excitement.
no code implementations • 2 Oct 2018 • Ke Li, Shichong Peng, Jitendra Malik
Single-image super-resolution (SISR) is a canonical problem with diverse applications.
1 code implementation • ICLR 2019 • Ke Li, Jitendra Malik
Implicit probabilistic models are models defined naturally in terms of a sampling procedure, and often induce a likelihood function that cannot be expressed explicitly.
1 code implementation • Interspeech 2018 • Daniel Povey, Gaofeng Cheng, Yiming Wang, Ke Li, Hainan Xu, Mahsa Yarmohammadi, Sanjeev Khudanpur
Time Delay Neural Networks (TDNNs), also known as one-dimensional Convolutional Neural Networks (1-d CNNs), are an efficient and well-performing neural network architecture for speech recognition.
no code implementations • ECCV 2018 • Ke Li, Kaiyue Pang, Jifei Song, Yi-Zhe Song, Tao Xiang, Timothy M. Hospedales, Honggang Zhang
In this work we aim to develop a universal sketch grouper.
1 code implementation • 7 Aug 2018 • Ke Li, Kaiyue Pang, Jifei Song, Yi-Zhe Song, Tao Xiang, Timothy M. Hospedales, Honggang Zhang
In this work we aim to develop a universal sketch grouper.
no code implementations • ICASSP 2018 • Hainan Xu, Ke Li, Yiming Wang, Jian Wang, Shiyin Kang, Xie Chen, Daniel Povey, Sanjeev Khudanpur
In this paper we describe an extension of the Kaldi software toolkit to support neural-based language modeling, intended for use in automatic speech recognition (ASR) and related tasks.
Ranked #36 on Speech Recognition on LibriSpeech test-other (using extra training data)
no code implementations • 2 Jan 2018 • Ke Li, Renzhi Chen, Dragan Savic, Xin Yao
In the preference elicitation session, the preference information learned in the consultation module is translated into a form that can be used in a decomposition-based EMO algorithm, i.e., a set of reference points that are biased toward the ROI.
no code implementations • 21 Nov 2017 • Ke Li, Renzhi Chen, Guangtao Fu, Xin Yao
When solving constrained multi-objective optimization problems, an important issue is how to balance convergence, diversity and feasibility simultaneously.
no code implementations • 7 Apr 2017 • Mengyuan Wu, Ke Li, Sam Kwong, Qingfu Zhang
It decomposes a multi-objective optimization problem into several single-objective optimization subproblems, each of which is usually defined as a scalarizing function using a weight vector.
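A standard choice of scalarizing function in this decomposition setting is the weighted Tchebycheff form (shown as a sketch with a made-up ideal point $z^*$; the paper may use other scalarizations):

```python
# Weighted Tchebycheff scalarization: g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|.
# Smaller g means a better solution for the subproblem defined by weights w.
def tchebycheff(f, weights, z_star):
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))

# two candidate objective vectors under the balanced weight vector (0.5, 0.5)
g1 = tchebycheff(f=[1.0, 3.0], weights=[0.5, 0.5], z_star=[0.0, 0.0])
g2 = tchebycheff(f=[2.0, 2.0], weights=[0.5, 0.5], z_star=[0.0, 0.0])
# the balanced solution (2, 2) wins the balanced subproblem
```

Varying the weight vector across subproblems is what spreads the population over the Pareto front.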
2 code implementations • ICML 2017 • Ke Li, Jitendra Malik
Most exact methods for k-nearest neighbour search suffer from the curse of dimensionality; that is, their query times exhibit exponential dependence on either the ambient or the intrinsic dimensionality.
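For context, the exact k-NN query that such methods accelerate can be written as a brute-force linear scan (this baseline is for illustration only and says nothing about the paper's own data structure):

```python
# Brute-force exact k-nearest-neighbour query: O(n * d) per query,
# with no exponential dependence on dimensionality -- but no sublinear
# speedup either, which is what smarter index structures target.
def knn_brute_force(points, query, k):
    dist2 = lambda p: sum((a - b) ** 2 for a, b in zip(p, query))
    return sorted(range(len(points)), key=lambda i: dist2(points[i]))[:k]

pts = [(0.0, 0.0), (1.0, 1.0), (0.1, 0.0), (5.0, 5.0)]
nbrs = knn_brute_force(pts, query=(0.0, 0.1), k=2)   # indices of the 2 nearest points
```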
no code implementations • ICLR 2018 • Ke Li, Jitendra Malik
Learning to Optimize is a recently proposed framework for learning optimization algorithms using reinforcement learning.
no code implementations • 9 Feb 2017 • Zongping Deng, Ke Li, Qijun Zhao, Yi Zhang, Hu Chen
In this paper, we propose a novel face alignment method using single deep network (SDN) on existing limited training data.
no code implementations • 20 Jan 2017 • Ke Li, Kalyanmoy Deb, Xin Yao
Extensive experiments, both proof-of-principle and on a variety of problems with 3 to 10 objectives, fully demonstrate the effectiveness of our proposed method for approximating the preferred solutions in the region of interest.
no code implementations • 2 Oct 2016 • Hu Chen, Yi Zhang, Weihua Zhang, Peixi Liao, Ke Li, Jiliu Zhou, Ge Wang
To reduce the potential radiation risk, low-dose CT has attracted much attention.
no code implementations • 27 Sep 2016 • Hu Chen, Yi Zhang, Weihua Zhang, Peixi Liao, Ke Li, Jiliu Zhou, Ge Wang
In order to reduce the potential radiation risk, low-dose CT has attracted increasing attention.
Medical Physics
no code implementations • 30 Aug 2016 • Mengyuan Wu, Ke Li, Sam Kwong, Yu Zhou, Qingfu Zhang
In particular, the stable matching between subproblems and solutions, which achieves an equilibrium between their mutual preferences, implicitly strikes a balance between the convergence and diversity.
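The stable-matching machinery behind this is Gale-Shapley deferred acceptance. A hedged toy sketch follows; in the paper the preferences come from aggregation values and distances, which are not modeled here (the preference lists below are invented):

```python
# Gale-Shapley deferred acceptance between "subproblems" (proposers) and
# "solutions" (acceptors). sub_prefs: each subproblem's ordered solution list;
# sol_ranks[s][p]: rank solution s gives subproblem p (lower = preferred).
def gale_shapley(sub_prefs, sol_ranks):
    match = {}                        # solution -> subproblem
    nxt = {p: 0 for p in sub_prefs}   # next solution each subproblem proposes to
    free = list(sub_prefs)
    while free:
        p = free.pop()
        s = sub_prefs[p][nxt[p]]
        nxt[p] += 1
        if s not in match:
            match[s] = p
        elif sol_ranks[s][p] < sol_ranks[s][match[s]]:
            free.append(match[s])     # displaced subproblem becomes free again
            match[s] = p
        else:
            free.append(p)            # rejected; will propose to its next choice
    return {p: s for s, p in match.items()}

sub_prefs = {"p1": ["s1", "s2"], "p2": ["s1", "s2"]}
sol_ranks = {"s1": {"p1": 0, "p2": 1}, "s2": {"p1": 0, "p2": 1}}
pairing = gale_shapley(sub_prefs, sol_ranks)
```

Both subproblems want s1, but s1 prefers p1, so p2 settles for s2; no pair would jointly deviate, which is the equilibrium property the abstract refers to.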
no code implementations • 23 Aug 2016 • Renzhi Chen, Ke Li, Xin Yao
Existing studies on dynamic multi-objective optimization focus on problems with time-dependent objective functions, while the ones with a changing number of objectives have rarely been considered in the literature.
no code implementations • 2016 • Ke Li, Jitendra Malik
Algorithm design is a laborious process and often requires many iterations of ideation and validation.
no code implementations • 27 Apr 2016 • Ke Li, Jitendra Malik
We consider the problem of amodal instance segmentation, the objective of which is to predict the region encompassing both visible and occluded parts of each object.
1 code implementation • 1 Dec 2015 • Ke Li, Jitendra Malik
Existing methods for retrieving k-nearest neighbours suffer from the curse of dimensionality.
no code implementations • CVPR 2016 • Ke Li, Bharath Hariharan, Jitendra Malik
Existing methods for pixel-wise labelling tasks generally disregard the underlying structure of labellings, often leading to predictions that are visually implausible.
no code implementations • 5 Nov 2015 • Shuaiqi Hu, Ke Li, Xudong Bao
Using the clustering results, a SIFT keypoint histogram is calculated for each wood image.