Search Results for author: Qiang Yu

Found 10 papers, 3 papers with code

Weak Distribution Detectors Lead to Stronger Generalizability of Vision-Language Prompt Tuning

1 code implementation • 31 Mar 2024 • Kun Ding, Haojian Zhang, Qiang Yu, Ying Wang, Shiming Xiang, Chunhong Pan

The idea is realized by exploiting out-of-distribution (OOD) detection to predict whether a sample belongs to the base distribution or a novel one, and then using the score generated by a dedicated competition-based scoring function to fuse the zero-shot and few-shot classifiers.

Out of Distribution (OOD) Detection
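To make the fusion idea above concrete, here is a minimal sketch assuming the OOD detector emits a normalized score in [0, 1]; the function name, the linear blending rule, and all values are illustrative assumptions, not the paper's actual competition-based scoring function.

```python
import numpy as np

def fuse_logits(zero_shot_logits, few_shot_logits, ood_score):
    """Blend zero-shot and few-shot predictions with an OOD-derived weight.

    A high ood_score (in [0, 1]) means the sample looks novel, so the
    frozen zero-shot classifier gets more weight; a low score means it
    looks like base-distribution data, favoring the few-shot classifier.
    """
    return ood_score * zero_shot_logits + (1.0 - ood_score) * few_shot_logits

# Hypothetical per-class logits for a single sample:
zs = np.array([2.1, 0.3, -0.5])   # zero-shot (frozen CLIP) logits
fs = np.array([1.2, 1.9, -0.1])   # few-shot (prompt-tuned) logits
pred = np.argmax(fuse_logits(zs, fs, ood_score=0.8))  # detector says "likely novel"
```

The intuition is that the frozen zero-shot classifier tends to generalize better to novel classes, while the prompt-tuned few-shot classifier is stronger on the base distribution it was tuned on.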

Compositional Kronecker Context Optimization for Vision-Language Models

no code implementations • 18 Mar 2024 • Kun Ding, Xiaohui Li, Qiang Yu, Ying Wang, Haojian Zhang, Shiming Xiang

Context Optimization (CoOp) has emerged as a simple yet effective technique for adapting CLIP-like vision-language models to downstream image recognition tasks.
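For readers unfamiliar with CoOp, the sketch below shows its core mechanism as commonly described: a small set of learnable context vectors replaces hand-written prompt words and is prepended to frozen class-name token embeddings, with only those vectors trained. Shapes, initialization, and names here are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class CoOpPrompt(nn.Module):
    """Sketch of CoOp-style context optimization (details simplified).

    Learnable context vectors are shared across classes and prepended
    to each frozen class-name embedding. Only `self.ctx` receives
    gradients; the CLIP encoders stay frozen.
    """
    def __init__(self, n_ctx, dim, class_name_embeds):
        super().__init__()
        self.ctx = nn.Parameter(torch.randn(n_ctx, dim) * 0.02)
        # (n_classes, n_name_tokens, dim) from the frozen text encoder's
        # token embedding layer; treated as fixed here.
        self.register_buffer("names", class_name_embeds)

    def forward(self):
        n_cls = self.names.shape[0]
        ctx = self.ctx.unsqueeze(0).expand(n_cls, -1, -1)
        # Per-class prompt: [learned context tokens, class-name tokens]
        return torch.cat([ctx, self.names], dim=1)

# Hypothetical shapes: 16 context tokens, 512-dim embeddings, 10 classes
prompts = CoOpPrompt(16, 512, torch.randn(10, 3, 512))()
print(prompts.shape)  # torch.Size([10, 19, 512])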

Prompt Tuning with Soft Context Sharing for Vision-Language Models

1 code implementation • 29 Aug 2022 • Kun Ding, Ying Wang, Pengzhang Liu, Qiang Yu, Haojian Zhang, Shiming Xiang, Chunhong Pan

Inspired by the fact that modeling task relationships via multi-task learning usually boosts performance, we propose SoftCPT (Soft Context Sharing for Prompt Tuning), a novel method for jointly tuning pre-trained vision-language models on multiple target few-shot tasks.

Few-Shot Learning • Multi-Task Learning
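A minimal sketch of what soft context sharing across tasks could look like, assuming a single shared network maps a per-task feature to that task's prompt context; the architecture and dimensions are guesses for illustration, not SoftCPT's actual design.

```python
import torch
import torch.nn as nn

class SharedPromptGenerator(nn.Module):
    """Sketch of soft context sharing across tasks: one shared network
    produces each task's prompt context, so tasks share parameters
    softly instead of learning fully independent prompts."""
    def __init__(self, task_dim, n_ctx, dim):
        super().__init__()
        self.n_ctx, self.dim = n_ctx, dim
        self.net = nn.Sequential(
            nn.Linear(task_dim, 256), nn.ReLU(),
            nn.Linear(256, n_ctx * dim),
        )

    def forward(self, task_feat):          # (n_tasks, task_dim)
        out = self.net(task_feat)          # (n_tasks, n_ctx * dim)
        return out.view(-1, self.n_ctx, self.dim)

# Hypothetical: 5 few-shot tasks, each described by a 512-d feature
gen = SharedPromptGenerator(task_dim=512, n_ctx=16, dim=512)
contexts = gen(torch.randn(5, 512))       # (5, 16, 512), one context per task
```

Because the generator is shared, gradients from every task shape the same parameters, which is one way joint multi-task prompt tuning can transfer knowledge across tasks.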

Deep Spike Learning with Local Classifiers

1 code implementation • IEEE Transactions on Cybernetics 2022 • Chenxiang Ma, Rui Yan, Zhaofei Yu, Qiang Yu

We then propose two variants that additionally incorporate temporal dependencies through a backward and forward process, respectively.
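The "local classifiers" in the title refer to the general technique of training each layer with its own auxiliary readout rather than end-to-end backpropagation. The sketch below illustrates that generic idea on ordinary dense layers; the spiking dynamics and the paper's two temporal variants are not modeled, and all layer sizes are illustrative.

```python
import torch
import torch.nn as nn

# Each block gets its own auxiliary classifier and local loss; `detach()`
# stops gradients from crossing block boundaries, so no global backprop.
blocks = nn.ModuleList([nn.Sequential(nn.Linear(100, 64), nn.ReLU()),
                        nn.Sequential(nn.Linear(64, 64), nn.ReLU())])
heads = nn.ModuleList([nn.Linear(64, 10), nn.Linear(64, 10)])
opt = torch.optim.SGD(list(blocks.parameters()) + list(heads.parameters()), lr=0.1)

x, y = torch.randn(8, 100), torch.randint(0, 10, (8,))
h, loss_fn = x, nn.CrossEntropyLoss()
for block, head in zip(blocks, heads):
    h = block(h.detach())            # no gradient flows into earlier blocks
    loss = loss_fn(head(h), y)       # each block trained by its own local loss
    loss.backward()
opt.step()
```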

Consensus Graph Representation Learning for Better Grounded Image Captioning

no code implementations • 2 Dec 2021 • Wenqiao Zhang, Haochen Shi, Siliang Tang, Jun Xiao, Qiang Yu, Yueting Zhuang

Contemporary visual captioning models frequently hallucinate objects that are not actually in a scene, due to visual misclassification or over-reliance on priors, resulting in semantic inconsistency between the visual information and the target lexical words.

Graph Representation Learning • Hallucination • +1

Synaptic Learning with Augmented Spikes

no code implementations • 11 May 2020 • Qiang Yu, Shiming Song, Chenxiang Ma, Linqiang Pan, Kay Chen Tan

Traditional neuron models use analog values for information representation and computation, while spiking neuron models employ all-or-nothing spikes.
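To illustrate the contrast between analog and spiking computation, here is a minimal leaky integrate-and-fire (LIF) neuron; the LIF model is standard in this literature, though the constants below are arbitrary illustrative choices and not the paper's augmented-spike scheme.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: membrane potential integrates
# input with leak, and a threshold crossing emits a binary (all-or-nothing)
# spike followed by a reset.
def lif(input_current, tau=10.0, threshold=1.0, dt=1.0):
    v, spikes = 0.0, []
    for i in input_current:
        v += dt * (-v / tau + i)     # leaky integration of input current
        if v >= threshold:           # threshold crossing fires a spike
            spikes.append(1)
            v = 0.0                  # reset membrane potential after firing
        else:
            spikes.append(0)
    return spikes

print(lif(np.full(20, 0.3)))  # constant analog input -> sparse binary spike train
```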

Towards Efficient Processing and Learning with Spikes: New Approaches for Multi-Spike Learning

no code implementations • 2 May 2020 • Qiang Yu, Shenglan Li, Huajin Tang, Longbiao Wang, Jianwu Dang, Kay Chen Tan

They are also believed to play an essential role in the low power consumption of biological systems, whose efficiency is attracting increasing attention in the field of neuromorphic computing.

Robust Environmental Sound Recognition with Sparse Key-point Encoding and Efficient Multi-spike Learning

no code implementations • 4 Feb 2019 • Qiang Yu, Yanli Yao, Longbiao Wang, Huajin Tang, Jianwu Dang, Kay Chen Tan

Our framework is a unifying system that consistently integrates three major functional parts: sparse encoding, efficient learning, and robust readout.

Decision Making
