Search Results for author: Jinmiao Fu

Found 6 papers, 1 paper with code

Q-Tuning: Queue-based Prompt Tuning for Lifelong Few-shot Language Learning

no code implementations • 22 Apr 2024 • Yanhui Guo, Shaoyuan Xu, Jinmiao Fu, Jia Liu, Chaosheng Dong, Bryan Wang

This paper introduces Q-tuning, a novel approach for continual prompt tuning that enables the lifelong learning of a pre-trained language model.

Language Modelling
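The abstract does not spell out the queue mechanism, so the following is only a minimal sketch of queue-based prompt tuning: a bounded list of soft prompts, one per task, with a hypothetical merge-on-overflow rule. The class name, dimensions, and eviction strategy are placeholders, not the paper's actual method.

```python
import torch
import torch.nn as nn

class PromptQueue(nn.Module):
    """Bounded queue of soft prompts, one per seen task (illustrative only)."""

    def __init__(self, prompt_len=10, hidden_dim=768, capacity=5):
        super().__init__()
        self.capacity = capacity
        self.prompt_len = prompt_len
        self.hidden_dim = hidden_dim
        self.prompts = nn.ParameterList()  # grows as new tasks arrive

    def add_task(self):
        """Append a freshly initialized prompt when a new task starts."""
        new_prompt = nn.Parameter(torch.randn(self.prompt_len, self.hidden_dim) * 0.02)
        if len(self.prompts) >= self.capacity:
            # Hypothetical eviction rule: average the two oldest prompts so the
            # queue length stays bounded over a lifelong stream of tasks.
            merged = nn.Parameter((self.prompts[0] + self.prompts[1]).detach() / 2)
            self.prompts = nn.ParameterList([merged, *self.prompts[2:]])
        self.prompts.append(new_prompt)

    def forward(self, input_embeds):
        # Prepend every queued prompt to the frozen LM's input embeddings.
        queue = torch.cat(list(self.prompts), dim=0)                      # (Q*L, H)
        queue = queue.unsqueeze(0).expand(input_embeds.size(0), -1, -1)   # (B, Q*L, H)
        return torch.cat([queue, input_embeds], dim=1)
```

Only the prompt parameters would be trained under such a scheme; the backbone language model stays frozen.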

AdaSelection: Accelerating Deep Learning Training through Data Subsampling

no code implementations • 19 Jun 2023 • Minghe Zhang, Chaosheng Dong, Jinmiao Fu, Tianchen Zhou, Jia Liang, Jia Liu, Bo Liu, Michinari Momma, Bryan Wang, Yan Gao, Yi Sun

In this paper, we introduce AdaSelection, an adaptive sub-sampling method to identify the most informative sub-samples within each minibatch to speed up the training of large-scale deep learning models without sacrificing model performance.
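As a rough illustration of minibatch sub-sampling, the sketch below scores every sample with a cheap no-grad forward pass and backpropagates only through the highest-loss subset. Per-sample loss is a stand-in criterion assumed here; the paper's adaptive selection strategy is not reproduced.

```python
import torch
import torch.nn.functional as F

def train_step_with_subsampling(model, optimizer, batch_x, batch_y, keep_ratio=0.5):
    """One step that trains on only the most informative part of the minibatch."""
    model.train()

    # Score each sample without building a large autograd graph.
    with torch.no_grad():
        per_sample_loss = F.cross_entropy(model(batch_x), batch_y, reduction="none")

    # Keep the top-k highest-loss samples from this minibatch.
    k = max(1, int(keep_ratio * batch_x.size(0)))
    topk_idx = torch.topk(per_sample_loss, k).indices

    # Full forward/backward only on the selected subset.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(batch_x[topk_idx]), batch_y[topk_idx])
    loss.backward()
    optimizer.step()
    return loss.item()
```

The payoff is that the expensive backward pass runs on a fraction of the batch, which is where most of the claimed speed-up would come from.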

Text Is All You Need: Learning Language Representations for Sequential Recommendation

1 code implementation • 23 May 2023 • Jiacheng Li, Ming Wang, Jin Li, Jinmiao Fu, Xin Shen, Jingbo Shang, Julian McAuley

In this paper, we propose to model user preferences and item features as language representations that can be generalized to new items and datasets.

Representation Learning • Sentence • +1
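A minimal sketch of the item-as-text idea: a user's interaction history is flattened into text, encoded with a generic text encoder, and candidate items are scored by similarity. The bert-base-uncased encoder, mean pooling, item strings, and cosine scoring are all assumptions for illustration, not the paper's model.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    """Mean-pooled embeddings from a generic text encoder."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**enc).last_hidden_state            # (B, T, H)
    mask = enc["attention_mask"].unsqueeze(-1).float()
    return (hidden * mask).sum(1) / mask.sum(1)              # (B, H)

# Toy data: the history and candidates are described purely by their text.
history = "Title: running shoes. Brand: Acme. Title: sports socks. Brand: Acme."
candidates = ["Title: trail running jacket. Brand: Acme.",
              "Title: ceramic coffee mug. Brand: Other."]

user_vec = embed([history])      # (1, H)
item_vecs = embed(candidates)    # (2, H)
scores = torch.nn.functional.cosine_similarity(user_vec.expand_as(item_vecs), item_vecs)
print(scores)  # higher score = better next-item candidate under this sketch
```

Because items are represented only by their text, new items and new datasets can be scored without retraining item ID embeddings, which is the generalization the abstract points to.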

CMA-CLIP: Cross-Modality Attention CLIP for Image-Text Classification

no code implementations • 7 Dec 2021 • Huidong Liu, Shaoyuan Xu, Jinmiao Fu, Yang Liu, Ning Xie, Chien-Chih Wang, Bryan Wang, Yi Sun

In this paper, we propose the Cross-Modality Attention Contrastive Language-Image Pre-training (CMA-CLIP), a new framework that unifies two types of cross-modality attention, sequence-wise attention and modality-wise attention, to effectively fuse information from image and text pairs.

Attribute • Image-text Classification • +3
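The two attention types can be sketched roughly as below, assuming a standard Transformer encoder layer for sequence-wise attention over the concatenated image and text tokens, and a learned softmax gate for modality-wise attention over the pooled modality vectors. Dimensions, pooling, and the gating form are placeholders rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class CrossModalityFusion(nn.Module):
    """Illustrative two-stage fusion of image and text token sequences."""

    def __init__(self, dim=512, heads=8):
        super().__init__()
        # Sequence-wise attention: image and text tokens attend to each other.
        self.seq_attn = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        # Modality-wise attention: scores each pooled modality vector.
        self.modality_gate = nn.Linear(dim, 1)

    def forward(self, img_tokens, txt_tokens):
        fused = self.seq_attn(torch.cat([img_tokens, txt_tokens], dim=1))
        img_pooled = fused[:, : img_tokens.size(1)].mean(dim=1)      # (B, D)
        txt_pooled = fused[:, img_tokens.size(1):].mean(dim=1)       # (B, D)

        # Per-example weights over the two modalities, so a noisy or
        # irrelevant modality can be down-weighted before classification.
        stacked = torch.stack([img_pooled, txt_pooled], dim=1)       # (B, 2, D)
        weights = torch.softmax(self.modality_gate(stacked), dim=1)  # (B, 2, 1)
        return (weights * stacked).sum(dim=1)                        # (B, D)
```

The fused vector would then feed task-specific classification heads, e.g. for the image-text and attribute classification tasks tagged above.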
