Search Results for author: Xiaojie Jin

Found 42 papers, 19 papers with code

Video Recognition in Portrait Mode

1 code implementation • 21 Dec 2023 • Mingfei Han, Linjie Yang, Xiaojie Jin, Jiashi Feng, Xiaojun Chang, Heng Wang

While existing datasets mainly comprise landscape mode videos, our paper seeks to introduce portrait mode videos to the research community and highlight the unique challenges associated with this video format.

Data Augmentation • Video Recognition

Vista-LLaMA: Reliable Video Narrator via Equal Distance to Visual Tokens

no code implementations • 12 Dec 2023 • Fan Ma, Xiaojie Jin, Heng Wang, Yuchen Xian, Jiashi Feng, Yi Yang

This amplifies the effect of visual tokens on text generation, especially when the relative distance between visual and text tokens is large.

Hallucination • Position +2
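The "equal distance to visual tokens" idea can be pictured with a toy attention bias in which every text query sees all visual keys at one shared offset, while text-to-text attention keeps ordinary relative positions, so visual influence does not fade as generated text moves farther from the frame tokens. A minimal sketch under that reading (the bias-table framing and all names here are illustrative assumptions, not the paper's implementation):

```python
import torch

def equal_distance_bias(n_vis, n_txt, max_rel=32):
    """Toy attention bias: text-text pairs use a clipped relative distance,
    while every text-query/visual-key pair shares ONE bucket, keeping all
    visual tokens at an 'equal distance' from every text position."""
    n = n_vis + n_txt
    table = torch.randn(2 * max_rel + 1)   # would be learnable in a real model
    vis_bucket = torch.randn(())           # single shared text->visual bias
    bias = torch.zeros(n, n)
    for q in range(n_vis, n):              # text queries only
        for k in range(n):
            if k < n_vis:
                bias[q, k] = vis_bucket    # same value regardless of q
            else:
                rel = max(-max_rel, min(max_rel, q - k))
                bias[q, k] = table[rel + max_rel]
    return bias

scores = torch.randn(8, 8) + equal_distance_bias(n_vis=3, n_txt=5)
attn = scores.softmax(dim=-1)   # visual weight no longer decays with distance
```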

PixelLM: Pixel Reasoning with Large Multimodal Model

no code implementations • 4 Dec 2023 • Zhongwei Ren, Zhicheng Huang, Yunchao Wei, Yao Zhao, Dongmei Fu, Jiashi Feng, Xiaojie Jin

PixelLM excels across various pixel-level image reasoning and understanding tasks, outperforming well-established methods in multiple benchmarks, including MUSE, single- and multi-referring segmentation.

Segmentation

Selective Feature Adapter for Dense Vision Transformers

no code implementations • 3 Oct 2023 • Xueqing Deng, Qi Fan, Xiaojie Jin, Linjie Yang, Peng Wang

Specifically, SFA consists of external adapters and internal adapters, which are applied sequentially over a transformer model.

Depth Estimation
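The excerpt leaves the adapter design unspecified, but the general recipe of inserting small trainable bottleneck adapters into a frozen transformer looks roughly like the sketch below (a generic pattern, not SFA's exact external/internal split):

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic adapter: down-project, nonlinearity, up-project, residual.
    Only these few parameters train; the host transformer stays frozen."""
    def __init__(self, dim, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck, dim)
        nn.init.zeros_(self.up.weight)     # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))

tokens = torch.randn(2, 197, 768)          # output of a frozen ViT block
adapted = BottleneckAdapter(768)(tokens)
```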

Realistic Full-Body Tracking from Sparse Observations via Joint-Level Modeling

1 code implementation • ICCV 2023 • Xiaozheng Zheng, Zhuo Su, Chao Wen, Zhou Xue, Xiaojie Jin

To bridge the physical and virtual worlds for rapidly developed VR/AR applications, the ability to realistically drive 3D full-body avatars is of great significance.

COSA: Concatenated Sample Pretrained Vision-Language Foundation Model

1 code implementation • 15 Jun 2023 • Sihan Chen, Xingjian He, Handong Li, Xiaojie Jin, Jiashi Feng, Jing Liu

Due to the limited scale and quality of video-text training corpus, most vision-language foundation models employ image-text datasets for pretraining and primarily focus on modeling visually semantic representations while disregarding temporal semantic representations and correlations.

Ranked #1 on TGIF-Frame on TGIF-QA (using extra training data)

Question Answering • Retrieval +6

Delving Deeper into Data Scaling in Masked Image Modeling

no code implementations • 24 May 2023 • Cheng-Ze Lu, Xiaojie Jin, Qibin Hou, Jun Hao Liew, Ming-Ming Cheng, Jiashi Feng

The study reveals that: 1) MIM can be viewed as an effective method to improve the model capacity when the scale of the training data is relatively small; 2) Strong reconstruction targets can endow the models with increased capacities on downstream tasks; 3) MIM pre-training is data-agnostic under most scenarios, which means that the strategy of sampling pre-training data is non-critical.

Self-Supervised Learning

VLAB: Enhancing Video Language Pre-training by Feature Adapting and Blending

no code implementations • 22 May 2023 • Xingjian He, Sihan Chen, Fan Ma, Zhicheng Huang, Xiaojie Jin, Zikang Liu, Dongmei Fu, Yi Yang, Jing Liu, Jiashi Feng

Towards this goal, we propose a novel video-text pre-training method dubbed VLAB: Video Language pre-training by feature Adapting and Blending, which transfers CLIP representations to video pre-training tasks and develops unified video multimodal models for a wide range of video-text tasks.

Ranked #1 on Visual Question Answering (VQA) on MSVD-QA (using extra training data)

Question Answering • Retrieval +6

MV-Adapter: Multimodal Video Transfer Learning for Video Text Retrieval

1 code implementation • 19 Jan 2023 • Xiaojie Jin, BoWen Zhang, Weibo Gong, Kai Xu, Xueqing Deng, Peng Wang, Zhao Zhang, Xiaohui Shen, Jiashi Feng

The first is a Temporal Adaptation Module that is incorporated in the video branch to introduce global and local temporal contexts.

Retrieval • Text Retrieval +2
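One plausible reading of "global and local temporal contexts" is a small module that mixes each frame's feature with a clip-level average (global) and a short depthwise temporal convolution over neighboring frames (local). The sketch below is an assumption-laden illustration, not the published module:

```python
import torch
import torch.nn as nn

class TemporalAdapter(nn.Module):
    """Illustrative temporal module over per-frame features:
    global context = mean-pooled clip feature; local context =
    depthwise convolution across neighboring frames."""
    def __init__(self, dim, kernel=3):
        super().__init__()
        self.local = nn.Conv1d(dim, dim, kernel, padding=kernel // 2, groups=dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                  # x: (batch, frames, dim)
        g = x.mean(dim=1, keepdim=True)    # global clip context
        l = self.local(x.transpose(1, 2)).transpose(1, 2)  # local context
        return x + self.proj(g + l)

frames = torch.randn(4, 16, 512)           # 16 frames of 512-d features
out = TemporalAdapter(512)(frames)
```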

Temporal Perceiving Video-Language Pre-training

no code implementations • 18 Jan 2023 • Fan Ma, Xiaojie Jin, Heng Wang, Jingjia Huang, Linchao Zhu, Jiashi Feng, Yi Yang

Specifically, text-video localization consists of moment retrieval, which predicts start and end boundaries in videos given the text description, and text localization which matches the subset of texts with the video features.

Contrastive Learning • Moment Retrieval +7
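Moment retrieval as described here, predicting start and end boundaries from a text query, is commonly realized as per-frame start/end logits conditioned on the sentence embedding; a minimal generic sketch (the fusion scheme and names are assumptions):

```python
import torch
import torch.nn as nn

class BoundaryHead(nn.Module):
    """Toy moment-retrieval head: fuse a sentence embedding with frame
    features, then score every frame as a candidate start or end."""
    def __init__(self, dim):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)
        self.start = nn.Linear(dim, 1)
        self.end = nn.Linear(dim, 1)

    def forward(self, frames, query):               # (B, T, D), (B, D)
        q = query.unsqueeze(1).expand_as(frames)
        h = torch.relu(self.fuse(torch.cat([frames, q], dim=-1)))
        return self.start(h).squeeze(-1), self.end(h).squeeze(-1)

frames, query = torch.randn(2, 32, 256), torch.randn(2, 256)
start_logits, end_logits = BoundaryHead(256)(frames, query)
moment = (start_logits.argmax(-1), end_logits.argmax(-1))  # predicted span
```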

CMAE-V: Contrastive Masked Autoencoders for Video Action Recognition

no code implementations • 15 Jan 2023 • Cheng-Ze Lu, Xiaojie Jin, Zhicheng Huang, Qibin Hou, Ming-Ming Cheng, Jiashi Feng

Contrastive Masked Autoencoder (CMAE), as a new self-supervised framework, has shown its potential for learning expressive feature representations in visual image recognition.

Action Recognition • Temporal Action Localization

Revisiting Training-free NAS Metrics: An Efficient Training-based Method

1 code implementation • 16 Nov 2022 • Taojiannan Yang, Linjie Yang, Xiaojie Jin, Chen Chen

In this paper, we revisit these training-free metrics and find that: (1) the number of parameters (#Param), which is the most straightforward training-free metric, is overlooked in previous works but is surprisingly effective, (2) recent training-free metrics largely rely on the #Param information to rank networks.

Neural Architecture Search
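The #Param baseline the paper highlights is trivial to reproduce: rank candidates by trainable-parameter count, with no training or even a forward pass. A minimal sketch:

```python
import torch.nn as nn

def n_params(model: nn.Module) -> int:
    """The #Param training-free metric: count trainable parameters."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

candidates = {
    "narrow": nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 10)),
    "wide":   nn.Sequential(nn.Linear(32, 256), nn.ReLU(), nn.Linear(256, 10)),
}
ranking = sorted(candidates, key=lambda k: n_params(candidates[k]), reverse=True)
print(ranking)   # candidate networks ranked by the #Param proxy alone
```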

NoiSER: Noise is All You Need for Low-Light Image Enhancement

no code implementations • 9 Nov 2022 • Zhao Zhang, Suiyi Zhao, Xiaojie Jin, Mingliang Xu, Yi Yang, Shuicheng Yan

In this paper, we present an embarrassingly simple yet effective solution to a seemingly impossible mission, low-light image enhancement (LLIE) without access to any task-related data.

Low-Light Image Enhancement • regression

OSIC: A New One-Stage Image Captioner Coined

no code implementations • 4 Nov 2022 • Bo Wang, Zhao Zhang, Mingbo Zhao, Xiaojie Jin, Mingliang Xu, Meng Wang

To obtain rich features, we use the Swin Transformer to calculate multi-level features, and then feed them into a novel dynamic multi-sight embedding module to exploit both global structure and local texture of input images.

Descriptive • Language Modelling +2

Exploring Domain Incremental Video Highlights Detection with the LiveFood Benchmark

1 code implementation • 12 Sep 2022 • Sen Pei, Shixiong Xu, Xiaojie Jin

However, most VHD methods are based on the closed world assumption, i.e., a fixed number of highlight categories is defined in advance and all training data are available beforehand.

Incremental Learning

Contrastive Masked Autoencoders are Stronger Vision Learners

1 code implementation • 27 Jul 2022 • Zhicheng Huang, Xiaojie Jin, Chengze Lu, Qibin Hou, Ming-Ming Cheng, Dongmei Fu, Xiaohui Shen, Jiashi Feng

The momentum encoder, fed with the full images, enhances the feature discriminability via contrastive learning with its online counterpart.

Contrastive Learning • Image Classification +3
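The momentum encoder referenced above follows the standard exponential-moving-average recipe: its weights track the online encoder and never receive gradients. A generic sketch of that update (the momentum value and toy encoder are illustrative):

```python
import copy
import torch
import torch.nn as nn

online = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 64))
momentum_enc = copy.deepcopy(online)
for p in momentum_enc.parameters():
    p.requires_grad_(False)            # updated by EMA, never by gradients

@torch.no_grad()
def momentum_update(m=0.996):
    for po, pm in zip(online.parameters(), momentum_enc.parameters()):
        pm.mul_(m).add_(po, alpha=1 - m)

full_view = torch.randn(8, 128)        # momentum branch sees the full image
targets = momentum_enc(full_view)      # features for the contrastive loss
momentum_update()                      # called once per training step
```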

AutoTransition: Learning to Recommend Video Transition Effects

1 code implementation • 27 Jul 2022 • Yaojie Shen, Libo Zhang, Kai Xu, Xiaojie Jin

First, we learn the embedding of video transitions through a video transition classification task.

Retrieval • Video Editing
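Learning an embedding "through a classification task" usually means training a classifier over the transition categories and keeping the penultimate features as the embedding; a generic sketch (the architecture and sizes are assumptions):

```python
import torch
import torch.nn as nn

class TransitionEmbedder(nn.Module):
    """Train with a classification head over transition categories;
    downstream, the penultimate features serve as the embedding."""
    def __init__(self, feat_dim=512, emb_dim=128, n_transitions=30):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(feat_dim, emb_dim), nn.ReLU(),
                                     nn.Linear(emb_dim, emb_dim))
        self.head = nn.Linear(emb_dim, n_transitions)

    def forward(self, clip_feat, return_embedding=False):
        z = self.encoder(clip_feat)
        return z if return_embedding else self.head(z)

model = TransitionEmbedder()
logits = model(torch.randn(4, 512))                       # train: cross-entropy
emb = model(torch.randn(4, 512), return_embedding=True)   # use: embedding
```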

Conflict-Averse Gradient Descent for Multi-task Learning

3 code implementations • NeurIPS 2021 • Bo Liu, Xingchao Liu, Xiaojie Jin, Peter Stone, Qiang Liu

The goal of multi-task learning is to enable more efficient learning than single task learning by sharing model structures for a diverse set of tasks.

Multi-Task Learning
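For reference, with per-task gradients g_i and their average g_0, the conflict-averse update maximizes the worst-case local improvement while staying near the average gradient. Paraphrasing the paper's formulation from memory (c is a preset radius), so treat this as a paraphrase rather than a quotation:

```latex
\max_{d \in \mathbb{R}^n} \ \min_{i} \ \langle g_i, d \rangle
\quad \text{s.t.} \quad \lVert d - g_0 \rVert \le c \, \lVert g_0 \rVert,
\qquad g_0 = \frac{1}{K} \sum_{i=1}^{K} g_i .
```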

HR-NAS: Searching Efficient High-Resolution Neural Architectures with Lightweight Transformers

1 code implementation • CVPR 2021 • Mingyu Ding, Xiaochen Lian, Linjie Yang, Peng Wang, Xiaojie Jin, Zhiwu Lu, Ping Luo

Last, we propose an efficient fine-grained search strategy to train HR-NAS, which effectively explores the search space and finds optimal architectures given various tasks and computation resources.

Image Classification • Neural Architecture Search +3

Refiner: Refining Self-attention for Vision Transformers

1 code implementation • 7 Jun 2021 • Daquan Zhou, Yujun Shi, Bingyi Kang, Weihao Yu, Zihang Jiang, Yuan Li, Xiaojie Jin, Qibin Hou, Jiashi Feng

Vision Transformers (ViTs) have shown competitive accuracy in image classification tasks compared with CNNs.

Image Classification

One Backward from Ten Forward, Subsampling for Large-Scale Deep Learning

no code implementations • 27 Apr 2021 • Chaosheng Dong, Xiaojie Jin, Weihao Gao, Yijia Wang, Hongyi Zhang, Xiang Wu, Jianchao Yang, Xiaobing Liu

Deep learning models in large-scale machine learning systems are often continuously trained with enormous data from production environments.

DeepViT: Towards Deeper Vision Transformer

5 code implementations • 22 Mar 2021 • Daquan Zhou, Bingyi Kang, Xiaojie Jin, Linjie Yang, Xiaochen Lian, Zihang Jiang, Qibin Hou, Jiashi Feng

In this paper, we show that, unlike convolutional neural networks (CNNs) that can be improved by stacking more convolutional layers, the performance of ViTs saturates quickly when they are scaled to be deeper.

Image Classification • Representation Learning

AutoSpace: Neural Architecture Search with Less Human Interference

1 code implementation • ICCV 2021 • Daquan Zhou, Xiaojie Jin, Xiaochen Lian, Linjie Yang, Yujing Xue, Qibin Hou, Jiashi Feng

Current neural architecture search (NAS) algorithms still require expert knowledge and effort to design a search space for network construction.

Neural Architecture Search

Neural Architecture Search for Lightweight Non-Local Networks

2 code implementations • CVPR 2020 • Yingwei Li, Xiaojie Jin, Jieru Mei, Xiaochen Lian, Linjie Yang, Cihang Xie, Qihang Yu, Yuyin Zhou, Song Bai, Alan Yuille

However, it has been rarely explored to embed the NL blocks in mobile neural networks, mainly due to the following challenges: 1) NL blocks generally have a heavy computation cost, which makes them difficult to apply in applications where computational resources are limited, and 2) it is an open problem to discover an optimal configuration for embedding NL blocks into mobile neural networks.

Image Classification • Neural Architecture Search
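The "heavy computation cost" of a standard non-local block is visible directly in code: the affinity matrix is HW x HW, i.e., quadratic in spatial size. The sketch below is the classic embedded-Gaussian non-local block (Wang et al.), shown for cost intuition, not the searched lightweight variant:

```python
import torch
import torch.nn as nn

class NonLocalBlock(nn.Module):
    """Classic non-local block; the (HW x HW) affinity matrix is what
    makes it expensive at mobile-scale resolutions."""
    def __init__(self, ch, inner=None):
        super().__init__()
        inner = inner or ch // 2
        self.theta = nn.Conv2d(ch, inner, 1)
        self.phi = nn.Conv2d(ch, inner, 1)
        self.g = nn.Conv2d(ch, inner, 1)
        self.out = nn.Conv2d(inner, ch, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (B, HW, C')
        k = self.phi(x).flatten(2)                     # (B, C', HW)
        v = self.g(x).flatten(2).transpose(1, 2)       # (B, HW, C')
        attn = torch.softmax(q @ k, dim=-1)            # (B, HW, HW): quadratic
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.out(y)

y = NonLocalBlock(64)(torch.randn(1, 64, 28, 28))      # 784 x 784 affinity
```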

RC-DARTS: Resource Constrained Differentiable Architecture Search

no code implementations • 30 Dec 2019 • Xiaojie Jin, Jiang Wang, Joshua Slocum, Ming-Hsuan Yang, Shengyang Dai, Shuicheng Yan, Jiashi Feng

In this paper, we propose the resource constrained differentiable architecture search (RC-DARTS) method to learn architectures that are significantly smaller and faster while achieving comparable accuracy.

Image Classification • One-Shot Learning

AtomNAS: Fine-Grained End-to-End Neural Architecture Search

1 code implementation • ICLR 2020 • Jieru Mei, Yingwei Li, Xiaochen Lian, Xiaojie Jin, Linjie Yang, Alan Yuille, Jianchao Yang

We propose a fine-grained search space comprised of atomic blocks, a minimal search unit that is much smaller than the ones used in recent NAS algorithms.

Neural Architecture Search

Neural Epitome Search for Architecture-Agnostic Network Compression

no code implementations • ICLR 2020 • Daquan Zhou, Xiaojie Jin, Qibin Hou, Kaixin Wang, Jianchao Yang, Jiashi Feng

The recent WSNet [1] is a new model compression method that samples filter weights from a compact set, and it has been demonstrated to be effective for 1D convolutional neural networks (CNNs).

Model Compression • Neural Architecture Search
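A hedged sketch of the weight-sampling idea: every 1D filter is an overlapping slice of one compact parameter vector, so the layer stores far fewer weights than it exposes. The sizes and stride below are illustrative, not WSNet's actual scheme:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SampledConv1d(nn.Module):
    """Toy weight-sampling conv: all filters are overlapping windows of a
    single compact weight vector, shrinking parameters roughly by the
    overlap factor relative to a standard conv layer."""
    def __init__(self, in_ch, out_ch, k=9, stride=2):
        super().__init__()
        need = in_ch * k + stride * (out_ch - 1)    # length of the compact set
        self.store = nn.Parameter(torch.randn(need) * 0.02)
        self.in_ch, self.out_ch, self.k, self.stride = in_ch, out_ch, k, stride

    def forward(self, x):
        # Each output filter is a strided window into the shared store.
        w = torch.stack([
            self.store[i * self.stride : i * self.stride + self.in_ch * self.k]
            for i in range(self.out_ch)
        ]).view(self.out_ch, self.in_ch, self.k)
        return F.conv1d(x, w, padding=self.k // 2)

y = SampledConv1d(16, 32)(torch.randn(2, 16, 100))  # 206 stored vs 4608 exposed
```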

Predicting Scene Parsing and Motion Dynamics in the Future

no code implementations • NeurIPS 2017 • Xiaojie Jin, Huaxin Xiao, Xiaohui Shen, Jimei Yang, Zhe Lin, Yunpeng Chen, Zequn Jie, Jiashi Feng, Shuicheng Yan

The ability to predict the future is important for intelligent systems, e.g., autonomous vehicles and robots, so they can plan early and make decisions accordingly.

Autonomous Vehicles • motion prediction +2

Dual Path Networks

19 code implementations • NeurIPS 2017 • Yunpeng Chen, Jianan Li, Huaxin Xiao, Xiaojie Jin, Shuicheng Yan, Jiashi Feng

In this work, we present a simple, highly efficient and modularized Dual Path Network (DPN) for image classification which presents a new topology of connection paths internally.

Image Classification
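The "new topology" referenced above combines ResNet-style residual addition with DenseNet-style concatenation in a single block. The simplified sketch below shows that dual-path wiring (the single conv stands in for the real bottleneck; all sizes are illustrative):

```python
import torch
import torch.nn as nn

class DualPathBlock(nn.Module):
    """Toy dual-path block: part of the output is added to a residual
    path (ResNet-style); the rest is concatenated onto a dense path
    (DenseNet-style), echoing DPN's combined topology."""
    def __init__(self, res_ch, dense_ch, growth):
        super().__init__()
        self.body = nn.Conv2d(res_ch + dense_ch, res_ch + growth, 3, padding=1)
        self.res_ch = res_ch

    def forward(self, res, dense):
        out = self.body(torch.cat([res, dense], dim=1))
        res = res + out[:, : self.res_ch]                         # residual path
        dense = torch.cat([dense, out[:, self.res_ch :]], dim=1)  # dense path
        return res, dense

res, dense = torch.randn(1, 64, 14, 14), torch.randn(1, 32, 14, 14)
res, dense = DualPathBlock(64, 32, growth=16)(res, dense)  # dense grows by 16
```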

Deep Self-Taught Learning for Weakly Supervised Object Localization

no code implementations • CVPR 2017 • Zequn Jie, Yunchao Wei, Xiaojie Jin, Jiashi Feng, Wei Liu

To overcome this issue, we propose a deep self-taught learning approach, which makes the detector learn the object-level features reliable for acquiring tight positive samples and afterwards re-train itself based on them.

Object • Weakly Supervised Object Detection +1

Training Group Orthogonal Neural Networks with Privileged Information

no code implementations • 24 Jan 2017 • Yunpeng Chen, Xiaojie Jin, Jiashi Feng, Shuicheng Yan

Learning rich and diverse representations is critical for the performance of deep convolutional neural networks (CNNs).

Image Classification • Image Segmentation +1

Video Scene Parsing with Predictive Feature Learning

no code implementations • ICCV 2017 • Xiaojie Jin, Xin Li, Huaxin Xiao, Xiaohui Shen, Zhe Lin, Jimei Yang, Yunpeng Chen, Jian Dong, Luoqi Liu, Zequn Jie, Jiashi Feng, Shuicheng Yan

In this way, the network can effectively learn to capture video dynamics and temporal context, which are critical clues for video scene parsing, without requiring extra manual annotations.

Representation Learning • Scene Parsing

Multi-Path Feedback Recurrent Neural Network for Scene Parsing

no code implementations • 27 Aug 2016 • Xiaojie Jin, Yunpeng Chen, Jiashi Feng, Zequn Jie, Shuicheng Yan

In this paper, we consider the scene parsing problem and propose a novel Multi-Path Feedback recurrent neural network (MPF-RNN) for parsing scene images.

Scene Parsing

Collaborative Layer-wise Discriminative Learning in Deep Neural Networks

no code implementations • 19 Jul 2016 • Xiaojie Jin, Yunpeng Chen, Jian Dong, Jiashi Feng, Shuicheng Yan

In this paper, we propose a layer-wise discriminative learning method to enhance the discriminative capability of a deep network by allowing its layers to work collaboratively for classification.

Classification • General Classification +1
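Layer-wise discriminative learning of this collaborative kind is typically realized by attaching auxiliary classifiers to intermediate layers and training all heads jointly; a generic sketch of that pattern (head placement and equal weighting are assumptions, not the paper's exact scheme):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerwiseNet(nn.Module):
    """Backbone with an auxiliary classifier after each stage; all heads
    contribute to the loss so every layer receives a direct
    discriminative signal."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.stages = nn.ModuleList([nn.Linear(32, 32) for _ in range(3)])
        self.heads = nn.ModuleList([nn.Linear(32, n_classes) for _ in range(3)])

    def forward(self, x):
        logits = []
        for stage, head in zip(self.stages, self.heads):
            x = torch.relu(stage(x))
            logits.append(head(x))
        return logits

net = LayerwiseNet()
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
loss = sum(F.cross_entropy(l, y) for l in net(x)) / 3  # joint layer-wise loss
```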

Training Skinny Deep Neural Networks with Iterative Hard Thresholding Methods

no code implementations • 19 Jul 2016 • Xiaojie Jin, Xiao-Tong Yuan, Jiashi Feng, Shuicheng Yan

In this paper, we propose an iterative hard thresholding (IHT) approach to train Skinny Deep Neural Networks (SDNNs).

Object Recognition
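Iterative hard thresholding alternates ordinary training with hard-zeroing the smallest-magnitude weights, then retraining the survivors; a compact sketch of the thresholding step (the layer-wise keep ratio and schedule are illustrative):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def hard_threshold(model: nn.Module, keep_ratio: float = 0.3):
    """IHT-style step: keep only the largest-magnitude weights in each
    weight matrix and zero the rest; training then resumes on the
    resulting sparse ('skinny') network."""
    for p in model.parameters():
        if p.dim() < 2:
            continue                      # skip biases / norm parameters
        k = max(1, int(keep_ratio * p.numel()))
        thresh = p.abs().flatten().kthvalue(p.numel() - k + 1).values
        p.mul_((p.abs() >= thresh).float())

net = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))
hard_threshold(net)           # ... train for a while, threshold, and repeat
zeros = sum((p == 0).sum().item() for p in net.parameters())
```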

Deep Learning with S-shaped Rectified Linear Activation Units

1 code implementation • 22 Dec 2015 • Xiaojie Jin, Chunyan Xu, Jiashi Feng, Yunchao Wei, Junjun Xiong, Shuicheng Yan

Rectified linear activation units are important components for state-of-the-art deep convolutional networks.
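The SReLU proposed here is piecewise linear with learnable thresholds and slopes in both tails and the identity in between; a minimal PyTorch re-implementation of that published form (per-channel parameters; the initial values are illustrative):

```python
import torch
import torch.nn as nn

class SReLU(nn.Module):
    """S-shaped ReLU: identity in the middle, learnable linear pieces
    above t_r and below t_l (per-channel parameters)."""
    def __init__(self, ch):
        super().__init__()
        self.t_r = nn.Parameter(torch.ones(ch))    # right threshold
        self.a_r = nn.Parameter(torch.ones(ch))    # right slope
        self.t_l = nn.Parameter(torch.zeros(ch))   # left threshold
        self.a_l = nn.Parameter(torch.zeros(ch))   # left slope

    def forward(self, x):                           # x: (batch, ch)
        upper = self.t_r + self.a_r * (x - self.t_r)
        lower = self.t_l + self.a_l * (x - self.t_l)
        return torch.where(x >= self.t_r, upper,
               torch.where(x <= self.t_l, lower, x))

y = SReLU(16)(torch.randn(4, 16))   # initialized close to a plain ReLU
```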
