Search Results for author: Yatai Ji

Found 8 papers, 7 papers with code

MIRTT: Learning Multimodal Interaction Representations from Trilinear Transformers for Visual Question Answering

1 code implementation • Findings (EMNLP) 2021 • Junjie Wang, Yatai Ji, Jiaqi Sun, Yujiu Yang, Tetsuya Sakai

On the other hand, trilinear models such as the CTI model efficiently utilize the inter-modality information between answers, questions, and images, while ignoring intra-modality information.

Multiple-choice Question Answering +1
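As a rough illustration of the trilinear idea in the snippet above, here is a minimal sketch of a low-rank trilinear interaction over question, image, and answer features. The dimensions, rank, and factorized form are illustrative assumptions, not the authors' MIRTT/CTI implementation.

```python
# A minimal sketch (not the paper's code) of a low-rank trilinear
# interaction among question, image, and answer features.
import torch
import torch.nn as nn

class TrilinearScore(nn.Module):
    def __init__(self, d_q, d_v, d_a, rank=32):  # rank is an assumption
        super().__init__()
        self.Wq = nn.Linear(d_q, rank, bias=False)  # project question
        self.Wv = nn.Linear(d_v, rank, bias=False)  # project image
        self.Wa = nn.Linear(d_a, rank, bias=False)  # project answer

    def forward(self, q, v, a):
        # Element-wise product in the shared rank space captures the joint
        # (inter-modality) interaction of all three inputs at once.
        return (self.Wq(q) * self.Wv(v) * self.Wa(a)).sum(dim=-1)

scorer = TrilinearScore(768, 2048, 768)
q, v, a = torch.randn(4, 768), torch.randn(4, 2048), torch.randn(4, 768)
print(scorer(q, v, a).shape)  # torch.Size([4]) -- one score per triple
```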

Taming Lookup Tables for Efficient Image Retouching

1 code implementation • 28 Mar 2024 • Sidi Yang, Binxiao Huang, Mingdeng Cao, Yatai Ji, Hanzhong Guo, Ngai Wong, Yujiu Yang

Existing enhancement models often optimize for high performance while falling short of reducing hardware inference time and power consumption, especially on edge devices with constrained computing and storage resources.

Image Enhancement • Image Retouching
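To make the efficiency argument concrete, here is a minimal sketch of LUT-based retouching under simple assumptions (per-channel 1D tables, 8-bit input): enhancement reduces to a table lookup per pixel, which is why LUTs are cheap on constrained edge hardware. The paper's actual tables and training scheme may differ.

```python
# A minimal sketch (assumptions: per-channel 1D LUTs, uint8 input) of
# lookup-table image retouching. Not the paper's exact method.
import numpy as np

def apply_1d_lut(image, lut):
    """image: uint8 array (H, W, 3); lut: float array (256, 3) in [0, 1]."""
    out = np.empty(image.shape, dtype=np.float32)
    for ch in range(image.shape[-1]):
        # Each pixel value directly indexes its channel's table.
        out[..., ch] = lut[image[..., ch], ch]
    return (out * 255).astype(np.uint8)

# Toy example: identity curve with a mild brightening gamma.
lut = np.tile(np.linspace(0, 1, 256)[:, None] ** 0.9, (1, 3))
img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
print(apply_1d_lut(img, lut).shape)  # (64, 64, 3)
```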

Global and Local Semantic Completion Learning for Vision-Language Pre-training

1 code implementation • 12 Jun 2023 • Rong-Cheng Tu, Yatai Ji, Jie Jiang, Weijie Kong, Chengfei Cai, Wenzhe Zhao, Hongfa Wang, Yujiu Yang, Wei Liu

MGSC promotes learning more representative global features, which have a great impact on the performance of downstream tasks, while MLTC reconstructs modal-fusion local tokens, further enhancing accurate comprehension of multimodal data.

Language Modelling • Masked Language Modeling +5
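A hedged sketch of the masked-then-recover idea behind MGSC: the global feature of a masked view is pulled toward the global feature of the intact sequence. The encoder, mean pooling, mask ratio, and cosine loss here are all assumptions for illustration, not the paper's exact objective.

```python
# A minimal sketch of masked global semantic completion: recover the
# intact view's global feature from a masked view. Illustrative only.
import torch
import torch.nn.functional as F

def global_completion_loss(encoder, tokens, mask_ratio=0.4):
    """tokens: (B, N, D) modal-fusion token sequence; encoder: (B, N, D) -> (B, N, D)."""
    B, N, D = tokens.shape
    keep = torch.rand(B, N, device=tokens.device) > mask_ratio
    masked = tokens * keep.unsqueeze(-1)       # zero out masked tokens
    g_masked = encoder(masked).mean(dim=1)     # global feature, masked view
    with torch.no_grad():
        g_full = encoder(tokens).mean(dim=1)   # target: intact view
    return 1 - F.cosine_similarity(g_masked, g_full, dim=-1).mean()

encoder = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True), 2)
print(global_completion_loss(encoder, torch.randn(2, 16, 64)))
```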

Control-A-Video: Controllable Text-to-Video Generation with Diffusion Models

1 code implementation • 23 May 2023 • Weifeng Chen, Yatai Ji, Jie Wu, Hefeng Wu, Pan Xie, Jiashi Li, Xin Xia, Xuefeng Xiao, Liang Lin

Based on a pre-trained conditional text-to-image (T2I) diffusion model, our model aims to generate videos conditioned on a sequence of control signals, such as edge or depth maps.

Optical Flow Estimation • Style Transfer +4
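A minimal sketch of the conditioning pattern the snippet describes: per-frame control maps (e.g., edges or depth) are channel-concatenated with the noisy latents before denoising. The tiny 3D-conv denoiser and all shapes here are assumptions; the actual model adapts a pre-trained T2I diffusion backbone.

```python
# A minimal sketch of control-conditioned video denoising via channel
# concatenation. Not Control-A-Video's actual architecture.
import torch
import torch.nn as nn

class ControlledDenoiser(nn.Module):
    def __init__(self, latent_ch=4, control_ch=1, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(latent_ch + control_ch, hidden, 3, padding=1),
            nn.SiLU(),
            nn.Conv3d(hidden, latent_ch, 3, padding=1),  # predict noise
        )

    def forward(self, noisy_latents, controls):
        # noisy_latents: (B, C, T, H, W); controls: (B, 1, T, H, W)
        return self.net(torch.cat([noisy_latents, controls], dim=1))

model = ControlledDenoiser()
eps = model(torch.randn(1, 4, 8, 32, 32), torch.randn(1, 1, 8, 32, 32))
print(eps.shape)  # torch.Size([1, 4, 8, 32, 32])
```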

Multimodal Prototype-Enhanced Network for Few-Shot Action Recognition

no code implementations • 9 Dec 2022 • Xinzhe Ni, Yong Liu, Hao Wen, Yatai Ji, Jing Xiao, Yujiu Yang

In the visual flow, for example, visual prototypes are computed by a Temporal-Relational CrossTransformer (TRX) module.

Few-Shot Action Recognition +1
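For context, a minimal sketch of prototype-based few-shot classification: class prototypes are the means of support embeddings, and a query is assigned to the nearest prototype. The paper's TRX module builds temporal-relational prototypes instead of this plain mean; all shapes here are assumptions.

```python
# A minimal sketch of nearest-prototype few-shot classification.
# Illustrative baseline, not the paper's multimodal TRX prototypes.
import torch
import torch.nn.functional as F

def classify(support, support_labels, query, n_way):
    """support: (N*K, D); support_labels: (N*K,); query: (Q, D)."""
    protos = torch.stack([support[support_labels == c].mean(0)
                          for c in range(n_way)])        # (n_way, D)
    sims = F.cosine_similarity(query.unsqueeze(1), protos.unsqueeze(0), dim=-1)
    return sims.argmax(dim=1)                            # predicted class ids

support = torch.randn(5 * 3, 128)                        # 5-way 3-shot episode
labels = torch.arange(5).repeat_interleave(3)
print(classify(support, labels, torch.randn(4, 128), n_way=5))
```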
