Search Results for author: Xing Shi

Found 20 papers, 5 papers with code

DiffPoint: Single and Multi-view Point Cloud Reconstruction with ViT Based Diffusion Model

no code implementations • 17 Feb 2024 • Yu Feng, Xing Shi, Mengli Cheng, Yun Xiong

As the task of 2D-to-3D reconstruction has gained significant attention in various real-world scenarios, it becomes crucial to be able to generate high-quality point clouds.

Point cloud reconstruction

Arithmetic Feature Interaction Is Necessary for Deep Tabular Learning

1 code implementation • 4 Feb 2024 • Yi Cheng, Renjun Hu, Haochao Ying, Xing Shi, Jian Wu, Wei Lin

Our extensive experiments on real-world data also validate the consistent effectiveness, efficiency, and rationale of AMFormer, suggesting it has established a strong inductive bias for deep learning on tabular data.

Inductive Bias
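
The excerpt above focuses on results; the mechanism named in the title is arithmetic (additive and multiplicative) feature interaction. As an illustration only, here is a toy PyTorch sketch that combines an additive attention branch with a multiplicative (log-space product) branch over embedded tabular features. The class name, layer choices, and pooling are assumptions for exposition, not the AMFormer architecture.

```python
# Toy sketch of arithmetic feature interaction for tabular inputs.
# NOT the AMFormer implementation: it simply pairs an additive
# (attention) branch with a multiplicative (log-space product) branch.
import torch
import torch.nn as nn

class ArithmeticInteraction(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.embed = nn.Linear(1, dim)                     # embed each numeric feature
        self.add_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.mul_proj = nn.Linear(dim, dim)
        self.out = nn.Linear(2 * dim, 1)

    def forward(self, x):                                  # x: (batch, num_features)
        tokens = self.embed(x.unsqueeze(-1))               # (batch, num_features, dim)
        # Additive interactions via self-attention over feature tokens.
        add_feat, _ = self.add_attn(tokens, tokens, tokens)
        # Multiplicative interactions: sum in log space ~ product over features.
        mul_feat = self.mul_proj(torch.exp(torch.log1p(tokens.abs()).sum(dim=1)))
        pooled = torch.cat([add_feat.mean(dim=1), mul_feat], dim=-1)
        return self.out(pooled)                            # one prediction per row

model = ArithmeticInteraction(dim=32)
print(model(torch.rand(4, 8)).shape)  # torch.Size([4, 1])
```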

EasyPhoto: Your Smart AI Photo Generator

2 code implementations • 7 Oct 2023 • Ziheng Wu, Jiaqi Xu, Xinyi Zou, Kunzhe Huang, Xing Shi, Jun Huang

By training a digital doppelganger of a specific user ID on 5 to 20 relevant images, the finetuned LoRA model allows for the generation of AI photos from arbitrary templates.
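
To make the generation step concrete, here is a hedged sketch of loading a user-specific LoRA into a Stable Diffusion pipeline and generating from a template-style prompt. The base model, paths, prompt, and use of the Hugging Face `diffusers` API are assumptions for illustration, not EasyPhoto's own code.

```python
# Illustrative inference only: load a user-specific LoRA and generate.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical path to LoRA weights finetuned on 5-20 photos of one user ID.
pipe.load_lora_weights("path/to/user_id_lora")

# A "template" here is approximated by a prompt describing the target scene/style.
image = pipe(
    "a professional portrait photo of <user_id>, studio lighting",
    num_inference_steps=30,
).images[0]
image.save("easyphoto_style_portrait.png")
```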

MuLTI: Efficient Video-and-Language Understanding with Text-Guided MultiWay-Sampler and Multiple Choice Modeling

no code implementations • 10 Mar 2023 • Jiaqi Xu, Bo Liu, Yunkuo Chen, Mengli Cheng, Xing Shi

Specifically, we design a Text-Guided MultiWay-Sampler based on adapt-pooling residual mapping and self-attention modules to sample long sequences and fuse multi-modal features, which reduces the computational costs and addresses performance degradation caused by previous samplers.

Ranked #1 on the TGIF-Transition task of TGIF-QA (using extra training data)

Multi-Label Classification · Multiple-choice · +8
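
The core idea in the excerpt above is a text-guided sampler that condenses a long video sequence into a small set of text-conditioned tokens. Below is a minimal PyTorch sketch of that general idea: pooled text features condition a few learned queries, which cross-attend over the frame sequence with a residual connection. Shapes, names, and the pooling scheme are illustrative assumptions, not the paper's MultiWay-Sampler.

```python
# Minimal sketch of a text-guided sampler (not the paper's module):
# pooled text conditions learned queries that downsample video tokens.
import torch
import torch.nn as nn

class TextGuidedSampler(nn.Module):
    def __init__(self, dim: int, num_sampled: int = 16, num_heads: int = 8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_sampled, dim) * 0.02)
        self.text_proj = nn.Linear(dim, dim)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, video_tokens, text_tokens):
        # video_tokens: (batch, long_T, dim); text_tokens: (batch, L, dim)
        text_summary = self.text_proj(text_tokens.mean(dim=1, keepdim=True))
        # Condition a small set of learned queries on the text summary.
        q = self.queries.unsqueeze(0) + text_summary          # (batch, num_sampled, dim)
        # Residual mapping: attended video features are added back to the queries.
        sampled, _ = self.cross_attn(q, video_tokens, video_tokens)
        return q + sampled                                     # (batch, num_sampled, dim)

# Usage: fuse 512 frame tokens down to 16 text-conditioned tokens.
sampler = TextGuidedSampler(dim=256)
out = sampler(torch.randn(2, 512, 256), torch.randn(2, 20, 256))
print(out.shape)  # torch.Size([2, 16, 256])
```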

Consecutive Question Generation via Dynamic Multitask Learning

no code implementations • 16 Nov 2022 • Yunji Li, Sujian Li, Xing Shi

In this paper, we propose the task of consecutive question generation (CQG), which generates a set of logically related question-answer pairs to understand a whole passage, with a comprehensive consideration of the aspects including accuracy, coverage, and informativeness.

Data Augmentation · Informativeness · +2

Why Neural Machine Translation Prefers Empty Outputs

no code implementations • 24 Dec 2020 • Xing Shi, Yijun Xiao, Kevin Knight

Using different EoS types in target sentences of different lengths exposes and eliminates this implicit smoothing.

Machine Translation · NMT · +1
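
The excerpt's key move is to tie the end-of-sentence token to the target length, so that short and long sentences no longer share a single EoS. A hedged sketch of that preprocessing is below; the bucket boundaries and token names are assumptions, not the paper's exact setup.

```python
# Sketch of length-bucketed EoS preprocessing: append an EoS token that
# depends on the target length bucket instead of a single shared </s>.
def append_length_eos(target_tokens, buckets=(10, 20, 40)):
    """Return the target sequence with a length-specific EoS appended."""
    length = len(target_tokens)
    for i, bound in enumerate(buckets):
        if length <= bound:
            return target_tokens + [f"</s:len{i}>"]
    return target_tokens + [f"</s:len{len(buckets)}>"]

print(append_length_eos(["we", "accept", "."]))   # EoS for the shortest bucket
print(append_length_eos(["w"] * 35))              # EoS for a longer bucket
```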

MEEP: An Open-Source Platform for Human-Human Dialog Collection and End-to-End Agent Training

1 code implementation • 9 Oct 2020 • Arkady Arkhangorodsky, Amittai Axelrod, Christopher Chu, Scot Fang, Yiqi Huang, Ajay Nagesh, Xing Shi, Boliang Zhang, Kevin Knight

We create a new task-oriented dialog platform (MEEP) where agents are given considerable freedom in terms of utterances and API calls, but are constrained to work within a push-button environment.

One-shot Text Field Labeling using Attention and Belief Propagation for Structure Information Extraction

1 code implementation • 9 Sep 2020 • Mengli Cheng, Minghui Qiu, Xing Shi, Jun Huang, Wei Lin

Existing learning-based methods for the text labeling task usually require a large number of labeled examples to train a specific model for each type of document.

One-Shot Learning · Text Detection

FINDINGS OF THE IWSLT 2020 EVALUATION CAMPAIGN

no code implementations • WS 2020 • Ebrahim Ansari, Amittai Axelrod, Nguyen Bach, Ondřej Bojar, Roldano Cattoni, Fahim Dalvi, Nadir Durrani, Marcello Federico, Christian Federmann, Jiatao Gu, Fei Huang, Kevin Knight, Xutai Ma, Ajay Nagesh, Matteo Negri, Jan Niehues, Juan Pino, Elizabeth Salesky, Xing Shi, Sebastian Stüker, Marco Turchi, Alexander Waibel, Changhan Wang

The evaluation campaign of the International Conference on Spoken Language Translation (IWSLT 2020) featured six challenge tracks this year: (i) Simultaneous speech translation, (ii) Video speech translation, (iii) Offline speech translation, (iv) Conversational speech translation, (v) Open domain translation, and (vi) Non-native speech translation.

Translation

Fast Locality Sensitive Hashing for Beam Search on GPU

no code implementations • 2 Jun 2018 • Xing Shi, Shizhen Xu, Kevin Knight

We present a GPU-based Locality Sensitive Hashing (LSH) algorithm to speed up beam search for sequence models.

Machine Translation · Translation
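
To illustrate the general technique named above, here is a toy random-hyperplane LSH shortlist for the output softmax: only words whose embeddings fall in the same hash bucket as the decoder state are scored exactly during beam search. This is a CPU/NumPy sketch with made-up sizes, not the paper's GPU algorithm.

```python
# Toy random-hyperplane LSH to shortlist output words before the softmax.
import numpy as np

rng = np.random.default_rng(0)
vocab_size, dim, num_bits = 50_000, 512, 8       # 2**8 = 256 buckets
word_emb = rng.standard_normal((vocab_size, dim)).astype(np.float32)
planes = rng.standard_normal((dim, num_bits)).astype(np.float32)

def hash_codes(vectors):
    """Sign of projections onto random hyperplanes, packed into an int code."""
    bits = ((vectors @ planes) > 0).astype(np.int64)
    return bits @ (1 << np.arange(num_bits))

word_codes = hash_codes(word_emb)                 # precomputed once for the vocabulary

def candidate_words(decoder_state):
    """Vocabulary shortlist sharing the decoder state's hash bucket."""
    return np.nonzero(word_codes == hash_codes(decoder_state[None, :]))[0]

cands = candidate_words(rng.standard_normal(dim).astype(np.float32))
print(len(cands), "candidate words out of", vocab_size)
```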

A Sequential Embedding Approach for Item Recommendation with Heterogeneous Attributes

no code implementations • 28 May 2018 • Kuan Liu, Xing Shi, Prem Natarajan

Our ablation experiments demonstrate the effectiveness of the two components in addressing heterogeneous attribute challenges, including variable lengths and attribute sparseness.

Attribute · Recommendation Systems
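
As a small illustration of the variable-length, sparse-attribute problem mentioned above (not the paper's model), one common approach is to embed attribute IDs and mean-pool them into a fixed-size item vector; the sketch below uses PyTorch's `nn.EmbeddingBag` with illustrative sizes.

```python
# Handle variable-length, sparse item attributes by mean-pooling embeddings.
import torch
import torch.nn as nn

num_attrs, dim = 10_000, 64
attr_pool = nn.EmbeddingBag(num_attrs, dim, mode="mean")

# Two items with different numbers of attributes, flattened with offsets:
# item 1 has attributes [3, 17, 250]; item 2 has [42, 9981].
attr_ids = torch.tensor([3, 17, 250, 42, 9981])
offsets = torch.tensor([0, 3])
item_vectors = attr_pool(attr_ids, offsets)       # (2, 64) fixed-size item vectors
print(item_vectors.shape)
```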

Speeding Up Neural Machine Translation Decoding by Shrinking Run-time Vocabulary

no code implementations ACL 2017 Xing Shi, Kevin Knight

Compared with Locality Sensitive Hashing (LSH), decoding with word alignments is GPU-friendly, orthogonal to existing speedup methods and more robust across language pairs.

Machine Translation · NMT · +1
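
The excerpt above points to shrinking the run-time vocabulary with word alignments. Here is a hedged sketch of that general recipe: keep each source word's top-K aligned target words (e.g., from alignment counts), then decode over the union for the sentence plus a few always-on tokens. The alignment table and constants are toy stand-ins, not the paper's data or exact procedure.

```python
# Alignment-based run-time vocabulary shrinking (toy illustration).
from collections import defaultdict

TOP_K = 2
align_counts = {                        # source word -> {target word: count}
    "das": {"the": 90, "that": 30, "this": 10},
    "haus": {"house": 80, "home": 15},
    "ist": {"is": 95, "'s": 20},
    "klein": {"small": 70, "little": 25},
}

def candidate_vocab(source_sentence, always=("<s>", "</s>", "<unk>")):
    """Union of top-K aligned target words over the source sentence."""
    vocab = set(always)
    for word in source_sentence.split():
        ranked = sorted(align_counts.get(word, {}).items(),
                        key=lambda kv: -kv[1])[:TOP_K]
        vocab.update(t for t, _ in ranked)
    return vocab

print(candidate_vocab("das haus ist klein"))
```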
