Search Results for author: Xindian Ma

Found 6 papers, 2 papers with code

TextTN: Probabilistic Encoding of Language on Tensor Network

no code implementations • 1 Jan 2021 • Peng Zhang, Jing Zhang, Xindian Ma, Siwei Rao, Guangjian Tian, Jun Wang

As a novel model that bridges machine learning and quantum theory, the tensor network (TN) has recently gained increasing attention and has been successfully applied to processing natural images.

General Classification • Sentence • +4

TensorCoder: Dimension-Wise Attention via Tensor Representation for Natural Language Modeling

no code implementations • 28 Jul 2020 • Shuai Zhang, Peng Zhang, Xindian Ma, Junqiu Wei, Ningning Wang, Qun Liu

The Transformer has been widely used in many Natural Language Processing (NLP) tasks, and the scaled dot-product attention between tokens is one of its core modules.
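
For context, the scaled dot-product attention referred to above is the standard softmax(QK^T / sqrt(d_k))V; below is a minimal NumPy sketch (shapes are illustrative, and this is the vanilla form, not TensorCoder's dimension-wise variant):

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        # Standard Transformer attention: softmax(Q K^T / sqrt(d_k)) V.
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)                # (n_q, n_k) token-pair scores
        scores -= scores.max(axis=-1, keepdims=True)   # subtract row max for stability
        weights = np.exp(scores)
        weights /= weights.sum(axis=-1, keepdims=True) # row-wise softmax
        return weights @ V                             # (n_q, d_v) attended values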

Language Modelling • Machine Translation • +2

Interpretable Network Structure for Modeling Contextual Dependency

no code implementations • 25 Sep 2019 • Xindian Ma, Peng Zhang, Xiaoliu Mao, Yehua Zhang, Nan Duan, Yuexian Hou, Ming Zhou

We then show that the lower bound of such a separation rank reveals the quantitative relation between the network structure (e.g., depth/width) and its ability to model contextual dependency.
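
For context, the separation rank here is the standard notion used in such expressivity analyses: for a partition (A, B) of the input variables,

\[
\mathrm{sep}(f; A, B) \;=\; \min \Big\{ R \in \mathbb{N} \;:\; f(\mathbf{x}_A, \mathbf{x}_B) = \sum_{r=1}^{R} g_r(\mathbf{x}_A)\, h_r(\mathbf{x}_B) \Big\},
\]

so a large lower bound on the separation rank means the function cannot be written as a short sum of products of functions over A and B alone, i.e., it captures strong contextual dependency across the partition.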

Language Modelling • Sentence • +1

Leveraging Entanglement Entropy for Deep Understanding of Attention Matrix in Text Matching

no code implementations • 25 Sep 2019 • Peng Zhang, Xiaoliu Mao, Xindian Ma, Benyou Wang, Jing Zhang, Jun Wang, Dawei Song

We prove that a low-dimensional attention matrix can be derived from the high-dimensional matching matrix via a mapping based on the trace operator.
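
The paper's exact construction is not shown in this snippet, but the general idea of a trace-based reduction can be sketched as follows; the 4-way matching tensor M and the choice of traced indices are illustrative assumptions, not the paper's definitions:

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical 4-way matching tensor: M[i, a, j, b] couples token i of one
    # text (feature index a) with token j of the other text (feature index b).
    M = rng.standard_normal((5, 8, 6, 8))

    # Partial trace over the paired feature indices (a = b) collapses the
    # high-dimensional matching tensor to a low-dimensional (5 x 6)
    # token-by-token attention matrix.
    A = np.einsum('iaja->ij', M)
    print(A.shape)  # (5, 6)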

Inductive Bias • Question Answering • +2

A Tensorized Transformer for Language Modeling

1 code implementation • NeurIPS 2019 • Xindian Ma, Peng Zhang, Shuai Zhang, Nan Duan, Yuexian Hou, Dawei Song, Ming Zhou

In this paper, based on the ideas of tensor decomposition and parameter sharing, we propose a novel self-attention model (namely, Multi-linear attention) with Block-Term Tensor Decomposition (BTD).
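
A rough sketch of the block-term idea follows, assuming a diagonal core per block and Q/K/V factor matrices shared across blocks; the paper's exact core structure, normalization, and output reconstruction may differ:

    import numpy as np

    def single_block_attention(Q, K, V, g):
        # One Tucker-style block: T[i, j, k] = sum_r g[r] Q[i, r] K[j, r] V[k, r],
        # i.e. a trilinear map of Q, K, V weighted by a diagonal core g of shape (d,).
        return np.einsum('r,ir,jr,kr->ijk', g, Q, K, V)

    def multi_linear_attention(Q, K, V, cores):
        # Block-term structure: average several blocks that all share the same
        # Q/K/V factor matrices (this is where the parameter sharing comes from).
        T = sum(single_block_attention(Q, K, V, g) for g in cores) / len(cores)
        # Collapse one mode to recover a matrix-shaped output; the paper's
        # actual reconstruction (tensor splitting) is more involved.
        return T.mean(axis=1)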

Language Modelling • Machine Translation • +2

A Generalized Language Model in Tensor Space

1 code implementation • 31 Jan 2019 • Lipeng Zhang, Peng Zhang, Xindian Ma, Shuqin Gu, Zhan Su, Dawei Song

Theoretically, we prove that such a tensor representation is a generalization of the n-gram language model.
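
As a toy illustration of the underlying view (not the paper's model): a joint distribution over an L-word sentence is an order-L tensor, and scoring a sentence amounts to contracting that tensor with one-hot word vectors; n-gram models correspond to particular factorizations of this tensor.

    import numpy as np

    V, L = 4, 3                  # toy vocabulary size and sentence length
    rng = np.random.default_rng(0)
    P = rng.random((V,) * L)
    P /= P.sum()                 # normalize into a joint distribution p(w1, ..., wL)

    def sentence_prob(P, words):
        # Contract the joint tensor with one-hot vectors, one position at a time.
        prob = P
        for w in words:
            prob = np.tensordot(np.eye(V)[w], prob, axes=([0], [0]))
        return float(prob)

    print(sentence_prob(P, [1, 0, 2]))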

Language Modelling • Tensor Decomposition • +1
