LATTE: Lattice ATTentive Encoding for Character-based Word Segmentation

A character sequence can often be segmented into words in more than one way. This segmentation ambiguity can weaken word segmentation performance, and handling it properly reduces ambiguous decisions on word boundaries. Previous work has achieved remarkable segmentation performance and alleviated the ambiguity problem by incorporating a lattice, owing to its ability to capture segmentation alternatives, together with graph-based and pre-trained models. However, when a lattice is encoded with such models, its multi-granularity information, including characters and words, may not be exploited attentively. To strengthen multi-granularity representations in a lattice, we propose the Lattice ATTentive Encoding (LATTE) method for character-based word segmentation. Our model employs the lattice structure to handle segmentation alternatives and uses graph neural networks with an attention mechanism to attentively extract multi-granularity representations from the lattice, complementing the character representations. Our experiments demonstrate improved segmentation performance on the BCCWJ, CTB6, and BEST2010 datasets, covering three languages: Japanese, Chinese, and Thai.
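The abstract describes the core idea: candidate words over a character sequence form a lattice, the lattice nodes are encoded with a graph neural network, and an attention mechanism pools multi-granularity lattice information back into each character representation. The snippet below is a minimal PyTorch sketch of that idea, not the authors' implementation; the module name `LatticeAttentiveEncoder`, the mean-aggregation GNN layer, and all hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn


class LatticeAttentiveEncoder(nn.Module):
    """Encode lattice nodes with a simple message-passing GNN, then let
    each character position attend over the lattice nodes to enrich its
    own representation (hypothetical sketch, not the paper's code)."""

    def __init__(self, dim: int, gnn_layers: int = 2, heads: int = 4):
        super().__init__()
        # One linear transform per GNN layer; neighbors are mean-aggregated.
        self.gnn = nn.ModuleList(nn.Linear(dim, dim) for _ in range(gnn_layers))
        # Characters are the queries; lattice nodes are the keys/values.
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(2 * dim, dim)

    def forward(self, char_h, node_h, adj):
        # char_h: (B, T, D) character states, e.g. from a pre-trained model
        # node_h: (B, N, D) embeddings of lattice nodes (candidate words)
        # adj:    (B, N, N) lattice adjacency, 1.0 where two nodes connect
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        for layer in self.gnn:
            # Mean-aggregate neighbor states, transform, add residual.
            node_h = node_h + torch.relu(layer(adj @ node_h / deg))
        # Attention pools multi-granularity lattice context per character.
        lattice_ctx, _ = self.attn(char_h, node_h, node_h)
        # Complement character representations with the lattice context.
        return self.out(torch.cat([char_h, lattice_ctx], dim=-1))


# Toy usage: 10 characters, 7 candidate words, feature size 64.
enc = LatticeAttentiveEncoder(dim=64)
chars = torch.randn(1, 10, 64)
nodes = torch.randn(1, 7, 64)
adj = (torch.rand(1, 7, 7) > 0.5).float()
out = enc(chars, nodes, adj)  # (1, 10, 64) lattice-enriched character states
```

In a full character-based segmenter, the enriched character states would typically feed a boundary classifier (e.g., BIES-style tagging) to produce the final segmentation; the paper's exact decoding setup may differ.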


Datasets

BCCWJ (Japanese), CTB6 (Chinese), BEST-2010 (Thai)

Results from the Paper


LATTE is ranked #1 on Chinese Word Segmentation on CTB6 (using extra training data).

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Japanese Word Segmentation | BCCWJ | LATTE (Linguistic units, lattices, PTMs, GNNs) | F1-score (Word) | 0.9936 | #1 | |
| Thai Word Segmentation | BEST-2010 | LATTE (Linguistic units, lattices, PTMs, GNNs) | F1-Score | 0.9907 | #1 | |
| Chinese Word Segmentation | CTB6 | LATTE (Linguistic units, lattices, PTMs, GNNs) | F1 | 98.07 | #1 | Yes |

Methods

Lattice encoding, graph neural networks (GNNs), attention mechanism, pre-trained models (PTMs)