no code implementations • EMNLP 2021 • Jingun Kwon, Naoki Kobayashi, Hidetaka Kamigaito, Manabu Okumura
Sentence extractive summarization shortens a document by selecting sentences for a summary while preserving its important content.
Ranked #4 on Extractive Text Summarization on CNN / Daily Mail
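The entry above describes selecting sentences from a document to form a summary. As a minimal, hypothetical sketch of the general idea (a naive frequency-based scorer, not the paper's neural method), sentences can be ranked by a score and the top-k kept in document order:

```python
# Toy extractive summarizer: score each sentence by the average frequency
# of its words over the whole document, then keep the top-k sentences in
# their original order. Illustrative only; not the paper's model.
from collections import Counter

def extract_summary(sentences, k=2):
    """Return the k highest-scoring sentences, preserving document order."""
    # Document-level word frequencies (lowercased, naive whitespace tokens).
    freqs = Counter(w.lower() for s in sentences for w in s.split())

    def score(s):
        words = s.split()
        return sum(freqs[w.lower()] for w in words) / max(len(words), 1)

    # Rank sentences, then restore original document order for the summary.
    top = sorted(sentences, key=score, reverse=True)[:k]
    return [s for s in sentences if s in top]
```

For example, `extract_summary(doc_sentences, k=3)` keeps the three sentences whose words are most frequent across the document.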
no code implementations • RANLP 2021 • Jingun Kwon, Naoki Kobayashi, Hidetaka Kamigaito, Hiroya Takamura, Manabu Okumura
The results demonstrate that the position of emojis in a text is a useful cue for improving emoji label prediction.
1 code implementation • 15 Oct 2022 • Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata
To promote the further development of RST-style discourse parsing models, we need a strong baseline that can serve as a reference for reporting reliable experimental results.
Ranked #1 on Discourse Parsing on Instructional-DT (Instr-DT)
no code implementations • NAACL 2021 • Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata
We then pre-train a neural RST parser with the obtained silver data and fine-tune it on the RST-DT.
Ranked #2 on Discourse Parsing on RST-DT (using extra training data)
1 code implementation • 17 Mar 2021 • Naoki Kobayashi, Taro Sekiyama, Issei Sato, Hiroshi Unno
Another application is a new program development framework called oracle-based programming, a neural-network-guided variant of Solar-Lezama's program synthesis by sketching.
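The core idea of synthesis by sketching is that the programmer writes a program with a hole and a search procedure fills the hole so that all examples are satisfied. A toy illustration under that assumption (enumerating an unknown integer constant; the names and search strategy here are illustrative, not the oracle-based framework itself):

```python
# A "sketch" is a program with a hole -- here, an unknown integer constant
# passed as the second argument. The synthesizer enumerates candidate
# values and returns one that satisfies every input/output example.
def synthesize_constant(sketch, examples, candidates=range(-10, 11)):
    """Find c such that sketch(x, c) == y for every (x, y) example."""
    for c in candidates:
        if all(sketch(x, c) == y for x, y in examples):
            return c
    return None  # no candidate fills the hole

# Sketch: f(x) = x + ??  (the hole ?? is the constant to synthesize).
sketch = lambda x, c: x + c
examples = [(1, 4), (2, 5), (10, 13)]
```

Here `synthesize_constant(sketch, examples)` returns 3, the only constant consistent with all three examples.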
no code implementations • 28 Oct 2020 • Mayuko Kori, Takeshi Tsukada, Naoki Kobayashi
A cyclic proof system allows us to perform inductive reasoning without explicit induction.
1 code implementation • 3 Apr 2020 • Naoki Kobayashi, Tsutomu Hirao, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata
To obtain better discourse dependency trees, we need to improve the accuracy of RST trees at the upper parts of the structures.
Ranked #3 on Discourse Parsing on RST-DT
no code implementations • IJCNLP 2019 • Naoki Kobayashi, Tsutomu Hirao, Kengo Nakamura, Hidetaka Kamigaito, Manabu Okumura, Masaaki Nagata
The first builds the optimal tree with respect to a dissimilarity score function defined for splitting a text span into smaller spans.
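The splitting procedure described above can be sketched as a top-down recursion: split a span at the point that maximizes the dissimilarity score, then recurse on both halves. This is a minimal sketch with a placeholder score function; the paper's actual score is learned, which this sketch does not model.

```python
# Top-down span splitting: recursively split the span [i, j) at the point
# k that maximizes a dissimilarity score, yielding a binary tree over the
# units. `score(units, i, k, j)` rates splitting [i, j) into [i, k), [k, j).
def build_tree(units, score, i=0, j=None):
    """Return a nested-tuple binary tree over units[i:j]."""
    if j is None:
        j = len(units)
    if j - i == 1:          # a single unit is a leaf
        return units[i]
    # Choose the split point with the highest dissimilarity score.
    k = max(range(i + 1, j), key=lambda k: score(units, i, k, j))
    return (build_tree(units, score, i, k),
            build_tree(units, score, k, j))
```

For instance, with a placeholder score that prefers balanced splits, `score = lambda units, i, k, j: -abs((k - i) - (j - k))`, four units yield the tree `(("a", "b"), ("c", "d"))`.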