Search Results for author: Suma Desu

Found 5 papers, 1 paper with code

Question Generation using a Scratchpad Encoder

no code implementations • ICLR 2019 • Ryan Y. Benmalek, Madian Khabsa, Suma Desu, Claire Cardie, Michele Banko

In this paper we introduce the Scratchpad Encoder, a novel addition to the sequence to sequence (seq2seq) framework and explore its effectiveness in generating natural language questions from a given logical form.

Question Generation • Question-Generation

Keeping Notes: Conditional Natural Language Generation with a Scratchpad Encoder

no code implementations • ACL 2019 • Ryan Benmalek, Madian Khabsa, Suma Desu, Claire Cardie, Michele Banko

We introduce the Scratchpad Mechanism, a novel addition to the sequence-to-sequence (seq2seq) neural network architecture and demonstrate its effectiveness in improving the overall fluency of seq2seq models for natural language generation tasks.

Machine Translation • Question Generation • +4

Keeping Notes: Conditional Natural Language Generation with a Scratchpad Mechanism

1 code implementation • 12 Jun 2019 • Ryan Y. Benmalek, Madian Khabsa, Suma Desu, Claire Cardie, Michele Banko

We introduce the Scratchpad Mechanism, a novel addition to the sequence-to-sequence (seq2seq) neural network architecture and demonstrate its effectiveness in improving the overall fluency of seq2seq models for natural language generation tasks.

Machine Translation • Question Generation • +4
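The three Scratchpad listings above describe the mechanism only at a high level, so below is a minimal PyTorch sketch of the idea as the abstracts state it: the decoder attends over the encoder states and then writes an update back into them, so later decoding steps read a modified "scratchpad". The module, its gated write, and every name in it (ScratchpadAttention, write_gate, write_vec) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ScratchpadAttention(nn.Module):
    """Sketch: attend over encoder states, then gate-write an update back
    into them so the next decoding step sees a modified "scratchpad"."""

    def __init__(self, hidden_size):
        super().__init__()
        self.score = nn.Linear(hidden_size * 2, 1)       # additive attention score
        self.write_gate = nn.Linear(hidden_size * 2, 1)  # per-position overwrite amount
        self.write_vec = nn.Linear(hidden_size, hidden_size)

    def forward(self, dec_state, enc_states):
        # dec_state: (batch, hidden); enc_states: (batch, src_len, hidden)
        src_len = enc_states.size(1)
        expanded = dec_state.unsqueeze(1).expand(-1, src_len, -1)
        pairs = torch.cat([expanded, enc_states], dim=-1)

        # Standard attention read.
        weights = torch.softmax(self.score(pairs).squeeze(-1), dim=-1)
        context = torch.bmm(weights.unsqueeze(1), enc_states).squeeze(1)

        # "Keeping notes": blend each encoder state toward a value derived
        # from the current decoder state, by a learned per-position gate.
        gate = torch.sigmoid(self.write_gate(pairs))                 # (batch, src_len, 1)
        update = torch.tanh(self.write_vec(dec_state)).unsqueeze(1)  # (batch, 1, hidden)
        enc_states = gate * update + (1 - gate) * enc_states
        return context, enc_states

# Toy shapes: batch of 2, source length 5, hidden size 16.
ctx, notes = ScratchpadAttention(16)(torch.randn(2, 16), torch.randn(2, 5, 16))
```

A decoder would call this once per output token, feeding the returned notes back in as the encoder states for the next step; that feedback loop is what distinguishes the sketch from vanilla attention.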

Zipf's law holds for phrases, not words

no code implementations • 19 Jun 2014 • Jake Ryland Williams, Paul R. Lessard, Suma Desu, Eric Clark, James P. Bagrow, Christopher M. Danforth, Peter Sheridan Dodds

Although Zipf's law was originally and most famously observed for word frequency, it is surprisingly limited in its applicability to human language, holding over no more than three to four orders of magnitude before hitting a clear break in scaling.
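As a rough illustration of the kind of measurement the abstract describes, the sketch below fits a Zipf exponent (the slope of log frequency against log rank) for words and for a naive phrase proxy. The file name corpus.txt is a placeholder, and non-overlapping bigrams stand in for the random text partitions the paper actually uses.

```python
import re
import numpy as np
from collections import Counter

def rank_frequency(tokens):
    """Frequencies sorted most-to-least frequent, paired with ranks 1..N."""
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), dtype=float)
    return np.arange(1, len(freqs) + 1), freqs

def zipf_exponent(ranks, freqs):
    """Least-squares slope of log freq vs. log rank; classic Zipf is ~1."""
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -slope

# "corpus.txt" is a placeholder path for any plain-text corpus.
words = re.findall(r"[a-z']+", open("corpus.txt").read().lower())

# Naive phrase proxy: non-overlapping word bigrams, standing in for the
# random text partitions used in the paper.
bigrams = [" ".join(words[i:i + 2]) for i in range(0, len(words) - 1, 2)]

for name, toks in [("words", words), ("bigrams", bigrams)]:
    ranks, freqs = rank_frequency(toks)
    print(f"{name}: Zipf exponent ~ {zipf_exponent(ranks, freqs):.2f}")
```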

Human language reveals a universal positivity bias

no code implementations • 15 Jun 2014 • Peter Sheridan Dodds, Eric M. Clark, Suma Desu, Morgan R. Frank, Andrew J. Reagan, Jake Ryland Williams, Lewis Mitchell, Kameron Decker Harris, Isabel M. Kloumann, James P. Bagrow, Karine Megerdoomian, Matthew T. McMahon, Brian F. Tivnan, Christopher M. Danforth

Using human evaluation of 100,000 words spread across 24 corpora in 10 languages diverse in origin and culture, we present evidence of a deep imprint of human sociality in language, observing that (1) the words of natural human language possess a universal positivity bias; (2) the estimated emotional content of words is consistent between languages under translation; and (3) this positivity bias is strongly independent of frequency of word usage.

Cultural Vocal Bursts Intensity Prediction • Translation
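The finding above rests on a simple measurement: score words on a 1 (sad) to 9 (happy) scale and check whether the frequency-weighted average sits above the neutral midpoint of 5. The toy sketch below illustrates that computation; the five-word lexicon and its ratings are made-up stand-ins for the paper's 100,000-word survey data.

```python
from collections import Counter

# Toy stand-in for a labMT-style lexicon of mean happiness ratings on a
# 1 (sad) to 9 (happy) scale. These values are illustrative, not the paper's.
happiness = {"love": 8.4, "laughter": 8.5, "war": 1.8, "the": 4.98, "rain": 5.0}

def mean_happiness(tokens, lexicon):
    """Frequency-weighted mean rating over the tokens found in the lexicon."""
    counts = Counter(t for t in tokens if t in lexicon)
    total = sum(counts.values())
    return sum(lexicon[w] * c for w, c in counts.items()) / total

tokens = "the rain and the laughter after the war".split()
print(round(mean_happiness(tokens, happiness), 2))  # 5.04 > 5: a positive lean
```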
