Semantic textual similarity deals with determining how similar two pieces of text are. This can take the form of assigning a similarity score, for example from 1 to 5. Related tasks are paraphrase and duplicate identification.
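As a concrete illustration of the task, a common baseline scores a sentence pair by the cosine similarity of their embeddings. The sketch below assumes the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint (neither is prescribed by this page), and the rescaling to the 1-5 band is purely illustrative, not a calibrated STS score.

```python
# A minimal sketch of scoring sentence similarity with embeddings.
# Assumes the sentence-transformers library and the all-MiniLM-L6-v2
# checkpoint; any sentence-embedding model would work the same way.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

s1 = "A man is playing a guitar."
s2 = "Someone is strumming a guitar."

# Encode both sentences into fixed-size vectors.
emb1, emb2 = model.encode([s1, s2], convert_to_tensor=True)

# Cosine similarity lies in [-1, 1]; rescale to the 1-5 band above
# (an illustrative mapping, not a trained regression head).
cos = util.cos_sim(emb1, emb2).item()
score = 1 + 4 * max(cos, 0.0)
print(f"cosine={cos:.3f}, approx score={score:.2f}")
```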
To remedy the quadratic dependency of full attention on sequence length, we propose BigBird, a sparse attention mechanism that reduces this dependency to linear.
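To make the quadratic-versus-linear contrast concrete, here is a toy sketch of one ingredient of such sparse attention, a sliding window, counting attention scores only. BigBird's actual pattern also mixes in random and global attention, which this sketch omits; the sequence length, head dimension, and window radius below are arbitrary assumptions.

```python
# Toy contrast between full O(n^2) attention and a sliding-window
# (sparse) pattern; score counts only, no softmax or values.
import torch

n, d, w = 1024, 64, 3            # sequence length, head dim, window radius
q = torch.randn(n, d)
k = torch.randn(n, d)

# Full attention: every query attends to every key -> n*n scores.
full_scores = q @ k.T                               # shape (n, n)

# Windowed attention: each query attends only to its 2w+1 neighbours,
# so the number of scores grows linearly in n.
idx = torch.arange(n).unsqueeze(1) + torch.arange(-w, w + 1)
idx = idx.clamp(0, n - 1)                           # shape (n, 2w+1)
window_scores = (q.unsqueeze(1) * k[idx]).sum(-1)   # shape (n, 2w+1)

print(full_scores.numel(), "vs", window_scores.numel())  # 1048576 vs 7168
```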
Ranked #1 on Document Summarization on BBC XSum
We find that impact case studies submitted to the UK Research Excellence Framework (REF) 2014 that refer to scientific papers mentioned in newspaper articles were awarded higher scores in the REF assessment.
In this digital age of news consumption, a news reader can react to, express, and share opinions with others in a highly interactive and fast manner.
BERT (Bidirectional Encoder Representations from Transformers) and ALBERT (A Lite BERT) are methods for pre-training language models which can later be fine-tuned for a variety of Natural Language Understanding tasks.
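A minimal sketch of what "fine-tuned for a Natural Language Understanding task" can look like in practice, assuming the Hugging Face transformers library, the bert-base-uncased checkpoint, and a single-output regression head for a similarity score; none of these specifics come from the papers themselves.

```python
# A minimal sketch of one fine-tuning step for a sentence-pair task.
# Model name, example pair, and label are illustrative assumptions.
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=1)   # one output -> regression head

# Tokenize the two sentences as a single pair input.
batch = tok(["A man plays guitar."], ["Someone strums a guitar."],
            padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor([[4.2]])           # gold similarity score

out = model(**batch, labels=labels)      # MSE loss when num_labels == 1
out.loss.backward()                      # one training step (optimizer omitted)
```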
To overcome this gap, we introduce the CORD19STS dataset, which includes 13,710 annotated sentence pairs collected from the COVID-19 Open Research Dataset (CORD-19) challenge.
Even though BERT has achieved successful performance improvements on various supervised learning tasks, it remains limited on unsupervised tasks by the repeated inference required to compute contextual language representations.
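To see why repeated inference matters here, compare the number of encoder forward passes needed to score all sentence pairs in a corpus with a cross-encoder (one pass per pair) against a bi-encoder that caches one embedding per sentence; the corpus size below is an arbitrary assumption.

```python
# Forward-pass counts only; illustrates the repetitive-inference cost
# the sentence above describes, not any specific paper's method.
n = 10_000                                # sentences in a corpus

cross_encoder_passes = n * (n - 1) // 2   # one pass per sentence pair
bi_encoder_passes = n                     # one pass per sentence, then
                                          # cheap similarity lookups
print(cross_encoder_passes)               # 49995000
print(bi_encoder_passes)                  # 10000
```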