Book summarization
5 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention
This work introduces an efficient method to scale Transformer-based Large Language Models (LLMs) to infinitely long inputs with bounded memory and computation.
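The bounded-memory idea can be illustrated with a small sketch: past segments are folded into a fixed-size compressive memory via a linear-attention-style update, and later queries read from it in constant space. This is a hedged illustration, not the paper's full method (which also gates the memory readout against local dot-product attention, and offers a delta-rule update variant); the function names and the `ELU + 1` feature map are assumptions for the sketch.

```python
import numpy as np

def elu_plus_one(x):
    # ELU(x) + 1 keeps features positive so the memory read is a
    # well-defined weighted average (feature-map choice assumed here).
    return np.where(x > 0, x + 1.0, np.exp(x))

def infini_attention_segment(q, k, v, M, z):
    """One segment of compressive-memory attention (hedged sketch).

    q, k: (n, d_k); v: (n, d_v); M: (d_k, d_v) memory; z: (d_k,) normalizer.
    Memory size is fixed regardless of how many segments have been seen,
    which is what bounds memory for arbitrarily long inputs.
    Returns the memory readout for this segment and the updated (M, z).
    """
    sq, sk = elu_plus_one(q), elu_plus_one(k)
    # read from the memory accumulated over all past segments
    a_mem = (sq @ M) / (sq @ z)[:, None]
    # fold this segment's keys/values into the memory (linear-attention update)
    M = M + sk.T @ v
    z = z + sk.sum(axis=0)
    return a_mem, M, z
```

In use, `M` starts at zeros and `z` at a small positive constant; segments of a long book are then streamed through one at a time, so compute per segment stays constant.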
Enhancing Large Language Model with Self-Controlled Memory Framework
Large Language Models (LLMs) are constrained by their inability to process lengthy inputs, resulting in the loss of critical historical information.
Unlimiformer: Long-Range Transformers with Unlimited Length Input
Unlimiformer offloads attention over long inputs to a kNN index of input keys. The index can be kept in either GPU or CPU memory and queried in sub-linear time, so practically unlimited input sequences can be indexed, while every attention head in every decoder layer retrieves only its top-k keys instead of attending to every key.
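The retrieval step can be sketched as follows. This is a brute-force stand-in for illustration only: a real deployment would use an approximate-nearest-neighbor library (e.g. FAISS) to get the sub-linear query time described above, and the function names here are assumptions.

```python
import numpy as np

def build_index(keys):
    # Stand-in "index": just the key matrix, (n_tokens, d).
    # An ANN library would preprocess this for sub-linear queries.
    return np.asarray(keys)

def retrieve_top_k(index, query, k):
    """Return indices and keys of the top-k keys by dot-product score.

    An attention head then computes softmax attention over only these
    k keys instead of all n_tokens keys.
    """
    scores = index @ query                  # (n_tokens,) dot-product scores
    top = np.argpartition(-scores, k - 1)[:k]
    top = top[np.argsort(-scores[top])]     # order the k hits by score
    return top, index[top]
```

Because the same index serves every head and layer, the full input is encoded once while decoding cost depends on k, not on the input length.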
Echoes from Alexandria: A Large Resource for Multilingual Book Summarization
In recent years, research in text summarization has mainly focused on the news domain, where texts are typically short and have strong layout features.
LOCOST: State-Space Models for Long Document Abstractive Summarization
State-space models are a low-complexity alternative to transformers for encoding long sequences and capturing long-term dependencies.
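The low-complexity claim comes from the recurrent form of a state-space model: each token updates a fixed-size state, so a length-T document costs O(T) rather than the O(T^2) of self-attention. A minimal sketch of a discrete linear SSM scan (not LOCOST's trained architecture; `A`, `B`, `C` here are arbitrary illustrative matrices):

```python
import numpy as np

def ssm_scan(A, B, C, u):
    """Run a discrete linear state-space model over a sequence.

    x_t = A x_{t-1} + B u_t,   y_t = C x_t
    A: (d, d), B: (d, m), C: (p, d), u: (T, m).
    The state x has fixed size d, so memory is constant in T and
    compute is linear in T -- the appeal for book-length inputs.
    """
    x = np.zeros(A.shape[0])
    ys = []
    for u_t in u:
        x = A @ x + B @ u_t   # fold the next token into the state
        ys.append(C @ x)      # emit an output from the current state
    return np.stack(ys)
```

For example, with scalar `A = 0.5`, `B = C = 1`, an impulse input decays geometrically through the state: the long-term dependency is carried by `A`'s spectrum rather than by pairwise attention scores.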