1 code implementation • 20 Mar 2024 • Dongwei Jiang, Marcio Fonseca, Shay B. Cohen
Large language models (LLMs) often struggle with complex logical reasoning, owing both to logical inconsistencies in their outputs and to the inherent difficulty of such reasoning.
no code implementations • 18 Jan 2024 • Marcio Fonseca, Shay B. Cohen
We also show that we can improve the controllability of LLMs with keyword-based classifier-free guidance (CFG) while achieving lexical overlap comparable to strong fine-tuned baselines on arXiv and PubMed.
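Classifier-free guidance at decoding time is commonly implemented by contrasting conditional (here, keyword-prompted) and unconditional next-token logits. A minimal sketch of that general technique with made-up numbers, not the paper's actual implementation:

```python
import numpy as np

def cfg_logits(cond_logits, uncond_logits, guidance_scale):
    """Blend conditional and unconditional next-token logits.

    Pushes the distribution toward the keyword-conditioned prediction
    and away from the unconditioned one; guidance_scale > 1 strengthens
    the conditioning signal.
    """
    return uncond_logits + guidance_scale * (cond_logits - uncond_logits)

# Toy 4-token vocabulary; logit values are purely illustrative.
uncond = np.array([1.0, 0.5, 0.2, 0.1])
cond = np.array([0.2, 2.0, 0.1, 0.1])  # conditioned on a keyword prompt
guided = cfg_logits(cond, uncond, guidance_scale=1.5)
# Token 1, favored under the keyword condition, now dominates.
```

With `guidance_scale=1.0` this reduces to ordinary conditional decoding; larger values trade fluency for stronger keyword adherence.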
no code implementations • 15 Nov 2023 • Marcio Fonseca, Shay B. Cohen
Although large language models (LLMs) exhibit remarkable capacity to leverage in-context demonstrations, it is still unclear to what extent they can learn new concepts or facts from ground-truth labels.
1 code implementation • 25 May 2022 • Marcio Fonseca, Yftah Ziser, Shay B. Cohen
We argue that disentangling content selection from the budget used to cover salient content improves the performance and applicability of abstractive summarizers.
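The idea of separating what to include from how long the summary may be can be sketched as a toy two-stage pipeline. The sentences, salience scores, and budget below are invented for illustration; this is not the paper's model:

```python
def select_content(sentences, scores):
    """Stage 1: rank sentences by salience, independent of any length budget."""
    return [s for _, s in sorted(zip(scores, sentences), reverse=True)]

def fit_budget(ranked_sentences, budget_tokens):
    """Stage 2: greedily pack the top-ranked sentences into a token budget."""
    out, used = [], 0
    for s in ranked_sentences:
        n = len(s.split())
        if used + n <= budget_tokens:
            out.append(s)
            used += n
    return out

docs = ["the model works well", "we ran experiments", "unrelated filler text"]
scores = [0.9, 0.7, 0.1]       # hypothetical salience scores
ranked = select_content(docs, scores)
summary = fit_budget(ranked, budget_tokens=8)
```

Because the budget enters only in the second stage, the same salience ranking can serve summaries of any target length — the decoupling the abstract argues for.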
Ranked #1 on Text Summarization on GovReport
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Marcio Fonseca
Deep predictive coding networks are neuroscience-inspired unsupervised learning models that learn to predict future sensory states.
1 code implementation • 30 Jun 2019 • Marcio Fonseca
Deep predictive coding networks are neuroscience-inspired unsupervised learning models that learn to predict future sensory states.
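As a toy illustration of the objective these two entries describe — predicting future sensory states and learning from the prediction error — a linear predictor trained with a delta rule. This is an assumption-laden sketch of the general predictive-coding idea, not the paper's actual deep network:

```python
import numpy as np

# Toy "sensory" stream: each state is a rotation of the previous one,
# plus a little observation noise (dynamics chosen for illustration).
theta = 0.5
true_W = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
rng = np.random.default_rng(0)
states = [np.array([1.0, 0.5])]
for _ in range(200):
    states.append(true_W @ states[-1] + 0.01 * rng.normal(size=2))

# An unsupervised linear predictor learns to forecast the next state by
# minimizing its prediction error -- the core of predictive coding.
W = np.zeros((2, 2))
lr = 0.1
for x, x_next in zip(states[:-1], states[1:]):
    error = x_next - W @ x        # prediction error drives learning
    W += lr * np.outer(error, x)  # delta-rule (LMS) update

# W now approximates the true transition dynamics true_W.
```

The learning signal is entirely self-generated from the sensory stream (no labels), which is what makes predictive coding an unsupervised learning principle.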