Search Results for author: Jonathan Pilault

Found 11 papers, 3 with code

Course Correcting Koopman Representations

no code implementations · 23 Oct 2023 · Mahan Fathi, Clement Gehring, Jonathan Pilault, David Kanaa, Pierre-Luc Bacon, Ross Goroshin

Koopman representations aim to learn features of nonlinear dynamical systems (NLDS) which lead to linear dynamics in the latent space.
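For context, the Koopman setup can be stated in one line; the symbols below (f, φ, K) are generic notation, not taken from the paper:

```latex
% Generic Koopman picture (notation assumed, not from the paper):
% the encoder \phi maps states of the NLDS into a latent space where a
% single matrix K advances the dynamics linearly.
x_{t+1} = f(x_t), \qquad z_t = \phi(x_t), \qquad z_{t+1} \approx K z_t
```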

On Conditional and Compositional Language Model Differentiable Prompting

no code implementations · 4 Jul 2023 · Jonathan Pilault, Can Liu, Mohit Bansal, Markus Dreyer

Prompts have been shown to be an effective method to adapt a frozen Pretrained Language Model (PLM) to perform well on downstream tasks.
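As a rough illustration of differentiable prompting in general (a minimal sketch, not the paper's conditional or compositional method; the module name and sizes are invented):

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Trainable prompt vectors prepended to a frozen PLM's input embeddings.
    Only these parameters receive gradients; the PLM's weights stay fixed."""
    def __init__(self, embed_dim: int, prompt_len: int = 20):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(prompt_len, embed_dim) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq_len, embed_dim) from the frozen embedding layer
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)
```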

Tasks: Few-Shot Learning · Language Modelling · +1

Using Graph Algorithms to Pretrain Graph Completion Transformers

no code implementations · 14 Oct 2022 · Jonathan Pilault, Michael Galkin, Bahare Fatemi, Perouz Taslakian, David Vazquez, Christopher Pal

While using our new path-finding algorithm as a pretraining signal provides 2-3% MRR improvements, we show that pretraining on all signals together gives the best knowledge graph completion results.
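For reference, the MRR figure quoted above is the standard mean reciprocal rank used to score knowledge graph completion; a minimal version (generic metric code, not the paper's implementation):

```python
def mean_reciprocal_rank(ranks: list[int]) -> float:
    """ranks[i] is the position (1 = best) of the true entity among the
    model's ranked candidates for query i."""
    return sum(1.0 / r for r in ranks) / len(ranks)

# Example: true entities ranked 1st, 3rd, and 10th by the model
print(mean_reciprocal_rank([1, 3, 10]))  # ~0.478
```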

Tasks: Knowledge Graph Completion · Knowledge Graph Embedding · +1

Towards Neural Functional Program Evaluation

no code implementations · NeurIPS Workshop AIPLANS 2021 · Torsten Scholak, Jonathan Pilault, Joey Velez-Ginorio

This paper explores the capabilities of current transformer-based language models for program evaluation of simple functional programming languages.
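To make the task concrete, one input/target pair of the kind such a probe might use (a hypothetical example, not drawn from the paper's benchmark):

```python
# The model sees the program text and must produce its evaluated result.
example = {
    "program": "let double = \\x -> x + x in double (double 3)",
    "target": "12",
}
```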

Mem2Mem: Learning to Summarize Long Texts with Memory Compression and Transfer

no code implementations · 1 Jan 2021 · Jonathan Pilault, Jaehong Park, Christopher Pal

We introduce Mem2Mem, a memory-to-memory mechanism for hierarchical recurrent neural network-based encoder-decoder architectures, and we explore its use for abstractive document summarization.
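The memory-transfer idea can be sketched as compressing many encoder states into a few memory slots handed to the decoder; the attention-pooling design below is an assumption for illustration, not Mem2Mem's actual mechanism:

```python
import torch
import torch.nn as nn

class MemoryCompression(nn.Module):
    """Compress (batch, src_len, hidden) encoder states into a fixed number
    of memory slots via learned attention queries."""
    def __init__(self, hidden: int, slots: int = 32):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(slots, hidden) * 0.02)

    def forward(self, enc_states: torch.Tensor) -> torch.Tensor:
        scores = torch.einsum("kh,bsh->bks", self.queries, enc_states)
        weights = scores.softmax(dim=-1)  # attend over source positions
        return torch.einsum("bks,bsh->bkh", weights, enc_states)
```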

Tasks: Abstractive Text Summarization · Document Summarization · +1

Learning to Summarize Long Texts with Memory Compression and Transfer

no code implementations · 21 Oct 2020 · Jaehong Park, Jonathan Pilault, Christopher Pal

We introduce Mem2Mem, a memory-to-memory mechanism for hierarchical recurrent neural network-based encoder-decoder architectures, and we explore its use for abstractive document summarization.

Tasks: Abstractive Text Summarization · Document Summarization · +1

Conditionally Adaptive Multi-Task Learning: Improving Transfer Learning in NLP Using Fewer Parameters & Less Data

1 code implementation · ICLR 2021 · Jonathan Pilault, Amine Elhattami, Christopher Pal

Through this construction (a hypernetwork adapter), we achieve more efficient parameter sharing and mitigate forgetting by keeping half of the weights of a pretrained model fixed.
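A rough sketch of the hypernetwork-adapter idea: a small network generates adapter weights from a task embedding, so tasks share the generator's parameters while the backbone stays partly frozen. This is a generic illustration under that reading, not CA-MTL's actual architecture:

```python
import torch
import torch.nn as nn

class HypernetAdapter(nn.Module):
    """Adapter whose down/up projection weights are emitted by a hypernetwork
    conditioned on a task embedding."""
    def __init__(self, hidden: int, bottleneck: int, task_dim: int):
        super().__init__()
        self.down_gen = nn.Linear(task_dim, hidden * bottleneck)
        self.up_gen = nn.Linear(task_dim, bottleneck * hidden)
        self.hidden, self.bottleneck = hidden, bottleneck

    def forward(self, x: torch.Tensor, task_emb: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, hidden); task_emb: (task_dim,)
        w_down = self.down_gen(task_emb).view(self.hidden, self.bottleneck)
        w_up = self.up_gen(task_emb).view(self.bottleneck, self.hidden)
        return x + torch.relu(x @ w_down) @ w_up  # residual adapter update
```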

Tasks: Multi-Task Learning · Natural Language Inference

On the impressive performance of randomly weighted encoders in summarization tasks

no code implementations · 21 Feb 2020 · Jonathan Pilault, Jaehong Park, Christopher Pal

In this work, we investigate untrained, randomly initialized encoders in a general class of sequence-to-sequence models and compare their performance with that of fully trained encoders on the task of abstractive summarization.
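The experimental contrast amounts to a few lines: keep the encoder at its random initialization and freeze it, training only the rest of the seq2seq model. The LSTM sizes below are illustrative assumptions:

```python
import torch.nn as nn

encoder = nn.LSTM(input_size=256, hidden_size=512, num_layers=2, batch_first=True)
for p in encoder.parameters():
    p.requires_grad = False  # untrained, randomly initialized, frozen
# The decoder (not shown) is trained as usual on abstractive summarization
# and compared against the same model with a fully trained encoder.
```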

Tasks: Abstractive Text Summarization
