1 code implementation • 5 Oct 2023 • Tom Sherborne, Naomi Saphra, Pradeep Dasigi, Hao Peng
We propose Trust Region Aware Minimization (TRAM), a SAM algorithm that fine-tunes for low parameter sharpness and for smooth, informative representations that preserve pre-trained structure.
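TRAM builds on Sharpness-Aware Minimization (SAM). As a hedged illustration of the underlying SAM step only (not the paper's TRAM procedure, and on an assumed toy quadratic loss rather than a real model), one update can be sketched as:

```python
import math

def grad(w):
    # gradient of an assumed toy loss f(w) = sum(wi^2)
    return [2.0 * wi for wi in w]

def sam_step(w, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization step: ascend within a
    rho-ball to the worst-case perturbation, then descend using
    the gradient taken at that perturbed point."""
    g = grad(w)
    norm = math.sqrt(sum(gi * gi for gi in g)) + 1e-12
    # worst-case perturbation along the normalized gradient
    w_adv = [wi + rho * gi / norm for wi, gi in zip(w, g)]
    g_adv = grad(w_adv)
    # descent step using the perturbed-point gradient
    return [wi - lr * gi for wi, gi in zip(w, g_adv)]

w = [1.0, -2.0]
for _ in range(100):
    w = sam_step(w)
```

TRAM additionally constrains the update with a trust region to keep representations close to the pre-trained ones; that part is specific to the paper and is not reproduced here.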
no code implementations • 19 Jul 2023 • Hao Peng, Qingqing Cao, Jesse Dodge, Matthew E. Peters, Jared Fernandez, Tom Sherborne, Kyle Lo, Sam Skjonsberg, Emma Strubell, Darrell Plessas, Iz Beltagy, Evan Pete Walsh, Noah A. Smith, Hannaneh Hajishirzi
In response, we introduce Pentathlon, a benchmark for holistic and realistic evaluation of model efficiency.
1 code implementation • 9 Jul 2023 • Tom Sherborne, Tom Hosking, Mirella Lapata
Cross-lingual semantic parsing transfers parsing capability from a high-resource language (e.g., English) to low-resource languages with scarce training data.
no code implementations • 24 May 2023 • Ananya Harsh Jha, Tom Sherborne, Evan Pete Walsh, Dirk Groeneveld, Emma Strubell, Iz Beltagy
As large language models (LLMs) grow in size, we need compression methods that reduce model size while preserving the generality and zero-shot promptability of the model.
no code implementations • 20 Dec 2022 • Nikita Moghe, Tom Sherborne, Mark Steedman, Alexandra Birch
We calculate the correlation between the metric's ability to predict a good/bad translation and success/failure on the final task in the Translate-Test setup.
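As a generic illustration of this kind of correlation analysis (the scores, labels, and choice of Pearson correlation below are assumptions for the sketch, not details taken from the paper), one can correlate metric scores with binary task outcomes:

```python
def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# hypothetical translation-metric scores and downstream task
# success labels (1 = final task succeeded, 0 = failed)
scores = [0.9, 0.8, 0.4, 0.3, 0.7]
labels = [1, 1, 0, 0, 1]
r = pearson(scores, labels)
```

A high positive `r` would indicate the metric's quality judgments track downstream task success.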
1 code implementation • 26 Sep 2022 • Tom Sherborne, Mirella Lapata
We introduce a first-order meta-learning algorithm to train a semantic parser with maximal sample efficiency during cross-lingual transfer.
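The abstract does not specify the algorithm, but a first-order meta-learning update in the style of Reptile (an assumption for illustration, not necessarily the paper's method) can be sketched on an assumed toy task family:

```python
import random

def inner_sgd(w, target, steps=5, lr=0.1):
    # adapt to one task: minimize the toy loss (w - target)^2
    for _ in range(steps):
        w = w - lr * 2.0 * (w - target)
    return w

def reptile(targets, meta_lr=0.5, epochs=200, seed=0):
    """First-order meta-learning: after adapting to a sampled task,
    move the meta-parameters toward the adapted parameters instead
    of backpropagating through the inner loop."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        t = rng.choice(targets)          # sample a task
        w_adapt = inner_sgd(w, t)        # inner-loop adaptation
        w = w + meta_lr * (w_adapt - w)  # first-order meta-update
    return w

# meta-train over two hypothetical tasks with targets 1.0 and 3.0
w = reptile([1.0, 3.0])
```

The appeal for sample-efficient cross-lingual transfer is that the meta-parameters land where a few inner steps suffice to adapt to any task in the family.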
1 code implementation • ACL 2022 • Tom Sherborne, Mirella Lapata
Recent work in cross-lingual semantic parsing has successfully applied machine translation to localize parsers to new languages.
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Tom Sherborne, Yumo Xu, Mirella Lapata
When MT is inadequate, we also find that our approach achieves parsing accuracy within 2% of complete translation while using only 50% of the training data.
no code implementations • 19 Dec 2019 • Huiyuan Xie, Tom Sherborne, Alexander Kuhnle, Ann Copestake
Image captioning as a multimodal task has drawn much interest in recent years.