2 code implementations • 19 Dec 2022 • Bairu Hou, Joe O'Connor, Jacob Andreas, Shiyu Chang, Yang Zhang
Instead of directly optimizing in prompt space, PromptBoosting obtains a small pool of prompts via a gradient-free approach and then constructs a large pool of weak learners by pairing these prompts with different elements of the LM's output distribution.
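The construction described above can be illustrated with a minimal, self-contained sketch. Everything here is an assumption for illustration: the LM's output distribution is mocked with random per-example verbalizer scores (weakly correlated with the label to mimic a weak learner), and the ensemble is combined with standard AdaBoost, which the paper's boosting procedure resembles but need not match in detail.

```python
import math
import random

random.seed(0)

# Hypothetical setup: a "weak learner" pairs a prompt template with one
# verbalizer token drawn from the LM's output vocabulary. Real scores would
# come from the LM; here they are mocked as noisy, label-correlated values.
N_EXAMPLES, N_PROMPTS, N_VERBALIZERS = 100, 5, 4
labels = [random.randint(0, 1) for _ in range(N_EXAMPLES)]

# score[p][v][i] stands in for the LM's score of verbalizer token v given
# prompt p on example i; the label-dependent mean makes each (p, v) pair
# slightly better than chance, i.e. a weak learner.
score = [[[labels[i] - 0.5 + random.gauss(0, 1.0) for i in range(N_EXAMPLES)]
          for _ in range(N_VERBALIZERS)] for _ in range(N_PROMPTS)]

def predict(p, v, i):
    """Weak learner (p, v): predict class 1 iff the verbalizer score is positive."""
    return 1 if score[p][v][i] > 0 else 0

def weighted_error(p, v, weights):
    return sum(w for i, w in enumerate(weights) if predict(p, v, i) != labels[i])

# AdaBoost over the prompt x verbalizer pool: repeatedly pick the pair with
# the lowest weighted error, then upweight the examples it misclassifies.
weights = [1.0 / N_EXAMPLES] * N_EXAMPLES
ensemble = []  # (alpha, p, v) triples
for _ in range(10):
    p, v = min(((p, v) for p in range(N_PROMPTS) for v in range(N_VERBALIZERS)),
               key=lambda pv: weighted_error(pv[0], pv[1], weights))
    err = weighted_error(p, v, weights)
    if err >= 0.5:  # no remaining learner beats weighted chance
        break
    alpha = 0.5 * math.log((1 - err) / max(err, 1e-10))
    ensemble.append((alpha, p, v))
    weights = [w * math.exp(-alpha if predict(p, v, i) == labels[i] else alpha)
               for i, w in enumerate(weights)]
    z = sum(weights)
    weights = [w / z for w in weights]

def ensemble_predict(i):
    """Weighted majority vote of the boosted (prompt, verbalizer) learners."""
    vote = sum(a * (1 if predict(p, v, i) == 1 else -1) for a, p, v in ensemble)
    return 1 if vote > 0 else 0

acc = sum(ensemble_predict(i) == labels[i] for i in range(N_EXAMPLES)) / N_EXAMPLES
```

Note the design point the abstract highlights: the expensive prompt search is done once, gradient-free, for a small prompt pool, while the large learner pool comes cheaply from pairing those prompts with different verbalizer tokens.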
1 code implementation • ACL 2021 • Joe O'Connor, Jacob Andreas
Transformer-based language models benefit from conditioning on contexts of hundreds to thousands of previous tokens.