Deepening Hidden Representations from Pre-trained Language Models

Transformer-based pre-trained language models have proven to be effective for learning contextualized language representations. However, current approaches only take advantage of the output of the encoder's final layer when fine-tuning on downstream tasks...
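
As a rough illustration of the setup the abstract describes (not the paper's proposed method), the sketch below shows how the hidden states of every encoder layer can be exposed and combined instead of using only the final layer. It assumes the HuggingFace transformers library and a bert-base-uncased checkpoint; the learned softmax weighting over layers is a hypothetical combination strategy chosen for brevity.

# Minimal sketch, assuming HuggingFace transformers and PyTorch are installed.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

inputs = tokenizer("Hidden layers carry useful features too.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple: (embedding output, layer 1, ..., layer 12).
all_layers = torch.stack(outputs.hidden_states[1:])  # (num_layers, batch, seq_len, hidden)

# One simple (hypothetical) way to exploit intermediate layers: a learned
# softmax-weighted average over all layers rather than the final layer alone.
layer_weights = torch.nn.Parameter(torch.zeros(all_layers.size(0)))
mixed = (torch.softmax(layer_weights, dim=0)[:, None, None, None] * all_layers).sum(0)
print(mixed.shape)  # same shape as the final-layer output: (batch, seq_len, hidden)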

ICLR 2021 (under review): PDF · Abstract
No code implementations yet.

Results from the Paper


Methods used in the Paper