Mirror-BERT converts pretrained language models into effective universal text encoders without any supervision, in 20-30 seconds. It is an extremely simple, fast, and effective contrastive learning technique. It relies on fully identical or slightly modified string pairs as positive (i.e., synonymous) fine-tuning examples, and aims to maximise their similarity during identity fine-tuning.
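The identity fine-tuning objective described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: `toy_encode` is a hypothetical stand-in for a pooled BERT embedding, while the random-span-masking augmentation (producing a slightly modified positive view of each string) and the contrastive InfoNCE loss follow the description in the text.

```python
import numpy as np

def toy_encode(text, dim=16, seed=0):
    # Hypothetical stand-in for a pooled BERT embedding: a fixed random
    # projection of character counts, L2-normalised. In Mirror-BERT the
    # encoder is the pretrained language model being fine-tuned.
    rng = np.random.default_rng(seed)
    proj = rng.standard_normal((256, dim))
    vec = np.zeros(dim)
    for ch in text:
        vec += proj[ord(ch) % 256]
    return vec / (np.linalg.norm(vec) + 1e-9)

def random_span_mask(text, span=2, rng=None):
    # Input-side augmentation: delete a short random span, yielding a
    # slightly modified "positive" (synonymous) view of the same string.
    rng = rng or np.random.default_rng(42)
    if len(text) <= span:
        return text
    start = rng.integers(0, len(text) - span)
    return text[:start] + text[start + span:]

def info_nce_loss(anchors, positives, tau=0.05):
    # Contrastive (InfoNCE) loss: each anchor should be most similar to
    # its own positive view among all positives in the batch.
    sims = anchors @ positives.T / tau            # (B, B) similarity matrix
    sims -= sims.max(axis=1, keepdims=True)       # numerical stability
    log_probs = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy batch: each string is paired with its own masked copy as a positive.
batch = ["aspirin", "acetylsalicylic acid tablet", "mirror bert encoder"]
rng = np.random.default_rng(0)
A = np.stack([toy_encode(t) for t in batch])
P = np.stack([toy_encode(random_span_mask(t, rng=rng)) for t in batch])
loss = info_nce_loss(A, P)
print(f"InfoNCE loss on toy batch: {loss:.4f}")
```

During fine-tuning this loss would be minimised by gradient descent on the encoder's weights, pulling each string's two views together and pushing apart the other strings in the batch.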
Source: *Fast, Effective, and Self-Supervised: Transforming Masked Language Models into Universal Lexical and Sentence Encoders*
Task | Papers | Share |
---|---|---|
Sentence | 3 | 16.67% |
Entity Linking | 2 | 11.11% |
Language Modelling | 2 | 11.11% |
Semantic Textual Similarity | 2 | 11.11% |
Natural Language Understanding | 1 | 5.56% |
Bilingual Lexicon Induction | 1 | 5.56% |
Cross-Lingual Entity Linking | 1 | 5.56% |
Link Prediction | 1 | 5.56% |
XLM-R | 1 | 5.56% |