Search Results for author: Fredrik Carlsson

Found 10 papers, 4 papers with code

Fine-Grained Controllable Text Generation Using Non-Residual Prompting

1 code implementation ACL 2022 Fredrik Carlsson, Joey Öhman, Fangyu Liu, Severine Verlinden, Joakim Nivre, Magnus Sahlgren

We propose a resource-efficient method for converting a pre-trained CLM into a non-residual prompting architecture, and demonstrate its potential in a range of experiments, including the novel task of contextualized word inclusion (the task setup is sketched below).

Text Generation
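
As a rough illustration of the word-inclusion task (a naive sample-and-filter baseline, not the paper's non-residual prompting method; the model, prompt, and target word are arbitrary choices):

```python
# Naive baseline for the word-inclusion task: sample continuations from a
# plain causal LM and keep those that mention the target word. The paper's
# method steers generation directly instead of filtering after the fact.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

context = "The weather report for tomorrow says"
target_word = "storm"  # word the continuation should include

inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(
    **inputs,
    do_sample=True,
    num_return_sequences=8,
    max_new_tokens=30,
    pad_token_id=tokenizer.eos_token_id,
)

texts = tokenizer.batch_decode(outputs, skip_special_tokens=True)
hits = [t for t in texts if target_word in t.lower()]
print(f"{len(hits)}/{len(texts)} samples contain '{target_word}'")
```

Rejection sampling like this wastes compute on discarded samples, which is precisely the inefficiency that prompt-based control methods aim to avoid.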

Cross-lingual and Multilingual CLIP

1 code implementation LREC 2022 Fredrik Carlsson, Philipp Eisen, Faton Rekathati, Magnus Sahlgren

The long-standing endeavor of relating the textual and visual domains recently saw a pivotal breakthrough when OpenAI released CLIP (basic CLIP usage is sketched below).

Contrastive Learning · Machine Translation +3
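
For context, here is minimal zero-shot image-text matching with the original English CLIP via the transformers library; the paper's contribution is multilingual text encoders for this setup, and the model name and image URL below are the library's standard example, not the paper's:

```python
# Zero-shot image-text matching with the original English CLIP.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(
    text=["a photo of a cat", "a photo of a dog"],
    images=image,
    return_tensors="pt",
    padding=True,
)
outputs = model(**inputs)
# Higher probability = better image-text match.
print(outputs.logits_per_image.softmax(dim=1))
```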

It’s Basically the Same Language Anyway: the Case for a Nordic Language Model

no code implementations NoDaLiDa 2021 Magnus Sahlgren, Fredrik Carlsson, Fredrik Olsson, Love Börjeson

When is it beneficial for a research community to organize a broader collaborative effort on a topic, and when should we instead promote individual efforts?

Language Modelling

The Nordic Pile: A 1.2TB Nordic Dataset for Language Modeling

no code implementations30 Mar 2023 Joey Öhman, Severine Verlinden, Ariel Ekgren, Amaru Cuba Gyllensten, Tim Isbister, Evangelia Gogoulou, Fredrik Carlsson, Magnus Sahlgren

Pre-training Large Language Models (LLMs) requires massive amounts of text data, and the performance of LLMs typically correlates with the scale and quality of the datasets.

Language Modelling

Should we Stop Training More Monolingual Models, and Simply Use Machine Translation Instead?

1 code implementation NoDaLiDa 2021 Tim Isbister, Fredrik Carlsson, Magnus Sahlgren

We demonstrate empirically that a large English language model coupled with modern machine translation outperforms native language models for most Scandinavian languages (the pipeline is sketched below).

Language Modelling · Machine Translation +1
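
A sketch of the translate-then-classify pipeline the paper evaluates, under illustrative assumptions: a Swedish-to-English OPUS-MT model plus the transformers library's default English sentiment classifier stand in for the paper's exact models and tasks, which may differ.

```python
# Translate Scandinavian text to English, then apply an English-language
# classifier, instead of training or using a native-language model.
from transformers import pipeline

translate = pipeline("translation", model="Helsinki-NLP/opus-mt-sv-en")
classify = pipeline("sentiment-analysis")  # default English model

swedish_text = "Filmen var fantastisk, jag älskade varenda minut."
english_text = translate(swedish_text)[0]["translation_text"]
print(english_text)   # English rendering of the Swedish input
print(classify(english_text))
```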

The Singleton Fallacy: Why Current Critiques of Language Models Miss the Point

no code implementations8 Feb 2021 Magnus Sahlgren, Fredrik Carlsson

By contrast, we argue that there are many different types of language use, meaning, and understanding, and that (current) language models are built with the explicit purpose of acquiring and representing one type of structural understanding of language.

Natural Language Understanding · Position

Semantic Re-tuning with Contrastive Tension

1 code implementation ICLR 2021 Fredrik Carlsson, Amaru Cuba Gyllensten, Evangelia Gogoulou, Erik Ylipää Hellqvist, Magnus Sahlgren

Extracting semantically useful natural language sentence representations from pre-trained deep neural networks such as Transformers remains a challenge (a common pooling baseline is sketched below).

Semantic Similarity · Semantic Textual Similarity +3
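
To make the problem concrete, here is the common mean-pooling baseline for extracting sentence embeddings from a pre-trained Transformer. This sketches the challenge the paper addresses, not the Contrastive Tension re-tuning itself, and the model choice is arbitrary.

```python
# Mean-pool token vectors from a pre-trained Transformer into a sentence
# embedding, then compare two sentences by cosine similarity.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(sentence: str) -> torch.Tensor:
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)            # average over tokens

a = embed("A man is playing a guitar.")
b = embed("Someone is playing music.")
print(torch.cosine_similarity(a, b, dim=0).item())
```

Roughly speaking, Contrastive Tension fine-tunes two copies of the pre-trained model so that such pooled embeddings score higher for identical sentences than for random sentence pairs.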
