no code implementations • 14 Sep 2022 • Sandra Wankmüller
To draw generalizing inferences about the performance effect of applying some method A rather than another method B, it is not sufficient to compare a few specific models produced by a few specific (and likely incomparable) processing systems.
no code implementations • 3 May 2022 • Sandra Wankmüller
More complex and costly methods such as query expansion techniques, topic model-based classification rules, and active as well as passive supervised learning could have the potential to separate relevant from irrelevant documents more accurately and thereby reduce the potential size of the retrieval bias.
no code implementations • 3 Feb 2021 • Sandra Wankmüller
Transformer-based models for transfer learning have the potential to achieve high prediction accuracies on text-based supervised learning tasks with relatively few training data instances.
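The core idea behind transfer learning with few training instances can be sketched in a dependency-free way: representations learned during pretraining are held fixed, and only a small task-specific component is fitted on a handful of labeled examples. The tiny `PRETRAINED` vector table below is a hypothetical stand-in for a Transformer encoder's output (in practice one would embed texts with a pretrained model such as BERT), and the nearest-centroid step stands in for task-specific fine-tuning.

```python
# Hypothetical "pretrained" word vectors standing in for representations
# from a pretrained Transformer encoder (assumption for illustration).
PRETRAINED = {
    "good": (1.0, 0.2), "great": (0.9, 0.1),
    "bad": (-0.8, 0.3), "awful": (-1.0, 0.2),
}

def embed(text):
    # Mean-pool the fixed pretrained vectors of the known tokens,
    # yielding one vector per text.
    vecs = [PRETRAINED[t] for t in text.split() if t in PRETRAINED]
    if not vecs:
        return (0.0, 0.0)
    n = len(vecs)
    return (sum(v[0] for v in vecs) / n, sum(v[1] for v in vecs) / n)

def fit_centroids(labeled):
    # Task-specific step: learn one class centroid from only a few
    # labeled instances; the pretrained representations do the heavy
    # lifting, so little supervised data is needed.
    sums, counts = {}, {}
    for text, label in labeled:
        x = embed(text)
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x[0], sy + x[1])
        counts[label] = counts.get(label, 0) + 1
    return {lb: (s[0] / counts[lb], s[1] / counts[lb])
            for lb, s in sums.items()}

def predict(text, centroids):
    # Assign the class whose centroid is nearest in embedding space.
    x = embed(text)
    return min(
        centroids,
        key=lambda lb: (x[0] - centroids[lb][0]) ** 2
                       + (x[1] - centroids[lb][1]) ** 2,
    )

# Hypothetical usage: two labeled instances suffice here because the
# pretrained vectors already separate the classes.
train = [("good great", "pos"), ("bad awful", "neg")]
```

The same division of labor explains the abstract's claim: when the encoder is pretrained, the supervised task needs only enough data to position a small decision component on top of already-informative representations.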