Search Results for author: Carsten Jentsch

Found 8 papers, 2 papers with code

Structural Periodic Vector Autoregressions

no code implementations25 Jan 2024 Daniel Dzikowski, Carsten Jentsch

While the seasonality inherent in raw macroeconomic data is commonly removed by seasonal adjustment techniques before the data are used for structural inference, this approach may distort valuable information contained in the data.
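
As a rough illustration of the seasonal adjustment step discussed above (not the paper's method), a naive adjustment subtracts quarter-specific means from a quarterly series; the series below is simulated and purely hypothetical.

    import numpy as np

    # Simulated quarterly series: trend + fixed quarterly pattern + noise (hypothetical data).
    rng = np.random.default_rng(0)
    t = np.arange(40)
    y = 0.02 * t + np.tile([0.5, -0.2, 0.1, -0.4], 10) + rng.normal(scale=0.1, size=40)

    # Naive seasonal adjustment: subtract the mean of each quarter before further analysis.
    quarter = t % 4
    y_adjusted = y - np.array([y[quarter == q].mean() for q in quarter])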

Prototypes as Explanation for Time Series Anomaly Detection

no code implementations4 Jul 2023 Bin Li, Carsten Jentsch, Emmanuel Müller

Detecting abnormal patterns that deviate from a certain regular repeating pattern in time series is essential in many big data applications.

Tasks: Anomaly Detection, Time Series, +1
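
A minimal sketch of the general idea, assuming the regular behaviour repeats with a known period: summarize the cycles by a prototype (here the pointwise median) and flag cycles that deviate strongly from it. The function name and threshold are illustrative, not taken from the paper.

    import numpy as np

    def flag_anomalous_cycles(x, period, threshold=3.0):
        """Flag cycles of a periodic series that deviate from a median prototype."""
        cycles = x[: len(x) // period * period].reshape(-1, period)
        prototype = np.median(cycles, axis=0)                # prototypical repeating pattern
        deviation = np.abs(cycles - prototype).mean(axis=1)  # mean absolute deviation per cycle
        cutoff = deviation.mean() + threshold * deviation.std()
        return np.where(deviation > cutoff)[0], prototype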

Lex2Sent: A bagging approach to unsupervised sentiment analysis

no code implementations26 Sep 2022 Kai-Robin Lange, Jonas Rieger, Carsten Jentsch

Unsupervised sentiment analysis is traditionally performed by counting the words of a text that appear in a sentiment lexicon and then assigning a label based on the proportion of positive and negative words found.

Tasks: Classification, Sentiment Analysis
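
The lexicon-counting baseline described above can be sketched in a few lines; the tiny lexicon and the tie-breaking rule are illustrative assumptions, not part of Lex2Sent.

    # Illustrative sentiment lexicon; real lexica contain thousands of entries.
    POSITIVE = {"good", "great", "excellent", "happy"}
    NEGATIVE = {"bad", "poor", "terrible", "sad"}

    def lexicon_label(text):
        """Label a text by the proportion of positive vs. negative lexicon words."""
        tokens = text.lower().split()
        pos = sum(tok in POSITIVE for tok in tokens)
        neg = sum(tok in NEGATIVE for tok in tokens)
        if pos == neg:
            return "neutral"
        return "positive" if pos > neg else "negative"

    print(lexicon_label("the food was good but the service was terrible and sad"))  # -> negative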

A Bootstrap-Assisted Self-Normalization Approach to Inference in Cointegrating Regressions

1 code implementation4 Apr 2022 Karsten Reichold, Carsten Jentsch

Traditional inference in cointegrating regressions requires tuning parameter choices to estimate a long-run variance parameter.
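
For context, the long-run variance mentioned here is typically estimated with a kernel estimator whose bandwidth is exactly the kind of tuning parameter that self-normalization avoids; below is a standard Bartlett-kernel (Newey-West type) sketch, not the paper's bootstrap-assisted procedure.

    import numpy as np

    def long_run_variance(u, bandwidth):
        """Bartlett-kernel (Newey-West) estimate of the long-run variance of u."""
        u = np.asarray(u, dtype=float)
        u = u - u.mean()
        n = len(u)
        lrv = np.sum(u * u) / n                     # lag-0 autocovariance
        for lag in range(1, bandwidth + 1):
            weight = 1.0 - lag / (bandwidth + 1.0)  # Bartlett weights; the bandwidth is the tuning parameter
            gamma = np.sum(u[lag:] * u[:-lag]) / n  # lag autocovariance
            lrv += 2.0 * weight * gamma
        return lrv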

Unsupervised Movement Detection in Indoor Positioning Systems of Production Halls

no code implementations21 Aug 2021 Jonathan Flossdorf, Anne Meyer, Dmitri Artjuch, Jaques Schneider, Carsten Jentsch

Besides its large volume, the analysis of the resulting raw data is challenging due to its susceptibility to noise.

Random boosting and random^2 forests -- A random tree depth injection approach

no code implementations13 Sep 2020 Tobias Markus Krabel, Thi Ngoc Tien Tran, Andreas Groll, Daniel Horn, Carsten Jentsch

A Monte Carlo simulation, in which tree-shaped data sets with different numbers of final partitions are built, suggests that there are several scenarios where Random Boost and Random^2 Forest can improve the prediction performance of conventional hierarchical boosting and random forest approaches.
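
One way to read the "random tree depth injection" idea, as a hedged sketch only (the paper's exact construction may differ): grow each bagged tree to a maximum depth drawn at random instead of using one fixed depth for the whole ensemble.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def fit_random_depth_forest(X, y, n_trees=100, depth_range=(1, 8), seed=0):
        """Bagged regression trees whose maximum depth is drawn anew for every tree."""
        rng = np.random.default_rng(seed)
        trees = []
        for _ in range(n_trees):
            idx = rng.integers(0, len(X), size=len(X))                      # bootstrap sample
            depth = int(rng.integers(depth_range[0], depth_range[1] + 1))   # injected random depth
            tree = DecisionTreeRegressor(max_depth=depth, random_state=int(rng.integers(10**6)))
            trees.append(tree.fit(X[idx], y[idx]))
        return trees

    def predict_forest(trees, X):
        return np.mean([tree.predict(X) for tree in trees], axis=0)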

Improving Reliability of Latent Dirichlet Allocation by Assessing Its Stability Using Clustering Techniques on Replicated Runs

no code implementations14 Feb 2020 Jonas Rieger, Lars Koppers, Carsten Jentsch, Jörg Rahnenführer

Building on the newly proposed measure of LDA stability, we propose a method to increase the reliability, and hence improve the reproducibility, of empirical findings based on topic modeling.

Tasks: Clustering
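
A minimal sketch of the replicated-runs idea, using scikit-learn's LDA rather than the authors' implementation (an assumption for illustration): fit the model several times with different seeds, match topics across runs, and average the similarity of the matched topics.

    import numpy as np
    from scipy.optimize import linear_sum_assignment
    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.metrics.pairwise import cosine_similarity

    def topic_stability(doc_term_matrix, n_topics=10, n_runs=5):
        """Mean similarity of optimally matched topics across replicated LDA runs."""
        runs = [
            LatentDirichletAllocation(n_components=n_topics, random_state=seed)
            .fit(doc_term_matrix).components_
            for seed in range(n_runs)
        ]
        scores = []
        for i in range(n_runs):
            for j in range(i + 1, n_runs):
                sim = cosine_similarity(runs[i], runs[j])
                rows, cols = linear_sum_assignment(-sim)   # one-to-one topic matching
                scores.append(sim[rows, cols].mean())
        return float(np.mean(scores))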
