1 code implementation • COLING 2022 • Arman Kazmi, Sidharth Ranjan, Arpit Sharma, Rajakrishnan Rajkumar
We also compared the classification accuracy of the logistic regression model with that of two deep-learning models.
no code implementations • 29 Apr 2024 • Sidharth Ranjan, Titus von der Malsburg
Dependency length minimization is a universally observed quantitative property of natural languages.
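A minimal sketch of the quantity in question: total dependency length is standardly computed as the sum of linear distances between each word and its syntactic head. The toy parse below is a hypothetical illustration, not data or code from the paper.

```python
# Minimal sketch (not the paper's code): total dependency length of a
# sentence, i.e. the sum of linear distances between each word and its head.

def total_dependency_length(heads):
    """heads[i] is the 1-based position of word i+1's head (0 = root)."""
    return sum(
        abs(dep_pos - head_pos)
        for dep_pos, head_pos in enumerate(heads, start=1)
        if head_pos != 0  # the root arc contributes no linear distance
    )

# Hypothetical parse of "The dog chased the cat":
# The->dog, dog->chased, chased->ROOT, the->cat, cat->chased
print(total_dependency_length([2, 3, 0, 5, 3]))  # 1 + 1 + 1 + 2 = 5
```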
no code implementations • 22 Apr 2023 • Sidharth Ranjan, Titus von der Malsburg
Additionally, for the task of distinguishing corpus sentences from counterfactual variants, we find that the dependency length and constituent length of the constituent closest to the main verb predict whether a sentence appeared in the corpus far better than total dependency length does.
no code implementations • 25 Oct 2022 • Sidharth Ranjan, Marten van Schijndel, Sumeet Agarwal, Rajakrishnan Rajkumar
While prior work has shown that a number of factors (e.g., information status, dependency length, and syntactic surprisal) influence Hindi word order preferences, the role of discourse predictability is underexplored in the literature.
no code implementations • 25 Oct 2022 • Sidharth Ranjan, Marten van Schijndel, Sumeet Agarwal, Rajakrishnan Rajkumar
By showing that different priming influences are separable from one another, our results support the hypothesis that multiple distinct cognitive mechanisms underlie priming.
no code implementations • WS 2019 • Mohammed Rameez Qureshi, Sidharth Ranjan, Rajakrishnan Rajkumar, Kushal Shah
In this work, we deploy a logistic regression classifier to ascertain whether a given document belongs to the fiction or non-fiction genre.
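A minimal sketch of such a setup, assuming scikit-learn with simple bag-of-words features; the paper's actual feature set and data are not reproduced here.

```python
# Minimal sketch, assuming scikit-learn and TF-IDF bag-of-words features;
# the documents and labels below are hypothetical stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = ["Once upon a time a dragon slept.",   # fiction (label 1)
        "The GDP grew by 2.3% in Q4."]        # non-fiction (label 0)
labels = [1, 0]

# Fit a logistic regression genre classifier on the toy corpus.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(docs, labels)

# Predict the genre of an unseen document.
print(clf.predict(["She wandered into the enchanted forest."]))
```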
no code implementations • WS 2019 • Sidharth Ranjan, Sumeet Agarwal, Rajakrishnan Rajkumar
Based on the Production-Distribution-Comprehension (PDC) account of language processing, we formulate two distinct hypotheses about case marking, word order choices and processing in Hindi.
no code implementations • WS 2018 • Ayush Jain, Vishal Singh, Sidharth Ranjan, Rajakrishnan Rajkumar, Sumeet Agarwal
According to the UNIFORM INFORMATION DENSITY (UID) hypothesis (Levy and Jaeger, 2007; Jaeger, 2010), speakers tend to distribute information uniformly across the signal while producing language.
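One common way to operationalize UID is the variance of per-word surprisal across a sentence: lower variance means information is spread more evenly. The sketch below is not the authors' implementation; it uses hypothetical word probabilities in place of language-model estimates.

```python
import math

# Minimal sketch of a variance-of-surprisal UID measure; the probabilities
# below are hypothetical stand-ins for language-model estimates.

def surprisal(p):
    """Surprisal in bits of a word with probability p in context."""
    return -math.log2(p)

def uid_variance(word_probs):
    """Variance of per-word surprisal; lower = more uniform density."""
    s = [surprisal(p) for p in word_probs]
    mean = sum(s) / len(s)
    return sum((x - mean) ** 2 for x in s) / len(s)

print(uid_variance([0.2, 0.25, 0.18, 0.22]))  # near-uniform -> low variance
print(uid_variance([0.9, 0.01, 0.8, 0.02]))   # spiky -> high variance
```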