Search Results for author: Ojasv Kamal

Found 5 papers, 4 papers with code

Adversities are all you need: Classification of self-reported breast cancer posts on Twitter using Adversarial Fine-tuning

no code implementations • NAACL (SMM4H) 2021 • Adarsh Kumar, Ojasv Kamal, Susmita Mazumdar

In this paper, we describe our system entry for Shared Task 8 at SMM4H-2021, which is on automatic classification of self-reported breast cancer posts on Twitter.

Language Modelling

CLadder: Assessing Causal Reasoning in Language Models

1 code implementation • NeurIPS 2023 • Zhijing Jin, Yuen Chen, Felix Leeb, Luigi Gresele, Ojasv Kamal, Zhiheng Lyu, Kevin Blin, Fernando Gonzalez Adauto, Max Kleiman-Weiner, Mrinmaya Sachan, Bernhard Schölkopf

Much of the existing work in natural language processing (NLP) focuses on evaluating commonsense causal reasoning in LLMs, thus failing to assess whether a model can perform causal inference in accordance with a set of well-defined formal rules.

Causal Inference • Commonsense Causal Reasoning +1

Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion

1 code implementation • 27 Jan 2023 • Flavio Schneider, Ojasv Kamal, Zhijing Jin, Bernhard Schölkopf

Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another "language" of communication -- music.

Image Generation • Music Generation +1

When to Make Exceptions: Exploring Language Models as Accounts of Human Moral Judgment

1 code implementation • 4 Oct 2022 • Zhijing Jin, Sydney Levine, Fernando Gonzalez, Ojasv Kamal, Maarten Sap, Mrinmaya Sachan, Rada Mihalcea, Josh Tenenbaum, Bernhard Schölkopf

Using a state-of-the-art large language model (LLM) as a basis, we propose a novel moral chain of thought (MORALCOT) prompting strategy that combines the strengths of LLMs with theories of moral reasoning developed in cognitive science to predict human moral judgments.

Language Modelling • Large Language Model +1
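As a rough illustration of the chain-of-thought idea described in the snippet above, here is a minimal sketch of a moral-reasoning prompt builder. The decomposition questions and the query_llm helper are illustrative assumptions, not the paper's exact MORALCOT prompt or code.

```python
# Hedged sketch of a moral chain-of-thought style prompt, loosely inspired by
# the MORALCOT idea above. The step questions and query_llm() are assumptions
# for illustration, not the authors' actual prompt or interface.
def build_moralcot_prompt(scenario: str) -> str:
    steps = [
        "Does the action in this scenario violate any rule?",
        "What is the purpose behind that rule?",
        "Who is harmed or benefited, and by how much, if an exception is made?",
        "Weighing the above, is it morally acceptable to break the rule here?",
    ]
    lines = [f"Scenario: {scenario}", "Answer the following step by step:"]
    lines += [f"{i + 1}. {q}" for i, q in enumerate(steps)]
    lines.append("Final answer (acceptable / not acceptable):")
    return "\n".join(lines)

def predict_moral_judgment(scenario: str, query_llm) -> str:
    """query_llm is any callable that sends a prompt string to an LLM and returns its text reply."""
    return query_llm(build_moralcot_prompt(scenario))
```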

Hostility Detection in Hindi leveraging Pre-Trained Language Models

1 code implementation • 14 Jan 2021 • Ojasv Kamal, Adarsh Kumar, Tejas Vaidhya

This paper harnesses attention-based pre-trained models fine-tuned on Hindi data, using the Hostile vs. Non-Hostile task as an auxiliary task and fusing its features for classification on the finer-grained sub-tasks.

Fake News Detection • Hate Speech Detection +1
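A minimal sketch of the auxiliary-task setup described in the snippet above: a shared pre-trained encoder, a binary Hostile vs. Non-Hostile head, and sub-task heads that consume the encoder features fused with the auxiliary output. The encoder name and head sizes are placeholders, not necessarily what the paper used.

```python
# Hedged sketch (not the authors' code): a shared encoder with a binary
# Hostile vs. Non-Hostile auxiliary head whose output is fused with the
# encoder's [CLS] features before each fine-grained sub-task head.
import torch
import torch.nn as nn
from transformers import AutoModel

class AuxiliaryFusionClassifier(nn.Module):
    def __init__(self, encoder_name="bert-base-multilingual-cased",
                 num_subtasks=4, hidden=768):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)  # placeholder model
        self.aux_head = nn.Linear(hidden, 2)  # Hostile vs. Non-Hostile
        # Each sub-task head sees encoder features concatenated with auxiliary logits.
        self.subtask_heads = nn.ModuleList(
            nn.Linear(hidden + 2, 2) for _ in range(num_subtasks)
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]             # [CLS] representation
        aux_logits = self.aux_head(cls)               # auxiliary prediction
        fused = torch.cat([cls, aux_logits], dim=-1)  # feature fusion
        subtask_logits = [head(fused) for head in self.subtask_heads]
        return aux_logits, subtask_logits
```

In training, the auxiliary and sub-task losses would typically be summed, so the coarse hostility signal regularizes the fine-grained heads.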
