Common Sense Reasoning
254 papers with code • 24 benchmarks • 52 datasets
Common sense reasoning tasks are intended to require the model to go beyond pattern recognition. Instead, the model should use "common sense" or world knowledge to make inferences.
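Many of these benchmarks take a multiple-choice form (in the style of CommonsenseQA or PIQA): a model scores each candidate answer and the highest-scoring one is compared against the gold label. A minimal sketch of that evaluation loop, using a toy stand-in score function rather than a real model (the item, function names, and scores below are illustrative assumptions, not taken from any specific benchmark):

```python
# Hedged sketch of multiple-choice commonsense evaluation.
# A real evaluator would replace score_fn with a model's
# plausibility score for (question, choice) pairs.
item = {
    "question": "Where would you put a plate after washing it?",
    "choices": ["cupboard", "river", "oven"],
    "answer": 0,  # index of the gold choice
}

def predict(question, choices, score_fn):
    # Pick the choice the model finds most plausible.
    scores = [score_fn(question, c) for c in choices]
    return max(range(len(choices)), key=scores.__getitem__)

# Toy scores favoring the world-knowledge-consistent answer.
toy_scores = {"cupboard": 0.9, "river": 0.1, "oven": 0.3}
pred = predict(item["question"], item["choices"],
               lambda q, c: toy_scores[c])
print(pred == item["answer"])  # -> True
```

Accuracy on such a benchmark is then just the fraction of items where the predicted index matches the gold index.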
Latest papers with no code
Leveraging Large Language Model-based Room-Object Relationships Knowledge for Enhancing Multimodal-Input Object Goal Navigation
In this study, we propose a data-driven, modular approach, trained on a dataset that incorporates common-sense knowledge of object-to-room relationships extracted from a large language model.
To Help or Not to Help: LLM-based Attentive Support for Human-Robot Group Interactions
In addition to following user instructions, Attentive Support can decide when and how to support the humans, and when to remain silent so as not to disturb the group.
LogicalDefender: Discovering, Extracting, and Utilizing Common-Sense Knowledge
Experiments show that our model has achieved better logical performance, and the extracted logical knowledge can be effectively applied to other scenarios.
PhD: A Prompted Visual Hallucination Evaluation Dataset
The rapid growth of Large Language Models (LLMs) has driven the development of Large Vision-Language Models (LVLMs).
ContextGPT: Infusing LLMs Knowledge into Neuro-Symbolic Activity Recognition Models
Neuro-Symbolic AI (NeSy) provides an interesting research direction to mitigate this issue, by infusing common-sense knowledge about human activities and the contexts in which they can be performed into HAR deep learning classifiers.
How to Understand Named Entities: Using Common Sense for News Captioning
Our approach consists of three modules: (a) the Filter Module aims to clarify the common sense concerning a named entity from two aspects: what does it mean?
Repeated Padding as Data Augmentation for Sequential Recommendation
Specifically, we use the original interaction sequences as the padding content and fill them into the padding positions during model training.
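The idea in the snippet above is that, instead of padding short interaction sequences with a constant token, the sequence itself is repeated to fill the padding positions. A minimal sketch of that idea (the function name and the tail-alignment choice are assumptions for illustration, not the paper's implementation):

```python
def repeated_padding(seq, max_len):
    """Fill padding positions by tiling the interaction sequence
    itself, rather than inserting a constant pad token (sketch)."""
    assert seq, "sequence must be non-empty"
    if len(seq) >= max_len:
        return seq[-max_len:]          # truncate to the most recent items
    reps = -(-max_len // len(seq))     # ceil division: enough repetitions
    return (seq * reps)[-max_len:]     # keep the tail so it ends on the original

print(repeated_padding([5, 8, 2], 8))  # -> [8, 2, 5, 8, 2, 5, 8, 2]
```

Compared with zero padding, every position now carries a real item ID, so the model sees no wasted "empty" positions during training.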
Telecom Language Models: Must They Be Large?
The increasing interest in Large Language Models (LLMs) within the telecommunications sector underscores their potential to revolutionize operational efficiency.
The Claude 3 Model Family: Opus, Sonnet, Haiku
We introduce Claude 3, a new family of large multimodal models: Claude 3 Opus, our most capable offering; Claude 3 Sonnet, which provides a combination of skills and speed; and Claude 3 Haiku, our fastest and least expensive model.
SERVAL: Synergy Learning between Vertical Models and LLMs towards Oracle-Level Zero-shot Medical Prediction
Recent development of large language models (LLMs) has exhibited impressive zero-shot proficiency on generic and common sense questions.