Common Sense Reasoning
254 papers with code • 24 benchmarks • 52 datasets
Common sense reasoning tasks are intended to require the model to go beyond pattern recognition. Instead, the model should use "common sense" or world knowledge to make inferences.
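Common sense benchmarks are usually framed as multiple-choice items: the model scores each candidate answer and the top-scoring choice is compared against the gold label. The sketch below illustrates that evaluation loop; the item and the `toy_score` function are invented for illustration (a real harness would score choices with a language model's log-probabilities).

```python
# Minimal sketch of multiple-choice common sense evaluation.
# `score_choice` is a stand-in for a model's scoring function.

def evaluate_item(question, choices, gold, score_choice):
    """Return True if the highest-scoring choice matches the gold answer."""
    scores = {c: score_choice(question, c) for c in choices}
    prediction = max(scores, key=scores.get)
    return prediction == gold

# Illustrative item in the style of commonsense benchmarks (not from a real dataset).
item = {
    "question": "If you drop a glass on a tile floor, what is most likely to happen?",
    "choices": ["it shatters", "it floats", "it melts"],
    "gold": "it shatters",
}

# Toy scorer backed by a tiny hand-written world-knowledge table;
# it counts known effects of objects mentioned in the question.
WORLD_KNOWLEDGE = {"glass": {"shatters", "breaks"}, "ice": {"melts"}}

def toy_score(question, choice):
    words = set(question.lower().replace("?", "").split())
    hits = 0
    for obj, effects in WORLD_KNOWLEDGE.items():
        if obj in words:
            hits += sum(effect in choice for effect in effects)
    return hits

correct = evaluate_item(item["question"], item["choices"], item["gold"], toy_score)
print(correct)  # → True
```

The point of the sketch is the loop structure, not the scorer: pattern-matching scorers like `toy_score` are exactly what these benchmarks are designed to defeat, which is why real evaluations swap in a learned model.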
Libraries
Use these libraries to find Common Sense Reasoning models and implementations.
Datasets
Subtasks
Latest papers with no code
Exploring AIGC Video Quality: A Focus on Visual Harmony, Video-Text Consistency and Domain Distribution Gap
The recent advancements in Text-to-Video Artificial Intelligence Generated Content (AIGC) have been remarkable.
Concept Induction using LLMs: a user experiment for assessment
To evaluate the output, we compare the concepts generated by the LLM with two other methods: concepts generated by humans and the ECII heuristic concept induction system.
Enhancing 3D Fidelity of Text-to-3D using Cross-View Correspondences
Leveraging multi-view diffusion models as priors for 3D optimization has alleviated 3D-consistency problems, e.g., the Janus face problem and content drift, in zero-shot text-to-3D models.
Deep Reinforcement Learning-Based Approach for a Single Vehicle Persistent Surveillance Problem with Fuel Constraints
This article presents a deep reinforcement learning-based approach to a persistent surveillance mission in which a single unmanned aerial vehicle, initially stationed at a depot and subject to fuel or time-of-flight constraints, must repeatedly visit a set of equal-priority targets.
DELTA: Decomposed Efficient Long-Term Robot Task Planning using Large Language Models
Recent advancements in Large Language Models (LLMs) have sparked a revolution across various research fields.
Auditing Large Language Models for Enhanced Text-Based Stereotype Detection and Probing-Based Bias Evaluation
Recent advancements in Large Language Models (LLMs) have significantly increased their presence in human-facing Artificial Intelligence (AI) applications.
Detect2Interact: Localizing Object Key Field in Visual Question Answering (VQA) with LLMs
As a result, Detect2Interact achieves consistent qualitative results on object key field detection across extensive test cases and outperforms existing VQA systems that use object detection by providing a more reasonable and finer visual representation.
ITCMA: A Generative Agent Based on a Computational Consciousness Structure
ITCMA enhances LLMs' ability to understand implicit instructions and apply common-sense knowledge by considering agents' interaction and reasoning with the environment.
LC-LLM: Explainable Lane-Change Intention and Trajectory Predictions with Large Language Models
To the best of our knowledge, this is the first attempt to utilize LLMs for predicting lane change behavior.
Hallucination Detection in Foundation Models for Decision-Making: A Flexible Definition and Review of the State of the Art
The rise of foundation models trained on multiple tasks with impressively large datasets from a variety of fields has led researchers to believe that these models may provide common sense reasoning that existing planners are missing.