Common Sense Reasoning

259 papers with code • 24 benchmarks • 52 datasets

Common sense reasoning tasks are intended to require the model to go beyond pattern recognition. Instead, the model should use "common sense" or world knowledge to make inferences.
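To make the distinction concrete, here is a minimal, hedged sketch of a CommonsenseQA-style multiple-choice item together with a pure pattern-matching baseline. The item and the word-overlap heuristic are illustrative inventions, not drawn from any benchmark in this list; they show why surface pattern recognition alone is insufficient when the correct answer requires world knowledge.

```python
def overlap_score(question: str, choice: str) -> int:
    """Count words the choice shares with the question (pure pattern matching)."""
    q_words = set(question.lower().split())
    return sum(1 for w in choice.lower().split() if w in q_words)

# Hypothetical commonsense item: the correct answer shares no surface
# vocabulary with the question, while the distractors repeat "milk".
item = {
    "question": "Where would you put milk to keep it from spoiling?",
    "choices": ["on the milk table", "refrigerator", "milk container"],
    "answer": "refrigerator",
}

# The overlap baseline favors choices that echo question words, so it
# picks a distractor; answering correctly requires knowing that
# refrigeration prevents spoiling, i.e. world knowledge.
baseline_pick = max(item["choices"], key=lambda c: overlap_score(item["question"], c))
print(baseline_pick)   # → "on the milk table" (a distractor, not the answer)
```

A model with common sense should select "refrigerator" despite its zero lexical overlap with the question, which is exactly the gap these benchmarks probe.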

A Content-Based Novelty Measure for Scholarly Publications: A Proof of Concept

Wang-Haining/noveval 8 Jan 2024

Novelty, akin to gene mutation in evolution, opens possibilities for scholarly advancement.


Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks

wuhy68/parameter-efficient-moe 5 Jan 2024

Instruction tuning, a successful paradigm, enhances the ability of LLMs to follow natural language instructions and exhibit robust generalization across a wide range of tasks.


Collaborative Synthesis of Patient Records through Multi-Visit Health State Inference

p1nksnow/MSIC 22 Dec 2023

Furthermore, we propose to generate medical reports to add textual descriptions for each medical event, providing broader applications for synthesized EHR data.


A Semantic Space is Worth 256 Language Descriptions: Make Stronger Segmentation Models with Descriptive Properties

lambert-x/prolab 21 Dec 2023

Instead of relying solely on category-specific annotations, ProLab uses descriptive properties grounded in common sense knowledge for supervising segmentation models.


CORECODE: A Common Sense Annotated Dialogue Dataset with Benchmark Tasks for Chinese Large Language Models

danshi777/corecode 20 Dec 2023

With these pre-defined domains and slots, we collect 76,787 commonsense knowledge annotations from 19,700 dialogues through crowdsourcing.


Holodeck: Language Guided Generation of 3D Embodied AI Environments

allenai/Holodeck 14 Dec 2023

3D simulated environments play a critical role in Embodied AI, but their creation requires expertise and extensive manual effort, restricting their diversity and scope.


Generative agent-based modeling with actions grounded in physical, social, or digital space using Concordia

google-deepmind/concordia 6 Dec 2023

Agent-based modeling has been around for decades and has been applied widely across the social and natural sciences.


Mamba: Linear-Time Sequence Modeling with Selective State Spaces

state-spaces/mamba 1 Dec 2023

Foundation models, now powering most of the exciting applications in deep learning, are almost universally based on the Transformer architecture and its core attention module.


Speak Like a Native: Prompting Large Language Models in a Native Style

yangzhch6/alignedcot 22 Nov 2023

Specifically, with AlignedCoT, we observe an average +3.2% improvement for gpt-3.5-turbo compared to the carefully handcrafted CoT on multi-step reasoning benchmarks. Furthermore, we use AlignedCoT to rewrite the CoT text style in the training set, which improves the performance of Retrieval Augmented Generation by 3.6%. The source code and dataset are available at https://github.com/yangzhch6/AlignedCoT


A Language Agent for Autonomous Driving

usc-gvl/agent-driver 17 Nov 2023

Our approach, termed Agent-Driver, transforms the traditional autonomous driving pipeline by introducing a versatile tool library accessible via function calls, a cognitive memory of common sense and experiential knowledge for decision-making, and a reasoning engine capable of chain-of-thought reasoning, task planning, motion planning, and self-reflection.
