Knowledge Editing
32 papers with code • 1 benchmark • 2 datasets
Libraries

Use these libraries to find knowledge editing models and implementations.

Most implemented papers
Journey to the Center of the Knowledge Neurons: Discoveries of Language-Independent Knowledge Neurons and Degenerate Knowledge Neurons
We design cross-lingual knowledge editing experiments, demonstrating that PLMs can accomplish this task based on language-independent neurons, and we discover Degenerate Knowledge Neurons, a novel type of neuron showing that different knowledge neurons can store the same fact.
Cross-Lingual Knowledge Editing in Large Language Models
With the recent advancements in large language models (LLMs), knowledge editing has been shown as a promising technique to adapt LLMs to new knowledge without retraining from scratch.
Unveiling the Pitfalls of Knowledge Editing for Large Language Models
This paper pioneers the investigation into the potential pitfalls associated with knowledge editing for LLMs.
Untying the Reversal Curse via Bidirectional Language Model Editing
A new evaluation metric of reversibility is introduced, and a benchmark dubbed as Bidirectional Assessment for Knowledge Editing (BAKE) is constructed to evaluate the reversibility of edited models in recalling knowledge in the reverse direction of editing.
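The reversibility idea above can be sketched as a simple score: after applying each edit, probe the model in both the forward and the reverse direction and count how many edits are recalled both ways. A minimal illustrative sketch (the function names and the recall predicates are placeholders, not the actual BAKE benchmark implementation):

```python
def reversibility(edits, forward_recall, reverse_recall):
    """Fraction of edits that the edited model recalls in BOTH directions.

    `edits` is a list of (subject, relation, new_object) triples;
    `forward_recall` / `reverse_recall` are caller-supplied probes that
    query the edited model in each direction and return True on success.
    """
    both = sum(1 for e in edits if forward_recall(e) and reverse_recall(e))
    return both / len(edits)

# Toy usage: the model recalls every edit forward, but only the first
# one in the reverse direction, so the reversibility score is 0.5.
edits = [("France", "capital of", "Lyon"), ("Japan", "capital of", "Osaka")]
score = reversibility(edits,
                      forward_recall=lambda e: True,
                      reverse_recall=lambda e: e[0] == "France")
```

A score of 1.0 would mean every edit is also recallable in the reverse direction, i.e. the edit did not suffer from the reversal curse.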
Finding and Editing Multi-Modal Neurons in Pre-Trained Transformer
Multi-modal large language models (LLM) have achieved powerful capabilities for visual semantic understanding in recent years.
Evaluating Dependencies in Fact Editing for Language Models: Specificity and Implication Awareness
The potential of using a large language model (LLM) as a knowledge base (KB) has sparked significant interest.
History Matters: Temporal Knowledge Editing in Large Language Model
The imperative task of revising or updating the knowledge stored within large language models arises from two distinct sources: intrinsic errors inherent in the model, which should be corrected, and outdated knowledge due to external shifts in the real world, which should be updated.
Retrieval-augmented Multilingual Knowledge Editing
Knowledge represented in Large Language Models (LLMs) is quite often incorrect and can also become obsolete over time.
PokeMQA: Programmable knowledge editing for Multi-hop Question Answering
Multi-hop question answering (MQA) is one of the challenging tasks for evaluating a machine's comprehension and reasoning abilities, on which large language models (LLMs) have widely achieved human-comparable performance.
DeepEdit: Knowledge Editing as Decoding with Constraints
To enforce these constraints, we utilize a depth-first search to adaptively substitute new knowledge for the LLMs' original reasoning steps, greedily seeking the optimal path of multi-hop reasoning with new knowledge.
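The decoding-with-constraints idea above can be sketched as a greedy depth-first search over candidate reasoning steps that prefers steps backed by the new (edited) knowledge. A minimal illustrative sketch, not the paper's implementation (the `candidates` and `uses_new_knowledge` callables are hypothetical placeholders for the LLM's step proposals and the edit-store check):

```python
def dfs_edit(path, remaining_hops, candidates, uses_new_knowledge):
    """Greedy depth-first search for a multi-hop reasoning path.

    At each hop, candidate next steps are tried with edited-knowledge
    steps first, so new facts greedily substitute for the model's
    original reasoning steps. Returns the first complete path found.
    """
    if remaining_hops == 0:
        return path
    # Greedy ordering: steps that use new knowledge come first.
    ordered = sorted(candidates(path), key=uses_new_knowledge, reverse=True)
    for step in ordered:
        result = dfs_edit(path + [step], remaining_hops - 1,
                          candidates, uses_new_knowledge)
        if result is not None:
            return result
    return None  # dead end: backtrack

# Toy usage: the edited fact ("... is B") displaces the stale one ("... is A").
edited = {"The capital of X is B"}
def candidates(path):
    if not path:
        return ["The capital of X is A", "The capital of X is B"]
    return ["B is in Europe"]
path = dfs_edit([], 2, candidates, lambda step: step in edited)
```

The backtracking matters when a greedily chosen edited step leads to a dead end a few hops later: the search then falls back to the next-best candidate instead of failing outright.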