Search Results for author: Weimin Zhang

Found 5 papers, 1 paper with code

SWEA: Updating Factual Knowledge in Large Language Models via Subject Word Embedding Altering

1 code implementation • 31 Jan 2024 • Xiaopeng Li, Shasha Li, Shezheng Song, Huijun Liu, Bin Ji, Xi Wang, Jun Ma, Jie Yu, Xiaodong Liu, Jing Wang, Weimin Zhang

In particular, local editing methods, which directly update model parameters, are more suitable for updating a small amount of knowledge.
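A local edit of this kind can be illustrated with a minimal sketch (hypothetical, not SWEA's exact procedure): the stored fact is changed by adding a learned offset to the subject word's embedding row, while every other parameter stays untouched. The table, subject names, and offset values below are toy assumptions.

```python
# Hypothetical sketch of subject-word-embedding altering: edit knowledge by
# shifting only the subject token's embedding, leaving the rest of the model
# (here, the other rows of the table) unchanged.

def apply_embedding_edit(embedding_table, subject, editing_vector):
    """Return a copy of the embedding table with the subject's row shifted
    by editing_vector; the input table is not modified."""
    edited = {tok: vec[:] for tok, vec in embedding_table.items()}
    edited[subject] = [e + d for e, d in zip(edited[subject], editing_vector)]
    return edited

# Toy 4-dimensional embeddings; in practice the editing vector would be
# optimized so the model emits the updated fact for the subject.
table = {"Paris": [0.1, 0.2, 0.3, 0.4], "Rome": [0.5, 0.1, 0.0, 0.2]}
edited = apply_embedding_edit(table, "Paris", [0.0, 0.1, -0.1, 0.0])
```

Because the edit is confined to one embedding row, it can be applied or reverted without retraining, which is why local editing suits small knowledge updates.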

Model Editing • Word Embeddings

How to Bridge the Gap between Modalities: A Comprehensive Survey on Multimodal Large Language Model

no code implementations • 10 Nov 2023 • Shezheng Song, Xiaopeng Li, Shasha Li, Shan Zhao, Jie Yu, Jun Ma, Xiaoguang Mao, Weimin Zhang

The survey organizes existing modal alignment methods in MLLMs into four groups: (1) Multimodal Converters, which transform data into representations LLMs can understand; (2) Multimodal Perceivers, which improve how LLMs perceive different types of data; (3) Tools Assistance, which converts data into one common format, usually text; and (4) Data-Driven methods, which teach LLMs to understand specific types of data in a dataset.
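Group (1) can be sketched in a few lines: a learned linear projection maps a vision encoder's feature vector into the LLM's token-embedding space so the image can be fed in as soft tokens. All dimensions and values below are toy assumptions, not any particular model's weights.

```python
# Hypothetical "multimodal converter" sketch: project a vision feature into
# the LLM embedding space via a matrix-vector product,
# (llm_dim x vis_dim) @ (vis_dim,) -> (llm_dim,).

def project_to_llm_space(image_feature, projection_matrix):
    """Apply a linear projection row by row."""
    return [sum(w * x for w, x in zip(row, image_feature))
            for row in projection_matrix]

vis_feat = [1.0, 2.0]                        # toy 2-d vision feature
W = [[0.5, 0.0], [0.0, 0.5], [1.0, 1.0]]     # toy 3x2 projection (LLM dim = 3)
soft_tokens = project_to_llm_space(vis_feat, W)
```

The projected vector would then be prepended to the text token embeddings as a prompt the frozen LLM can attend to.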

Language Modelling • Large Language Model

Correlative Preference Transfer with Hierarchical Hypergraph Network for Multi-Domain Recommendation

no code implementations • 21 Nov 2022 • Zixuan Xu, Penghui Wei, Shaoguo Liu, Weimin Zhang, Liang Wang, Bo Zheng

Conventional graph neural network based methods usually deal with each domain separately, or train a shared model to serve all domains.

Marketing • Recommendation Systems

UKD: Debiasing Conversion Rate Estimation via Uncertainty-regularized Knowledge Distillation

no code implementations • 20 Jan 2022 • Zixuan Xu, Penghui Wei, Weimin Zhang, Shaoguo Liu, Liang Wang, Bo Zheng

Then a student model is trained on both clicked and unclicked ads with knowledge distillation, performing uncertainty modeling to alleviate the inherent noise in pseudo-labels.
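One common way to realize such uncertainty-aware distillation (a generic sketch under assumed details, not necessarily UKD's exact loss) is to weight each pseudo-labeled example's cross-entropy by the teacher's confidence, e.g. exp(-uncertainty), so noisy pseudo-labels contribute less. The probabilities, labels, and uncertainties below are made up for illustration.

```python
import math

# Sketch of uncertainty-regularized distillation for conversion-rate
# estimation: the teacher pseudo-labels unclicked ads, and each example's
# cross-entropy is down-weighted when the teacher is uncertain.

def distillation_loss(student_probs, pseudo_labels, uncertainties):
    """Weighted binary cross-entropy; weight = exp(-uncertainty)."""
    total, weight_sum = 0.0, 0.0
    for p, y, u in zip(student_probs, pseudo_labels, uncertainties):
        w = math.exp(-u)  # confident teacher -> larger weight
        ce = -(y * math.log(p) + (1 - y) * math.log(1 - p))
        total += w * ce
        weight_sum += w
    return total / weight_sum

# Two toy unclicked-ad examples: one confident pseudo-label, one noisy.
loss = distillation_loss([0.9, 0.2], [1.0, 0.0], [0.1, 2.0])
```

Here the second example's high uncertainty (2.0) shrinks its weight to exp(-2) ≈ 0.14, so the confident first example dominates the averaged loss.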

Knowledge Distillation • Selection Bias
