no code implementations • 5 Apr 2024 • Xize Liang, Chao Chen, Jie Wang, Yue Wu, Zhihang Fu, Zhihao Shi, Feng Wu, Jieping Ye
Preference alignment aims to enable large language models (LLMs) to generate responses that conform to human values, which is essential for developing general AI systems.
no code implementations • 17 Mar 2023 • Jie Wang, Zhihao Shi, Xize Liang, Shuiwang Ji, Bin Li, Feng Wu
During message passing (MP) in GNNs, subgraph-wise sampling methods discard messages outside the mini-batches in backward passes to avoid the well-known neighbor explosion problem, i.e., the exponential growth of node dependencies with the number of MP iterations.
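The neighbor explosion described above can be seen directly by counting how many nodes one target node depends on after each MP iteration. A minimal sketch on a synthetic random graph (the graph size, average degree, and hop count are illustrative assumptions, not values from the paper):

```python
import random

# Hedged sketch (not from the paper): on a random graph, count how many
# nodes a single target node's embedding depends on after k message-
# passing (MP) iterations -- the "neighbor explosion" problem.
random.seed(0)

# Build a random undirected graph with 10,000 nodes, average degree ~10.
n, avg_deg = 10_000, 10
adj = {v: set() for v in range(n)}
for _ in range(n * avg_deg // 2):
    u, v = random.randrange(n), random.randrange(n)
    if u != v:
        adj[u].add(v)
        adj[v].add(u)

# The k-hop receptive field of node 0: every node whose input feature
# can influence node 0's embedding after k MP iterations.
frontier, seen = {0}, {0}
sizes = []
for k in range(4):
    frontier = {w for v in frontier for w in adj[v]} - seen
    seen |= frontier
    sizes.append(len(seen))

# The dependency set grows roughly by a factor of avg_deg per hop,
# which is why full-graph backward passes quickly become infeasible.
print(sizes)
```

Subgraph-wise sampling sidesteps this growth by restricting MP to a mini-batch, at the cost of dropping the cross-boundary messages the sentence above mentions.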
1 code implementation • 2 Feb 2023 • Zhihao Shi, Xize Liang, Jie Wang
The key idea of LMC is to retrieve the messages discarded in backward passes, based on a message-passing formulation of the backward pass.
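The flavor of this idea can be sketched with a toy mean-aggregation MP step in which messages from out-of-batch neighbors are served from a cache of stale ("historical") values instead of being dropped. This is a minimal illustration of compensating for discarded messages, not LMC's actual implementation; the graph, features, and caching scheme are all assumptions for the example:

```python
# Hedged sketch (not the paper's method): mini-batch message passing
# where out-of-batch neighbor messages come from cached stale values
# rather than being discarded at the mini-batch boundary.

# Toy undirected graph: adjacency lists for 6 nodes, 1-d node features.
adj = {0: [1, 2], 1: [0, 3], 2: [0, 4], 3: [1, 5], 4: [2], 5: [3]}
feat = {v: float(v) for v in adj}
cache = dict(feat)  # historical embeddings from earlier mini-batches

def mp_step(batch):
    """One mean-aggregation MP step restricted to `batch`.

    In-batch neighbors contribute fresh values; out-of-batch neighbors
    fall back to the cache instead of being dropped.
    """
    new = {}
    for v in batch:
        msgs = [feat[u] if u in batch else cache[u] for u in adj[v]]
        new[v] = sum(msgs) / len(msgs)
    return new

batch = {0, 1, 2}
updated = mp_step(batch)
for v, h in updated.items():
    feat[v] = cache[v] = h  # refresh the cache for the next mini-batch

print(updated)
```

In this toy step, node 1 averages the fresh value of in-batch neighbor 0 with the cached value of out-of-batch neighbor 3, so no boundary message is lost; LMC applies the analogous compensation to the backward pass as well.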