no code implementations • 27 Mar 2024 • Erjia Chen, Bang Wang
In this paper, we challenge this equal-training assumption and propose a novel one-backpropagation updating strategy, which keeps normal gradient backpropagation for the item encoding tower but cuts off backpropagation to the user encoding tower.
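The core idea can be illustrated with a minimal stop-gradient sketch (hypothetical names and a toy dot-product scorer; not the authors' actual implementation): in a two-tower model, the item tower's weights receive gradient updates while the user tower is treated as a constant during backpropagation.

```python
import numpy as np

# Toy sketch of "one backpropagation" in a two-tower recommender:
# gradients flow to the item encoding tower only; the user encoding
# tower is held fixed (its backpropagation path is cut off).
rng = np.random.default_rng(0)
d, k = 8, 4                        # input dim, embedding dim (arbitrary)
W_user = rng.normal(size=(k, d))   # user tower: frozen, no gradient
W_item = rng.normal(size=(k, d))   # item tower: updated normally

def step(x_user, x_item, lr=0.1):
    """One gradient-ascent step on the user-item dot-product score."""
    global W_item
    u = W_user @ x_user            # user embedding (treated as constant)
    v = W_item @ x_item            # item embedding
    score = u @ v
    # d(score)/dW_item = outer(u, x_item); no gradient is taken for W_user
    W_item = W_item + lr * np.outer(u, x_item)
    return score

x_u, x_i = rng.normal(size=d), rng.normal(size=d)
s0 = step(x_u, x_i)
s1 = step(x_u, x_i)    # score increases while the user tower stays fixed
```

Because the update adds `lr * outer(u, x_item)` to `W_item`, the new score gains `lr * (u·u) * (x_item·x_item) ≥ 0`, so the item tower alone is enough to improve the objective on this pair while the user tower never changes.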
1 code implementation • 2 Jun 2023 • Bin Liu, Erjia Chen, Bang Wang
To achieve this win-win situation, we propose to intervene in model training through negative sampling, thereby modifying model predictions.
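As a hedged illustration of the general mechanism (not the paper's specific intervention), negative sampling in implicit-feedback training means pairing each observed interaction with items the user has not interacted with, so the sampling choice directly shapes what the model learns to score low. A minimal sketch with hypothetical names:

```python
import random

def sample_negatives(interacted, n_items, k, rng=None):
    """Draw k item ids outside the user's interaction set.

    interacted: set of item ids the user has engaged with
    n_items:    total catalog size
    k:          number of negatives per positive interaction
    """
    rng = rng or random.Random(0)
    negatives = []
    while len(negatives) < k:
        j = rng.randrange(n_items)
        if j not in interacted:    # only unobserved items become negatives
            negatives.append(j)
    return negatives

# For a user who interacted with items {1, 3, 5} in a 10-item catalog:
negs = sample_negatives({1, 3, 5}, n_items=10, k=4)
```

Changing how negatives are drawn (uniformly, by popularity, or adaptively) is the lever through which training, and hence the model's predictions, can be intervened upon.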