1 code implementation • 11 Feb 2023 • Wenxuan Wang, Jen-tse Huang, Weibin Wu, Jianping Zhang, Yizhan Huang, Shuqing Li, Pinjia He, Michael Lyu
In addition, we leverage the test cases generated by MTTM to retrain the model we explored, which largely improves model robustness (0% to 5.9% EFR) while maintaining the accuracy on the original test set.
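The retraining loop above hinges on measuring how often metamorphic variants flip the model's prediction. A minimal sketch of that measurement, assuming EFR is the fraction of transformed inputs whose prediction disagrees with the original (the moderator and perturbation below are toy stand-ins, not the paper's actual setup):

```python
def error_finding_rate(model, seed_texts, transform):
    """Fraction of metamorphic variants whose prediction differs from the
    original input's (assumed EFR definition; names are hypothetical)."""
    flips = sum(1 for t in seed_texts if model(transform(t)) != model(t))
    return flips / len(seed_texts)

# Toy keyword "moderator" and a character-spacing perturbation,
# both illustrative stand-ins for the paper's models and transforms.
def toy_moderator(text):
    return "toxic" if "bad" in text else "ok"

def space_perturb(text):
    return text.replace("bad", "b ad")

efr = error_finding_rate(toy_moderator, ["you are bad", "hello there"], space_perturb)
# "you are bad" -> "you are b ad" flips toxic->ok; "hello there" is unaffected,
# so efr is 0.5 here
```

Variants that flip the prediction become new training examples for the retraining step, which is what drives the EFR down while the original test accuracy is preserved.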
2 code implementations • 25 Sep 2019 • Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack
Recent advances in deep learning, especially the discovery of various attention mechanisms and newer architectures in addition to the widely used RNNs and CNNs in natural language processing, have allowed for better use of the temporal ordering of items that each user has engaged with.
Ranked #1 on Recommendation Systems on MovieLens 1M (nDCG@10 metric)
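The core mechanism the abstract points to is attention over a user's item sequence, where each position may only look at earlier interactions. A minimal single-head sketch (the actual models add learned projections, multiple heads, position and user embeddings, and feed-forward layers):

```python
import numpy as np

def causal_self_attention(seq_emb):
    """Causal self-attention over a user's item-embedding sequence:
    position t attends only to items 1..t. Minimal sketch of the
    temporal-ordering idea, not the paper's full architecture."""
    T, d = seq_emb.shape
    scores = seq_emb @ seq_emb.T / np.sqrt(d)
    mask = np.triu(np.ones((T, T), dtype=bool), k=1)  # future positions
    scores[mask] = -np.inf
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ seq_emb

rng = np.random.default_rng(0)
seq = rng.standard_normal((5, 4))  # 5 interacted items, embedding dim 4
out = causal_self_attention(seq)
# the first position can only attend to itself, so out[0] equals seq[0]
```

The causal mask is what encodes the temporal ordering: the representation of each interaction is a weighted mix of that item and everything the user engaged with before it.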
3 code implementations • 15 Aug 2019 • Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack
Recent advances in deep learning, especially the discovery of various attention mechanisms and newer architectures in addition to the widely used RNNs and CNNs in natural language processing, have allowed us to make better use of the temporal ordering of items that each user has engaged with.
3 code implementations • NeurIPS 2019 • Liwei Wu, Shuqing Li, Cho-Jui Hsieh, James Sharpnack
We find that when combined with widely used regularization methods such as weight decay and dropout, our proposed SSE can further reduce overfitting, which often leads to more favorable generalization results.
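The regularizer described above can be sketched as follows, assuming the simple-exchange variant (SSE-SE) in which each embedding index is swapped for a uniformly sampled one with probability p during training; this is an illustrative sketch, not the authors' implementation:

```python
import numpy as np

def sse_replace(indices, num_embeddings, p, rng):
    """Stochastic Shared Embeddings, simple-exchange style: with
    probability p, each index in the batch is replaced by a uniformly
    sampled index, so gradient updates get shared across embeddings.
    Applied only at training time; at inference p is 0."""
    indices = np.asarray(indices)
    mask = rng.random(indices.shape) < p
    rand_idx = rng.integers(0, num_embeddings, size=indices.shape)
    return np.where(mask, rand_idx, indices)

rng = np.random.default_rng(0)
batch = np.array([3, 7, 7, 42])
# with p=0 the indices pass through untouched, as at inference time
unchanged = sse_replace(batch, num_embeddings=100, p=0.0, rng=rng)
```

Unlike weight decay (which shrinks parameters) or dropout (which zeroes activations), this perturbs the *lookup* itself, which is why the three compose rather than overlap as regularizers.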