no code implementations • 22 Mar 2024 • Mahtab Sarvmaili, Hassan Sajjad, Ga Wu
Existing example-based prediction explanation methods often bridge test and training data points through the model's parameters or latent representations.
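As a minimal sketch of the latent-representation bridge the snippet describes (a hypothetical `encode` function and toy data, not the paper's method), one can explain a prediction by retrieving the training points nearest to the test point in the model's latent space:

```python
import numpy as np

def explain_by_latent_neighbors(encode, x_test, X_train, k=3):
    """Return indices of the k training points closest to x_test
    in the model's latent space (cosine similarity)."""
    z_test = encode(x_test[None, :])        # (1, d) latent code
    Z_train = encode(X_train)               # (n, d) latent codes
    sims = (Z_train @ z_test.T).ravel() / (
        np.linalg.norm(Z_train, axis=1) * np.linalg.norm(z_test) + 1e-12)
    return np.argsort(-sims)[:k]            # most similar first

# Toy usage: a random linear "encoder" stands in for a trained model.
rng = np.random.default_rng(0)
W = rng.normal(size=(10, 4))
encode = lambda X: X @ W
X_train, x_test = rng.normal(size=(100, 10)), rng.normal(size=10)
print(explain_by_latent_neighbors(encode, x_test, X_train))
```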
no code implementations • 25 Jan 2024 • Kai Luo, Tianshu Shen, Lan Yao, Ga Wu, Aaron Liblong, Istvan Fehervari, Ruijian An, Jawad Ahmed, Harshit Mishra, Charu Pujari
Within-basket recommendation (WBR) refers to the task of recommending items that complete a non-empty shopping basket during a shopping session.
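To make the task concrete, a toy WBR sketch (a simple co-occurrence heuristic with invented baskets, not the paper's system):

```python
from collections import Counter
from itertools import combinations

# Toy transaction log; in practice these would be historical baskets.
baskets = [
    {"bread", "butter", "jam"},
    {"bread", "butter"},
    {"bread", "jam"},
    {"milk", "cereal"},
]

# Count how often each pair of items is bought together.
co = Counter()
for b in baskets:
    for i, j in combinations(sorted(b), 2):
        co[(i, j)] += 1
        co[(j, i)] += 1

def recommend(basket, k=2):
    """Score candidate items by co-occurrence with the current basket."""
    scores = Counter()
    for item in basket:
        for (i, j), c in co.items():
            if i == item and j not in basket:
                scores[j] += c
    return [item for item, _ in scores.most_common(k)]

print(recommend({"bread", "butter"}))   # -> ['jam']
```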
1 code implementation • 11 Oct 2023 • Yi Sui, Tongzi Wu, Jesse C. Cresswell, Ga Wu, George Stein, Xiao Shi Huang, Xiaochen Zhang, Maksims Volkovs
Self-supervised representation learning (SSRL) has advanced considerably by exploiting the transformation invariance assumption under artificially designed data augmentations.
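A minimal sketch of that invariance assumption (a SimCLR-style positive-pair term only, omitting the negative pairs of a full contrastive objective; the encoder and augmentation below are toy stand-ins):

```python
import torch
import torch.nn.functional as F

def invariance_loss(encoder, x, augment):
    """Embeddings of two augmentations of the same input should be
    similar -- the transformation invariance assumption of SSRL."""
    z1 = F.normalize(encoder(augment(x)), dim=1)
    z2 = F.normalize(encoder(augment(x)), dim=1)
    # Maximize cosine similarity between the two views.
    return -(z1 * z2).sum(dim=1).mean()

# Toy usage: a linear encoder and additive-noise "augmentation".
encoder = torch.nn.Linear(32, 8)
augment = lambda x: x + 0.1 * torch.randn_like(x)
x = torch.randn(16, 32)
loss = invariance_loss(encoder, x, augment)
loss.backward()
```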
no code implementations • 10 Oct 2022 • Giuseppe Castiglione, Ga Wu, Christopher Srinivasa, Simon Prince
We propose a novel criterion for evaluating individual fairness and develop a practical testing method based on this criterion, which we call fAux (pronounced fox).
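The snippet does not spell out fAux's criterion itself; as a generic illustration of individual-fairness testing (hypothetical model and similarity perturbation), one can probe how much predictions change across similar individuals:

```python
import numpy as np

def individual_fairness_gap(model, x, make_similar, n=100):
    """Generic check (not fAux itself): individuals that differ only in
    fairness-irrelevant ways should receive near-identical predictions."""
    base = model(x)
    gaps = [abs(model(make_similar(x)) - base) for _ in range(n)]
    return max(gaps)   # worst-case prediction change over similar inputs

# Toy usage with a hypothetical scoring model and perturbation.
rng = np.random.default_rng(0)
model = lambda x: float(1 / (1 + np.exp(-x.sum())))
make_similar = lambda x: x + rng.normal(scale=0.01, size=x.shape)
x = rng.normal(size=5)
print(individual_fairness_gap(model, x, make_similar))
```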
no code implementations • 31 Mar 2022 • Giuseppe Castiglione, Gavin Ding, Masoud Hashemi, Christopher Srinivasa, Ga Wu
Adversarial robustness is one of the essential safety criteria for guaranteeing the reliability of machine learning models.
no code implementations • 2 Mar 2022 • Ga Wu, Masoud Hashemi, Christopher Srinivasa
It then compensates for the negative impact of removing the marked data by optimally reweighting the remaining data.
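A least-squares sketch of the reweighting idea (toy per-example gradients; the paper's actual optimization criterion may differ): choose weights so the kept data's weighted average gradient matches the original full-data average.

```python
import numpy as np

def reweight_remaining(G_keep, g_full_mean):
    """Solve for per-example weights w such that the weighted mean
    gradient of the kept data, (1/n) * G_keep.T @ w, equals the
    average gradient before any removal."""
    w, *_ = np.linalg.lstsq(G_keep.T, g_full_mean * len(G_keep), rcond=None)
    return w

rng = np.random.default_rng(0)
G = rng.normal(size=(50, 8))            # gradients of all 50 points
G_keep = G[:45]                         # last 5 points marked for removal
w = reweight_remaining(G_keep, G.mean(axis=0))
# Weighted mean gradient of kept data ~ original mean gradient.
print(np.allclose(G_keep.T @ (w / len(G_keep)), G.mean(axis=0), atol=1e-6))
```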
1 code implementation • NeurIPS 2021 • Yi Sui, Ga Wu, Scott Sanner
Explaining the influence of training data on deep neural network predictions is a critical tool for debugging models through data curation.
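One common gradient-based formulation of such influence (a TracIn-style dot product of per-example gradients; not necessarily the estimator this paper proposes):

```python
import torch

def influence_score(model, loss_fn, x_train, y_train, x_test, y_test):
    """A training point whose gradient points in the same direction as
    the test point's gradient has positive influence on that prediction."""
    def grad(x, y):
        model.zero_grad()
        loss_fn(model(x), y).backward()
        return torch.cat([p.grad.flatten() for p in model.parameters()])
    g_train = grad(x_train, y_train)
    g_test = grad(x_test, y_test)
    return torch.dot(g_train, g_test).item()

# Toy usage with a linear model.
model = torch.nn.Linear(4, 1)
loss_fn = torch.nn.MSELoss()
x_tr, y_tr = torch.randn(1, 4), torch.randn(1, 1)
x_te, y_te = torch.randn(1, 4), torch.randn(1, 1)
print(influence_score(model, loss_fn, x_tr, y_tr, x_te, y_te))
```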
1 code implementation • 5 Oct 2021 • Yi Sui, Ga Wu, Scott Sanner
We additionally introduce a novel Frobenius norm-based contrastive learning objective to improve latent representational generalization. Empirically, we validate MAPSED on two publicly accessible urban crime datasets for spatiotemporal sparse event prediction, where MAPSED outperforms both classical and state-of-the-art deep learning models.
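One plausible form of a Frobenius norm-based contrastive term (a margin hinge on matrix-valued latents; the paper's exact objective may differ):

```python
import torch
import torch.nn.functional as F

def frobenius_contrastive(Z_anchor, Z_pos, Z_neg, margin=1.0):
    """Pull positive pairs together and push negatives at least
    `margin` apart, measuring distance with the Frobenius norm."""
    d_pos = torch.linalg.matrix_norm(Z_anchor - Z_pos)   # 'fro' by default
    d_neg = torch.linalg.matrix_norm(Z_anchor - Z_neg)
    return (d_pos + F.relu(margin - d_neg)).mean()

# Toy usage on batched 8x8 latent maps.
Za, Zp, Zn = (torch.randn(4, 8, 8, requires_grad=True) for _ in range(3))
loss = frobenius_contrastive(Za, Zp, Zn)
loss.backward()
```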
no code implementations • 24 Oct 2020 • Zheda Mai, Ga Wu, Kai Luo, Scott Sanner
In order to capture multifaceted user preferences, existing recommender systems either increase the encoding complexity or extend the latent representation dimension.
no code implementations • 3 Aug 2020 • Jin Peng Zhou, Ga Wu, Zheda Mai, Scott Sanner
One-class collaborative filtering (OC-CF) is a common class of recommendation problem where only the positive class is explicitly observed (e.g., purchases, clicks).
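To make the one-class setup concrete, a toy sketch (an item-item cosine baseline, not the paper's method):

```python
import numpy as np

# Toy implicit-feedback matrix: 1 = observed positive (click/purchase),
# 0 = unobserved, NOT necessarily negative -- the defining trait of OC-CF.
R = np.array([[1, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1]], dtype=float)

# Simple item-item cosine similarity as a baseline recommender.
norms = np.linalg.norm(R, axis=0, keepdims=True) + 1e-12
S = (R / norms).T @ (R / norms)          # item-item similarity
scores = R @ S                           # score all items for all users
scores[R > 0] = -np.inf                  # do not re-recommend positives
print(scores.argmax(axis=1))             # top unseen item per user
```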
no code implementations • 5 Apr 2019 • Ga Wu, Buser Say, Scott Sanner
But there remains one major problem for the task of control: how can we plan with transition models learned by deep networks without resorting to Monte Carlo Tree Search and other black-box techniques that ignore model structure and do not easily extend to mixed discrete and continuous domains?
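A sketch of one answer, gradient-based planning: treat the action sequence as decision variables and backpropagate the trajectory cost through a differentiable transition model (toy dynamics below stand in for a learned network; PyTorch is used here for brevity):

```python
import torch

def plan_by_gradient(transition, cost, s0, horizon=20, steps=300, lr=0.05):
    """Planning as gradient descent: optimize the action sequence by
    differentiating the summed cost through the transition model."""
    actions = torch.zeros(horizon, s0.shape[-1], requires_grad=True)
    opt = torch.optim.Adam([actions], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        s, total = s0, 0.0
        for a in actions:                # roll out the model
            s = transition(s, a)
            total = total + cost(s, a)
        total.backward()
        opt.step()
    return actions.detach()

# Toy usage: nonlinear dynamics standing in for a learned deep model.
transition = lambda s, a: s + 0.1 * torch.tanh(a) - 0.01 * s
cost = lambda s, a: (s ** 2).sum() + 0.1 * (a ** 2).sum()
s0 = torch.ones(3)
plan = plan_by_gradient(transition, cost, s0)
```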
no code implementations • 31 Aug 2018 • Yu Qing Zhou, Ga Wu, Scott Sanner, Putra Manggala
Photography websites such as Flickr, 500px, Unsplash, and Adobe Behance are used by both amateur and professional photographers.
1 code implementation • ICLR 2019 • Ga Wu, Justin Domke, Scott Sanner
Variational Autoencoders (VAEs) are a popular generative model, but one in which conditional inference can be challenging.
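One simple baseline for such conditional inference (optimizing a latent code against the observed dimensions of a trained decoder; a generic sketch with a toy decoder, not the paper's proposal):

```python
import torch

def conditional_infer(decoder, x_obs, obs_mask, z_dim=8, steps=200, lr=0.1):
    """Fill in unobserved dimensions by finding a latent code whose
    decoding matches the observed dimensions."""
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = decoder(z)
        # Fit observed dims; small L2 prior keeps z near the VAE prior.
        loss = ((x_hat - x_obs)[:, obs_mask] ** 2).mean() + 1e-3 * z.pow(2).sum()
        loss.backward()
        opt.step()
    return decoder(z).detach()           # completed observation

# Toy usage with a linear "decoder" standing in for a trained VAE decoder.
decoder = torch.nn.Linear(8, 16)
x_obs = torch.randn(1, 16)
obs_mask = torch.arange(16) < 10         # first 10 dims observed
print(conditional_infer(decoder, x_obs, obs_mask).shape)
```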
no code implementations • NeurIPS 2017 • Ga Wu, Buser Say, Scott Sanner
Given recent deep learning results that demonstrate the ability to effectively optimize high-dimensional non-convex functions with gradient descent on GPUs, we ask in this paper whether symbolic gradient optimization tools such as Tensorflow can be effective for planning in hybrid (mixed discrete and continuous) nonlinear domains with high-dimensional state and action spaces.