Search Results for author: Zihe Song

Found 7 papers, 2 papers with code

Mixed Supervised Graph Contrastive Learning for Recommendation

no code implementations • 24 Apr 2024 • Weizhi Zhang, Liangwei Yang, Zihe Song, Henry Peng Zou, Ke Xu, Yuanjie Zhu, Philip S. Yu

Graph contrastive learning aims to learn from high-order collaborative filtering signals via unsupervised augmentation on the user-item bipartite graph, and it predominantly relies on a multi-task learning framework that combines a pair-wise recommendation loss with a contrastive loss.

Collaborative Filtering Contrastive Learning +2
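Where the abstract mentions a multi-task framework combining a pair-wise recommendation loss with a contrastive loss, the sketch below shows that standard combination (BPR plus InfoNCE over two augmented embedding views). It is not the paper's Mixed Supervised GCL method, only the common baseline setup it builds on; all function names and the `lambda_cl` weight are illustrative assumptions.

```python
# Minimal sketch of the multi-task objective referred to in the abstract:
# a pair-wise (BPR) recommendation loss plus an InfoNCE contrastive loss
# over two augmented views of the same node embeddings. Not the paper's
# method; names and the lambda_cl weight are illustrative.
import torch
import torch.nn.functional as F

def bpr_loss(user_emb, pos_item_emb, neg_item_emb):
    # Pair-wise recommendation loss: positive items should score
    # higher than sampled negative items for the same user.
    pos_scores = (user_emb * pos_item_emb).sum(dim=-1)
    neg_scores = (user_emb * neg_item_emb).sum(dim=-1)
    return -F.logsigmoid(pos_scores - neg_scores).mean()

def info_nce_loss(view_a, view_b, temperature=0.2):
    # Contrastive loss: the same node under two augmentations forms a
    # positive pair; all other nodes in the batch act as negatives.
    a = F.normalize(view_a, dim=-1)
    b = F.normalize(view_b, dim=-1)
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, targets)

def multitask_loss(user_emb, pos_item_emb, neg_item_emb,
                   view_a, view_b, lambda_cl=0.1):
    # Multi-task objective: lambda_cl weights the contrastive term.
    return bpr_loss(user_emb, pos_item_emb, neg_item_emb) + \
           lambda_cl * info_nce_loss(view_a, view_b)
```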

Cyclic Neural Network

no code implementations • 11 Jan 2024 • Liangwei Yang, Hengrui Zhang, Zihe Song, Jiawei Zhang, Weizhi Zhang, Jing Ma, Philip S. Yu

This paper answers a fundamental question in artificial neural network (ANN) design: ANNs do not need to be built layer by layer sequentially to guarantee the Directed Acyclic Graph (DAG) property.
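As a purely illustrative aside on how non-DAG connectivity can still be computed, the sketch below wires two neuron groups in a cycle (A to B and back) and evaluates the network by unrolling a fixed number of update steps. It is not the architecture proposed in the paper; `TinyCyclicNet`, its dimensions, and the number of steps are assumptions for demonstration only.

```python
# Illustrative sketch only: two neuron groups connected in a cycle,
# evaluated by unrolling the cycle for a fixed number of update steps
# rather than as a feed-forward DAG. Not the paper's architecture.
import torch
import torch.nn as nn

class TinyCyclicNet(nn.Module):
    def __init__(self, dim=16, steps=3):
        super().__init__()
        self.a_to_b = nn.Linear(dim, dim)
        self.b_to_a = nn.Linear(dim, dim)   # this edge closes the cycle
        self.readout = nn.Linear(dim, 1)
        self.steps = steps

    def forward(self, x):
        a, b = x, torch.zeros_like(x)
        for _ in range(self.steps):         # unroll the cyclic graph
            a_next = torch.tanh(self.b_to_a(b) + x)
            b_next = torch.tanh(self.a_to_b(a))
            a, b = a_next, b_next
        return self.readout(b)

# y = TinyCyclicNet()(torch.randn(4, 16))  # output shape: (4, 1)
```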

NICGSlowDown: Evaluating the Efficiency Robustness of Neural Image Caption Generation Models

1 code implementation • CVPR 2022 • Simin Chen, Zihe Song, Mirazul Haque, Cong Liu, Wei Yang

To further understand such efficiency-oriented threats, we propose a new attack approach, NICGSlowDown, to evaluate the efficiency robustness of NICG models.

Caption Generation
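The general idea of such an efficiency ("slowdown") attack is to perturb the input image so that an autoregressive captioner keeps decoding, for example by suppressing the end-of-sequence token at every step. The hedged sketch below illustrates only that intuition, not NICGSlowDown's actual objective or code; the `captioner` interface, `eos_id`, and the step sizes are assumptions.

```python
# Hedged sketch of a generic slowdown attack on an autoregressive
# captioner: push down the end-of-sequence probability at every decoding
# step so the caption gets longer and decoding takes more time.
# NOT NICGSlowDown's objective; the captioner interface is assumed.
import torch

def slowdown_attack(captioner, image, eos_id, steps=50, eps=8/255, lr=1/255):
    # Assumption: captioner(image, max_len) returns per-step logits of
    # shape (max_len, vocab_size) under greedy decoding.
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        logits = captioner(image + delta, max_len=30)
        # Loss: total probability mass assigned to EOS across steps.
        eos_prob = logits.softmax(dim=-1)[:, eos_id].sum()
        eos_prob.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()   # descend: lower EOS probability
            delta.clamp_(-eps, eps)           # keep the perturbation small
            delta.grad.zero_()
    return (image + delta).detach()
```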

TransSlowDown: Efficiency Attacks on Neural Machine Translation Systems

no code implementations • 29 Sep 2021 • Simin Chen, Mirazul Haque, Zihe Song, Cong Liu, Wei Yang

To further the understanding of such efficiency-oriented threats and raise the community’s awareness of the efficiency robustness of NMT systems, we propose a new attack approach, TranSlowDown, to test the efficiency robustness of NMT systems.

Machine Translation NMT +1
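The "efficiency robustness" being tested can be made concrete with a small harness that records output length and decoding latency of an NMT model on an original versus a perturbed input. The sketch below uses a public Hugging Face Marian model purely as an example; the perturbed sentence is a hand-made placeholder and is not an input produced by TranSlowDown.

```python
# Small harness for the metric an efficiency attack targets: decoder
# steps and wall-clock latency of an NMT system. The "perturbed" input
# is a hand-made placeholder, not a TranSlowDown-generated example.
import time
import torch
from transformers import MarianMTModel, MarianTokenizer

name = "Helsinki-NLP/opus-mt-en-de"          # any seq2seq NMT model works
tok = MarianTokenizer.from_pretrained(name)
model = MarianMTModel.from_pretrained(name).eval()

def decoding_cost(sentence, max_new_tokens=128):
    inputs = tok(sentence, return_tensors="pt")
    start = time.perf_counter()
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    latency = time.perf_counter() - start
    return out.shape[-1], latency            # decoder steps, seconds

print(decoding_cost("The committee will meet on Tuesday."))
print(decoding_cost("The committtee wlil meet on Tuesdayy."))  # placeholder perturbation
```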

AttackDist: Characterizing Zero-day Adversarial Samples by Counter Attack

no code implementations • 1 Jan 2021 • Simin Chen, Zihe Song, Lei Ma, Cong Liu, Wei Yang

We first theoretically clarify under which conditions AttackDist can provide certified detection performance, and then show that a potential application of AttackDist is distinguishing zero-day adversarial examples without knowing the mechanisms of new attacks.
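The counter-attack intuition can be sketched as follows: adversarial inputs tend to lie close to a decision boundary, so the perturbation needed to flip the model's prediction again (the counter attack) is small, and its norm can serve as a detection score. The code below illustrates only that intuition with a simple iterative-FGSM counter attack; it is not the paper's certified procedure, and the model interface, step size, and threshold choice are assumptions.

```python
# Hedged sketch of counter-attack-based detection: measure how large a
# perturbation is needed to change the model's current prediction; a
# small norm suggests the input already sits near a decision boundary,
# as adversarial examples typically do. Placeholder attack, not AttackDist.
import torch
import torch.nn.functional as F

def counter_attack_norm(model, x, step=1/255, max_steps=100):
    y0 = model(x).argmax(dim=-1)                  # current prediction
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(max_steps):
        logits = model(x + delta)
        if (logits.argmax(dim=-1) != y0).all():   # prediction flipped
            break
        loss = F.cross_entropy(logits, y0)        # ascend to move away from y0
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()
            delta.grad.zero_()
    return delta.detach().norm().item()           # smaller => more suspicious

# score = counter_attack_norm(classifier, candidate_input)
# flag as adversarial if score < threshold calibrated on clean data
```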
