1 code implementation • 14 Feb 2024 • Yuhui Shi, Qiang Sheng, Juan Cao, Hao Mi, Beizhe Hu, Danding Wang
With the rapidly increasing application of large language models (LLMs), their abuse has caused many undesirable societal problems such as fake news, academic dishonesty, and information pollution.
1 code implementation • 27 Dec 2023 • Zhengjia Wang, Danding Wang, Qiang Sheng, Juan Cao, Silong Su, Yifan Sun, Beizhe Hu, Siyuan Ma
With the disruptive changes in the media economy and the proliferation of alternative news media outlets, news intent has progressively deviated from the ethical standards that serve the public interest.
no code implementations • 29 Nov 2023 • Xiaoyue Mi, Fan Tang, Zonghan Yang, Danding Wang, Juan Cao, Peng Li, Yang Liu
Despite the remarkable advances that have been made in continual learning, the adversarial vulnerability of such methods has not been fully discussed.
no code implementations • 29 Nov 2023 • Xiaoyue Mi, Fan Tang, Yepeng Weng, Danding Wang, Juan Cao, Sheng Tang, Peng Li, Yang Liu
Despite its effectiveness in improving the robustness of neural networks, adversarial training suffers from the natural accuracy degradation problem, i.e., accuracy on natural samples drops significantly.
no code implementations • 29 Nov 2023 • Zhihao Sun, Haipeng Fang, Xinying Zhao, Danding Wang, Juan Cao
However, the lack of a comprehensive dataset containing images edited with diverse, advanced generative regional editing methods poses a substantial obstacle to the advancement of corresponding detection methods.
no code implementations • 16 Oct 2023 • Qiong Nan, Qiang Sheng, Juan Cao, Yongchun Zhu, Danding Wang, Guang Yang, Jintao Li, Kai Shu
To break such a dilemma, a feasible but not well-studied solution is to leverage social contexts (e.g., comments) from historical news for training a detection model and apply it to newly emerging news without social contexts.
1 code implementation • 21 Sep 2023 • Beizhe Hu, Qiang Sheng, Juan Cao, Yuhui Shi, Yang Li, Danding Wang, Peng Qi
To instantiate this proposal, we design an adaptive rationale guidance network for fake news detection (ARG), in which SLMs selectively acquire insights on news analysis from the LLMs' rationales.
no code implementations • 29 Jul 2023 • Tianyun Yang, Juan Cao, Danding Wang, Chang Xu
Existing works have verified that CNN-based generative models leave unique fingerprints on the images they generate.
1 code implementation • 26 Jun 2023 • Beizhe Hu, Qiang Sheng, Juan Cao, Yongchun Zhu, Danding Wang, Zhengjia Wang, Zhiwei Jin
In this paper, we observe that the appearances of news events on the same topic may display discernible patterns over time, and posit that such patterns can assist in selecting training instances that could make the model adapt better to future data.
1 code implementation • CVPR 2023 • Tianyun Yang, Danding Wang, Fan Tang, Xinying Zhao, Juan Cao, Sheng Tang
In this study, we focus on a challenging task, namely Open-Set Model Attribution (OSMA), to simultaneously attribute images to known models and identify those from unknown ones.
2 code implementations • 7 Feb 2023 • Yuyan Bu, Qiang Sheng, Juan Cao, Peng Qi, Danding Wang, Jintao Li
With information consumption via online video streaming becoming increasingly popular, misinformation videos pose a new threat to the health of the online information ecosystem.
no code implementations • ICCV 2023 • Zhihao Sun, Haoran Jiang, Danding Wang, Xirong Li, Juan Cao
Since image editing methods in real-world scenarios cannot be exhaustively enumerated, generalization is a core challenge for image manipulation detection, and it can be severely weakened by semantically related features.
no code implementations • COLING 2022 • Qiong Nan, Danding Wang, Yongchun Zhu, Qiang Sheng, Yuhui Shi, Juan Cao, Jintao Li
To address this issue, we propose a Domain- and Instance-level Transfer Framework for Fake News Detection (DITFEND), which improves performance on specific target domains.
1 code implementation • 20 Apr 2022 • Yongchun Zhu, Qiang Sheng, Juan Cao, Shuokai Li, Danding Wang, Fuzhen Zhuang
In this paper, we propose an entity debiasing framework (ENDEF) which generalizes fake news detection models to future data by mitigating entity bias from a cause-effect perspective.
1 code implementation • ACL 2022 • Qiang Sheng, Juan Cao, Xueyao Zhang, Rundong Li, Danding Wang, Yongchun Zhu
To differentiate fake news from real news, existing methods observe the language patterns of the news post and "zoom in" to verify its content against knowledge sources or check its readers' replies.
no code implementations • 23 Jan 2021 • Danding Wang, Wencan Zhang, Brian Y. Lim
Feature attribution is widely used in interpretable machine learning to explain how influential each measured input feature value is for an output inference.