no code implementations • 15 Apr 2024 • Martin Kodys, Zhongmin Dai, Vrizlynn L. L. Thing
A common service provision involves input data from the client and a model on the analyst's side.
no code implementations • 8 Dec 2023 • Balachandar Gowrisankar, Vrizlynn L. L. Thing
In this paper, we perform experiments to show that generic removal/insertion XAI evaluation methods are not suitable for deepfake detection models.
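For context, a generic removal-style XAI evaluation (the kind the paper argues is unsuited to deepfake detectors) deletes pixels in decreasing order of saliency and summarizes the model's confidence drop as an area under the curve. The sketch below is a minimal illustration under assumed simplifications (zero-masking as "removal", a toy mean-intensity predictor); it is not the paper's actual evaluation setup.

```python
import numpy as np

def removal_auc(image, saliency, predict, steps=10):
    """Mask pixels from most to least salient; a steeper confidence
    drop (lower AUC) suggests a more faithful explanation."""
    order = np.argsort(saliency.ravel())[::-1]   # most salient first
    masked = image.copy().ravel()
    scores = [predict(masked.reshape(image.shape))]
    chunk = max(1, len(order) // steps)
    for i in range(0, len(order), chunk):
        masked[order[i:i + chunk]] = 0.0         # "remove" pixels by zeroing
        scores.append(predict(masked.reshape(image.shape)))
    # trapezoidal area under the confidence curve, normalized to [0, 1]
    return sum((a + b) / 2 for a, b in zip(scores, scores[1:])) / (len(scores) - 1)

# Toy predictor: confidence is just the mean pixel intensity (assumption).
img = np.random.rand(8, 8)
sal = img.copy()                                 # pretend saliency = intensity
auc = removal_auc(img, sal, lambda x: float(x.mean()))
```

With the toy predictor, confidence falls monotonically as salient pixels are zeroed, so the AUC lands strictly between the initial confidence and zero.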
no code implementations • 15 Apr 2023 • Rahul Kale, Vrizlynn L. L. Thing
In this paper, we propose an enhancement to an existing few-shot weakly-supervised deep learning anomaly detection framework.
no code implementations • 7 Apr 2023 • ZiHao Wang, Vrizlynn L. L. Thing
Current research on detecting encrypted malicious traffic without decryption has focused on feature extraction and the choice of machine learning or deep learning algorithms.
no code implementations • 7 Apr 2023 • Vrizlynn L. L. Thing
In this work, we study the evolution of deep learning architectures, particularly CNNs and Transformers.
no code implementations • 2 Dec 2022 • Rahul Kale, Zhi Lu, Kar Wai Fok, Vrizlynn L. L. Thing
Cyber intrusion attacks that compromise users' critical and sensitive data are escalating in volume and intensity, especially as our daily lives grow more connected to the Internet.
no code implementations • 18 Nov 2022 • Vrizlynn L. L. Thing
Our solution achieved 1st place at the IEEE Big Data Cup 2022: Privacy Preserving Matching of Encrypted Images Challenge.
no code implementations • 18 Nov 2022 • Martin Kodys, Zhi Lu, Kar Wai Fok, Vrizlynn L. L. Thing
As security mechanisms are often neglected during the deployment of IoT devices, they are more vulnerable to sophisticated, large-volume intrusion attacks using advanced techniques.
no code implementations • 18 Nov 2022 • Kar Wai Fok, Vrizlynn L. L. Thing
Hence, for each malware family, a group of signatures is generated to represent the family.
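One simple way to picture a per-family signature group is as the set of byte n-grams shared by every sample in the family. The sketch below is purely illustrative: the 4-byte n-gram choice, the intersection rule, and the toy corpus are assumptions, not the paper's actual signature-generation scheme.

```python
def ngrams(data: bytes, n: int = 4) -> set:
    """All contiguous byte n-grams of a sample."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def family_signatures(samples_by_family: dict, n: int = 4) -> dict:
    """For each family, keep only the n-grams present in ALL its samples,
    yielding a group of signatures that represents the family."""
    sigs = {}
    for family, samples in samples_by_family.items():
        common = ngrams(samples[0], n)
        for sample in samples[1:]:
            common &= ngrams(sample, n)   # intersect across samples
        sigs[family] = common
    return sigs

# Hypothetical toy corpus (byte strings standing in for binaries).
corpus = {
    "famA": [b"MZ\x90\x00evilpayload", b"MZ\x90\x00evilloader"],
    "famB": [b"ELFdropper123", b"ELFdropper999"],
}
sigs = family_signatures(corpus)
```

A real system would also prune n-grams that appear in benign files to control false positives; this sketch keeps only the family-representation step.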
no code implementations • 2 Jul 2022 • Zhi Lu, Vrizlynn L. L. Thing
It is especially so when the model's incorrect prediction can lead to severe damage or even the loss of lives and critical assets.
no code implementations • 6 Nov 2021 • Zhi Lu, Vrizlynn L. L. Thing
In the following experiments, we compare the explainability and fidelity of our proposed method with state-of-the-art methods.