Search Results for author: Ee-Chien Chang

Found 25 papers, 3 papers with code

Unlearning Backdoor Threats: Enhancing Backdoor Defense in Multimodal Contrastive Learning via Local Token Unlearning

no code implementations24 Mar 2024 Siyuan Liang, Kuanrong Liu, Jiajun Gong, Jiawei Liang, Yuan Xun, Ee-Chien Chang, Xiaochun Cao

In this paper, we explore the possibility of a lower-cost defense from the perspective of model unlearning, that is, whether the model can be made to quickly unlearn backdoor threats (UBT) by constructing a small set of poisoned samples.

Backdoor Defense · Contrastive Learning
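The snippet does not spell out the unlearning mechanism; a common baseline (an assumption here, not necessarily the UBT procedure from the paper) is to run gradient ascent on a small set of suspected poisoned image-text pairs so the model forgets the trigger-target association. A minimal PyTorch-style sketch:

```python
import torch

def unlearn_suspected_poison(model, poisoned_loader, lr=1e-4, steps=50):
    """Gradient-ascent unlearning on a small set of suspected poisoned samples.

    Assumes `model(images, texts)` returns a scalar contrastive loss, as in
    CLIP-style training code; this is an illustrative baseline, not the
    UBT algorithm described in the paper.
    """
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    it = iter(poisoned_loader)
    for _ in range(steps):
        try:
            images, texts = next(it)
        except StopIteration:
            it = iter(poisoned_loader)
            images, texts = next(it)
        optimizer.zero_grad()
        loss = model(images, texts)   # contrastive loss on suspected poisoned pairs
        (-loss).backward()            # ascend: weaken the backdoor association
        optimizer.step()
    return model
```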

Object Detectors in the Open Environment: Challenges, Solutions, and Outlook

1 code implementation24 Mar 2024 Siyuan Liang, Wei Wang, Ruoyu Chen, Aishan Liu, Boxi Wu, Ee-Chien Chang, Xiaochun Cao, DaCheng Tao

This paper aims to bridge this gap by conducting a comprehensive review and analysis of object detectors in open environments.

Incremental Learning · Object

Semantic Mirror Jailbreak: Genetic Algorithm Based Jailbreak Prompts Against Open-source LLMs

no code implementations21 Feb 2024 Xiaoxia Li, Siyuan Liang, Jiyi Zhang, Han Fang, Aishan Liu, Ee-Chien Chang

Large Language Models (LLMs), used in creative writing, code generation, and translation, generate text based on input sequences but are vulnerable to jailbreak attacks, where crafted prompts induce harmful outputs.

Code Generation · Semantic Similarity · +1

Domain Bridge: Generative model-based domain forensic for black-box models

no code implementations7 Feb 2024 Jiyi Zhang, Han Fang, Ee-Chien Chang

In forensic investigations of machine learning models, techniques that determine a model's data domain play an essential role, with prior work relying on large-scale corpora like ImageNet to approximate the target model's domain.

BadCLIP: Dual-Embedding Guided Backdoor Attack on Multimodal Contrastive Learning

no code implementations20 Nov 2023 Siyuan Liang, Mingli Zhu, Aishan Liu, Baoyuan Wu, Xiaochun Cao, Ee-Chien Chang

This paper reveals that, in this practical scenario, backdoor attacks can remain effective even after defenses are applied, and introduces the BadCLIP attack, which is resistant to backdoor detection and model fine-tuning defenses.

Backdoor Attack · Contrastive Learning

Improving Adversarial Transferability by Stable Diffusion

no code implementations18 Nov 2023 Jiayang Liu, Siyu Zhu, Siyuan Liang, Jie Zhang, Han Fang, Weiming Zhang, Ee-Chien Chang

Various techniques have emerged to enhance the transferability of adversarial attacks for the black-box scenario.

Adaptive Attractors: A Defense Strategy against ML Adversarial Collusion Attacks

no code implementations2 Jun 2023 Jiyi Zhang, Han Fang, Ee-Chien Chang

This induces different adversarial regions in different copies, making adversarial samples generated on one copy not replicable on others.

Tracing the Origin of Adversarial Attack for Forensic Investigation and Deterrence

no code implementations ICCV 2023 Han Fang, Jiyi Zhang, Yupeng Qiu, Ke Xu, Chengfang Fang, Ee-Chien Chang

In this paper, we take the role of investigators who want to trace the attack and identify the source, that is, the particular model which the adversarial examples are generated from.

Adversarial Attack

Purifier: Defending Data Inference Attacks via Transforming Confidence Scores

no code implementations1 Dec 2022 Ziqi Yang, Lijin Wang, Da Yang, Jie Wan, Ziming Zhao, Ee-Chien Chang, Fan Zhang, Kui Ren

In addition, our further experiments show that PURIFIER is also effective in defending against adversarial model inversion attacks and attribute inference attacks.

Attribute Inference Attack · +1

Projecting Non-Fungible Token (NFT) Collections: A Contextual Generative Approach

no code implementations14 Oct 2022 Wesley Joon-Wie Tann, Akhil Vuputuri, Ee-Chien Chang

In this paper, we aim to obtain a generative model that, given the early transaction history (first quarter, Q1) of a newly minted collection, generates its subsequent transactions (quarters Q2, Q3, and Q4); the generative model is trained on the transaction histories of a few mature collections.

Mitigating Adversarial Attacks by Distributing Different Copies to Different Users

no code implementations30 Nov 2021 Jiyi Zhang, Han Fang, Wesley Joon-Wie Tann, Ke Xu, Chengfang Fang, Ee-Chien Chang

We point out that by distributing different copies of the model to different buyers, we can mitigate the attack such that adversarial samples found on one copy would not work on another copy.
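A quick way to check this claim empirically is to take adversarial examples crafted against one distributed copy and measure how often they also fool another copy; the models and adversarial batch below are placeholders, and the measurement loop is the point:

```python
import torch

@torch.no_grad()
def cross_copy_transfer_rate(copy_b, adv_examples, labels):
    """Fraction of adversarial examples (crafted on some copy A, and assumed
    to already fool it) that also fool a second distributed copy B.

    copy_b maps an input batch to logits. Illustrative measurement only;
    a low rate supports the mitigation claim in the abstract.
    """
    preds_b = copy_b(adv_examples).argmax(dim=1)
    return (preds_b != labels).float().mean().item()
```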

SHADOWCAST: Controllable Graph Generation with Explainability

no code implementations28 Sep 2020 Wesley Joon-Wie Tann, Ee-Chien Chang, Bryan Hooi

We introduce the problem of explaining graph generation, formulated as controlling the generative process to produce desired graphs with explainable structures.

Generative Adversarial Network · Graph Generation

SHADOWCAST: Controllable Graph Generation

no code implementations6 Jun 2020 Wesley Joon-Wie Tann, Ee-Chien Chang, Bryan Hooi

Given an observed graph and some user-specified Markov model parameters, SHADOWCAST controls the conditions to generate desired graphs.

Generative Adversarial Network · Graph Generation
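The snippet says generation is steered by user-specified Markov model parameters; a toy illustration of that idea (not the SHADOWCAST network itself) is to sample a node-label sequence from a Markov chain and then wire nodes with label-dependent edge probabilities:

```python
import numpy as np

def sample_labeled_graph(n_nodes, init_probs, trans_probs, edge_probs, seed=0):
    """Toy Markov-conditioned graph sampler (illustrative, not SHADOWCAST).

    init_probs:  (k,)   initial label distribution
    trans_probs: (k, k) label transition matrix of the Markov chain
    edge_probs:  (k, k) probability of an edge between labels (i, j)
    Returns node labels and an adjacency matrix.
    """
    rng = np.random.default_rng(seed)
    labels = [rng.choice(len(init_probs), p=init_probs)]
    for _ in range(n_nodes - 1):
        labels.append(rng.choice(len(init_probs), p=trans_probs[labels[-1]]))
    labels = np.array(labels)
    adj = np.zeros((n_nodes, n_nodes), dtype=int)
    for i in range(n_nodes):
        for j in range(i + 1, n_nodes):
            if rng.random() < edge_probs[labels[i], labels[j]]:
                adj[i, j] = adj[j, i] = 1
    return labels, adj
```

Changing the transition matrix or edge probabilities changes which graphs come out, which is the "controllable" aspect the abstract refers to.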

Defending Model Inversion and Membership Inference Attacks via Prediction Purification

no code implementations8 May 2020 Ziqi Yang, Bin Shao, Bohan Xuan, Ee-Chien Chang, Fan Zhang

Neural networks are susceptible to data inference attacks such as the model inversion attack and the membership inference attack, where the attacker could infer the reconstruction and the membership of a data sample from the confidence scores predicted by the target classifier.

Inference Attack · Membership Inference Attack
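The attack surface described here, inferring membership from confidence scores, can be illustrated with the simplest possible baseline: threshold the classifier's maximum confidence, since members of the training set tend to be predicted more confidently. This is a generic baseline, not one of the specific attacks evaluated in the paper:

```python
import numpy as np

def confidence_threshold_membership(confidences, threshold=0.9):
    """Guess 'member' when the target classifier's top confidence exceeds a threshold.

    confidences: (n, num_classes) softmax outputs from the target model.
    Returns a boolean membership guess per sample.
    """
    return confidences.max(axis=1) > threshold

# Hypothetical scores from an overconfident model: training samples vs. unseen samples
train_scores = np.array([[0.98, 0.01, 0.01], [0.95, 0.03, 0.02]])
test_scores = np.array([[0.60, 0.30, 0.10], [0.55, 0.25, 0.20]])
print(confidence_threshold_membership(train_scores))  # [ True  True]
print(confidence_threshold_membership(test_scores))   # [False False]
```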

Enhancing Transformation-Based Defenses Against Adversarial Attacks with a Distribution Classifier

no code implementations ICLR 2020 Connie Kou, Hwee Kuan Lee, Ee-Chien Chang, Teck Khim Ng

Furthermore, for the adversarial counterparts, after the image transformation the resulting softmax distributions have shapes similar to those obtained from the clean images.

Adversarial Attack
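The "distribution of softmax" referred to above comes from repeatedly transforming one input and collecting the model's softmax outputs; a downstream distribution classifier is then trained on these samples. A minimal sketch of the collection step (the specific transformation here is an assumption):

```python
import torch
import torch.nn.functional as F
import torchvision.transforms as T

def softmax_distribution(model, image, n_samples=32):
    """Collect softmax outputs of `model` over random resized crops of one image.

    image: (C, H, W) tensor. Returns an (n_samples, num_classes) tensor;
    a separate distribution classifier (not shown) would consume such samples.
    Illustrative only, not the exact transformation from the paper.
    """
    transform = T.RandomResizedCrop(size=image.shape[-1], scale=(0.8, 1.0))
    model.eval()
    outs = []
    with torch.no_grad():
        for _ in range(n_samples):
            x = transform(image).unsqueeze(0)
            outs.append(F.softmax(model(x), dim=1).squeeze(0))
    return torch.stack(outs)
```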

Confusing and Detecting ML Adversarial Attacks with Injected Attractors

no code implementations5 Mar 2020 Jiyi Zhang, Ee-Chien Chang, Hwee Kuan Lee

Many machine learning adversarial attacks find adversarial samples of a victim model ${\mathcal M}$ by following the gradient of some attack objective functions, either explicitly or implicitly.
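Following the gradient of an attack objective is exactly what the classic fast gradient sign method does; a minimal sketch of that standard attack (not this paper's attractor construction):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step FGSM: perturb x along the sign of the loss gradient.

    model: victim classifier returning logits; x: input batch in [0, 1];
    y: true labels. Shown only to illustrate 'following the gradient of an
    attack objective function'.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0, 1).detach()
```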

Effectiveness of Distillation Attack and Countermeasure on Neural Network Watermarking

no code implementations14 Jun 2019 Ziqi Yang, Hung Dang, Ee-Chien Chang

In this paper, we show that distillation, a widely used transformation technique, is a highly effective attack for removing watermarks embedded by existing algorithms.
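Distillation here is ordinary knowledge distillation: the attacker trains a fresh student on the watermarked teacher's softened outputs, and because the student never sees the watermark trigger set, the embedded watermark tends not to transfer. A minimal sketch of one distillation step (standard technique; hyperparameters are placeholders):

```python
import torch
import torch.nn.functional as F

def distill_step(student, teacher, x, optimizer, temperature=4.0):
    """One knowledge-distillation step: match the student's softened outputs
    to the (watermarked) teacher's on unlabeled data x."""
    teacher.eval()
    with torch.no_grad():
        t_logits = teacher(x)
    s_logits = student(x)
    loss = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```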

Enhancing Transformation-based Defenses using a Distribution Classifier

no code implementations1 Jun 2019 Connie Kou, Hwee Kuan Lee, Ee-Chien Chang, Teck Khim Ng

Furthermore, for the adversarial counterparts, after the image transformation the resulting softmax distributions have shapes similar to those obtained from the clean images.

Adversarial Attack

Adversarial Neural Network Inversion via Auxiliary Knowledge Alignment

1 code implementation22 Feb 2019 Ziqi Yang, Ee-Chien Chang, Zhenkai Liang

In this work, we investigate the model inversion problem in the adversarial settings, where the adversary aims at inferring information about the target model's training data and test data from the model's prediction values.

Towards Scaling Blockchain Systems via Sharding

3 code implementations2 Apr 2018 Hung Dang, Tien Tuan Anh Dinh, Dumitrel Loghin, Ee-Chien Chang, Qian Lin, Beng Chin Ooi

In this work, we take a principled approach to apply sharding, which is a well-studied and proven technique to scale out databases, to blockchain systems in order to improve their transaction throughput at scale.

Distributed, Parallel, and Cluster Computing · Cryptography and Security · Databases
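The core idea carried over from databases is to partition state and transactions across shards, typically by hashing a key, so that shards can process disjoint transactions in parallel. A toy illustration of the routing step only (not the committee formation or cross-shard commit protocol from the paper):

```python
import hashlib

def shard_of(account_id: str, num_shards: int) -> int:
    """Route an account (and transactions touching it) to a shard by hashing its id."""
    digest = hashlib.sha256(account_id.encode()).digest()
    return int.from_bytes(digest[:8], "big") % num_shards

# Transactions confined to different shards can be processed in parallel,
# which is where the throughput gain at scale comes from.
print(shard_of("alice", 4), shard_of("bob", 4))
```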

Flipped-Adversarial AutoEncoders

no code implementations13 Feb 2018 Jiyi Zhang, Hung Dang, Hwee Kuan Lee, Ee-Chien Chang

We propose a flipped-Adversarial AutoEncoder (FAAE) that simultaneously trains a generative model G that maps an arbitrary latent code distribution to a data distribution and an encoder E that embodies an "inverse mapping" that encodes a data sample into a latent code vector.
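The generator/encoder relationship described here, where G maps latent codes to data and E approximately inverts G, can be sketched with a latent-reconstruction objective; the adversarial part of FAAE's training is omitted, and the module shapes below are assumptions for illustration:

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 64, 784
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
E = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
opt = torch.optim.Adam(list(G.parameters()) + list(E.parameters()), lr=1e-3)

for step in range(100):                  # toy loop on random latent codes
    z = torch.randn(32, latent_dim)
    x_fake = G(z)                         # latent code -> data sample
    z_rec = E(x_fake)                     # encoder acts as an approximate inverse of G
    loss = ((z_rec - z) ** 2).mean()      # latent reconstruction; FAAE adds adversarial terms
    opt.zero_grad()
    loss.backward()
    opt.step()
```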
