Search Results for author: Stephen Obadinma

Found 5 papers, 2 papers with code

Calibration Attack: A Framework For Adversarial Attacks Targeting Calibration

no code implementations · 5 Jan 2024 · Stephen Obadinma, Xiaodan Zhu, Hongyu Guo

We introduce calibration attacks, a new framework of adversarial attacks in which attacks are generated and organized to trap victim models into miscalibration without altering their original accuracy, seriously endangering the trustworthiness of the models and of any decision-making based on their confidence scores.

Decision Making
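The abstract above targets calibration rather than accuracy. The paper's attack procedure is not shown here, but the standard metric such work typically measures, Expected Calibration Error (ECE), can be sketched to show how two models with identical accuracy can differ sharply in calibration. The binning scheme and toy numbers below are illustrative, not taken from the paper:

```python
def expected_calibration_error(confidences, correct, n_bins=10):
    """ECE: bin predictions by confidence, then take the weighted average
    of |accuracy - mean confidence| over the bins."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)  # clamp conf == 1.0 into the top bin
        bins[idx].append((conf, ok))
    n = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        acc = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / n) * abs(acc - avg_conf)
    return ece

# Two toy models with identical accuracy (2/4 correct) but different confidences:
well_calibrated = expected_calibration_error([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])
overconfident = expected_calibration_error([0.99, 0.98, 0.97, 0.96], [1, 1, 0, 0])
```

Here `overconfident` is far larger than `well_calibrated` even though both models classify the same examples correctly, which is exactly the kind of gap a calibration attack aims to induce while leaving accuracy untouched.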

Effectiveness of Data Augmentation for Parameter Efficient Tuning with Limited Data

no code implementations · 5 Mar 2023 · Stephen Obadinma, Hongyu Guo, Xiaodan Zhu

In this paper, we examine the effectiveness of several popular task-agnostic data augmentation techniques, i.e., EDA, Back Translation, and Mixup, when using two general parameter-efficient tuning methods, P-tuning v2 and LoRA, under data scarcity.

Data Augmentation Sentence +1
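Of the three augmentation techniques named in the abstract, Mixup is the one that operates on continuous representations. A minimal sketch of the core idea (convex combination of two examples and their label vectors, with the mixing weight drawn from a Beta distribution) is below; the `alpha` value and the vector inputs are illustrative assumptions, not the paper's configuration:

```python
import random

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Mixup: blend two examples and their one-hot labels with weight
    lam ~ Beta(alpha, alpha), producing a soft-labeled synthetic example."""
    lam = random.betavariate(alpha, alpha)
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y

# Toy usage: mix two 2-dimensional inputs with opposite one-hot labels.
random.seed(0)
x_mixed, y_mixed = mixup([1.0, 0.0], [1, 0], [0.0, 1.0], [0, 1])
```

For text tasks, Mixup is usually applied to embeddings or hidden states rather than raw tokens, which is why it pairs naturally with tuning methods like P-tuning v2 and LoRA that already operate in the model's continuous representation space.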

How Curriculum Learning Impacts Model Calibration

no code implementations · 29 Sep 2021 · Stephen Obadinma, Xiaodan Zhu, Hongyu Guo

Our studies suggest the following: most of the time, curriculum learning has a negligible effect on calibration; however, in certain settings with limited training time and noisy data, curriculum learning can substantially reduce calibration error, in a manner that cannot be explained by dynamically sampling the dataset.
