no code implementations • 23 Feb 2024 • Vinu Sankar Sadasivan, Shoumik Saha, Gaurang Sriramanan, Priyatham Kattakinda, Atoosa Chegini, Soheil Feizi
Through human evaluations, we find that our untargeted attack causes Vicuna-7B-v1.5 to produce ~15% more incorrect outputs when compared to LM outputs in the absence of our attack.
1 code implementation • 29 Sep 2023 • Mehrdad Saberi, Vinu Sankar Sadasivan, Keivan Rezaei, Aounon Kumar, Atoosa Chegini, Wenxiao Wang, Soheil Feizi
Moreover, we show that watermarking methods are vulnerable to spoofing attacks where the attacker aims to have real images identified as watermarked ones, damaging the reputation of the developers.
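For intuition, here is a minimal sketch of what a spoofing attack of this flavor could look like, not the paper's exact procedure: a PGD-style perturbation pushes a real image toward a differentiable detector's "watermarked" decision. The `detector` callable, the L-infinity budget `eps`, and the step schedule are all illustrative assumptions.

```python
# Hedged sketch of a spoofing attack on an image watermark detector (assumed
# differentiable, returning P(watermarked)); NOT the paper's exact method.
import torch

def spoof(image, detector, eps=8/255, alpha=1/255, steps=100):
    """Perturb a real `image` (tensor in [0,1]) so `detector` flags it as watermarked."""
    x = image.clone().detach()
    for _ in range(steps):
        x.requires_grad_(True)
        loss = -torch.log(detector(x) + 1e-12)       # minimize -> raise P(watermarked)
        loss.backward()
        with torch.no_grad():
            x = x - alpha * x.grad.sign()            # step toward the "watermarked" region
            x = image + (x - image).clamp(-eps, eps) # stay within the L_inf ball
            x = x.clamp(0, 1)
        x = x.detach()
    return x
```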
no code implementations • 28 Mar 2023 • Aounon Kumar, Vinu Sankar Sadasivan, Soheil Feizi
Robustness certificates based on the assumption of independent input samples are not directly applicable in such scenarios.
1 code implementation • 17 Mar 2023 • Vinu Sankar Sadasivan, Aounon Kumar, Sriram Balasubramanian, Wenxiao Wang, Soheil Feizi
In particular, we develop a recursive paraphrasing attack to apply to AI-generated text, which can break a whole range of detectors, including those using watermarking schemes as well as neural network-based detectors, zero-shot classifiers, and retrieval-based detectors.
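A minimal sketch of the recursive-paraphrasing idea follows: each round rewrites the previous round's output, progressively washing out the signals (e.g., watermarks) that detectors rely on. The paper uses the DIPPER paraphraser; the T5-based checkpoint, prompt prefix, and sampling settings below are stand-in assumptions.

```python
# Hedged sketch of recursive paraphrasing; the checkpoint name is an assumption,
# not the paper's setup (which uses DIPPER).
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

MODEL = "humarin/chatgpt_paraphraser_on_T5_base"  # assumed off-the-shelf paraphraser
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSeq2SeqLM.from_pretrained(MODEL)

def paraphrase(text: str) -> str:
    inputs = tok("paraphrase: " + text, return_tensors="pt", truncation=True)
    out = model.generate(**inputs, do_sample=True, top_p=0.95, max_new_tokens=256)
    return tok.decode(out[0], skip_special_tokens=True)

def recursive_paraphrase(text: str, rounds: int = 3) -> str:
    # Apply the paraphraser to its own output `rounds` times.
    for _ in range(rounds):
        text = paraphrase(text)
    return text
```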
1 code implementation • CVPR 2023 • Vinu Sankar Sadasivan, Mahdi Soltanolkotabi, Soheil Feizi
Here, ERM on the clean training data achieves a clean test accuracy of 80.66%.
no code implementations • 29 Sep 2021 • Vinu Sankar Sadasivan, Jayesh Malaviya, Anirban Dasgupta
Recent works attempt to prune neural networks at initialization to design sparse networks that can be trained efficiently.
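As one concrete instance of pruning at initialization (the paper studies such methods; this sketch is SNIP-style, not the paper's own code), weights are scored by |weight x gradient| on a single batch before training, and only the top fraction is kept:

```python
# Hedged SNIP-style sketch of pruning at initialization; keep_ratio and the
# scoring rule are assumptions for illustration.
import torch

def prune_at_init(model, loss_fn, batch, keep_ratio=0.1):
    x, y = batch
    weights = [p for p in model.parameters() if p.dim() > 1]  # skip biases/norms
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, weights)
    # Connection sensitivity: |w * dL/dw|, pooled over all prunable layers.
    scores = torch.cat([(w * g).abs().flatten() for w, g in zip(weights, grads)])
    threshold = scores.topk(int(keep_ratio * scores.numel())).values.min()
    with torch.no_grad():
        for w, g in zip(weights, grads):
            w.mul_(((w * g).abs() >= threshold).float())  # zero out pruned connections
```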
1 code implementation • 27 Feb 2021 • Vinu Sankar Sadasivan, Anirban Dasgupta
Curriculum learning is a training strategy that sorts the training examples by some measure of their difficulty and gradually exposes them to the learner to improve network performance.
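A minimal sketch of this strategy in general form (a generic easy-first pacing scheme, not this paper's specific method; the difficulty scores and pacing schedule are assumptions):

```python
# Hedged sketch of curriculum ordering: sort by an externally supplied
# difficulty score and reveal a growing easy-first pool over epochs.
import numpy as np

def curriculum_batches(X, y, difficulty, epoch, total_epochs, batch_size=32):
    order = np.argsort(difficulty)                       # easiest examples first
    frac = min(1.0, 0.25 + 0.75 * epoch / total_epochs)  # assumed linear pacing
    pool = order[: max(batch_size, int(frac * len(X)))]
    np.random.shuffle(pool)                              # shuffle within the current pool
    for i in range(0, len(pool), batch_size):
        idx = pool[i : i + batch_size]
        yield X[idx], y[idx]
```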
no code implementations • 1 Jan 2021 • Vinu Sankar Sadasivan, Anirban Dasgupta
Curriculum learning is a training strategy that sorts the training examples by their difficulty and gradually exposes them to the learner.
2 code implementations • NeurIPS 2019 • Don Dennis, Durmus Alp Emre Acar, Vikram Mandikal, Vinu Sankar Sadasivan, Venkatesh Saligrama, Harsha Vardhan Simhadri, Prateek Jain
The second layer consumes the output of the first layer using a second RNN, thus capturing long-range dependencies.
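A minimal sketch of this two-layer layout (a ShaRNN-style arrangement; the brick length, GRU cells, and hidden sizes are assumptions): the first RNN runs independently over short fixed-length "bricks" of the input, and the second RNN consumes the per-brick summaries.

```python
# Hedged sketch of a two-layer shallow RNN; hyperparameters are illustrative.
import torch
import torch.nn as nn

class TwoLayerShallowRNN(nn.Module):
    def __init__(self, in_dim, hid1=64, hid2=64, brick=8):
        super().__init__()
        self.brick = brick
        self.rnn1 = nn.GRU(in_dim, hid1, batch_first=True)  # short-range layer
        self.rnn2 = nn.GRU(hid1, hid2, batch_first=True)    # long-range layer

    def forward(self, x):                    # x: (batch, seq_len, in_dim)
        b, t, d = x.shape
        assert t % self.brick == 0, "seq_len must be a multiple of brick"
        # Each brick becomes an independent short sequence for the first RNN.
        bricks = x.reshape(b * (t // self.brick), self.brick, d)
        _, h1 = self.rnn1(bricks)            # h1: (1, b * n_bricks, hid1)
        summaries = h1.squeeze(0).reshape(b, t // self.brick, -1)
        _, h2 = self.rnn2(summaries)         # second RNN over brick summaries
        return h2.squeeze(0)                 # (batch, hid2) sequence embedding
```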