no code implementations • 21 Mar 2024 • Jimin Mun, Liwei Jiang, Jenny Liang, Inyoung Cheong, Nicole DeCario, Yejin Choi, Tadayoshi Kohno, Maarten Sap
As a first step towards democratic governance and risk assessment of AI, we introduce Particip-AI, a framework to gather current and future AI use cases and their harms and benefits from the non-expert public.
1 code implementation • 8 Mar 2024 • Yuhao Wu, Franziska Roesner, Tadayoshi Kohno, Ning Zhang, Umar Iqbal
These LLM apps leverage the de facto natural language-based automated execution paradigm of LLMs: that is, apps and their interactions are defined in natural language, provided access to user data, and allowed to freely interact with each other and the system.
1 code implementation • 19 Sep 2023 • Umar Iqbal, Tadayoshi Kohno, Franziska Roesner
In this paper, we propose a framework that lays a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future plugin-integrated LLM platforms.
no code implementations • 30 Aug 2023 • Inyoung Cheong, Aylin Caliskan, Tadayoshi Kohno
Our interdisciplinary study investigates how effectively U.S. laws confront the challenges posed by Generative AI to human values.
no code implementations • 8 Dec 2022 • Ashish Hooda, Andrey Labunets, Tadayoshi Kohno, Earlence Fernandes
Content scanning systems employ perceptual hashing algorithms to scan user content for illegal material, such as child pornography or terrorist recruitment flyers.
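The matching step in such scanners can be illustrated with a toy perceptual hash. The sketch below uses a simple average hash (aHash) and a Hamming-distance comparison; it is an assumption-laden stand-in for the proprietary algorithms (e.g., PhotoDNA) that deployed systems actually use:

```python
# Illustrative average-hash (aHash), NOT the proprietary perceptual hashes
# real content scanners deploy. Shows why near-duplicate images hash close.

def average_hash(pixels):
    """Return a bit list: 1 where a pixel exceeds the image's mean brightness."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return sum(a != b for a, b in zip(h1, h2))

# Two 4x4 grayscale "images": the second is a uniformly brightened copy.
img = [[10, 200, 30, 220],
       [15, 190, 25, 210],
       [240, 20, 230, 10],
       [250, 30, 220, 5]]
near_duplicate = [[p + 5 for p in row] for row in img]

h1, h2 = average_hash(img), average_hash(near_duplicate)
print(hamming(h1, h2))  # 0 -- the brightness shift does not change the hash
```

Because matching is approximate rather than exact, small adversarial perturbations to either the hash inputs or the database can change scan outcomes, which is the attack surface this line of work examines.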
no code implementations • NeurIPS 2021 • Chunjong Park, Anas Awadalla, Tadayoshi Kohno, Shwetak Patel
We then translate the out-of-distribution score into a human interpretable CONFIDENCE SCORE to investigate its effect on the users' interaction with health ML applications.
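One way such a translation can work is to squash a raw out-of-distribution (OOD) score into a bounded, human-readable number. The sketch below is a hypothetical logistic mapping with assumed parameters (`midpoint`, `steepness`); the paper's actual calibration is not reproduced here:

```python
import math

# Hypothetical mapping from a raw OOD score to a 0-100 "confidence score".
# The midpoint and steepness are illustrative assumptions, not the paper's values.

def confidence_score(ood_score, midpoint=0.5, steepness=10.0):
    """Higher OOD score -> lower confidence, squashed into [0, 100]."""
    p_in_distribution = 1.0 / (1.0 + math.exp(steepness * (ood_score - midpoint)))
    return round(100 * p_in_distribution)

print(confidence_score(0.1))  # 98: looks in-distribution, high confidence
print(confidence_score(0.9))  # 2: likely out-of-distribution, low confidence
```

The point of the bounded scale is that users can weigh a single number against a health decision without needing to interpret raw model internals.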
no code implementations • ICML Workshop AML 2021 • Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno
When data is publicly released for human consumption, it is unclear how to prevent its unauthorized usage for machine learning purposes.
1 code implementation • 15 Dec 2020 • Ivan Evtimov, Pascal Sturmfels, Tadayoshi Kohno
Searches in these databases are now being offered as a service to law enforcement and others and carry a multitude of privacy risks for social media users.
no code implementations • 13 Jul 2020 • Ivan Evtimov, Weidong Cui, Ece Kamar, Emre Kiciman, Tadayoshi Kohno, Jerry Li
Machine learning (ML) models deployed in many safety- and business-critical systems are vulnerable to exploitation through adversarial examples.
no code implementations • 20 Jul 2018 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song
In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene.
no code implementations • CVPR 2018 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song
Recent studies show that the state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples, resulting from small-magnitude perturbations added to the input.
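The core phenomenon can be sketched with a fast-gradient-sign-style attack on a linear classifier. Real attacks target deep networks; the linear model below is an assumption that keeps the example self-contained while showing how a small perturbation flips a prediction:

```python
# Minimal FGSM-style sketch on a linear classifier (a stand-in for a DNN).
# A bounded per-feature perturbation in the gradient's sign direction
# flips the predicted class.

def predict(w, b, x):
    """Linear score: positive => class 1, negative => class 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def fgsm(w, x, epsilon):
    """Shift each feature by +/- epsilon along the score gradient's sign
    (for a linear model, the gradient with respect to x is just w)."""
    return [xi + epsilon * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w, b = [0.5, -0.3, 0.8], -0.1
x = [0.1, 0.4, 0.05]            # correctly classified: score is -0.13 (class 0)
x_adv = fgsm(w, x, epsilon=0.2)  # each feature moves by at most 0.2

print(predict(w, b, x))      # -0.13
print(predict(w, b, x_adv))  # 0.19 -- small perturbation flips the class
```

The same sign-of-gradient principle, scaled up to image pixels and deep networks, underlies the physical-world attacks (e.g., adversarial stop-sign stickers) studied in this line of work.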
no code implementations • 21 Dec 2017 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Dawn Song, Tadayoshi Kohno, Amir Rahmati, Atul Prakash, Florian Tramer
Given that state-of-the-art object detection algorithms are harder to fool with the same set of adversarial examples, we show here that these detectors can also be attacked by physical adversarial examples.
1 code implementation • 27 Jul 2017 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song
We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions.
no code implementations • 16 Apr 2015 • Tamara Bonaci, Jeffrey Herron, Tariq Yusuf, Junjie Yan, Tadayoshi Kohno, Howard Jay Chizeck
Our work seeks to answer this question by systematically analyzing possible cyber security attacks against Raven II, an advanced teleoperated robotic surgery system.
Robotics • Cryptography and Security