Search Results for author: Tadayoshi Kohno

Found 14 papers, 4 papers with code

Particip-AI: A Democratic Surveying Framework for Anticipating Future AI Use Cases, Harms and Benefits

no code implementations • 21 Mar 2024 • Jimin Mun, Liwei Jiang, Jenny Liang, Inyoung Cheong, Nicole DeCario, Yejin Choi, Tadayoshi Kohno, Maarten Sap

As a first step towards democratic governance and risk assessment of AI, we introduce Particip-AI, a framework to gather current and future AI use cases and their harms and benefits from the non-expert public.

SecGPT: An Execution Isolation Architecture for LLM-Based Systems

1 code implementation • 8 Mar 2024 • Yuhao Wu, Franziska Roesner, Tadayoshi Kohno, Ning Zhang, Umar Iqbal

These LLM apps leverage the de facto natural-language-based automated execution paradigm of LLMs: apps and their interactions are defined in natural language, given access to user data, and allowed to interact freely with each other and with the system.
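The isolation idea lends itself to a small sketch. The following is a minimal, hypothetical illustration of hub-mediated app isolation; the names and API are ours for illustration, not SecGPT's actual interface:

```python
# Minimal sketch of hub-mediated app isolation (hypothetical design,
# not SecGPT's actual API): apps never call each other directly; a
# trusted hub routes every message and enforces per-app permissions.

class Hub:
    def __init__(self):
        self.apps = {}          # app name -> handler callable
        self.permissions = {}   # (caller, callee) -> bool

    def register(self, name, handler):
        self.apps[name] = handler

    def grant(self, caller, callee):
        self.permissions[(caller, callee)] = True

    def send(self, caller, callee, message):
        # Every cross-app interaction passes through this chokepoint,
        # so undeclared app-to-app flows are blocked by construction.
        if not self.permissions.get((caller, callee)):
            raise PermissionError(f"{caller} may not message {callee}")
        return self.apps[callee](message)

hub = Hub()
hub.register("calendar", lambda msg: f"calendar got: {msg}")
hub.grant("email", "calendar")
print(hub.send("email", "calendar", "add meeting at 3pm"))
# hub.send("weather", "calendar", ...) would raise PermissionError
```

The design choice this illustrates is the contrast with the status quo described above: instead of apps freely interacting in a shared context, all interaction is mediated and permissioned.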

LLM Platform Security: Applying a Systematic Evaluation Framework to OpenAI's ChatGPT Plugins

1 code implementation • 19 Sep 2023 • Umar Iqbal, Tadayoshi Kohno, Franziska Roesner

In this paper, we propose a framework that lays a foundation for LLM platform designers to analyze and improve the security, privacy, and safety of current and future plugin-integrated LLM platforms.

Language Modelling • Large Language Model

Is the U.S. Legal System Ready for AI's Challenges to Human Values?

no code implementations • 30 Aug 2023 • Inyoung Cheong, Aylin Caliskan, Tadayoshi Kohno

Our interdisciplinary study investigates how effectively U.S. laws confront the challenges posed by Generative AI to human values.

Re-purposing Perceptual Hashing based Client Side Scanning for Physical Surveillance

no code implementations • 8 Dec 2022 • Ashish Hooda, Andrey Labunets, Tadayoshi Kohno, Earlence Fernandes

Content scanning systems employ perceptual hashing algorithms to scan user content for illegal material, such as child pornography or terrorist recruitment flyers.
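For readers unfamiliar with perceptual hashing, the toy average-hash (aHash) sketch below shows the core idea. Deployed scanners use more robust algorithms (e.g., PhotoDNA or PDQ); this is background illustration, not the paper's code:

```python
# Toy average-hash (aHash) sketch illustrating perceptual hashing in
# general; production content scanners use far more robust algorithms.
from PIL import Image
import numpy as np

def average_hash(path, hash_size=8):
    # Downscale to hash_size x hash_size grayscale, then threshold
    # each pixel against the mean to get a 64-bit fingerprint.
    img = Image.open(path).convert("L").resize((hash_size, hash_size))
    pixels = np.asarray(img, dtype=np.float32)
    return (pixels > pixels.mean()).flatten()

def hamming_distance(h1, h2):
    # Near-duplicate images yield small Hamming distances even after
    # resizing or recompression, which is what scanning relies on.
    return int(np.count_nonzero(h1 != h2))

# Usage: a distance below a small threshold (e.g., <= 10 of 64 bits)
# is treated as a match against a database of known illegal content.
```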

Reliable and Trustworthy Machine Learning for Health Using Dataset Shift Detection

no code implementations • NeurIPS 2021 • Chunjong Park, Anas Awadalla, Tadayoshi Kohno, Shwetak Patel

We then translate the out-of-distribution score into a human-interpretable confidence score to investigate its effect on the users' interaction with health ML applications (a toy mapping is sketched after this entry).

BIG-bench Machine Learning • Medical Diagnosis +1
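One plausible way to turn an out-of-distribution score into a confidence score is percentile calibration against in-distribution validation data; this mapping is an illustrative assumption, not necessarily the paper's method:

```python
# Hedged sketch: map an out-of-distribution (OOD) score to a
# human-interpretable confidence score by calibrating against
# in-distribution validation scores (illustrative assumption only).
import numpy as np

def confidence_from_ood(ood_score, val_ood_scores):
    # Fraction of in-distribution validation samples that score at
    # least as "OOD" as this input: near 1 means the input looks more
    # typical than most training data; near 0 means it is an outlier.
    val = np.asarray(val_ood_scores)
    return float(np.mean(val >= ood_score))

val_scores = np.random.default_rng(0).normal(0.0, 1.0, 1000)  # placeholder
print(confidence_from_ood(0.5, val_scores))   # in-distribution-ish input
print(confidence_from_ood(4.0, val_scores))   # likely shifted input, ~0
```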

Disrupting Model Training with Adversarial Shortcuts

no code implementations • ICML Workshop AML 2021 • Ivan Evtimov, Ian Covert, Aditya Kusupati, Tadayoshi Kohno

When data is publicly released for human consumption, it is unclear how to prevent its unauthorized usage for machine learning purposes.

BIG-bench Machine Learning • Image Classification

FoggySight: A Scheme for Facial Lookup Privacy

1 code implementation • 15 Dec 2020 • Ivan Evtimov, Pascal Sturmfels, Tadayoshi Kohno

Searches in these databases are now being offered as a service to law enforcement and others and carry a multitude of privacy risks for social media users.

Face Recognition • Privacy Preserving

Security and Machine Learning in the Real World

no code implementations • 13 Jul 2020 • Ivan Evtimov, Weidong Cui, Ece Kamar, Emre Kiciman, Tadayoshi Kohno, Jerry Li

Machine learning (ML) models deployed in many safety- and business-critical systems are vulnerable to exploitation through adversarial examples.

BIG-bench Machine Learning

Physical Adversarial Examples for Object Detectors

no code implementations • 20 Jul 2018 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Florian Tramer, Atul Prakash, Tadayoshi Kohno, Dawn Song

In this work, we extend physical attacks to more challenging object detection models, a broader class of deep learning algorithms widely used to detect and label multiple objects within a scene.

Object • object-detection +1

Robust Physical-World Attacks on Deep Learning Visual Classification

no code implementations • CVPR 2018 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song

Recent studies show that state-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples resulting from small-magnitude perturbations added to the input (a baseline digital attack is sketched after this entry).

Classification • General Classification
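As background, the canonical way to construct such a small-magnitude perturbation digitally is the fast gradient sign method (FGSM). This baseline sketch is for context only; it is not the physical-world attack the paper proposes:

```python
# Hedged FGSM sketch illustrating "small-magnitude perturbations":
# x_adv = x + eps * sign(grad_x loss). Classic digital baseline,
# shown for background, not the paper's physical-world attack.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step is often enough to flip the prediction
    # while the change stays visually imperceptible.
    return torch.clamp(x + eps * x.grad.sign(), 0, 1).detach()
```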

Note on Attacking Object Detectors with Adversarial Stickers

no code implementations • 21 Dec 2017 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Dawn Song, Tadayoshi Kohno, Amir Rahmati, Atul Prakash, Florian Tramer

Given that state-of-the-art object detection algorithms are harder to fool with the same set of adversarial examples, here we show that these detectors can also be attacked by physical adversarial examples.

Object

Robust Physical-World Attacks on Deep Learning Models

1 code implementation • 27 Jul 2017 • Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, Bo Li, Amir Rahmati, Chaowei Xiao, Atul Prakash, Tadayoshi Kohno, Dawn Song

We propose a general attack algorithm, Robust Physical Perturbations (RP2), to generate robust visual adversarial perturbations under different physical conditions.
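The RP2 recipe can be roughly sketched as optimizing a masked perturbation while sampling transformations that stand in for varying physical conditions. The model, mask, and transformation sampler below are placeholder assumptions, not the paper's released code:

```python
# Hedged sketch of the RP2 idea: optimize a perturbation confined to a
# masked region (e.g., a sticker area) so it survives a distribution of
# physical transformations. Placeholders throughout, not the real code.
import torch
import torch.nn.functional as F

def rp2_attack(model, x, target, mask, steps=200, lr=0.1, lam=1e-3):
    # delta lives only inside the masked region of the input image.
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        # Sample a simple synthetic "physical" transformation: random
        # brightness jitter standing in for pose/lighting variation.
        t = x * (0.8 + 0.4 * torch.rand(1))
        adv = torch.clamp(t + mask * delta, 0, 1)
        # Minimizing cross-entropy toward the attacker's target class
        # makes this targeted; the L1 term keeps the perturbation small.
        loss = F.cross_entropy(model(adv), target) + lam * delta.abs().sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (mask * delta).detach()
```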

To Make a Robot Secure: An Experimental Analysis of Cyber Security Threats Against Teleoperated Surgical Robots

no code implementations • 16 Apr 2015 • Tamara Bonaci, Jeffrey Herron, Tariq Yusuf, Junjie Yan, Tadayoshi Kohno, Howard Jay Chizeck

Our work seeks to answer this question by systematically analyzing possible cyber security attacks against Raven II, an advanced teleoperated robotic surgery system.

Robotics • Cryptography and Security
