no code implementations • 22 Apr 2024 • Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, Shengyun Peng, Austin Wright, Kevin Li, Haekyu Park, Haoyang Yang, Polo Chau
Diffusion-based generative models' impressive ability to create convincing images has garnered global attention.
2 code implementations • 1 Apr 2024 • Seongmin Lee, Zijie J. Wang, Aishwarya Chakravarthy, Alec Helbling, Shengyun Peng, Mansi Phute, Duen Horng Chau, Minsuk Kahng
Our library offers a new way to quickly attribute an LLM's text generation to training data points to inspect model behaviors, enhance its trustworthiness, and compare model-generated text with user-provided text.
1 code implementation • 7 Mar 2024 • Shengyun Peng, Seongmin Lee, XiaoJing Wang, Rajarajeswari Balasubramaniyan, Duen Horng Chau
Tables convey factual and quantitative data with implicit conventions created by humans that are often challenging for machines to parse.
1 code implementation • 23 Feb 2024 • Shengyun Peng, Seongmin Lee, XiaoJing Wang, Rajarajeswari Balasubramaniyan, Duen Horng Chau
We discover that the performance gap between the linear projection transformer and the hybrid CNN-transformer can be mitigated by self-supervised pretraining (SSP) of the visual encoder in the table structure recognition (TSR) model.
2 code implementations • 9 Nov 2023 • Shengyun Peng, Seongmin Lee, XiaoJing Wang, Rajarajeswari Balasubramaniyan, Duen Horng Chau
This allows it to "see" an appropriate portion of the table and "store" the complex table structure within sufficient context length for the subsequent transformer.
Ranked #3 on Table Recognition on PubTabNet
1 code implementation • 30 Aug 2023 • Shengyun Peng, Weilin Xu, Cory Cornelius, Matthew Hull, Kevin Li, Rahul Duggal, Mansi Phute, Jason Martin, Duen Horng Chau
Our research aims to unify existing works' diverging opinions on how architectural components affect the adversarial robustness of CNNs.
1 code implementation • 14 Aug 2023 • Mansi Phute, Alec Helbling, Matthew Hull, Shengyun Peng, Sebastian Szyller, Cory Cornelius, Duen Horng Chau
We test LLM Self Defense on GPT-3.5 and Llama 2, two of the most prominent current LLMs, against various types of attacks, such as forcefully inducing affirmative responses to prompts and prompt-engineering attacks.
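The core idea of LLM Self Defense can be sketched as follows: a candidate response is wrapped in a harm-classification prompt and fed back to an LLM, and the response is withheld if the screen flags it. This is a minimal illustration, not the paper's implementation; the screening prompt wording, the function names, and the refusal message are assumptions, and `llm` stands in for any callable that maps a prompt string to a response string.

```python
def build_screening_prompt(candidate_response: str) -> str:
    """Wrap a candidate response in a harm-screening prompt.

    The exact wording here is illustrative, not the paper's prompt.
    """
    return (
        "Does the following text contain harmful content? "
        "Answer 'Yes, this is harmful' or 'No, this is not harmful'.\n\n"
        f"Text: {candidate_response}"
    )


def is_harmful(candidate_response: str, llm) -> bool:
    """Ask a (caller-supplied) LLM to classify a piece of generated text."""
    verdict = llm(build_screening_prompt(candidate_response))
    return verdict.strip().lower().startswith("yes")


def self_defended_generate(prompt: str, llm) -> str:
    """Generate a response, then refuse to return it if the screen flags it."""
    response = llm(prompt)
    if is_harmful(response, llm):
        return "Sorry, I can't help with that."
    return response
```

Note that the same model both generates and screens; no fine-tuning of the screening model is required, which is what makes this style of defense cheap to deploy.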
1 code implementation • 4 May 2023 • Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, Shengyun Peng, Austin Wright, Kevin Li, Haekyu Park, Haoyang Yang, Duen Horng Chau
Diffusion Explainer tightly integrates a visual overview of Stable Diffusion's complex components with detailed explanations of their underlying operations, enabling users to fluidly transition between multiple levels of abstraction through animations and interactive elements.
1 code implementation • 8 Jan 2023 • Shengyun Peng, Weilin Xu, Cory Cornelius, Kevin Li, Rahul Duggal, Duen Horng Chau, Jason Martin
Adversarial Training is the most effective approach for improving the robustness of Deep Neural Networks (DNNs).
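Adversarial training minimizes loss on worst-case perturbed inputs rather than clean ones. The toy sketch below is an assumption-laden illustration, not the paper's setup: it uses single-step FGSM perturbations on a NumPy logistic-regression model, whereas the paper studies DNNs.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def bce_loss(w, x, y):
    """Binary cross-entropy of a linear logistic model on one example."""
    p = sigmoid(x @ w)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))


def fgsm(w, x, y, eps):
    """Fast Gradient Sign Method: one-step, L-inf-bounded input perturbation."""
    grad_x = (sigmoid(x @ w) - y) * w  # d(loss)/dx for logistic regression
    return x + eps * np.sign(grad_x)


def adversarial_train(X, Y, eps=0.1, lr=0.5, steps=200):
    """Toy min-max loop: fit the weights on FGSM-perturbed inputs."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        # Inner maximization: craft an adversarial example per training point.
        X_adv = np.array([fgsm(w, x, y, eps) for x, y in zip(X, Y)])
        # Outer minimization: gradient step on the perturbed batch.
        grad_w = (sigmoid(X_adv @ w) - Y) @ X_adv / len(X)
        w -= lr * grad_w
    return w
```

The inner step raises the loss within an eps-ball around each input; the outer step lowers it, so the learned boundary keeps a margin of at least roughly eps from the training points.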
no code implementations • 30 Sep 2022 • Rahul Duggal, Shengyun Peng, Hao Zhou, Duen Horng Chau
In this paper, we propose a new and complementary direction for improving performance on long-tailed datasets: optimizing the backbone architecture through neural architecture search (NAS).
1 code implementation • CVPR 2022 • Sivapriya Vellaichamy, Matthew Hull, Zijie J. Wang, Nilaksh Das, Shengyun Peng, Haekyu Park, Duen Horng (Polo) Chau
With deep-learning-based systems performing exceedingly well in many vision-related tasks, a major concern with their widespread deployment, especially in safety-critical applications, is their susceptibility to adversarial attacks.
no code implementations • 13 Jun 2020 • Shengyun Peng, Yunxuan Yu, Kun Wang, Lei He
Specifically, a target object is defined by a bounding box center, tracking offset, and object size.
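The center-plus-offset-plus-size representation described above can be sketched as a small container: the box is recovered from the center and size, and the tracking offset links the detection to its position in the previous frame. This is a hypothetical illustration of the representation, not the paper's code; the class and field names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class Track:
    """Hypothetical container for a tracked object (center, offset, size)."""
    cx: float  # bounding box center x in the current frame
    cy: float  # bounding box center y in the current frame
    w: float   # object width
    h: float   # object height
    dx: float  # tracking offset: center displacement since the previous frame
    dy: float

    def box(self):
        """Axis-aligned corners (x1, y1, x2, y2) from center and size."""
        return (self.cx - self.w / 2, self.cy - self.h / 2,
                self.cx + self.w / 2, self.cy + self.h / 2)

    def previous_center(self):
        """Estimated center in the previous frame via the tracking offset."""
        return (self.cx - self.dx, self.cy - self.dy)
```

Associating detections across frames then reduces to matching each `previous_center()` against the centers detected in the prior frame.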