no code implementations • 22 Apr 2024 • Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, Shengyun Peng, Austin Wright, Kevin Li, Haekyu Park, Haoyang Yang, Polo Chau
Diffusion-based generative models' impressive ability to create convincing images has garnered global attention.
2 code implementations • 5 Apr 2024 • Alec Helbling, Seongmin Lee, Polo Chau
We demonstrate that, by serializing both an image and a multi-modal instruction into a textual representation, it is possible to leverage LLMs to perform precise transformations of the layout and appearance of an image.
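The serialization idea can be illustrated with a minimal sketch. The object schema and text format below are hypothetical illustrations, not the paper's actual representation: each scene object is rendered as a line of text that an LLM could read and rewrite (e.g. "move the cat to the right"), after which the edited text is parsed back into structured form.

```python
# Hypothetical round-trip between a structured scene and a textual
# representation that an LLM could transform. The format here is an
# assumption for illustration only.

def serialize(objects):
    """Render each object as one human-readable text line."""
    return "\n".join(
        f"{o['label']} at ({o['x']}, {o['y']}) size {o['w']}x{o['h']}"
        for o in objects
    )

def parse(text):
    """Parse the textual representation back into structured objects."""
    objects = []
    for line in text.splitlines():
        label, rest = line.split(" at (")
        coords, size = rest.split(") size ")
        x, y = (int(v) for v in coords.split(", "))
        w, h = (int(v) for v in size.split("x"))
        objects.append({"label": label, "x": x, "y": y, "w": w, "h": h})
    return objects

scene = [{"label": "cat", "x": 10, "y": 20, "w": 64, "h": 48}]
assert parse(serialize(scene)) == scene  # lossless round trip
```

Because both directions are lossless, an LLM's textual edits can be applied back to the structured scene with no dedicated vision decoder.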
1 code implementation • 1 Apr 2024 • Seongmin Lee, Zijie J. Wang, Aishwarya Chakravarthy, Alec Helbling, Shengyun Peng, Mansi Phute, Duen Horng Chau, Minsuk Kahng
Our library offers a new way to quickly attribute an LLM's text generation to training data points to inspect model behaviors, enhance its trustworthiness, and compare model-generated text with user-provided text.
1 code implementation • 7 Mar 2024 • Shengyun Peng, Seongmin Lee, XiaoJing Wang, Rajarajeswari Balasubramaniyan, Duen Horng Chau
Tables convey factual and quantitative data with implicit conventions created by humans that are often challenging for machines to parse.
1 code implementation • 23 Feb 2024 • Shengyun Peng, Seongmin Lee, XiaoJing Wang, Rajarajeswari Balasubramaniyan, Duen Horng Chau
We discover that the performance gap between the linear projection transformer and the hybrid CNN-transformer can be mitigated by self-supervised pretraining (SSP) of the visual encoder in the table structure recognition (TSR) model.
no code implementations • 8 Feb 2024 • Seongmin Lee, Marcel Böhme
In our experiments, our genetic algorithm discovers estimators that have a substantially smaller MSE than the state-of-the-art Good-Turing estimator.
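For context, the Good-Turing baseline mentioned above estimates the probability mass of unseen species as the fraction of the sample made up of singletons. A minimal sketch (the function name is my own; this is the classic estimator, not the paper's evolved one):

```python
from collections import Counter

def good_turing_missing_mass(sample):
    """Good-Turing estimate of the unseen (missing) probability mass:
    f1 / n, where f1 is the number of species seen exactly once and
    n is the sample size."""
    counts = Counter(sample)
    n = len(sample)
    f1 = sum(1 for c in counts.values() if c == 1)  # singleton species
    return f1 / n

# 3 of the 6 draws (b, c, d) are singletons -> estimated missing mass 0.5
print(good_turing_missing_mass(["a", "a", "b", "c", "d", "a"]))  # 0.5
```

The genetic algorithm in the paper searches for alternative estimators of this quantity with lower mean squared error.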
no code implementations • 5 Feb 2024 • Alec Helbling, Seongmin Lee, Polo Chau
This allows users to benefit from both the visual descriptiveness of natural language and the spatial precision of direct manipulation.
no code implementations • 2 Feb 2024 • Justin Blalock, David Munechika, Harsha Karanth, Alec Helbling, Pratham Mehta, Seongmin Lee, Duen Horng Chau
The growing digital landscape of fashion e-commerce calls for interactive and user-friendly interfaces for virtually trying on clothes.
2 code implementations • 9 Nov 2023 • Shengyun Peng, Seongmin Lee, XiaoJing Wang, Rajarajeswari Balasubramaniyan, Duen Horng Chau
This allows it to "see" an appropriate portion of the table and "store" the complex table structure within sufficient context length for the subsequent transformer.
Ranked #3 on Table Recognition on PubTabNet
3 code implementations • 4 May 2023 • Zijie J. Wang, David Munechika, Seongmin Lee, Duen Horng Chau
Through this study, we identify key design implications and trade-offs, such as leveraging multimodal data in notebooks as well as balancing the degree of visualization-notebook integration.
1 code implementation • 4 May 2023 • Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, Shengyun Peng, Austin Wright, Kevin Li, Haekyu Park, Haoyang Yang, Duen Horng Chau
Diffusion Explainer tightly integrates a visual overview of Stable Diffusion's complex components with detailed explanations of their underlying operations, enabling users to fluidly transition between multiple levels of abstraction through animations and interactive elements.
1 code implementation • CVPR 2022 • Seongmin Lee, Zijie J. Wang, Judy Hoffman, Duen Horng Chau
CNN image classifiers are widely used, thanks to their efficiency and accuracy.
no code implementations • 30 Mar 2022 • Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin P. Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Kevin Li, Judy Hoffman, Duen Horng Chau
We present ConceptEvo, a unified interpretation framework for deep neural networks (DNNs) that reveals the inception and evolution of learned concepts during training.
no code implementations • 1 Jan 2021 • Seongmin Lee, Hyunsik Jeon, U Kang
Given multiple source datasets with labels, how can we train a target model with no labeled data?
no code implementations • 30 Sep 2020 • Hyun Dong Lee, Seongmin Lee, U Kang
How can we effectively regularize BERT?
no code implementations • 29 Sep 2020 • Seongmin Lee, Hyunsik Jeon, U Kang
Multi-source domain adaptation (MSDA) aims to train a model using multiple source datasets different from a target dataset in the absence of target data labels.
no code implementations • 31 Jul 2020 • William B. Langdon, Westley Weimer, Justyna Petke, Erik Fredericks, Seongmin Lee, Emily Winter, Michail Basios, Myra B. Cohen, Aymeric Blot, Markus Wagner, Bobby R. Bruce, Shin Yoo, Simos Gerasimou, Oliver Krauss, Yu Huang, Michael Gerten
Following the keynote by Prof. Mark Harman of Facebook and the formal presentations (which are recorded in the proceedings), there was a wide-ranging discussion at the eighth international Genetic Improvement workshop, GI-2020 @ ICSE (held as part of the 42nd ACM/IEEE International Conference on Software Engineering on Friday, 3 July 2020).
no code implementations • WS 2019 • Cheoneum Park, Young-Jun Jung, Kihoon Kim, Geonyeong Kim, Jae-Won Jeon, Seongmin Lee, Jun-Seok Kim, Chang-Ki Lee
In this paper, we describe the neural machine translation (NMT) system submitted by the Kangwon National University and HYUNDAI (KNU-HYUNDAI) team to the translation tasks of the 6th workshop on Asian Translation (WAT 2019).