Search Results for author: Seongmin Lee

Found 17 papers, 8 papers with code

ClickDiffusion: Harnessing LLMs for Interactive Precise Image Editing

2 code implementations • 5 Apr 2024 • Alec Helbling, Seongmin Lee, Polo Chau

We demonstrate that by serializing both an image and a multi-modal instruction into a textual representation, it is possible to leverage LLMs to perform precise transformations of the layout and appearance of an image.

Image Manipulation
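
For the ClickDiffusion entry above, the sketch below illustrates the general idea of serializing an image layout plus an instruction into text that an LLM can rewrite. The prompt format, object names, and helper function are hypothetical illustrations for this listing, not the paper's actual representation.

```python
import json

def serialize_edit_request(layout, instruction):
    # Hypothetical illustration, not ClickDiffusion's actual format:
    # turn an image layout plus a user instruction into plain text
    # that an LLM can read and rewrite into an edited layout.
    return (
        "Image layout (bounding boxes as [x0, y0, x1, y1]):\n"
        + json.dumps(layout, indent=2)
        + f"\n\nInstruction: {instruction}\n"
        + "Return the edited layout as JSON."
    )

layout = [
    {"object": "dog", "box": [40, 120, 200, 300]},
    {"object": "ball", "box": [220, 260, 280, 320]},
]
print(serialize_edit_request(layout, "Move the ball next to the dog."))
```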

LLM Attributor: Interactive Visual Attribution for LLM Generation

1 code implementation • 1 Apr 2024 • Seongmin Lee, Zijie J. Wang, Aishwarya Chakravarthy, Alec Helbling, Shengyun Peng, Mansi Phute, Duen Horng Chau, Minsuk Kahng

Our library offers a new way to quickly attribute an LLM's text generation to training data points to inspect model behaviors, enhance its trustworthiness, and compare model-generated text with user-provided text.

Attribute • Text Generation

UniTable: Towards a Unified Framework for Table Structure Recognition via Self-Supervised Pretraining

1 code implementation • 7 Mar 2024 • Shengyun Peng, Seongmin Lee, XiaoJing Wang, Rajarajeswari Balasubramaniyan, Duen Horng Chau

Tables convey factual and quantitative data with implicit conventions created by humans that are often challenging for machines to parse.

Language Modelling

Self-Supervised Pre-Training for Table Structure Recognition Transformer

1 code implementation • 23 Feb 2024 • Shengyun Peng, Seongmin Lee, XiaoJing Wang, Rajarajeswari Balasubramaniyan, Duen Horng Chau

We discover that the performance gap between the linear projection transformer and the hybrid CNN-transformer can be mitigated by self-supervised pretraining (SSP) of the visual encoder in the table structure recognition (TSR) model.

Representation Learning

How Much is Unseen Depends Chiefly on Information About the Seen

no code implementations • 8 Feb 2024 • Seongmin Lee, Marcel Böhme

In our experiments, our genetic algorithm discovers estimators that have a substantially smaller MSE than the state-of-the-art Good-Turing estimator.
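
For context on the entry above, the classical Good-Turing estimate of the unseen (missing) probability mass is the fraction of the sample made up of species observed exactly once; this is the baseline the discovered estimators are compared against. A minimal sketch of that baseline, not the paper's genetic-algorithm-derived estimators:

```python
from collections import Counter

def good_turing_missing_mass(sample):
    # Good-Turing estimate of the unseen probability mass:
    # (number of species seen exactly once) / (sample size).
    counts = Counter(sample)
    singletons = sum(1 for c in counts.values() if c == 1)
    return singletons / len(sample)

# 'c' and 'd' each appear once in 6 draws, so the estimate is 2/6.
print(good_turing_missing_mass(["a", "a", "b", "b", "c", "d"]))  # 0.333...
```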

Point and Instruct: Enabling Precise Image Editing by Unifying Direct Manipulation and Text Instructions

no code implementations • 5 Feb 2024 • Alec Helbling, Seongmin Lee, Polo Chau

This allows users to benefit from both the visual descriptiveness of natural language and the spatial precision of direct manipulation.

Image Manipulation

Mobile Fitting Room: On-device Virtual Try-on via Diffusion Models

no code implementations • 2 Feb 2024 • Justin Blalock, David Munechika, Harsha Karanth, Alec Helbling, Pratham Mehta, Seongmin Lee, Duen Horng Chau

The growing digital landscape of fashion e-commerce calls for interactive and user-friendly interfaces for virtually trying on clothes.

Image Generation • Model Compression +1

High-Performance Transformers for Table Structure Recognition Need Early Convolutions

2 code implementations • 9 Nov 2023 • Shengyun Peng, Seongmin Lee, XiaoJing Wang, Rajarajeswari Balasubramaniyan, Duen Horng Chau

This allows it to "see" an appropriate portion of the table and "store" the complex table structure within sufficient context length for the subsequent transformer.

Representation Learning • Self-Supervised Learning +1
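
A minimal PyTorch sketch of the general "early convolutions" idea referenced in the entry above: a small convolutional stem produces the patch tokens fed to a transformer encoder, instead of a single linear patch projection. Layer widths and depths here are placeholder assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ConvStemEncoder(nn.Module):
    # Illustrative visual encoder: early convolutions downsample the
    # image into tokens, which a standard transformer encoder then models.
    def __init__(self, d_model=256, nhead=8, num_layers=4):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, d_model, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(d_model, d_model, kernel_size=1),  # final 1x1 projection
        )
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers)

    def forward(self, images):                     # images: (B, 3, H, W)
        feats = self.stem(images)                  # (B, d_model, H/8, W/8)
        tokens = feats.flatten(2).transpose(1, 2)  # (B, N, d_model)
        return self.encoder(tokens)

x = torch.randn(2, 3, 256, 256)
print(ConvStemEncoder()(x).shape)  # torch.Size([2, 1024, 256])
```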

SuperNOVA: Design Strategies and Opportunities for Interactive Visualization in Computational Notebooks

3 code implementations • 4 May 2023 • Zijie J. Wang, David Munechika, Seongmin Lee, Duen Horng Chau

Through this study, we identify key design implications and trade-offs, such as leveraging multimodal data in notebooks as well as balancing the degree of visualization-notebook integration.

Diffusion Explainer: Visual Explanation for Text-to-image Stable Diffusion

1 code implementation • 4 May 2023 • Seongmin Lee, Benjamin Hoover, Hendrik Strobelt, Zijie J. Wang, Shengyun Peng, Austin Wright, Kevin Li, Haekyu Park, Haoyang Yang, Duen Horng Chau

Diffusion Explainer tightly integrates a visual overview of Stable Diffusion's complex components with detailed explanations of their underlying operations, enabling users to fluidly transition between multiple levels of abstraction through animations and interactive elements.

Image Generation

Concept Evolution in Deep Learning Training: A Unified Interpretation Framework and Discoveries

no code implementations • 30 Mar 2022 • Haekyu Park, Seongmin Lee, Benjamin Hoover, Austin P. Wright, Omar Shaikh, Rahul Duggal, Nilaksh Das, Kevin Li, Judy Hoffman, Duen Horng Chau

We present ConceptEvo, a unified interpretation framework for deep neural networks (DNNs) that reveals the inception and evolution of learned concepts during training.

Decision Making

Multi-EPL: Accurate Multi-source Domain Adaptation

no code implementations • 1 Jan 2021 • Seongmin Lee, Hyunsik Jeon, U Kang

Given multiple source datasets with labels, how can we train a target model with no labeled data?

Domain Adaptation

Ensemble Multi-Source Domain Adaptation with Pseudolabels

no code implementations • 29 Sep 2020 • Seongmin Lee, Hyunsik Jeon, U Kang

Multi-source domain adaptation (MSDA) aims to train a model using multiple source datasets different from a target dataset in the absence of target data labels.

Domain Adaptation • Ensemble Learning
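
As a generic illustration of the pseudolabel idea named in the entry above (not the paper's exact procedure), a common step is to label unlabeled target samples with a source-trained model and keep only high-confidence predictions. The function and threshold below are assumptions for this sketch.

```python
import torch

@torch.no_grad()
def generate_pseudolabels(model, target_loader, threshold=0.9):
    # Assign pseudolabels to unlabeled target batches, keeping only
    # samples whose predicted class probability exceeds the threshold.
    model.eval()
    kept_x, kept_y = [], []
    for x in target_loader:                  # unlabeled target batches
        probs = torch.softmax(model(x), dim=1)
        conf, pred = probs.max(dim=1)
        mask = conf >= threshold
        kept_x.append(x[mask])
        kept_y.append(pred[mask])
    return torch.cat(kept_x), torch.cat(kept_y)
```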

Genetic Improvement @ ICSE 2020

no code implementations • 31 Jul 2020 • William B. Langdon, Westley Weimer, Justyna Petke, Erik Fredericks, Seongmin Lee, Emily Winter, Michail Basios, Myra B. Cohen, Aymeric Blot, Markus Wagner, Bobby R. Bruce, Shin Yoo, Simos Gerasimou, Oliver Krauss, Yu Huang, Michael Gerten

Following the keynote by Prof. Mark Harman of Facebook and the formal presentations (which are recorded in the proceedings), there was a wide-ranging discussion at the eighth international Genetic Improvement workshop, GI-2020 @ ICSE (held as part of the 42nd ACM/IEEE International Conference on Software Engineering on Friday 3rd July 2020).

KNU-HYUNDAI's NMT system for Scientific Paper and Patent Tasks on WAT 2019

no code implementations • WS 2019 • Cheoneum Park, Young-Jun Jung, Kihoon Kim, Geonyeong Kim, Jae-Won Jeon, Seongmin Lee, Jun-Seok Kim, Chang-Ki Lee

In this paper, we describe the neural machine translation (NMT) system submitted by the Kangwon National University and HYUNDAI (KNU-HYUNDAI) team to the translation tasks of the 6th workshop on Asian Translation (WAT 2019).

Data Augmentation • Machine Translation +2
