Search Results for author: Ting-Chun Wang

Found 26 papers, 10 papers with code

SPACE: Speech-driven Portrait Animation with Controllable Expression

no code implementations · ICCV 2023 · Siddharth Gururani, Arun Mallya, Ting-Chun Wang, Rafael Valle, Ming-Yu Liu

SPACE uses a multi-stage approach, combining the controllability of facial landmarks with the high-quality synthesis power of a pretrained face generator.

Implicit Warping for Animation with Image Sets

no code implementations · 4 Oct 2022 · Arun Mallya, Ting-Chun Wang, Ming-Yu Liu

We present a new implicit warping framework for image animation using sets of source images through the transfer of the motion of a driving video.

Image Animation
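The "implicit" in the title refers to replacing explicit per-source-image warping with a single attention-style lookup across the whole set of source images. The sketch below is a generic scaled dot-product cross-attention, shown only to illustrate that kind of lookup; it is not the paper's actual layer, and all shapes and names are assumptions.

```python
import numpy as np

def cross_attention(query, keys, values):
    """query: (Q, D) driving-frame features; keys/values: (S, D)
    features pooled across the set of source images."""
    d = query.shape[1]
    scores = query @ keys.T / np.sqrt(d)           # (Q, S) similarities
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over sources
    return weights @ values                        # (Q, D) retrieved features

# toy usage: 4 query locations attend over 6 pooled source features
q = np.random.default_rng(0).standard_normal((4, 8))
k = np.random.default_rng(1).standard_normal((6, 8))
v = np.random.default_rng(2).standard_normal((6, 8))
out = cross_attention(q, k, v)
```

Because each output row is a convex combination of the value rows, every output stays within the range of the source features, which is the appealing property of attention-based retrieval over regression-based warping.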

Learning to Relight Portrait Images via a Virtual Light Stage and Synthetic-to-Real Adaptation

no code implementations · 21 Sep 2022 · Yu-Ying Yeh, Koki Nagano, Sameh Khamis, Jan Kautz, Ming-Yu Liu, Ting-Chun Wang

An effective approach is to supervise the training of deep neural networks with a high-fidelity dataset of desired input-output pairs, captured with a light stage.

Generating Long Videos of Dynamic Scenes

1 code implementation · 7 Jun 2022 · Tim Brooks, Janne Hellsten, Miika Aittala, Ting-Chun Wang, Timo Aila, Jaakko Lehtinen, Ming-Yu Liu, Alexei A. Efros, Tero Karras

Existing video generation methods often fail to produce new content as a function of time while maintaining consistencies expected in real environments, such as plausible dynamics and object persistence.

MORPH · Video Generation

Multimodal Conditional Image Synthesis with Product-of-Experts GANs

no code implementations · 9 Dec 2021 · Xun Huang, Arun Mallya, Ting-Chun Wang, Ming-Yu Liu

Existing conditional image synthesis frameworks generate images based on user inputs in a single modality, such as text, segmentation, sketch, or style reference.

Image-to-Image Translation

One-Shot Free-View Neural Talking-Head Synthesis for Video Conferencing

2 code implementations · CVPR 2021 · Ting-Chun Wang, Arun Mallya, Ming-Yu Liu

We propose a neural talking-head video synthesis model and demonstrate its application to video conferencing.

Generative Adversarial Networks for Image and Video Synthesis: Algorithms and Applications

no code implementations · 6 Aug 2020 · Ming-Yu Liu, Xun Huang, Jiahui Yu, Ting-Chun Wang, Arun Mallya

The generative adversarial network (GAN) framework has emerged as a powerful tool for various image and video synthesis tasks, allowing the synthesis of visual content in an unconditional or input-conditional manner.

Generative Adversarial Network · Neural Rendering +1

World-Consistent Video-to-Video Synthesis

no code implementations · ECCV 2020 · Arun Mallya, Ting-Chun Wang, Karan Sapra, Ming-Yu Liu

This is because existing video-to-video synthesis methods lack knowledge of the 3D world being rendered and generate each frame based only on the past few frames.

Video-to-Video Synthesis

Transposer: Universal Texture Synthesis Using Feature Maps as Transposed Convolution Filter

no code implementations · 14 Jul 2020 · Guilin Liu, Rohan Taori, Ting-Chun Wang, Zhiding Yu, Shiqiu Liu, Fitsum A. Reda, Karan Sapra, Andrew Tao, Bryan Catanzaro

Specifically, we directly treat the whole encoded feature map of the input texture as transposed convolution filters and the features' self-similarity map, which captures the auto-correlation information, as input to the transposed convolution.

Texture Synthesis
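The abstract describes two ingredients: the auto-correlation (self-similarity) of the encoded feature map becomes the network input, and the feature map itself supplies the transposed-convolution filters. Below is a toy sketch of just those two operations with assumed shapes; it is not the paper's architecture, and the filters here are random stand-ins.

```python
import numpy as np

def self_similarity(feat):
    """feat: (C, H, W) encoded texture -> (H*W, H, W) auto-correlation map:
    channel p holds the dot product of the feature at location p with the
    feature at every location q."""
    C, H, W = feat.shape
    f = feat.reshape(C, -1)                      # (C, H*W)
    return (f.T @ f).reshape(H * W, H, W)

def transposed_conv(x, filters, stride=1):
    """Naive transposed convolution, summed over input channels.
    x: (N, h, w); filters: (N, kh, kw), one 2-D filter per channel."""
    N, h, w = x.shape
    _, kh, kw = filters.shape
    out = np.zeros(((h - 1) * stride + kh, (w - 1) * stride + kw))
    for n in range(N):
        for i in range(h):
            for j in range(w):
                out[i * stride:i * stride + kh,
                    j * stride:j * stride + kw] += x[n, i, j] * filters[n]
    return out

feat = np.random.default_rng(0).standard_normal((2, 3, 3))
sim = self_similarity(feat)                      # (9, 3, 3), symmetric
filters = np.random.default_rng(1).standard_normal((9, 2, 2))
expanded = transposed_conv(sim, filters)         # (4, 4) expanded output
```

Note that the self-similarity map is symmetric by construction, and the transposed convolution grows the spatial extent, which is what lets a small texture patch be expanded.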

Dancing to Music

2 code implementations · NeurIPS 2019 · Hsin-Ying Lee, Xiaodong Yang, Ming-Yu Liu, Ting-Chun Wang, Yu-Ding Lu, Ming-Hsuan Yang, Jan Kautz

In the analysis phase, we decompose a dance into a series of basic dance units, through which the model learns how to move.

Motion Synthesis · Pose Estimation

Few-shot Video-to-Video Synthesis

6 code implementations · NeurIPS 2019 · Ting-Chun Wang, Ming-Yu Liu, Andrew Tao, Guilin Liu, Jan Kautz, Bryan Catanzaro

To address the limitations, we propose a few-shot vid2vid framework, which learns to synthesize videos of previously unseen subjects or scenes by leveraging a few example images of the target at test time.

Video-to-Video Synthesis

Semantic Image Synthesis with Spatially-Adaptive Normalization

26 code implementations · CVPR 2019 · Taesung Park, Ming-Yu Liu, Ting-Chun Wang, Jun-Yan Zhu

Previous methods directly feed the semantic layout as input to the deep network, which is then processed through stacks of convolution, normalization, and nonlinearity layers.

Image-to-Image Translation · Sketch-to-Image Translation
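The spatially-adaptive normalization of the title (SPADE) instead normalizes the activations and then modulates them with a per-pixel scale and shift derived from the segmentation map. The following is a minimal sketch of that modulation rule only; a per-label lookup table with fixed random values stands in for the small conv net the paper learns.

```python
import numpy as np

def spade(feat, seg, eps=1e-5):
    """feat: (C, H, W) activations; seg: (H, W) integer label map.
    Normalize each channel, then apply spatially varying scale/shift
    looked up from the label map (toy stand-in for a learned conv net)."""
    C, H, W = feat.shape
    mu = feat.mean(axis=(1, 2), keepdims=True)
    var = feat.var(axis=(1, 2), keepdims=True)
    norm = (feat - mu) / np.sqrt(var + eps)      # parameter-free normalization
    rng = np.random.default_rng(0)               # fixed toy parameters
    n_labels = int(seg.max()) + 1
    gamma = rng.standard_normal((n_labels, C))
    beta = rng.standard_normal((n_labels, C))
    g = gamma[seg].transpose(2, 0, 1)            # (C, H, W) per-pixel scale
    b = beta[seg].transpose(2, 0, 1)             # (C, H, W) per-pixel shift
    return norm * (1 + g) + b

feat = np.random.default_rng(1).standard_normal((4, 8, 8))
seg = np.zeros((8, 8), dtype=int)
seg[:, 4:] = 1                                   # two semantic regions
out = spade(feat, seg)
```

Because the scale and shift vary with the label at each pixel, the semantic layout survives the normalization instead of being washed out by it, which is the motivation the abstract hints at.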

Partial Convolution based Padding

4 code implementations · 28 Nov 2018 · Guilin Liu, Kevin J. Shih, Ting-Chun Wang, Fitsum A. Reda, Karan Sapra, Zhiding Yu, Andrew Tao, Bryan Catanzaro

In this paper, we present a simple yet effective padding scheme that can be used as a drop-in module for existing convolutional neural networks.

General Classification · Semantic Segmentation
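The scheme treats the padded border like a hole: convolve as usual over zero padding, but rescale each output by the ratio of the full window size to the number of real (non-padding) pixels under it. A minimal sketch of that rescaling rule, using a fixed box filter rather than the paper's trained networks:

```python
import numpy as np

def partial_conv_pad(x, k=3):
    """Zero-pad, convolve with a k x k box filter, and rescale each output
    by (window size / valid pixel count) so border pixels are not darkened
    by the padding zeros."""
    pad = k // 2
    xp = np.pad(x, pad)                          # zero padding
    mask = np.pad(np.ones_like(x), pad)          # 1 = real pixel, 0 = padding
    kernel = np.ones((k, k)) / (k * k)
    H, W = x.shape
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            valid = mask[i:i + k, j:j + k].sum()
            out[i, j] = (xp[i:i + k, j:j + k] * kernel).sum() * (k * k / valid)
    return out

x = np.ones((5, 5))
out = partial_conv_pad(x)    # a constant image stays constant at the border
```

With plain zero padding, a box filter would pull a corner of this all-ones image down to 4/9; the rescaling restores it to 1, which is exactly the border artifact the padding scheme removes.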

Video-to-Video Synthesis

11 code implementations · NeurIPS 2018 · Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, Bryan Catanzaro

We study the problem of video-to-video synthesis, whose goal is to learn a mapping function from an input source video (e.g., a sequence of semantic segmentation masks) to an output photorealistic video that precisely depicts the content of the source video.

2k · Semantic Segmentation +2

Domain Stylization: A Strong, Simple Baseline for Synthetic to Real Image Domain Adaptation

no code implementations · 24 Jul 2018 · Aysegul Dundar, Ming-Yu Liu, Ting-Chun Wang, John Zedlewski, Jan Kautz

Deep neural networks have largely failed to effectively utilize synthetic data when applied to real images due to the covariate shift problem.

Domain Adaptation · object-detection +5

Image Inpainting for Irregular Holes Using Partial Convolutions

60 code implementations · ECCV 2018 · Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, Bryan Catanzaro

Existing deep learning based image inpainting methods use a standard convolutional network over the corrupted image, with convolutional filter responses conditioned on both valid pixels and the substitute values in the masked holes (typically the mean value).

Image Inpainting · valid
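A partial convolution conditions only on valid pixels: it masks the input, renormalizes by the number of valid pixels under each window, and marks an output pixel valid whenever its window saw any valid input, so holes shrink layer by layer. A minimal mean-filter sketch of that rule (the paper of course uses learned kernels):

```python
import numpy as np

def partial_conv(x, mask, k=3):
    """One partial-convolution step with a k x k mean filter.
    x: image with holes; mask: 1 = valid, 0 = hole. Hole values are
    ignored, outputs are renormalized by the valid count, and the mask
    is updated so the hole shrinks."""
    pad = k // 2
    xp = np.pad(x * mask, pad)                   # zero out hole values
    mp = np.pad(mask, pad)
    H, W = x.shape
    out = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            valid = mp[i:i + k, j:j + k].sum()
            if valid > 0:
                out[i, j] = xp[i:i + k, j:j + k].sum() / valid
                new_mask[i, j] = 1.0
    return out, new_mask

x = np.ones((5, 5))
x[2, 2] = 99.0                 # corrupted value inside the hole
mask = np.ones((5, 5))
mask[2, 2] = 0.0               # mark that pixel invalid
filled, new_mask = partial_conv(x, mask)
```

In this toy run the corrupted value never leaks into the output, the hole pixel is filled from its eight valid neighbors, and the updated mask has no holes left.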

High-Resolution Image Synthesis and Semantic Manipulation with Conditional GANs

20 code implementations · CVPR 2018 · Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Andrew Tao, Jan Kautz, Bryan Catanzaro

We present a new method for synthesizing high-resolution photo-realistic images from semantic label maps using conditional generative adversarial networks (conditional GANs).

Conditional Image Generation · Fundus to Angiography Generation +5

Light Field Video Capture Using a Learning-Based Hybrid Imaging System

1 code implementation · 8 May 2017 · Ting-Chun Wang, Jun-Yan Zhu, Nima Khademi Kalantari, Alexei A. Efros, Ravi Ramamoorthi

Given a 3 fps light field sequence and a standard 30 fps 2D video, our system can then generate a full light field video at 30 fps.

Learning-Based View Synthesis for Light Field Cameras

no code implementations · 9 Sep 2016 · Nima Khademi Kalantari, Ting-Chun Wang, Ravi Ramamoorthi

Specifically, we propose a novel learning-based approach to synthesize new views from a sparse set of input views.

A 4D Light-Field Dataset and CNN Architectures for Material Recognition

no code implementations · 24 Aug 2016 · Ting-Chun Wang, Jun-Yan Zhu, Ebi Hiroaki, Manmohan Chandraker, Alexei A. Efros, Ravi Ramamoorthi

We introduce a new light-field dataset of materials, and take advantage of the recent success of deep learning to perform material recognition on the 4D light-field.

Image Classification · Image Segmentation +4

Depth From Semi-Calibrated Stereo and Defocus

no code implementations · CVPR 2016 · Ting-Chun Wang, Manohar Srikanth, Ravi Ramamoorthi

In this work, we propose a multi-camera system where we combine a main high-quality camera with two low-res auxiliary cameras.

Stereo Matching · Stereo Matching Hand

Occlusion-Aware Depth Estimation Using Light-Field Cameras

no code implementations · ICCV 2015 · Ting-Chun Wang, Alexei A. Efros, Ravi Ramamoorthi

In this paper, we develop a depth estimation algorithm that treats occlusion explicitly; the method also enables identification of occlusion edges, which may be useful in other applications.

Depth Estimation
