no code implementations • 18 Aug 2023 • Daniel Jiwoong Im, Kyunghyun Cho
This paper serves as a starting point for machine learning researchers, engineers and students who are interested in but not yet familiar with causal inference.
no code implementations • 18 Aug 2023 • Daniel Jiwoong Im, Alexander Kondratskiy, Vincent Harvey, Hsuan-Wei Fu
The paper underscores how decentralization in sports betting addresses the drawbacks of traditional centralized platforms, ensuring transparency, security, and lower fees.
no code implementations • 11 Aug 2023 • Daniel Jiwoong Im, Alexander Kondratskiy, Vincent Harvey, Hsuan-Wei Fu
In this paper, we propose a new approach known as UBET AMM (UAMM), which calculates prices by considering external market prices and the impermanent loss of the liquidity pool.
no code implementations • 16 Nov 2021 • Daniel Jiwoong Im, Kyunghyun Cho, Narges Razavian
In this paper, we introduce uniform treatment variational autoencoders (UTVAE), which are trained with a uniform treatment distribution using importance sampling. We show that using a uniform treatment distribution rather than the observational one leads to better causal inference, since it mitigates the distribution shift that occurs from training to test time.
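The importance-sampling idea behind this abstract can be sketched numerically: samples drawn under a skewed observational treatment distribution are reweighted so that expectations match those under a uniform treatment distribution. The distributions and statistic below are made up for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observational data: binary treatment t drawn with skewed probability.
p_obs = np.array([0.9, 0.1])      # observational treatment distribution (assumed)
p_uni = np.array([0.5, 0.5])      # uniform treatment distribution we want to train under
t = rng.choice(2, size=1000, p=p_obs)

# Importance weights correct expectations under p_obs to expectations under p_uni:
#   E_{p_uni}[f(t)] = E_{p_obs}[w(t) f(t)],  where  w(t) = p_uni(t) / p_obs(t)
w = p_uni[t] / p_obs[t]

f = t.astype(float)               # toy statistic: indicator of treatment t = 1
est_obs = f.mean()                # close to 0.1 under the observational distribution
est_uni = np.mean(w * f)          # close to 0.5, the value under the uniform distribution
print(est_obs, est_uni)
```

The same reweighting, applied to a training loss instead of a toy statistic, lets a model trained on observational data behave as if treatments had been assigned uniformly.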
1 code implementation • 15 Feb 2021 • Daniel Jiwoong Im, Cristina Savin, Kyunghyun Cho
Conventional hyperparameter optimization methods are computationally intensive and hard to generalize to scenarios that require dynamically adapting hyperparameters, such as life-long learning.
no code implementations • 23 Jul 2020 • Daniel Jiwoong Im, Iljung Kwak, Kristin Branson
A primary difficulty with unsupervised discovery of structure in large data sets is a lack of quantitative evaluation criteria.
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Daniel Jiwoong Im, Rutuja Patil, Kristin Branson
Backpropagation is the workhorse of deep learning; however, several other biologically motivated learning rules have been introduced, such as random feedback alignment and difference target propagation.
no code implementations • 16 Oct 2019 • Daniel Jiwoong Im, Yibo Jiang, Nakul Verma
By leveraging this refined control, we demonstrate that there are multiple principled ways to update MAML, and show that the classic MAML optimization is simply a special case of the second-order Runge-Kutta method that focuses mainly on fast adaptation.
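The Runge-Kutta connection can be made concrete on a toy problem: a gradient-descent step is the Euler discretization of the gradient flow dθ/dt = −∇L(θ), and a midpoint (second-order Runge-Kutta) step is a more accurate alternative. The quadratic loss below is an illustrative stand-in, not the paper's setting.

```python
import numpy as np

def grad(theta):                   # ∇L for the toy loss L(θ) = 0.5 * θ²
    return theta

def euler_step(theta, lr):         # plain gradient-descent (Euler) update
    return theta - lr * grad(theta)

def rk2_step(theta, lr):           # midpoint (second-order Runge-Kutta) update
    mid = theta - 0.5 * lr * grad(theta)
    return theta - lr * grad(mid)

theta0, lr = 1.0, 0.1
exact = theta0 * np.exp(-lr)       # exact gradient flow of dθ/dt = -θ after time lr
print(euler_step(theta0, lr), rk2_step(theta0, lr), exact)
```

On this example the midpoint step lands closer to the exact flow than the Euler step, which is the sense in which higher-order integrators refine the inner adaptation update.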
no code implementations • 7 Jun 2019 • Daniel Jiwoong Im, Sridhama Prakhya, Jinyao Yan, Srinivas Turaga, Kristin Branson
The Importance Weighted Autoencoder (IWAE) objective has been shown to improve the training of generative models over the standard Variational Autoencoder (VAE) objective.
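The relationship between the two objectives can be checked numerically: for importance weights w_k = p(x, z_k)/q(z_k|x), the VAE bound is E[log w] while the IWAE bound is E[log (1/K) Σ_k w_k], and Jensen's inequality makes the latter tighter. The Gaussian log-weights below are a synthetic stand-in for a real model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log importance weights: 10,000 data points, K = 8 samples each.
log_w = rng.normal(loc=-1.0, scale=1.0, size=(10000, 8))

# VAE bound (ELBO): average of log w over all samples.
elbo = log_w.mean()

# IWAE bound: log of the average weight per data point, then averaged.
iwae = np.mean(np.logaddexp.reduce(log_w, axis=1) - np.log(log_w.shape[1]))
print(elbo, iwae)   # the IWAE bound is larger, i.e. tighter
```

Working in log space with `logaddexp.reduce` avoids overflow when exponentiating the weights directly.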
no code implementations • 3 Nov 2018 • Daniel Jiwoong Im, Nakul Verma, Kristin Branson
A common concern with the $t$-SNE criterion is that it is optimized using gradient descent and can become stuck in poor local minima.
no code implementations • ICLR 2018 • Daniel Jiwoong Im, He Ma, Graham Taylor, Kristin Branson
Generative adversarial networks (GANs) have been extremely effective at approximating complex distributions of high-dimensional input data, and substantial progress has been made in understanding and improving GAN performance in both theory and application.
no code implementations • 22 Jun 2017 • Jiatao Gu, Daniel Jiwoong Im, Victor O. K. Li
Previous neural machine translation models used heuristic search algorithms (e.g., beam search) to avoid solving the maximum a posteriori problem over translation sentences at test time.
no code implementations • 13 Dec 2016 • Daniel Jiwoong Im, He Ma, Chris Dongjoo Kim, Graham Taylor
Generative Adversarial Networks have become one of the most studied frameworks for unsupervised learning due to their intuitive formulation.
no code implementations • 13 Dec 2016 • Daniel Jiwoong Im, Michael Tao, Kristin Branson
The success of deep neural networks hinges on our ability to accurately and efficiently optimize high-dimensional, non-convex functions.
1 code implementation • 11 Jul 2016 • Daniel Jiwoong Im, Graham W. Taylor
To extend its applicability outside of image-based domains, we propose to learn a metric which captures perceptual similarity.
1 code implementation • 16 Feb 2016 • Daniel Jiwoong Im, Chris Dongjoo Kim, Hui Jiang, Roland Memisevic
Gatys et al. (2015) showed that optimizing pixels to match features in a convolutional network with respect to reference image features is a way to render images of high visual quality.
no code implementations • 19 Nov 2015 • Daniel Jiwoong Im, Sungjin Ahn, Roland Memisevic, Yoshua Bengio
Denoising autoencoders (DAEs) are trained to reconstruct their clean inputs with noise injected at the input level, while variational autoencoders (VAEs) are trained with noise injected in their stochastic hidden layer, with a regularizer that encourages this noise injection.
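The contrast in where noise enters each model can be sketched with linear maps standing in for the encoder and decoder; the weights and noise scales below are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

W_enc, W_dec = 0.8, 1.2           # made-up scalar "networks"
x = rng.normal(size=1000)

# DAE: corrupt the input, then reconstruct the clean x from the corruption.
x_noisy = x + 0.1 * rng.normal(size=x.shape)
dae_recon = W_dec * (W_enc * x_noisy)

# VAE: encode to a distribution (mu, sigma), inject noise in the stochastic
# latent layer via the reparameterization trick, then decode.
mu, log_sigma = W_enc * x, -1.0
z = mu + np.exp(log_sigma) * rng.normal(size=x.shape)
vae_recon = W_dec * z

print(np.mean((dae_recon - x) ** 2), np.mean((vae_recon - x) ** 2))
```

The structural point is the location of the randomness: at the observed input for the DAE, inside the latent code for the VAE.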
no code implementations • 25 Jun 2015 • Daniel Jiwoong Im, Mohamed Ishmael Diwan Belghazi, Roland Memisevic
We discuss necessary and sufficient conditions for an auto-encoder to define a conservative vector field, in which case it is associated with an energy function akin to the unnormalized log-probability of the data.
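One standard way to see this condition: a smooth vector field is conservative iff its Jacobian is symmetric, and a tied-weight autoencoder r(x) = Wᵀσ(Wx + b) satisfies this, since its Jacobian Wᵀ diag(σ′) W is symmetric. The weights below are arbitrary; the snippet just verifies the symmetry numerically.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))       # made-up encoder weights (decoder is the transpose)
b = rng.normal(size=4)

def r(x):
    # Tied-weight autoencoder reconstruction function.
    return W.T @ (1.0 / (1.0 + np.exp(-(W @ x + b))))

def jacobian(f, x, eps=1e-6):
    # Central-difference numerical Jacobian of f at x.
    n = x.size
    J = np.zeros((n, n))
    for i in range(n):
        e = np.zeros(n)
        e[i] = eps
        J[:, i] = (f(x + e) - f(x - e)) / (2 * eps)
    return J

x0 = rng.normal(size=3)
J = jacobian(r, x0)
print(np.allclose(J, J.T, atol=1e-5))   # symmetric Jacobian → conservative field
```

An untied autoencoder (independent decoder weights) generally fails this symmetry check, and so does not correspond to an energy function.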
1 code implementation • 20 Dec 2014 • Daniel Jiwoong Im, Ethan Buchman, Graham W. Taylor
Here we propose a more general form for the sampling dynamics in MPF, and explore the consequences of different choices for these dynamics for training RBMs.
no code implementations • 20 Dec 2014 • Daniel Jiwoong Im, Graham W. Taylor
In this work, we apply a dynamical systems view to GAEs, deriving a scoring function, and drawing connections to Restricted Boltzmann Machines.
no code implementations • 20 Dec 2014 • Jan Rudy, Weiguang Ding, Daniel Jiwoong Im, Graham W. Taylor
Regularization is essential when training large neural networks.