Search Results for author: Thomas Tanay

Found 14 papers, 6 papers with code

Global Latent Neural Rendering

no code implementations • 13 Dec 2023 • Thomas Tanay, Matteo Maggioni

A recent trend among generalizable novel view synthesis methods is to learn a rendering operator acting over single camera rays.
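
The per-ray operators referenced here typically follow the standard volume-rendering recipe: sample points along each camera ray and alpha-composite their predicted colours. The sketch below is a minimal NumPy version of that generic per-ray compositing step, given only as context for the trend the paper contrasts itself with; the densities, colours and sample spacing are hypothetical placeholders, not the paper's method.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Per-ray volume rendering: alpha-composite N samples taken along one camera ray.

    densities: (N,)   non-negative volume density at each sample
    colors:    (N, 3) RGB colour predicted at each sample
    deltas:    (N,)   spacing between consecutive samples
    """
    alphas = 1.0 - np.exp(-densities * deltas)        # opacity contributed by each sample
    trans = np.cumprod(1.0 - alphas + 1e-10)          # transmittance up to and including each sample
    trans = np.concatenate([[1.0], trans[:-1]])       # shift so it is the transmittance *before* each sample
    weights = trans * alphas                          # final contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)    # rendered RGB for this ray

# Toy usage: 64 random samples along a single ray.
rng = np.random.default_rng(0)
n = 64
pixel = composite_ray(rng.uniform(0.0, 2.0, n), rng.uniform(0.0, 1.0, (n, 3)), np.full(n, 0.05))
print("rendered pixel:", pixel)
```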

Generalizable Novel View Synthesis • Neural Rendering +1

Efficient View Synthesis and 3D-based Multi-Frame Denoising with Multiplane Feature Representations

1 code implementation • CVPR 2023 • Thomas Tanay, Aleš Leonardis, Matteo Maggioni

While current multi-frame restoration methods combine information from multiple input images using 2D alignment techniques, recent advances in novel view synthesis are paving the way for a new paradigm relying on volumetric scene representations.
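
Multiplane representations of this family render a target view by warping a stack of fronto-parallel planes and compositing them back to front with the "over" operator. As a rough illustration of that compositing step only, here is a minimal NumPy sketch; the per-plane colours and opacities are hypothetical inputs, and this is not the paper's multiplane feature pipeline.

```python
import numpy as np

def composite_planes(colors, alphas):
    """Back-to-front 'over' compositing of D fronto-parallel planes.

    colors: (D, H, W, 3) per-plane RGB, ordered from farthest (index 0) to nearest
    alphas: (D, H, W)    per-plane opacity in [0, 1]
    """
    out = np.zeros(colors.shape[1:])                          # start from an empty canvas
    for rgb, a in zip(colors, alphas):                        # iterate far -> near
        out = rgb * a[..., None] + out * (1.0 - a[..., None])
    return out

# Toy usage: 8 planes of a 4x4 image.
rng = np.random.default_rng(0)
image = composite_planes(rng.uniform(size=(8, 4, 4, 3)), rng.uniform(size=(8, 4, 4)))
print(image.shape)  # (4, 4, 3)
```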

Denoising • Novel View Synthesis

FlexHDR: Modelling Alignment and Exposure Uncertainties for Flexible HDR Imaging

no code implementations • 7 Jan 2022 • Sibi Catley-Chandar, Thomas Tanay, Lucas Vandroux, Aleš Leonardis, Gregory Slabaugh, Eduardo Pérez-Pellitero

We introduce a strategy that learns to jointly align the frames and assess alignment and exposure reliability, using an HDR-aware, uncertainty-driven attention map that robustly merges them into a single high-quality HDR image.
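
One way to picture such an attention-driven merge is a per-pixel softmax over reliability scores that weights each aligned exposure. The sketch below is a generic NumPy version of that idea under that assumption; the frames and score maps are hypothetical inputs, and it is not FlexHDR's learned module.

```python
import numpy as np

def attention_merge(frames, scores):
    """Merge aligned exposures with per-pixel softmax attention weights.

    frames: (K, H, W, 3) aligned input frames in linear radiance
    scores: (K, H, W)    per-pixel reliability scores (higher = more trusted)
    """
    scores = scores - scores.max(axis=0, keepdims=True)    # stabilise the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=0, keepdims=True)          # normalise across the K frames
    return (weights[..., None] * frames).sum(axis=0)       # per-pixel weighted average

# Toy usage: 3 exposures of a 4x4 image.
rng = np.random.default_rng(0)
hdr = attention_merge(rng.uniform(size=(3, 4, 4, 3)), rng.normal(size=(3, 4, 4)))
print(hdr.shape)  # (4, 4, 3)
```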

Models Alignment

Multiple-Identity Image Attacks Against Face-based Identity Verification

no code implementations • 20 Jun 2019 • Jerone T. A. Andrews, Thomas Tanay, Lewis D. Griffin

New quantitative results are presented that support an explanation in terms of the geometry of the representation spaces used by the verification systems.

Batch Normalization is a Cause of Adversarial Vulnerability

no code implementations • 6 May 2019 • Angus Galloway, Anna Golubeva, Thomas Tanay, Medhat Moussa, Graham W. Taylor

Batch normalization (batch norm) is often used in an attempt to stabilize and accelerate training in deep neural networks.
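
For context, batch norm rescales activations using statistics of the current mini-batch, so every output depends on the other examples in the batch; that is the mechanism the paper scrutinises. Below is a minimal, textbook-style NumPy forward pass (a generic illustration, not the paper's experimental setup).

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-mode batch normalization over a mini-batch.

    x:     (N, C) activations for N examples with C features
    gamma: (C,)   learned scale
    beta:  (C,)   learned shift
    """
    mu = x.mean(axis=0)                      # per-feature batch mean
    var = x.var(axis=0)                      # per-feature batch variance
    x_hat = (x - mu) / np.sqrt(var + eps)    # normalise with *batch* statistics
    return gamma * x_hat + beta

# Toy usage: outputs have roughly zero mean and unit variance per feature.
rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(32, 4))
y = batch_norm(x, gamma=np.ones(4), beta=np.zeros(4))
print(y.mean(axis=0).round(3), y.std(axis=0).round(3))
```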

A New Angle on L2 Regularization

no code implementations • 28 Jun 2018 • Thomas Tanay, Lewis D. Griffin

Imagine two high-dimensional clusters and a hyperplane separating them.
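
To make the picture concrete, one can fit a linear classifier to two noisy clusters with a weak and a strong L2 penalty and compare the orientation of the resulting separating hyperplanes. The scikit-learn sketch below does that under illustrative assumptions (the data, penalty values and seed are made up and are not taken from the paper).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Two clusters in 50 dimensions: only axis 0 carries the class signal,
# the remaining axes are large-variance noise.
rng = np.random.default_rng(0)
d, n = 50, 200
y = np.array([0] * n + [1] * n)
X = rng.normal(0.0, 5.0, size=(2 * n, d))
X[:, 0] = np.where(y == 0, -2.0, 2.0) + rng.normal(0.0, 1.0, 2 * n)

def boundary_normal(c):
    """Fit L2-regularized logistic regression (strength 1/c) and return the unit normal of its hyperplane."""
    w = LogisticRegression(C=c, max_iter=5000).fit(X, y).coef_.ravel()
    return w / np.linalg.norm(w)

w_weak, w_strong = boundary_normal(1e4), boundary_normal(1e-2)
angle = np.degrees(np.arccos(np.clip(w_weak @ w_strong, -1.0, 1.0)))
print(f"angle between weakly and strongly regularized boundaries: {angle:.1f} degrees")
```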

General Classification • L2 Regularization

Built-in Vulnerabilities to Imperceptible Adversarial Perturbations

no code implementations • 19 Jun 2018 • Thomas Tanay, Jerone T. A. Andrews, Lewis D. Griffin

Designing models that are robust to small adversarial perturbations of their inputs has proven remarkably difficult.

Adversarial Training Versus Weight Decay

2 code implementations • 10 Apr 2018 • Angus Galloway, Thomas Tanay, Graham W. Taylor

Performance-critical machine learning models should be robust to input perturbations not seen during training.

A Boundary Tilting Perspective on the Phenomenon of Adversarial Examples

no code implementations • 27 Aug 2016 • Thomas Tanay, Lewis Griffin

Deep neural networks have been shown to suffer from a surprising weakness: their classification outputs can be changed by small, non-random perturbations of their inputs.
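
As a purely generic illustration of how small such a perturbation can be (the paper's contribution is the boundary-tilting analysis, not this attack), the NumPy sketch below runs an FGSM-style step against a hand-built linear classifier; every quantity in it is hypothetical.

```python
import numpy as np

# A hand-built linear classifier: predict +1 if w @ x > 0, else -1.
rng = np.random.default_rng(0)
d = 1000
w = rng.normal(size=d)
w /= np.linalg.norm(w)

x = rng.normal(size=d)
x = x - (w @ x) * w + 0.5 * w                 # place x at signed distance 0.5 from the boundary
print("clean prediction:", np.sign(w @ x))    # +1

# FGSM-style step: nudge every coordinate by at most eps, against the input gradient.
# For a linear score the input gradient is w, so the adversarial direction is -sign(w).
eps = 0.03
x_adv = x - eps * np.sign(w)
print("adversarial prediction:", np.sign(w @ x_adv))   # flips to -1 for this seed and eps
print("relative size of perturbation:", np.linalg.norm(x_adv - x) / np.linalg.norm(x))
```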

General Classification
