Search Results for author: Thomas Müller

Found 29 papers, 15 papers with code

Compact Neural Graphics Primitives with Learned Hash Probing

no code implementations 28 Dec 2023 Towaki Takikawa, Thomas Müller, Merlin Nimier-David, Alex Evans, Sanja Fidler, Alec Jacobson, Alexander Keller

Neural graphics primitives are faster and achieve higher quality when their neural networks are augmented by spatial data structures that hold trainable features arranged in a grid.

Quantization
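
A minimal sketch of the kind of lookup this describes, assuming illustrative table sizes: a spatial hash picks a window in a compact feature codebook, and a learned index table selects which slot in that window to read. All names and constants are hypothetical, not the paper's implementation.

```python
# Sketch of feature lookup via learned hash probing (illustrative only).
import numpy as np

N_FEATURES = 2 ** 12      # compact feature codebook size (assumed)
N_INDEX = 2 ** 10         # auxiliary probing-index table size (assumed)
N_PROBE = 8               # candidate slots probed per vertex (assumed)

rng = np.random.default_rng(0)
codebook = rng.normal(size=(N_FEATURES, 2)).astype(np.float32)  # trainable features
probe_idx = rng.integers(0, N_PROBE, size=N_INDEX)              # learned discrete probes

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def spatial_hash(vertex, mod):
    """XOR/prime hash of an integer grid vertex."""
    h = np.uint64(0)
    for c, p in zip(vertex, PRIMES):
        h ^= np.uint64(c) * p
    return int(h % np.uint64(mod))

def lookup(vertex):
    base = spatial_hash(vertex, N_FEATURES - N_PROBE)  # start of probe window
    probe = probe_idx[spatial_hash(vertex, N_INDEX)]   # learned probe offset
    return codebook[base + probe]

print(lookup((17, 3, 42)))
```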

Adaptive Shells for Efficient Neural Radiance Field Rendering

no code implementations 16 Nov 2023 Zian Wang, Tianchang Shen, Merlin Nimier-David, Nicholas Sharp, Jun Gao, Alexander Keller, Sanja Fidler, Thomas Müller, Zan Gojcic

We then extract an explicit mesh of a narrow band around the surface, with width determined by the kernel size, and fine-tune the radiance field within this band.

Novel View Synthesis Stochastic Optimization

Towards interpretable quantum machine learning via single-photon quantum walks

no code implementations 31 Jan 2023 Fulvio Flamini, Marius Krumm, Lukas J. Fiderer, Thomas Müller, Hans J. Briegel

Variational quantum algorithms represent a promising approach to quantum machine learning where classical neural networks are replaced by parametrized quantum circuits.

Decision Making Quantum Machine Learning +3

Parallel Inversion of Neural Radiance Fields for Robust Pose Estimation

1 code implementation 18 Oct 2022 Yunzhi Lin, Thomas Müller, Jonathan Tremblay, Bowen Wen, Stephen Tyree, Alex Evans, Patricio A. Vela, Stan Birchfield

We present a parallelized optimization method based on fast Neural Radiance Fields (NeRF) for estimating the 6-DoF pose of a camera with respect to an object or scene.

Pose Estimation
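
The core idea can be sketched as parallel gradient descent over many pose hypotheses. Below, a quadratic stand-in replaces the differentiable NeRF renderer so the sketch stays runnable; the real method renders pixels and compares them against the observed image.

```python
# Toy sketch of parallel pose-hypothesis optimization (stand-in renderer).
import torch

def render(poses):
    """Stand-in for NeRF rendering: a loss surface with its optimum at a target pose."""
    target = torch.tensor([0.3, -0.1, 0.5, 0.0, 0.2, 0.1])
    return ((poses - target) ** 2).sum(dim=-1)  # photometric error per hypothesis

# Many random 6-DoF hypotheses (3 translation + 3 rotation params), optimized in parallel.
poses = torch.randn(64, 6, requires_grad=True)
opt = torch.optim.Adam([poses], lr=1e-2)
for step in range(500):
    opt.zero_grad()
    loss = render(poses).sum()   # each hypothesis descends independently
    loss.backward()
    opt.step()

best = render(poses).argmin()    # keep the best-scoring hypothesis
print("best hypothesis:", poses[best].detach())
```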

Variable Bitrate Neural Fields

1 code implementation 15 Jun 2022 Towaki Takikawa, Alex Evans, Jonathan Tremblay, Thomas Müller, Morgan McGuire, Alec Jacobson, Sanja Fidler

Neural approximations of scalar and vector fields, such as signed distance functions and radiance fields, have emerged as accurate, high-quality representations.

RTMV: A Ray-Traced Multi-View Synthetic Dataset for Novel View Synthesis

no code implementations 14 May 2022 Jonathan Tremblay, Moustafa Meshry, Alex Evans, Jan Kautz, Alexander Keller, Sameh Khamis, Thomas Müller, Charles Loop, Nathan Morrical, Koki Nagano, Towaki Takikawa, Stan Birchfield

We present a large-scale synthetic dataset for novel view synthesis consisting of ~300k images rendered from nearly 2000 complex scenes using high-quality ray tracing at high resolution (1600 x 1600 pixels).

Novel View Synthesis

Zero and Few-shot Learning for Author Profiling

no code implementations 22 Apr 2022 Mara Chinea-Rios, Thomas Müller, Gretel Liz De la Peña Sarracén, Francisco Rangel, Marc Franco-Salvador

We find that entailment-based models outperform supervised text classifiers based on XLM-RoBERTa and that we can reach 80% of the accuracy of previous approaches using less than 50% of the training data on average.

Few-Shot Learning
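
Entailment-based zero-shot classification of this kind can be sketched with an off-the-shelf NLI model; the checkpoint name below is illustrative, not necessarily the one used in the paper.

```python
# Sketch of entailment-based zero-shot classification with Hugging Face.
from transformers import pipeline

clf = pipeline("zero-shot-classification",
               model="joeddav/xlm-roberta-large-xnli")  # assumed checkpoint

result = clf("I spend my weekends hiking and taking photos of birds.",
             candidate_labels=["nature lover", "city dweller", "gamer"])
print(result["labels"][0], result["scores"][0])  # top label and its score
```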

Active Few-Shot Learning with FASL

1 code implementation 20 Apr 2022 Thomas Müller, Guillermo Pérez-Torró, Angelo Basile, Marc Franco-Salvador

Recent advances in natural language processing (NLP) have led to strong text classification models for many tasks.

Active Learning Few-Shot Learning +2

Few-Shot Learning with Siamese Networks and Label Tuning

1 code implementation ACL 2022 Thomas Müller, Guillermo Pérez-Torró, Marc Franco-Salvador

We study the problem of building text classifiers with little or no training data, commonly known as zero and few-shot text classification.

Few-Shot Learning Few-Shot Text Classification +2
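
A minimal sketch of the siamese setup: one encoder embeds both texts and label descriptions, and classification is nearest-label by cosine similarity. Label tuning would then fine-tune only the label embeddings; that step is omitted here, and the checkpoint name is illustrative.

```python
# Sketch of siamese zero-shot classification via shared embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed checkpoint
labels = ["sports", "politics", "technology"]

text_emb = model.encode(["The striker scored twice in the final."])
label_emb = model.encode(labels)

# Cosine similarity between the text and each label embedding.
sims = (text_emb @ label_emb.T) / (
    np.linalg.norm(text_emb, axis=1, keepdims=True) * np.linalg.norm(label_emb, axis=1))
print(labels[int(sims.argmax())])
```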

Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

16 code implementations 16 Jan 2022 Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller

Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate.

3D Reconstruction 3D Shape Reconstruction +2
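
The multiresolution hash encoding can be sketched as follows: each level hashes the grid vertices surrounding a query point into its own feature table and interpolates trilinearly, and the concatenated per-level features feed a small MLP. Level count and table sizes below are illustrative.

```python
# Minimal sketch of a multiresolution hash encoding for a 3D point.
import itertools
import numpy as np

L, T, F = 4, 2 ** 14, 2               # levels, table size, features per entry (assumed)
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)
rng = np.random.default_rng(0)
tables = rng.normal(scale=1e-4, size=(L, T, F))  # trainable in the real system

def encode(x, base_res=16, growth=2.0):
    feats = []
    for lvl in range(L):
        res = int(base_res * growth ** lvl)       # finer grid at each level
        p = np.asarray(x) * res
        lo = np.floor(p).astype(np.uint64)
        w = p - lo                                 # trilinear weights
        f = np.zeros(F)
        for corner in itertools.product((0, 1), repeat=3):
            v = lo + np.array(corner, dtype=np.uint64)
            h = int(np.bitwise_xor.reduce(v * PRIMES) % T)  # hash the vertex
            weight = np.prod(np.where(corner, w, 1.0 - w))
            f += weight * tables[lvl, h]
        feats.append(f)
    return np.concatenate(feats)                   # fed to a small MLP

print(encode((0.25, 0.5, 0.75)).shape)             # (L * F,) = (8,)
```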

Path Guiding Using Spatio-Directional Mixture Models

1 code implementation 25 Nov 2021 Ana Dodik, Marios Papas, Cengiz Öztireli, Thomas Müller

In particular, we approximate incident radiance as an online-trained 5D mixture that is accelerated by a kD-tree.
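
A toy stand-in for the sampling side of such a model: pick a mixture component, sample from it, and evaluate the mixture PDF for the Monte Carlo weight. The real method uses a 5D spatio-directional mixture looked up through a kD-tree; a 1D Gaussian mixture keeps the sketch short.

```python
# Toy sketch of sampling a guiding mixture and evaluating its PDF.
import numpy as np

weights = np.array([0.7, 0.3])           # mixture weights (illustrative)
means = np.array([0.2, 0.8])
stds = np.array([0.05, 0.1])

rng = np.random.default_rng(0)
k = rng.choice(len(weights), p=weights)  # pick a component...
x = rng.normal(means[k], stds[k])        # ...then sample from it

# PDF needed for the Monte Carlo weight f(x) / pdf(x):
pdf = np.sum(weights * np.exp(-0.5 * ((x - means) / stds) ** 2)
             / (stds * np.sqrt(2 * np.pi)))
print(x, pdf)
```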

MATE: Multi-view Attention for Table Transformer Efficiency

1 code implementation EMNLP 2021 Julian Martin Eisenschlos, Maharshi Gor, Thomas Müller, William W. Cohen

However, more than 20% of relational tables on the web have 20 or more rows (Cafarella et al., 2008), and these large tables present a challenge for current Transformer models, which are typically limited to 512 tokens.

Inductive Bias Question Answering

Real-time Neural Radiance Caching for Path Tracing

2 code implementations 23 Jun 2021 Thomas Müller, Fabrice Rousselle, Jan Novák, Alexander Keller

Since pretraining neural networks to handle novel, dynamic scenes is a formidable generalization challenge, we do away with pretraining and instead achieve generalization via adaptation, i.e., we opt for training the radiance cache while rendering.

Neural Radiance Caching
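
Training-while-rendering can be sketched as a per-frame loop: samples gathered from short training paths supply radiance targets, and the cache network takes one optimizer step before serving queries. The network shape, input features, and trace_training_paths helper below are hypothetical stand-ins.

```python
# Sketch of online adaptation: the cache network trains every frame.
import torch
import torch.nn as nn

cache = nn.Sequential(nn.Linear(9, 64), nn.ReLU(), nn.Linear(64, 3))  # pos+dir+normal -> RGB
opt = torch.optim.Adam(cache.parameters(), lr=1e-3)

def trace_training_paths(n):
    """Stand-in for the renderer: returns query features and radiance targets."""
    x = torch.rand(n, 9)
    return x, torch.rand(n, 3)

for frame in range(100):
    feats, radiance = trace_training_paths(4096)   # gathered while rendering the frame
    loss = nn.functional.mse_loss(cache(feats), radiance)
    opt.zero_grad(); loss.backward(); opt.step()   # cache adapts to the live scene
    # ... the renderer then queries cache(x) to terminate paths early ...
```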

DoT: An efficient Double Transformer for NLP tasks with tables

1 code implementation Findings (ACL) 2021 Syrine Krichene, Thomas Müller, Julian Martin Eisenschlos

To improve efficiency while maintaining a high accuracy, we propose a new architecture, DoT, a double transformer model, that decomposes the problem into two sub-tasks: A shallow pruning transformer that selects the top-K tokens, followed by a deep task-specific transformer that takes as input those K tokens.

Question Answering
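
A minimal sketch of the two-stage design, with illustrative dimensions: a shallow transformer scores every token, only the top-K survive, and the deep transformer runs on those alone, so its cost scales with K rather than the full input length.

```python
# Sketch of a double transformer: shallow pruning pass, deep task pass.
import torch
import torch.nn as nn

d_model, K = 64, 256
pruner = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
scorer = nn.Linear(d_model, 1)
deep = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=12)

tokens = torch.randn(1, 1024, d_model)        # long (table) input
scores = scorer(pruner(tokens)).squeeze(-1)   # shallow pass scores every token
topk = scores.topk(K, dim=-1).indices
kept = torch.gather(tokens, 1, topk.unsqueeze(-1).expand(-1, -1, d_model))
out = deep(kept)                              # deep pass sees only K tokens
print(out.shape)                              # (1, K, d_model)
```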

Collective defense of honeybee colonies: experimental results and theoretical modeling

no code implementations 14 Oct 2020 Andrea López-Incera, Morgane Nouvian, Katja Ried, Thomas Müller, Hans J. Briegel

Social insect colonies routinely face large vertebrate predators, against which they need to mount a collective defense.

Understanding tables with intermediate pre-training

1 code implementation Findings (EMNLP) 2020 Julian Martin Eisenschlos, Syrine Krichene, Thomas Müller

To be able to use long examples as input of BERT models, we evaluate table pruning techniques as a pre-processing step to drastically improve the training and prediction efficiency at a moderate drop in accuracy.

Binary Classification Data Augmentation +3
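
Table pruning of this general flavor can be sketched as a simple heuristic: score columns by word overlap with the question and keep only the best ones so the linearized table fits the token budget. The heuristic below is illustrative, not the paper's technique.

```python
# Toy sketch of question-conditioned column pruning for table inputs.
def prune_columns(question, header, rows, max_cols=2):
    """Keep the max_cols columns whose words overlap the question most."""
    q = set(question.lower().replace("?", "").split())
    def overlap(j):
        words = set(header[j].lower().split())
        for row in rows:
            words |= set(str(row[j]).lower().split())
        return len(q & words)
    keep = sorted(range(len(header)), key=overlap, reverse=True)[:max_cols]
    return [header[j] for j in keep], [[row[j] for j in keep] for row in rows]

header = ["City", "Country", "Population"]
rows = [["Zurich", "Switzerland", "415k"], ["Graz", "Austria", "290k"]]
print(prune_columns("What is the population of Zurich?", header, rows))
```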

Neural Control Variates

no code implementations 2 Jun 2020 Thomas Müller, Fabrice Rousselle, Jan Novák, Alexander Keller

We propose neural control variates (NCV) for unbiased variance reduction in parametric Monte Carlo integration.
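
The control-variate estimator itself is compact: if g approximates f and g's integral G is known analytically (in NCV, g is a neural network constructed so that G is tractable), then G + E[(f(X) - g(X))/p(X)] is an unbiased estimate of the integral of f, with variance driven by the residual f - g. A worked example with an analytic g in place of the network:

```python
# Worked sketch of a control-variate Monte Carlo estimator.
import numpy as np

f = lambda x: np.sin(np.pi * x) ** 2   # integrand on [0, 1]; true integral 0.5
g = lambda x: 3 * x * (1 - x)          # crude analytic approximation of f
G = 0.5                                # known integral of g on [0, 1]

rng = np.random.default_rng(0)
x = rng.random(100_000)                # uniform samples, so p(x) = 1
print("plain MC:        ", f(x).mean())
print("control variate: ", G + (f(x) - g(x)).mean())   # same mean, less noise
print("variance ratio:  ", (f(x) - g(x)).var() / f(x).var())
```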

Development of swarm behavior in artificial learning agents that adapt to different foraging environments

no code implementations 1 Apr 2020 Andrea López-Incera, Katja Ried, Thomas Müller, Hans J. Briegel

Collective behavior, and swarm formation in particular, has been studied from several perspectives within a large variety of fields, ranging from biology to physics.

How a minimal learning agent can infer the existence of unobserved variables in a complex environment

no code implementations 15 Oct 2019 Katja Ried, Benjamin Eva, Thomas Müller, Hans J. Briegel

According to a mainstream position in contemporary cognitive science and philosophy, the use of abstract compositional concepts is both a necessary and a sufficient condition for the presence of genuine thought.

Explainable Artificial Intelligence (XAI) +1

Answering Conversational Questions on Structured Data without Logical Forms

no code implementations IJCNLP 2019 Thomas Müller, Francesco Piccinno, Massimo Nicosia, Peter Shaw, Yasemin Altun

We present a novel approach to answering sequential questions based on structured objects such as knowledge bases or tables without using a logical form as an intermediate representation.

Question Answering

Neural Importance Sampling

2 code implementations 11 Aug 2018 Thomas Müller, Brian McWilliams, Fabrice Rousselle, Markus Gross, Jan Novák

We propose to use deep neural networks for generating samples in Monte Carlo integration.
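
The estimator behind this is ordinary importance sampling, f(x)/p(x), with the sampling density p produced by a trained network (a normalizing flow in the paper). A fixed density stands in for the learned one below:

```python
# Sketch of importance sampling with a non-uniform sampling density.
import numpy as np

f = lambda x: x ** 3                        # integrand on [0, 1]; true value 0.25
# Stand-in "learned" density p(x) = 2x, sampled by inverse CDF: X = sqrt(U).
rng = np.random.default_rng(0)
u = rng.random(10_000)
x = np.sqrt(u)
p = 2 * x
print((f(x) / p).mean())                    # ~0.25, lower variance than uniform
print(f(rng.random(10_000)).mean())         # uniform baseline for comparison
```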

Modelling collective motion based on the principle of agency

no code implementations 4 Dec 2017 Katja Ried, Thomas Müller, Hans J. Briegel

Collective motion is an intriguing phenomenon, especially considering that it arises from a set of simple rules governing local interactions between individuals.

Deep Scattering: Rendering Atmospheric Clouds with Radiance-Predicting Neural Networks

no code implementations 15 Sep 2017 Simon Kallweit, Thomas Müller, Brian McWilliams, Markus Gross, Jan Novák

To render a new scene, we sample visible points of the cloud and, for each, extract a hierarchical 3D descriptor of the cloud geometry with respect to the shading location and the light source.
