Search Results for author: Partha Maji

Found 7 papers, 3 papers with code

On Efficient Uncertainty Estimation for Resource-Constrained Mobile Applications

no code implementations · 11 Nov 2021 · Johanna Rock, Tiago Azevedo, René de Jong, Daniel Ruiz-Muñoz, Partha Maji

Deep neural networks have shown great success in prediction quality, while reliable and robust uncertainty estimation remains a challenge.

Multi-class Classification · Sensor Fusion

An Underexplored Dilemma between Confidence and Calibration in Quantized Neural Networks

1 code implementation · NeurIPS Workshop ICBINB 2021 · Guoxuan Xia, Sangwon Ha, Tiago Azevedo, Partha Maji

We show that this robustness can be partially explained by the calibration behavior of modern CNNs, and may be improved with overconfidence.

Decision Making · Quantization
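The confidence-calibration trade-off studied in this paper is commonly quantified with the expected calibration error (ECE). Below is a minimal, generic NumPy sketch of ECE, not the authors' code; the 15-bin default and equal-width binning are assumptions.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=15):
    """ECE: weighted average gap between mean confidence and accuracy per bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap  # weight by fraction of samples in the bin
    return ece

# Toy usage: over-confident predictions yield a non-zero ECE.
conf = np.array([0.9, 0.8, 0.95, 0.7])
hit = np.array([1, 0, 1, 1])
print(expected_calibration_error(conf, hit))
```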

Towards Efficient Point Cloud Graph Neural Networks Through Architectural Simplification

no code implementations · 13 Aug 2021 · Shyam A. Tailor, René de Jong, Tiago Azevedo, Matthew Mattina, Partha Maji

In recent years, graph neural network (GNN)-based approaches have become a popular strategy for processing point cloud data, regularly achieving state-of-the-art performance on a variety of tasks.

Mixed Reality

On the Effects of Quantisation on Model Uncertainty in Bayesian Neural Networks

1 code implementation · 22 Feb 2021 · Martin Ferianc, Partha Maji, Matthew Mattina, Miguel Rodrigues

Bayesian neural networks (BNNs) are making significant progress in many research areas where decision-making needs to be accompanied by uncertainty estimation.

Autonomous Driving · Decision Making
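As context for what quantising a network's weights involves, here is a minimal sketch of symmetric uniform ("fake") quantisation in NumPy; it is a generic illustration rather than the paper's scheme, and the bit-width and per-tensor scaling are assumptions.

```python
import numpy as np

def uniform_quantise(w, n_bits=8):
    """Symmetric uniform quantisation of a weight tensor to n_bits, then de-quantise."""
    max_abs = np.max(np.abs(w))
    qmax = 2 ** (n_bits - 1) - 1
    scale = max_abs / qmax if max_abs > 0 else 1.0
    q = np.clip(np.round(w / scale), -qmax, qmax)  # integer grid
    return q * scale  # "fake-quantised" weights on the original scale

w = np.random.randn(5)
print(w)
print(uniform_quantise(w, n_bits=4))
```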

Stochastic-YOLO: Efficient Probabilistic Object Detection under Dataset Shifts

1 code implementation · 7 Sep 2020 · Tiago Azevedo, René de Jong, Matthew Mattina, Partha Maji

In this paper, we adapt the well-established YOLOv3 architecture to generate uncertainty estimations by introducing stochasticity in the form of Monte Carlo Dropout (MC-Drop), and evaluate it across different levels of dataset shift.

Image Classification · Object +2
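The Monte Carlo Dropout procedure mentioned in the abstract can be illustrated with a minimal PyTorch sketch (not the Stochastic-YOLO implementation): dropout layers are kept active at inference time and the network is sampled several times to form a predictive mean and variance. The `SmallNet` model and the choice of 20 samples are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """Illustrative classifier with dropout; stands in for a real detection backbone."""
    def __init__(self, in_dim=128, n_classes=10, p=0.25):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, 64)
        self.drop = nn.Dropout(p)
        self.fc2 = nn.Linear(64, n_classes)

    def forward(self, x):
        return self.fc2(self.drop(torch.relu(self.fc1(x))))

@torch.no_grad()
def mc_dropout_predict(model, x, n_samples=20):
    """Run n_samples stochastic forward passes with dropout active at inference."""
    model.eval()
    # Re-enable dropout modules only, leaving e.g. batch norm in eval mode.
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_samples)]
    )
    return probs.mean(dim=0), probs.var(dim=0)  # predictive mean and variance

model = SmallNet()
mean, var = mc_dropout_predict(model, torch.randn(4, 128))
print(mean.shape, var.shape)  # torch.Size([4, 10]) for both
```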

Efficient Winograd or Cook-Toom Convolution Kernel Implementation on Widely Used Mobile CPUs

no code implementations · 4 Mar 2019 · Partha Maji, Andrew Mundy, Ganesh Dasika, Jesse Beu, Matthew Mattina, Robert Mullins

The Winograd or Cook-Toom class of algorithms helps to reduce the overall compute complexity of many modern deep convolutional neural networks (CNNs).
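As a concrete illustration of the Winograd/Cook-Toom idea, the sketch below implements the standard 1-D F(2,3) transform (Lavin & Gray), which produces two outputs of a 3-tap convolution with four multiplications instead of six. It is a minimal NumPy example, not the paper's mobile-CPU-optimised kernel.

```python
import numpy as np

# Standard 1-D Winograd F(2, 3) transform matrices.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)   # input transform
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])                # filter transform
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)    # output transform

def winograd_f23(d, g):
    """Two outputs of the 3-tap correlation of input tile d (len 4) with filter g (len 3)."""
    return AT @ ((G @ g) * (BT @ d))  # only 4 elementwise multiplies

# Check against the direct computation.
d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, -1.0])
direct = np.array([d[0]*g[0] + d[1]*g[1] + d[2]*g[2],
                   d[1]*g[0] + d[2]*g[1] + d[3]*g[2]])
print(np.allclose(winograd_f23(d, g), direct))  # True
```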
