Search Results for author: Supriyo Chakraborty

Found 16 papers, 6 papers with code

OrthoSeisnet: Seismic Inversion through Orthogonal Multi-scale Frequency Domain U-Net for Geophysical Exploration

1 code implementation • 9 Jan 2024 • Supriyo Chakraborty, Aurobinda Routray, Sanjay Bhargav Dharavath, Tanmoy Dam

However, the detection of sparse thin layers within seismic datasets presents a significant challenge due to the ill-posed and highly non-linear nature of the problem.
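
The ill-posedness mentioned above follows from the band-limited convolutional model of a seismic trace: a thin-layer reflectivity spike pair is smeared by the source wavelet, so many different reflectivity series explain nearly the same data. The NumPy sketch below is purely illustrative (hypothetical layer positions, a standard Ricker wavelet, not the paper's network) and only makes that difficulty concrete.

```python
import numpy as np

def ricker(f, dt=0.002, length=0.128):
    """Ricker (Mexican-hat) wavelet with peak frequency f Hz."""
    t = np.arange(-length / 2, length / 2, dt)
    a = (np.pi * f * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

# Sparse reflectivity with a thin-layer pair (example values, hypothetical).
reflectivity = np.zeros(500)
reflectivity[200], reflectivity[210] = 0.8, -0.6   # closely spaced thin layers
reflectivity[350] = 0.5

wavelet = ricker(25.0)
# Band-limited convolution plus noise: the forward model whose inversion
# back to the sparse reflectivity is the ill-posed problem described above.
trace = np.convolve(reflectivity, wavelet, mode="same")
trace += 0.02 * np.random.randn(trace.size)
```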

Seismic Inversion SSIM

Knowledge from Uncertainty in Evidential Deep Learning

no code implementations • 19 Oct 2023 • Cai Davies, Marc Roig Vilamala, Alun D. Preece, Federico Cerutti, Lance M. Kaplan, Supriyo Chakraborty

In this paper, we empirically investigate the correlations between misclassification and evaluated uncertainty, and show that EDL's `evidential signal' is due to misclassification bias.
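
In standard evidential deep learning, the network outputs non-negative evidence per class, which parameterizes a Dirichlet distribution; the "evidential signal" discussed here is the uncertainty read off those parameters. The sketch below shows the usual bookkeeping (generic EDL, not code from the paper).

```python
import torch
import torch.nn.functional as F

def edl_uncertainty(logits):
    """Dirichlet-based uncertainty from per-class evidence (standard EDL)."""
    evidence = F.softplus(logits)            # non-negative evidence e_k
    alpha = evidence + 1.0                   # Dirichlet parameters alpha_k
    strength = alpha.sum(dim=-1, keepdim=True)
    probs = alpha / strength                 # expected class probabilities
    k = logits.shape[-1]
    uncertainty = k / strength.squeeze(-1)   # vacuity: high when total evidence is low
    return probs, uncertainty

# Example: one confident and one uncertain prediction over 3 classes.
logits = torch.tensor([[8.0, 0.1, 0.1], [0.2, 0.1, 0.3]])
probs, u = edl_uncertainty(logits)
```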

On the amplification of security and privacy risks by post-hoc explanations in machine learning models

no code implementations • 28 Jun 2022 • Pengrui Quan, Supriyo Chakraborty, Jeya Vikranth Jeyakumar, Mani Srivastava

A variety of explanation methods have been proposed in recent years to help users gain insights into the results returned by neural networks, which are otherwise complex and opaque black-boxes.

Model extraction

SparseFed: Mitigating Model Poisoning Attacks in Federated Learning with Sparsification

1 code implementation • 12 Dec 2021 • Ashwinee Panda, Saeed Mahloujifar, Arjun N. Bhagoji, Supriyo Chakraborty, Prateek Mittal

Federated learning is inherently vulnerable to model poisoning attacks because its decentralized nature allows attackers to participate with compromised devices.
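
The sparsification named in the title amounts to keeping only the largest-magnitude coordinates of the aggregated update each round, which limits how much a poisoned update can shift the global model. The server-side sketch below is a simplified illustration of that idea and omits the paper's error-feedback and other details.

```python
import torch

def topk_sparsify(vec, k):
    """Zero out all but the k largest-magnitude entries of a flat update."""
    out = torch.zeros_like(vec)
    idx = vec.abs().topk(k).indices
    out[idx] = vec[idx]
    return out

def aggregate_round(client_updates, k, lr=1.0):
    """One simplified SparseFed-style round: average the client updates,
    then keep only the top-k coordinates before applying them."""
    avg = torch.stack(client_updates).mean(dim=0)
    return lr * topk_sparsify(avg, k)

# Example: four hypothetical clients sending flat 10-dimensional updates.
updates = [torch.randn(10) for _ in range(4)]
global_delta = aggregate_round(updates, k=3)
```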

Federated Learning Model Poisoning

Adversarial training in communication constrained federated learning

no code implementations • 1 Mar 2021 • Devansh Shah, Parijat Dube, Supriyo Chakraborty, Ashish Verma

We observe a significant drop in both natural and adversarial accuracies when AT is used in the federated setting as opposed to centralized training.
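
AT in this setting means each client generates adversarial examples locally before computing the update it sends. The sketch below shows a generic PGD-based local step as an illustration; the paper's exact attack configuration and federated protocol may differ.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD, as used for the inner maximization of AT."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def local_at_step(model, optimizer, x, y):
    """One client-side adversarial-training step before the update is shared."""
    model.train()
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```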

Attribute Federated Learning

Explaining Motion Relevance for Activity Recognition in Video Deep Learning Models

no code implementations • 31 Mar 2020 • Liam Hiley, Alun Preece, Yulia Hicks, Supriyo Chakraborty, Prudhvi Gurram, Richard Tomsett

Our results show that the selective relevance method not only provides insight into the role played by motion in the model's decision (in effect, revealing and quantifying the model's spatial bias) but also simplifies the resulting explanations for human consumption.
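
As one way to picture what selective relevance is after, the sketch below filters a frame-wise relevance map by where consecutive frames actually change, and reports the share of relevance falling on static regions as a crude "spatial bias" score. This is an illustrative reading only and is not guaranteed to match the paper's formulation.

```python
import numpy as np

def selective_relevance(relevance, frames, motion_thresh=0.05):
    """Keep relevance only where consecutive frames change (a rough motion proxy).

    relevance, frames: arrays of shape (T, H, W). Illustrative decomposition,
    not necessarily the paper's exact method."""
    motion = np.zeros_like(frames)
    motion[1:] = np.abs(frames[1:] - frames[:-1])
    mask = motion > motion_thresh
    temporal_rel = relevance * mask
    spatial_bias = 1.0 - temporal_rel.sum() / (relevance.sum() + 1e-12)
    return temporal_rel, spatial_bias
```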

Activity Recognition

SAT: Improving Adversarial Training via Curriculum-Based Loss Smoothing

no code implementations • 18 Mar 2020 • Chawin Sitawarin, Supriyo Chakraborty, David Wagner

This leads to a significant improvement in both clean accuracy and robustness compared to AT, TRADES, and other baselines.
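
A curriculum in adversarial training means ramping up the difficulty of the inner attack as training progresses. The schedule below is only a generic illustration of that idea, using a perturbation budget that grows linearly to its final value; the paper's curriculum acts on loss smoothing rather than on the budget directly.

```python
def curriculum_eps(epoch, total_epochs, eps_max=8 / 255, warmup_frac=0.5):
    """Linearly ramp the perturbation budget over the first half of training,
    a simple stand-in for a difficulty curriculum in adversarial training."""
    warmup = int(total_epochs * warmup_frac)
    return eps_max * min(1.0, (epoch + 1) / max(warmup, 1))

# Example: budget used at a few epochs of a hypothetical 100-epoch run.
schedule = [curriculum_eps(e, 100) for e in (0, 24, 49, 99)]
```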

Adversarial Robustness

Sanity Checks for Saliency Metrics

no code implementations • 29 Nov 2019 • Richard Tomsett, Dan Harborne, Supriyo Chakraborty, Prudhvi Gurram, Alun Preece

Despite a proliferation of such methods, little effort has been made to quantify how good these saliency maps are at capturing the true relevance of the pixels to the classifier output (i.e. their "fidelity").
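
A common way to quantify such fidelity is a deletion-style test: mask the pixels the saliency map ranks highest and measure how much the classifier's score for the predicted class drops. The sketch below is a generic version of that metric, not one of the specific metrics audited in the paper.

```python
import torch

def deletion_fidelity(model, x, saliency, target, frac=0.1, baseline=0.0):
    """Drop in target-class probability after masking the top `frac` most
    salient elements. `saliency` is assumed to have the same shape as `x`."""
    with torch.no_grad():
        base_score = model(x.unsqueeze(0)).softmax(-1)[0, target].item()
        flat = saliency.flatten()
        k = max(1, int(frac * flat.numel()))
        idx = flat.topk(k).indices
        x_masked = x.clone().flatten()
        x_masked[idx] = baseline            # remove the most salient inputs
        x_masked = x_masked.view_as(x)
        masked_score = model(x_masked.unsqueeze(0)).softmax(-1)[0, target].item()
    return base_score - masked_score        # larger drop = more faithful map
```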

Analyzing Federated Learning through an Adversarial Lens

2 code implementations • ICLR 2019 • Arjun Nitin Bhagoji, Supriyo Chakraborty, Prateek Mittal, Seraphin Calo

Federated learning distributes model training among a multitude of agents, who, guided by privacy concerns, perform training using their local data but share only their model parameter updates for iterative aggregation at the server.
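
The iterative aggregation described here is typically federated averaging, which is the setting the paper's attacks target. A minimal sketch of one weighted aggregation round is given below as context (generic FedAvg, not the paper's code).

```python
import torch

def fedavg(global_params, client_updates, client_sizes):
    """One aggregation round: apply the data-size-weighted average of the
    clients' parameter updates to the current global parameters."""
    total = float(sum(client_sizes))
    new_params = {}
    for name, value in global_params.items():
        weighted = sum(
            (n / total) * upd[name] for upd, n in zip(client_updates, client_sizes)
        )
        new_params[name] = value + weighted
    return new_params
```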

Federated Learning Model Poisoning

Stakeholders in Explainable AI

no code implementations • 29 Sep 2018 • Alun Preece, Dan Harborne, Dave Braines, Richard Tomsett, Supriyo Chakraborty

There is general consensus that it is important for artificial intelligence (AI) and machine learning systems to be explainable and/or interpretable.

Interpretable to Whom? A Role-based Model for Analyzing Interpretable Machine Learning Systems

no code implementations • 20 Jun 2018 • Richard Tomsett, Dave Braines, Dan Harborne, Alun Preece, Supriyo Chakraborty

Several researchers have argued that a machine learning system's interpretability should be defined in relation to a specific agent or task: we should not ask if the system is interpretable, but to whom is it interpretable.

BIG-bench Machine Learning Interpretable Machine Learning +1

GenAttack: Practical Black-box Attacks with Gradient-Free Optimization

3 code implementations • 28 May 2018 • Moustafa Alzantot, Yash Sharma, Supriyo Chakraborty, Huan Zhang, Cho-Jui Hsieh, Mani Srivastava

Our experiments on different datasets (MNIST, CIFAR-10, and ImageNet) show that GenAttack can successfully generate visually imperceptible adversarial examples against state-of-the-art image recognition models with orders of magnitude fewer queries than previous approaches.
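
GenAttack's gradient-free optimization is a genetic algorithm over perturbations, scored purely by the probabilities returned from model queries. The loop below is a heavily simplified sketch of that idea, with uniform mutation and fitness-proportional selection; it is not the authors' released implementation.

```python
import numpy as np

def genetic_attack(query_fn, x, target, eps=0.05, pop=6, iters=200, mut_rate=0.05):
    """Gradient-free targeted attack: evolve perturbations using only the
    model's output probabilities (query_fn(image) returns class probabilities)."""
    rng = np.random.default_rng(0)
    population = rng.uniform(-eps, eps, size=(pop,) + x.shape)
    for _ in range(iters):
        candidates = np.clip(x + population, 0.0, 1.0)
        preds = np.array([query_fn(c) for c in candidates])   # (pop, n_classes)
        fitness = preds[:, target]
        best = fitness.argmax()
        if preds[best].argmax() == target:
            return candidates[best]                            # success
        # Fitness-proportional parent selection, uniform crossover, mutation.
        probs = (fitness + 1e-12) / (fitness + 1e-12).sum()
        parents = population[rng.choice(pop, size=(pop, 2), p=probs)]
        cross_mask = rng.random((pop,) + x.shape) < 0.5
        children = np.where(cross_mask, parents[:, 0], parents[:, 1])
        mut_mask = rng.random(children.shape) < mut_rate
        children = np.where(mut_mask,
                            children + rng.uniform(-eps, eps, children.shape),
                            children)
        population = np.clip(children, -eps, eps)
    return np.clip(x + population[best], 0.0, 1.0)
```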

Adversarial Attack Adversarial Robustness +1

SenseGen: A Deep Learning Architecture for Synthetic Sensor Data Generation

no code implementations • 31 Jan 2017 • Moustafa Alzantot, Supriyo Chakraborty, Mani B. Srivastava

Second, we use another LSTM-network-based discriminator model to distinguish between the true and the synthesized data.
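
A minimal PyTorch sketch of such an LSTM-based discriminator is shown below; the layer sizes and input dimensions are hypothetical, and the paper's architecture details may differ.

```python
import torch
import torch.nn as nn

class SensorDiscriminator(nn.Module):
    """LSTM discriminator: scores a sensor sequence as real (1) or synthetic (0)."""
    def __init__(self, n_features=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                          # x: (batch, time, n_features)
        _, (h_n, _) = self.lstm(x)
        return torch.sigmoid(self.head(h_n[-1]))   # probability the sequence is real

# Example: score a batch of 8 accelerometer-like sequences of length 100.
scores = SensorDiscriminator()(torch.randn(8, 100, 3))
```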

Get More With Less: Near Real-Time Image Clustering on Mobile Phones

no code implementations • 9 Dec 2015 • Jorge Ortiz, Chien-chin Huang, Supriyo Chakraborty

In this paper, we show that by combining the computing power distributed over a number of phones, judicious optimization choices, and contextual information, it is possible to efficiently execute the end-to-end pipeline entirely on the phones at the edge of the network.

Clustering Image Clustering
