Search Results for author: Ameya Joshi

Found 20 papers, 9 papers with code

A Curious Case of Remarkable Resilience to Gradient Attacks via Fully Convolutional and Differentiable Front End with a Skip Connection

no code implementations • 26 Feb 2024 • Leonid Boytsov, Ameya Joshi, Filipe Condessa

By training them using a small learning rate for about one epoch, we obtained models that retained the accuracy of the backbone classifier while being unusually resistant to gradient attacks including APGD and FAB-T attacks from the AutoAttack package, which we attributed to gradient masking.

Adversarial Robustness
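A minimal illustration of the gradient attacks this paper defends against: a single FGSM-style sign-gradient step on a toy linear classifier. All weights, inputs, and the step size `eps` are made-up values; real attacks such as APGD iterate this step and adapt the step size.

```python
# Hedged sketch: one sign-gradient (FGSM-style) attack step on a toy
# linear model. Not the paper's method, only the generic attack family.

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_step(x, w, y, eps):
    """One untargeted step for a linear score s = w . x with label y in {-1, +1}.
    Loss = -y * s, so dLoss/dx_i = -y * w_i; move x along the gradient's sign."""
    grad = [-y * wi for wi in w]
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

x = [0.5, -0.2, 0.1]
w = [1.0, -2.0, 0.5]
x_adv = fgsm_step(x, w, y=1, eps=0.1)
# Each coordinate moves by eps in the direction that lowers the margin y * (w . x).
print(x_adv)
```

Gradient masking, which the authors attribute their resilience to, is precisely the failure mode where this `grad` signal becomes uninformative, so sign-following attacks stall without the model being truly robust.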

PriViT: Vision Transformers for Fast Private Inference

1 code implementation • 6 Oct 2023 • Naren Dhyani, Jianqiao Mo, Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde

The Vision Transformer (ViT) architecture has emerged as the backbone of choice for state-of-the-art deep models for computer vision applications.

Image Classification

Distributionally Robust Classification on a Data Budget

1 code implementation • 7 Aug 2023 • Benjamin Feuer, Ameya Joshi, Minh Pham, Chinmay Hegde

To our knowledge, this is the first result showing (near) state-of-the-art distributional robustness on limited data budgets.

Classification, Image Classification, +1

Identity-Preserving Aging of Face Images via Latent Diffusion Models

1 code implementation • 17 Jul 2023 • Sudipta Banerjee, Govind Mittal, Ameya Joshi, Chinmay Hegde, Nasir Memon

The performance of automated face recognition systems is inevitably impacted by the facial aging process.

Face Recognition

Vision-Language Models can Identify Distracted Driver Behavior from Naturalistic Videos

1 code implementation • 16 Jun 2023 • Md Zahid Hasan, Jiajing Chen, Jiyang Wang, Mohammed Shaiqur Rahman, Ameya Joshi, Senem Velipasalar, Chinmay Hegde, Anuj Sharma, Soumik Sarkar

Our results show that this framework offers state-of-the-art performance on zero-shot transfer and video-based CLIP for predicting the driver's state on two public datasets.

Activity Recognition
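The zero-shot transfer mentioned in the snippet can be sketched as nearest-text-embedding classification: score the image (or video frame) embedding against one text embedding per candidate label and take the best match. The prompts and embedding vectors below are toy stand-ins, not outputs of CLIP or any real model.

```python
# Hedged sketch of CLIP-style zero-shot classification with toy embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def zero_shot(image_emb, text_embs):
    """Return the label whose prompt embedding is most cosine-similar to the image."""
    return max(text_embs, key=lambda label: cosine(image_emb, text_embs[label]))

text_embs = {
    "a photo of a driver texting": [0.9, 0.1, 0.0],
    "a photo of a driver drinking": [0.1, 0.8, 0.2],
    "a photo of an attentive driver": [0.0, 0.2, 0.9],
}
frame_emb = [0.85, 0.2, 0.1]
print(zero_shot(frame_emb, text_embs))  # "a photo of a driver texting"
```

No gradient updates are needed, which is what makes the transfer "zero-shot": new driver behaviors can be added just by writing new prompts.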

ZeroForge: Feedforward Text-to-Shape Without 3D Supervision

1 code implementation • 14 Jun 2023 • Kelly O. Marshall, Minh Pham, Ameya Joshi, Anushrut Jignasu, Aditya Balu, Adarsh Krishnamurthy, Chinmay Hegde

Current state-of-the-art methods for text-to-shape generation either require supervised training using a labeled dataset of pre-defined 3D shapes, or perform expensive inference-time optimization of implicit neural representations.

Text-to-Shape Generation

Caption supervision enables robust learners

1 code implementation • 13 Oct 2022 • Benjamin Feuer, Ameya Joshi, Chinmay Hegde

Vision language (VL) models like CLIP are robust to natural distribution shifts, in part because CLIP learns on unstructured data using a technique called caption supervision; the model interprets image-linked texts as ground-truth labels.
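A sketch of what caption supervision amounts to in CLIP-style training: a symmetric contrastive (InfoNCE) loss over a batch, where each image's linked caption is its positive and every other caption is a negative. The embeddings and the temperature value are illustrative toy choices, not the paper's setup.

```python
# Hedged sketch of a symmetric contrastive (CLIP-style) loss, pure Python.
import math

def clip_loss(img_embs, txt_embs, temperature=0.07):
    """Symmetric InfoNCE over the image-text similarity matrix."""
    n = len(img_embs)
    def norm(v):
        s = math.sqrt(sum(x * x for x in v))
        return [x / s for x in v]
    I = [norm(v) for v in img_embs]
    T = [norm(v) for v in txt_embs]
    # Cosine similarities scaled by temperature.
    sims = [[sum(a * b for a, b in zip(I[i], T[j])) / temperature
             for j in range(n)] for i in range(n)]
    def ce_row(row, target):
        # Stable cross-entropy: logsumexp(row) - row[target].
        m = max(row)
        return m + math.log(sum(math.exp(x - m) for x in row)) - row[target]
    loss_i2t = sum(ce_row(sims[i], i) for i in range(n)) / n
    loss_t2i = sum(ce_row([sims[i][j] for i in range(n)], j) for j in range(n)) / n
    return 0.5 * (loss_i2t + loss_t2i)
```

When image and caption embeddings for matched pairs align, the loss is near zero; mismatched pairings drive it up, which is how the caption acts as a ground-truth label without any class taxonomy.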

Revisiting Self-Distillation

no code implementations • 17 Jun 2022 • Minh Pham, Minsu Cho, Ameya Joshi, Chinmay Hegde

We first show that even with a highly accurate teacher, self-distillation allows a student to surpass the teacher in all cases.

Knowledge Distillation, Model Compression
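The usual soft-label term in (self-)distillation can be sketched as a KL divergence between temperature-softened teacher and student outputs; in self-distillation the "teacher" is simply an earlier copy of the same architecture. The logits and the temperature T = 2.0 here are arbitrary illustrative values.

```python
# Hedged sketch of the soft-label distillation loss (Hinton-style),
# with the conventional T^2 scaling of the gradient magnitude.
import math

def softmax(logits, T=1.0):
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distill_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q)) * T * T

print(distill_loss([1.0, 2.0, 3.0], [3.0, 2.0, 1.0]))  # positive when they disagree
```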

A Meta-Analysis of Distributionally-Robust Models

no code implementations • 15 Jun 2022 • Benjamin Feuer, Ameya Joshi, Chinmay Hegde

State-of-the-art image classifiers trained on massive datasets (such as ImageNet) have been shown to be vulnerable to a range of both intentional and incidental distribution shifts.

Smooth-Reduce: Leveraging Patches for Improved Certified Robustness

no code implementations • 12 May 2022 • Ameya Joshi, Minh Pham, Minsu Cho, Leonid Boytsov, Filipe Condessa, J. Zico Kolter, Chinmay Hegde

Randomized smoothing (RS) has been shown to be a fast, scalable technique for certifying the robustness of deep neural network classifiers.
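The certificate randomized smoothing provides can be sketched with the standard certified L2 radius (Cohen et al., 2019): if p_a and p_b bound the probabilities of the top two classes under Gaussian input noise of scale sigma, the prediction cannot change within radius R below.

```python
# Standard randomized-smoothing certified radius; stdlib only.
from statistics import NormalDist

def certified_radius(p_a, p_b, sigma):
    """R = (sigma / 2) * (Phi^{-1}(p_a) - Phi^{-1}(p_b)), where Phi^{-1} is the
    standard normal inverse CDF. Requires p_a > p_b for a positive radius."""
    phi_inv = NormalDist().inv_cdf
    return 0.5 * sigma * (phi_inv(p_a) - phi_inv(p_b))

print(certified_radius(0.9, 0.1, sigma=0.5))  # about 0.64
```

Smooth-Reduce's contribution sits on top of this: aggregating votes over patches to tighten the estimates of p_a and p_b, and hence the radius.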

Selective Network Linearization for Efficient Private Inference

1 code implementation • 4 Feb 2022 • Minsu Cho, Ameya Joshi, Siddharth Garg, Brandon Reagen, Chinmay Hegde

To reduce PI latency we propose a gradient-based algorithm that selectively linearizes ReLUs while maintaining prediction accuracy.
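Selective linearization can be sketched with a per-unit gate: s = 1 keeps the ReLU, s = 0 swaps in the identity, which is linear and therefore cheap under private inference. The gate parameterization below is an assumption for illustration; the paper learns which units to linearize with a gradient-based algorithm rather than setting gates by hand.

```python
# Hedged sketch: a gated ReLU that interpolates between the nonlinearity
# (expensive under PI) and the identity (cheap under PI).
def gated_relu(x, s):
    """s = 1.0 -> plain ReLU; s = 0.0 -> identity (linearized unit).
    During training, s is a continuous gate pushed toward {0, 1}."""
    relu = x if x > 0 else 0.0
    return s * relu + (1.0 - s) * x

print(gated_relu(-2.0, 1.0))  # 0.0  (ReLU kept)
print(gated_relu(-2.0, 0.0))  # -2.0 (unit linearized)
```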

Adversarial Token Attacks on Vision Transformers

no code implementations • 8 Oct 2021 • Ameya Joshi, Gauri Jagatap, Chinmay Hegde

Vision transformers rely on a patch-token-based self-attention mechanism, in contrast to convolutional networks.
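The patch tokens that such attacks target come from splitting the image into non-overlapping patches before self-attention, roughly:

```python
# Hedged sketch of ViT-style patch tokenization (no embedding projection).
def patchify(image, patch):
    """Split an H x W image (list of rows) into flattened, non-overlapping
    patch tokens, row-major, assuming H and W are divisible by `patch`."""
    h, w = len(image), len(image[0])
    tokens = []
    for r in range(0, h, patch):
        for c in range(0, w, patch):
            tokens.append([image[r + dr][c + dc]
                           for dr in range(patch) for dc in range(patch)])
    return tokens

img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(patchify(img, 2))  # 4 tokens of 4 values each
```

A token attack perturbs only a small number of these tokens, which is why the granularity of the patch grid matters for robustness.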

Differentiable Spline Approximations

no code implementations • NeurIPS 2021 • Minsu Cho, Aditya Balu, Ameya Joshi, Anjana Deva Prasad, Biswajit Khara, Soumik Sarkar, Baskar Ganapathysubramanian, Adarsh Krishnamurthy, Chinmay Hegde

Overall, we show that leveraging this redesigned Jacobian in the form of a differentiable "layer" in predictive models leads to improved performance in diverse applications such as image segmentation, 3D point cloud reconstruction, and finite element analysis.

3D Point Cloud Reconstruction, BIG-bench Machine Learning, +3

NeuFENet: Neural Finite Element Solutions with Theoretical Bounds for Parametric PDEs

no code implementations • 4 Oct 2021 • Biswajit Khara, Aditya Balu, Ameya Joshi, Soumik Sarkar, Chinmay Hegde, Adarsh Krishnamurthy, Baskar Ganapathysubramanian

We consider a mesh-based approach for training a neural network to produce field predictions of solutions to parametric partial differential equations (PDEs).

Deep Generative Models that Solve PDEs: Distributed Computing for Training Large Data-Free Models

no code implementations • 24 Jul 2020 • Sergio Botelho, Ameya Joshi, Biswajit Khara, Soumik Sarkar, Chinmay Hegde, Santi Adavani, Baskar Ganapathysubramanian

Here we report on a software framework for data parallel distributed deep learning that resolves the twin challenges of training these large SciML models - training in reasonable time as well as distributing the storage requirements.

Decoder, Distributed Computing
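The core of the data-parallel training described here, in miniature: each worker computes gradients on its own shard, and the replicas average them so every copy applies the same update. The gradient values below are toy numbers.

```python
# Hedged sketch of an all-reduce mean over per-worker gradients.
def allreduce_mean(worker_grads):
    """Average gradients element-wise across workers; every replica then
    applies the identical averaged update, keeping the models in sync."""
    n = len(worker_grads)
    dim = len(worker_grads[0])
    return [sum(g[i] for g in worker_grads) / n for i in range(dim)]

print(allreduce_mean([[1.0, 2.0], [3.0, 4.0]]))  # [2.0, 3.0]
```

Real frameworks overlap this communication with computation and shard optimizer state as well, which is where the storage-distribution challenge the snippet mentions comes in.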

ESPN: Extremely Sparse Pruned Networks

1 code implementation • 28 Jun 2020 • Minsu Cho, Ameya Joshi, Chinmay Hegde

Deep neural networks are often highly overparameterized, prohibiting their use in compute-limited systems.

Network Pruning
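Magnitude pruning is the generic baseline for the kind of extreme sparsity ESPN targets: zero out the smallest-magnitude fraction of weights. ESPN's own selection criterion differs; this is only an illustrative sketch with toy weights.

```python
# Hedged sketch of global magnitude pruning over a flat weight list.
def magnitude_prune(weights, sparsity):
    """Set the smallest-magnitude `sparsity` fraction of weights to zero."""
    n_prune = int(len(weights) * sparsity)
    pruned_idx = set(sorted(range(len(weights)),
                            key=lambda i: abs(weights[i]))[:n_prune])
    return [0.0 if i in pruned_idx else w for i, w in enumerate(weights)]

print(magnitude_prune([0.5, -0.01, 0.2, -0.8, 0.03], sparsity=0.4))
# [0.5, 0.0, 0.2, -0.8, 0.0]
```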

Encoding Invariances in Deep Generative Models

no code implementations • 4 Jun 2019 • Viraj Shah, Ameya Joshi, Sambuddha Ghosal, Balaji Pokuri, Soumik Sarkar, Baskar Ganapathysubramanian, Chinmay Hegde

Reliable training of generative adversarial networks (GANs) typically requires massive datasets in order to model complicated distributions.

Semantic Adversarial Attacks: Parametric Transformations That Fool Deep Classifiers

1 code implementation • ICCV 2019 • Ameya Joshi, Amitangshu Mukherjee, Soumik Sarkar, Chinmay Hegde

We propose a novel approach to generate such `semantic' adversarial examples by optimizing a particular adversarial loss over the range-space of a parametric conditional generative model.
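A scalar analogue of the idea: instead of perturbing pixels directly, search over a semantic parameter (here a brightness shift, standing in for a parameter of a conditional generative model) for a value that flips a toy classifier's prediction. Everything below is illustrative, not the paper's optimizer.

```python
# Hedged sketch: grid search over one semantic parameter for an
# attack that changes the prediction of a toy classifier.
def toy_classifier(pixels):
    """Toy stand-in: predicts class 1 if mean intensity exceeds 0.5."""
    return 1 if sum(pixels) / len(pixels) > 0.5 else 0

def semantic_attack(pixels, true_label, steps=20):
    """Sweep a brightness shift over [-0.5, 0.5]; return the first shift
    (and shifted image) that flips the prediction, else None."""
    for k in range(steps + 1):
        shift = -0.5 + k / steps
        shifted = [min(1.0, max(0.0, p + shift)) for p in pixels]
        if toy_classifier(shifted) != true_label:
            return shift, shifted
    return None

print(semantic_attack([0.2, 0.3, 0.4], true_label=0))
```

The paper replaces this scalar sweep with gradient-based optimization of an adversarial loss over the range-space of a conditional generative model, so the perturbations stay semantically meaningful (lighting, pose, and so on) rather than pixel noise.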
