Search Results for author: Quoc-Tung Le

Found 4 papers, 0 papers with code

Make Inference Faster: Efficient GPU Memory Management for Butterfly Sparse Matrix Multiplication

no code implementations • 23 May 2024 • Antoine Gonon, Léon Zheng, Pascal Carrivain, Quoc-Tung Le

We show that these memory operations can be optimized by introducing a new CUDA kernel that minimizes the transfers between the different levels of GPU memory, achieving a median speed-up factor of ×1.4 while also reducing energy consumption (median of ×0.85).
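The paper's contribution is the CUDA kernel and its memory management; as context, the sketch below is only a plain NumPy reference for what a butterfly-factored multiplication computes (each factor has two nonzeros per row, giving an O(n log n) product). The function name and the layout of `factors` are my assumptions, not the paper's API.

```python
import numpy as np

def butterfly_apply(factors, x):
    """Multiply a butterfly-factored matrix by a vector (reference sketch).

    factors[l] has shape (n // 2, 2, 2): one 2x2 block per index pair at
    level l, where level l mixes indices whose binary representations differ
    in bit l.  Each factor has only two nonzeros per row, so the product is
    applied in O(n log n) operations instead of O(n^2).
    """
    n = x.shape[0]
    y = np.asarray(x, dtype=np.float64).copy()
    num_levels = int(np.log2(n))
    assert len(factors) == num_levels, "need log2(n) butterfly factors"
    for level in range(num_levels):
        blocks = factors[level]
        stride = 1 << level
        out = np.empty_like(y)
        pair = 0
        for start in range(0, n, 2 * stride):
            for offset in range(stride):
                i, j = start + offset, start + offset + stride
                m = blocks[pair]                     # 2x2 mixing block
                out[i] = m[0, 0] * y[i] + m[0, 1] * y[j]
                out[j] = m[1, 0] * y[i] + m[1, 1] * y[j]
                pair += 1
        y = out
    return y
```

Random factors of shape (n // 2, 2, 2) and a random vector of power-of-two length n can be used to check this against assembling the dense product explicitly; the paper's kernel targets the same computation but optimizes data movement across GPU memory levels, which this sketch does not attempt.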

Does a sparse ReLU network training problem always admit an optimum?

no code implementations • 5 Jun 2023 • Quoc-Tung Le, Elisa Riccietti, Rémi Gribonval

Then, the existence of a global optimum is proved for every concrete optimization problem involving a shallow sparse ReLU neural network of output dimension one.
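For concreteness, here is a minimal sketch of the kind of constrained objective such a result concerns, assuming a squared loss and a fixed binary support mask; the function name and signature are illustrative and not taken from the paper.

```python
import numpy as np

def sparse_relu_loss(W, v, mask, X, y):
    """Objective of an illustrative shallow sparse ReLU training problem.

    Network: x -> v^T ReLU(W x), scalar output, with the support of W
    constrained to a fixed binary mask (the sparsity pattern).  The question
    studied is whether the infimum of such constrained problems is attained.
    """
    W_masked = W * mask                        # enforce the sparsity pattern
    hidden = np.maximum(W_masked @ X.T, 0.0)   # ReLU activations, (h, n_samples)
    preds = v @ hidden                         # one scalar prediction per sample
    return np.mean((preds - y) ** 2)           # squared loss over the dataset
```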

Network Pruning

Sparsity in neural networks can improve their privacy

no code implementations • 20 Apr 2023 • Antoine Gonon, Léon Zheng, Clément Lalanne, Quoc-Tung Le, Guillaume Lauga, Can Pouliquen

This article measures how sparsity can make neural networks more robust to membership inference attacks.
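As a rough illustration of how such robustness can be measured, the sketch below scores a standard loss-threshold membership inference baseline by AUC; this is one common attack, not necessarily the one used in the article, and the function name and AUC computation are mine.

```python
import numpy as np

def loss_threshold_mia_auc(train_losses, test_losses):
    """Score a simple loss-threshold membership inference attack by AUC.

    Lower per-sample loss is treated as evidence of membership; an AUC near
    0.5 means the attacker cannot tell members from non-members, i.e. the
    model leaks little membership information.  Ties are not averaged.
    """
    scores = np.concatenate([-train_losses, -test_losses])   # higher = "member"
    labels = np.concatenate([np.ones_like(train_losses),
                             np.zeros_like(test_losses)])
    # Rank-based AUC (Mann-Whitney U statistic divided by n_pos * n_neg).
    order = np.argsort(scores)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Comparing this AUC for a dense model against a pruned or otherwise sparsified model is the kind of measurement the abstract describes: a smaller gap from 0.5 for the sparse model would indicate improved robustness to membership inference.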

Sparsity in neural networks can increase their privacy

no code implementations • 11 Apr 2023 • Antoine Gonon, Léon Zheng, Clément Lalanne, Quoc-Tung Le, Guillaume Lauga, Can Pouliquen

This article measures how sparsity can make neural networks more robust to membership inference attacks.
