Search Results for author: Rana Ali Amjad

Found 11 papers, 1 paper with code

Context-Aware Clustering using Large Language Models

no code implementations • 2 May 2024 • Sindhu Tipirneni, Ravinarayana Adkathimar, Nurendra Choudhary, Gaurush Hiranandani, Rana Ali Amjad, Vassilis N. Ioannidis, Changhe Yuan, Chandan K. Reddy

Thus, we propose CACTUS (Context-Aware ClusTering with aUgmented triplet losS), a systematic approach that leverages open-source LLMs for efficient and effective supervised clustering of entity subsets, particularly focusing on text-based entities.

Clustering • Language Modelling • +2
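The snippet names an "augmented triplet loss" but does not describe the augmentation, so the following is only a minimal sketch of the standard triplet margin loss that such an approach builds on, with made-up 2-D embeddings standing in for LLM-derived entity embeddings:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet margin loss on embedding vectors:
    pulls the positive toward the anchor and pushes the
    negative at least `margin` farther away."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Entities in the same cluster should embed close together.
a = np.array([1.0, 0.0])
p = np.array([1.1, 0.1])   # same-cluster entity
n = np.array([-1.0, 0.0])  # different-cluster entity
loss = triplet_loss(a, p, n)  # 0.0: triplet already well separated
```

Minimizing this loss over many (anchor, positive, negative) triplets shapes the embedding space so that a simple clustering of the embeddings recovers the supervised clusters.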

Neural Augmentation of Kalman Filter with Hypernetwork for Channel Tracking

no code implementations • 26 Sep 2021 • Kumar Pratik, Rana Ali Amjad, Arash Behboodi, Joseph B. Soriaga, Max Welling

Through extensive experiments on the CDL-B channel model, we show that the HKF can track the channel over a wide range of Doppler values, matching the performance of a Kalman filter with genie-aided Doppler information.
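The hypernetwork augmentation itself is not detailed in this snippet; as background, a minimal scalar Kalman filter (random-walk state, with illustrative noise variances `q` and `r`) shows the predict/update cycle that the HKF extends:

```python
import numpy as np

def kalman_step(x_est, P, z, q=0.01, r=0.1):
    """One predict/update cycle of a scalar Kalman filter
    tracking a random-walk state from noisy observations z.
    q: process noise variance, r: measurement noise variance."""
    # Predict: random-walk model, state unchanged, uncertainty grows.
    x_pred = x_est
    P_pred = P + q
    # Update: blend prediction and measurement via the Kalman gain.
    K = P_pred / (P_pred + r)
    x_new = x_pred + K * (z - x_pred)
    P_new = (1.0 - K) * P_pred
    return x_new, P_new

rng = np.random.default_rng(0)
true_state, x_est, P = 1.0, 0.0, 1.0
for _ in range(50):
    z = true_state + rng.normal(scale=0.1 ** 0.5)  # noisy observation
    x_est, P = kalman_step(x_est, P, z)
```

In the paper's setting the state would be the (vector-valued) channel and the model parameters would be produced by the learned hypernetwork rather than fixed by hand.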

A White Paper on Neural Network Quantization

no code implementations • 15 Jun 2021 • Markus Nagel, Marios Fournarakis, Rana Ali Amjad, Yelysei Bondarenko, Mart van Baalen, Tijmen Blankevoort

Neural network quantization is one of the most effective ways of achieving these savings but the additional noise it induces can lead to accuracy degradation.

Quantization
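A minimal sketch of the uniform quantize-dequantize step that introduces the noise the abstract refers to (8-bit symmetric quantization, with an illustrative scale):

```python
import numpy as np

def quantize(x, scale, n_bits=8):
    """Uniform symmetric quantization: round to the nearest
    integer grid point, clip to the representable range, and
    map back to floating point (quantize-dequantize)."""
    qmax = 2 ** (n_bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax - 1, qmax)
    return q * scale

w = np.array([0.49, -1.23, 0.02, 3.5])
scale = 0.05
w_q = quantize(w, scale)
noise = w_q - w  # quantization noise, bounded by scale / 2 when unclipped
```

For unclipped values the error is at most half the step size, which is why choosing the scale (and hence the clipping range) per tensor or per channel matters for accuracy.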

Bayesian Bits: Unifying Quantization and Pruning

1 code implementation • NeurIPS 2020 • Mart van Baalen, Christos Louizos, Markus Nagel, Rana Ali Amjad, Ying Wang, Tijmen Blankevoort, Max Welling

We introduce Bayesian Bits, a practical method for joint mixed precision quantization and pruning through gradient based optimization.

Quantization

Up or Down? Adaptive Rounding for Post-Training Quantization

no code implementations • ICML 2020 • Markus Nagel, Rana Ali Amjad, Mart van Baalen, Christos Louizos, Tijmen Blankevoort

In this paper, we propose AdaRound, a better weight-rounding mechanism for post-training quantization that adapts to the data and the task loss.

Quantization
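A toy enumeration (not AdaRound's gradient-based optimization) illustrating the core observation: rounding each weight to its nearest grid point minimizes weight error, but a per-weight up/down choice can give a smaller error on the layer's output:

```python
import itertools
import numpy as np

w = np.array([0.4, 0.4, 0.4])   # weights, quantization scale = 1.0
x = np.array([1.0, 1.0, 1.0])   # calibration input
y = x @ w                       # float output: 1.2

# Enumerate all floor/ceil choices on this tiny layer.
best_err, nearest_err = np.inf, None
for choice in itertools.product([np.floor, np.ceil], repeat=3):
    w_q = np.array([f(v) for f, v in zip(choice, w)])
    err = abs(float(x @ w_q) - float(y))
    if np.array_equal(w_q, np.round(w)):
        nearest_err = err        # round-to-nearest: [0, 0, 0], error 1.2
    best_err = min(best_err, err)  # best choice rounds one weight up: error 0.2
```

AdaRound replaces this exhaustive search with a continuous relaxation optimized against the task loss on calibration data.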

Understanding Neural Networks and Individual Neuron Importance via Information-Ordered Cumulative Ablation

no code implementations • 18 Apr 2018 • Rana Ali Amjad, Kairen Liu, Bernhard C. Geiger

In this work, we investigate the use of three information-theoretic quantities -- entropy, mutual information with the class variable, and a class selectivity measure based on Kullback-Leibler divergence -- to understand and study the behavior of already trained fully-connected feed-forward neural networks.

Classification • General Classification
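A minimal sketch of the entropy quantity, assuming a simple histogram estimator over a neuron's activations (the paper's exact estimator may differ):

```python
import numpy as np

def activation_entropy(acts, n_bins=10):
    """Entropy (in bits) of a neuron's output distribution,
    estimated by histogram binning of its activations over a
    dataset. Near-zero entropy means the neuron is almost
    constant, a natural candidate for ablation."""
    hist, _ = np.histogram(acts, bins=n_bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(1)
varied = rng.uniform(0, 1, size=10_000)   # informative neuron
constant = np.full(10_000, 0.5)           # dead neuron
h_live = activation_entropy(varied)       # close to log2(10) bits
h_dead = activation_entropy(constant)     # 0 bits
```

Ranking neurons by such quantities and ablating them cumulatively, lowest first, is the "information-ordered cumulative ablation" the title refers to.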

Extended Affinity Propagation: Global Discovery and Local Insights

no code implementations • 12 Mar 2018 • Rayyan Ahmad Khan, Rana Ali Amjad, Martin Kleinsteuber

We propose a new clustering algorithm, Extended Affinity Propagation, based on pairwise similarities.

Clustering

Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle

no code implementations • 27 Feb 2018 • Rana Ali Amjad, Bernhard C. Geiger

In this theory paper, we investigate training deep neural networks (DNNs) for classification via minimizing the information bottleneck (IB) functional.

General Classification
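For reference, the information bottleneck functional in its commonly stated form (the paper may use a different parameterization) is minimized over stochastic encoders $p(t\mid x)$:

```latex
\mathcal{L}_{\mathrm{IB}} = I(X;T) - \beta\, I(T;Y)
```

Here $X$ is the input, $Y$ the class variable, $T$ the learned representation, and $\beta$ trades off compression of $X$ against preservation of information about $Y$.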

Co-Clustering via Information-Theoretic Markov Aggregation

no code implementations • 2 Jan 2018 • Clemens Bloechl, Rana Ali Amjad, Bernhard C. Geiger

We present an information-theoretic cost function for co-clustering, i.e., for the simultaneous clustering of two sets based on similarities between their elements.

Clustering

Hard Clusters Maximize Mutual Information

no code implementations • 17 Aug 2016 • Bernhard C. Geiger, Rana Ali Amjad

In this paper, we investigate mutual information as a cost function for clustering and show in which cases hard, i.e., deterministic, clusters are optimal.

Clustering
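A small illustration of the quantity in question, comparing a hard (deterministic) assignment against a maximally soft one on a toy joint distribution:

```python
import numpy as np

def mutual_information(joint):
    """I(X;C) in bits from a joint probability table p(x, c)."""
    px = joint.sum(axis=1, keepdims=True)
    pc = joint.sum(axis=0, keepdims=True)
    mask = joint > 0
    return float((joint[mask] * np.log2(joint[mask] / (px @ pc)[mask])).sum())

# Four equiprobable symbols clustered into two clusters.
hard = np.array([[0.25, 0.0],    # each x maps to exactly one cluster
                 [0.25, 0.0],
                 [0.0, 0.25],
                 [0.0, 0.25]])
soft = np.full((4, 2), 0.125)    # each x split evenly over both clusters

mi_hard = mutual_information(hard)  # 1 bit
mi_soft = mutual_information(soft)  # 0 bits
```

The hard assignment retains a full bit of information about the cluster index while the uniform soft assignment retains none, matching the paper's direction of argument.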
