no code implementations • 16 May 2024 • Abhishek Divekar, Greg Durrett
Large language models (LLMs) are versatile and can address many tasks, but for computational efficiency, it is often desirable to distill their capabilities into smaller student models.
1 code implementation • 13 Nov 2018 • Abhishek Divekar, Meet Parekh, Vaibhav Savla, Rudra Mishra, Mahesh Shirole
We apply SMOTE oversampling and random undersampling to create a balanced version of NSL-KDD, and show that the skewed class distributions of KDD-99 and NSL-KDD hamper classifier performance on the minority classes (U2R and R2L), leading to possible security risks.
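The balancing approach mentioned above combines two standard techniques: SMOTE, which synthesizes new minority-class samples by interpolating between a sample and one of its nearest minority-class neighbours, and random undersampling, which discards a random subset of the majority class. A minimal numpy-only sketch of both (the function names and parameters here are illustrative, not from the paper, which would typically use a library such as imbalanced-learn):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=0):
    """Minimal SMOTE sketch: create n_new synthetic minority samples by
    interpolating between a random minority sample and one of its k
    nearest minority-class neighbours (Euclidean distance)."""
    rng = np.random.default_rng(seed)
    n = len(X_min)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(n)
        # distances from X_min[i] to every minority sample
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]  # skip the sample itself
        j = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

def random_undersample(X_maj, n_keep, seed=0):
    """Random undersampling sketch: keep a random subset of the majority
    class so its size matches the (augmented) minority class."""
    rng = np.random.default_rng(seed)
    idx = rng.choice(len(X_maj), size=n_keep, replace=False)
    return X_maj[idx]
```

Oversampling the rare classes (such as U2R and R2L) while undersampling the dominant ones yields a dataset where classifiers are no longer rewarded for ignoring the minority classes.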