Search Results for author: Junda Lu

Found 3 papers, 2 papers with code

Dataset Distillation via Adversarial Prediction Matching

1 code implementation • 14 Dec 2023 • Mingyang Chen, Bo Huang, Junda Lu, Bing Li, Yi Wang, Minhao Cheng, Wei Wang

This ensures the memory efficiency of our method and provides a flexible trade-off between time and memory budgets, allowing us to distil ImageNet-1K using as little as 6.5 GB of GPU memory.

DialogBench: Evaluating LLMs as Human-like Dialogue Systems

no code implementations • 3 Nov 2023 • Jiao Ou, Junda Lu, Che Liu, Yihong Tang, Fuzheng Zhang, Di Zhang, Kun Gai

In this paper, we propose DialogBench, a dialogue evaluation benchmark containing 12 dialogue tasks that probe the capabilities LLMs should have as human-like dialogue systems.

Dialogue Evaluation

Boosting Accuracy and Robustness of Student Models via Adaptive Adversarial Distillation

1 code implementation • CVPR 2023 • Bo Huang, Mingyang Chen, Yi Wang, Junda Lu, Minhao Cheng, Wei Wang

Thus, recent studies have turned to adversarial distillation (AD), which aims to inherit not only the prediction accuracy but also the adversarial robustness of a robust teacher model under the paradigm of robust optimization.

Adversarial Robustness • Knowledge Distillation
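The AD objective described above, combining a clean-accuracy term with a term that matches a robust teacher's predictions on adversarial inputs, can be sketched as follows. This is a minimal NumPy illustration of a generic adversarial-distillation loss, not the paper's exact (adaptive) formulation; the function names, `alpha` weighting, and temperature `tau` defaults are all assumptions for the sketch.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_div(p, q, eps=1e-12):
    # KL(p || q), averaged over the batch.
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def ad_loss(student_clean_logits, student_adv_logits,
            teacher_adv_logits, labels, alpha=0.5, tau=4.0):
    """Generic adversarial-distillation loss (illustrative only).

    alpha balances the clean cross-entropy term against the robustness
    (distillation) term; tau is the usual distillation temperature.
    Both defaults are hypothetical, not taken from the paper.
    """
    # Clean term: cross-entropy of the student on natural examples.
    p_student = softmax(student_clean_logits)
    ce = -np.mean(np.log(p_student[np.arange(len(labels)), labels] + 1e-12))
    # Robustness term: match the robust teacher's soft predictions
    # on adversarially perturbed inputs (temperature-scaled KL).
    distill = kl_div(softmax(teacher_adv_logits / tau),
                     softmax(student_adv_logits / tau)) * tau ** 2
    return (1 - alpha) * ce + alpha * distill
```

In a real training loop the adversarial logits would come from a perturbation attack (e.g. PGD) on the student's inputs; here they are just placeholders passed in by the caller.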
