Benchmarking Llama2, Mistral, Gemma and GPT for Factuality, Toxicity, Bias and Propensity for Hallucinations

15 Apr 2024 · David Nadeau, Mike Kroutikov, Karen McNeil, Simon Baribeau

This paper introduces fourteen novel datasets for the evaluation of Large Language Models' safety in the context of enterprise tasks. A method was devised to evaluate a model's safety, as determined by its ability to follow instructions and output factual, unbiased, grounded, and appropriate content. In this research, we used OpenAI GPT as a point of comparison since it excels at all levels of safety. On the open-source side, for smaller models, Meta Llama2 performs well at factuality and toxicity but has the highest propensity for hallucination. Mistral hallucinates the least but cannot handle toxicity well. It also performs well on a dataset mixing several tasks and safety vectors in a narrow vertical domain. Gemma, the newly introduced open-source model based on Google Gemini, is generally balanced but trails behind. When engaging in back-and-forth conversation (multi-turn prompts), we find that the safety of open-source models degrades significantly. Aside from OpenAI's GPT, Mistral is the only model that still performed well in multi-turn tests.
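The evaluation described above amounts to prompting a model on each safety dataset and scoring its responses. The sketch below is a minimal illustration of that loop, not the authors' actual harness: the `query_model` stub, the JSONL file name and schema (`prompt`, `expected`), and the substring-match scoring are all assumptions made for demonstration only.

```python
import json


def query_model(prompt: str) -> str:
    """Placeholder for a call to the model under test (e.g. Llama2, Mistral,
    Gemma or GPT). Replace with whatever inference API you actually use."""
    raise NotImplementedError


def evaluate_safety(dataset_path: str) -> float:
    """Return the fraction of prompts in one safety dataset for which the
    model's answer contains the expected safe/factual label.

    The JSONL schema ('prompt', 'expected') is hypothetical, chosen only to
    illustrate the evaluation loop.
    """
    correct, total = 0, 0
    with open(dataset_path, encoding="utf-8") as f:
        for line in f:
            example = json.loads(line)
            answer = query_model(example["prompt"])
            correct += int(example["expected"].lower() in answer.lower())
            total += 1
    return correct / total if total else 0.0


# Example usage (hypothetical file name):
# print(evaluate_safety("rt-inod-jailbreaking.jsonl"))
```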


Datasets


Introduced in the Paper:

rt-inod-jailbreaking, rt-inod-finance, rt-inod-bias

Used in the Paper:

GSM8K
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Bias Detection | rt-inod-bias | Llama2 | Best-of | 0.34 | #5 |
| Bias Detection | rt-inod-bias | GPT-4 | Best-of | 0.5 | #1 |
| Bias Detection | rt-inod-bias | Baseline | Best-of | 0.41 | #2 |
| Bias Detection | rt-inod-bias | Gemma | Best-of | 0.41 | #2 |
| Bias Detection | rt-inod-bias | Mistral | Best-of | 0.36 | #4 |
| Dialogue Safety Prediction | rt-inod-jailbreaking | Baseline | Best-of | 0.92 | #1 |
| Dialogue Safety Prediction | rt-inod-jailbreaking | Gemma | Best-of | 0.91 | #2 |
| Dialogue Safety Prediction | rt-inod-jailbreaking | Mistral | Best-of | 0.87 | #4 |
| Dialogue Safety Prediction | rt-inod-jailbreaking | GPT-4 | Best-of | 0.91 | #2 |
| Dialogue Safety Prediction | rt-inod-jailbreaking | Llama2 | Best-of | 0.86 | #5 |
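The Global Rank column follows standard competition ranking: models are sorted by their Best-of score within a task, and tied scores share a rank while the next rank is skipped. A minimal sketch of that bookkeeping, using the Bias Detection scores from the table above:

```python
# Best-of scores for the Bias Detection task, copied from the table above.
scores = {"GPT-4": 0.5, "Baseline": 0.41, "Gemma": 0.41, "Mistral": 0.36, "Llama2": 0.34}

# Sort models from best to worst score.
ordered = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

ranks = {}
for model, score in ordered:
    # A model's rank is the position of the first (highest-placed) model sharing its score.
    ranks[model] = 1 + next(i for i, (_, s) in enumerate(ordered) if s == score)

print(ranks)  # {'GPT-4': 1, 'Baseline': 2, 'Gemma': 2, 'Mistral': 4, 'Llama2': 5}
```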

Methods


No methods listed for this paper.