MLLM Evaluation: Aesthetics
1 paper with code • 0 benchmarks • 1 dataset
Benchmarks
These leaderboards are used to track progress in MLLM Evaluation: Aesthetics
No evaluation results yet.
Most implemented papers
AesBench: An Expert Benchmark for Multimodal Large Language Models on Image Aesthetics Perception
An obvious obstacle lies in the absence of a specific benchmark to evaluate the effectiveness of MLLMs on aesthetic perception.