Evaluating Vision-Language Models for Emotion Recognition

Research Poster, Engineering, 2025 Graduate Exhibition

Presentation by Sree Bhattacharyya

Exhibition Number 91

Abstract

Large Vision-Language Models (VLMs) have achieved unprecedented success on several objective multimodal reasoning tasks. However, to further enhance their capacity for empathetic and effective communication with humans, it is crucial to improve how VLMs process and understand emotions. Despite significant research attention on improving affective understanding, there is a lack of detailed evaluations of VLMs on emotion-related tasks, evaluations that could help inform downstream fine-tuning efforts. In this work, we present the first comprehensive evaluation of VLMs for recognizing emotions evoked by images. We create a benchmark for the task of evoked emotion recognition and study VLM performance on this task from the perspectives of correctness and robustness. Through several experiments, we identify important factors on which emotion recognition performance depends, and characterize the various errors VLMs make in the process. Finally, we pinpoint potential causes of these errors through a human evaluation study. We use our experimental results to inform recommendations for the future of emotion research in the context of VLMs.
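To make the evaluation setup concrete, the sketch below shows one way a correctness-and-robustness harness for evoked emotion recognition could look. It is a minimal illustration, not the benchmark itself: the query_vlm interface, the eight-label emotion set, the prompt paraphrases, and the dataset format are all assumptions, since the abstract does not specify the actual taxonomy, prompts, or models used.

```python
import random

# Hypothetical label set; the benchmark's actual emotion taxonomy is not given in the abstract.
EMOTION_LABELS = ["amusement", "awe", "contentment", "excitement",
                  "anger", "disgust", "fear", "sadness"]

# Paraphrased prompts used to probe robustness to changes in wording (illustrative only).
PROMPT_VARIANTS = [
    "Which emotion does this image evoke in a viewer? Answer with one word: {labels}.",
    "What feeling would most people have when looking at this image? Choose one: {labels}.",
]

def evaluate(query_vlm, dataset):
    """Score a VLM on evoked-emotion recognition.

    query_vlm(image, prompt) -> str  # placeholder interface for any VLM backend
    dataset: list of (image, gold_label) pairs
    Returns per-prompt accuracy (correctness) and a simple cross-prompt
    consistency score (a proxy for robustness).
    """
    per_prompt_correct = [0] * len(PROMPT_VARIANTS)
    consistent = 0
    for image, gold in dataset:
        answers = []
        for i, template in enumerate(PROMPT_VARIANTS):
            prompt = template.format(labels=", ".join(EMOTION_LABELS))
            answer = query_vlm(image, prompt).strip().lower()
            answers.append(answer)
            if answer == gold:
                per_prompt_correct[i] += 1
        # Robustness proxy: does the model give the same label for every paraphrase?
        consistent += int(len(set(answers)) == 1)
    n = len(dataset)
    return {
        "accuracy_per_prompt": [c / n for c in per_prompt_correct],
        "prompt_consistency": consistent / n,
    }

if __name__ == "__main__":
    # Dummy stand-in for a real VLM call, so the harness runs end to end.
    def random_vlm(image, prompt):
        return random.choice(EMOTION_LABELS)

    fake_data = [(f"img_{i}.jpg", random.choice(EMOTION_LABELS)) for i in range(100)]
    print(evaluate(random_vlm, fake_data))
```

Swapping random_vlm for a real model call would let the same harness report both how often a model picks the annotated evoked emotion and how stable its answers are under prompt rewording.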

Importance

This is one of the first works to comprehensively benchmark large vision-language models on the task of evoked emotion recognition. It provides directions for enhancing large models on subjective tasks such as emotional reasoning.