Speaker
Description
Despite advances in conversational systems, their evaluation remains a challenging problem. Current evaluation paradigms often rely on costly, homogeneous human annotators or on oversimplified automated metrics, leaving a critical gap in the evaluation of socially aligned conversational agents, for which pluralistic values (i.e., acknowledging diverse human experiences) are essential to reflect the inherently subjective and contextual nature of dialogue quality. In this paper, we propose CINEMETRIC, a novel framework that operationalizes pluralistic alignment by leveraging the perspectivist capacities of large language models (LLMs). Our approach introduces a mechanism in which an LLM simulates a diverse set of evaluators, each with a distinct persona defined by attributes such as gender, cognitive style, and personality type. These role-played evaluators independently assess each conversational turn, as well as the overall dialogue, along a range of evaluation dimensions. By integrating rich, persona-driven annotations into the evaluation pipeline, CINEMETRIC offers a scalable and more human-aligned alternative to traditional dialogue evaluation methods.
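To make the persona-driven evaluation mechanism concrete, the sketch below illustrates one plausible reading of it: a set of personas, each defined by illustrative attributes, independently scores every dialogue turn along a few evaluation dimensions, and the scores are then aggregated. This is a minimal illustration, not the authors' implementation; the `Persona` class, the `evaluate_dialogue` function, the `score_fn` callback standing in for an LLM call, and the example dimensions are all assumptions introduced here.

```python
# Minimal sketch (not the CINEMETRIC implementation) of persona-driven
# dialogue evaluation: each persona independently rates every turn along
# several dimensions, and ratings are aggregated across personas.
from dataclasses import dataclass
from statistics import mean
from typing import Callable, Dict, List

@dataclass
class Persona:
    name: str             # hypothetical attributes; the abstract mentions
    gender: str           # gender, cognitive style, personality type, and more
    cognitive_style: str
    personality: str

    def prompt_prefix(self) -> str:
        # Role-play instruction handed to the LLM before each rating request.
        return (f"You are {self.name}, a {self.gender} evaluator with an "
                f"{self.cognitive_style} cognitive style and a "
                f"{self.personality} personality. Rate the following "
                f"dialogue turn from your own perspective.")

# Placeholder evaluation dimensions; the actual set is defined by the framework.
DIMENSIONS = ["coherence", "empathy", "engagingness"]

def evaluate_dialogue(
    turns: List[str],
    personas: List[Persona],
    score_fn: Callable[[str, str, str], float],  # (persona prompt, turn, dimension) -> score
) -> Dict[str, float]:
    """Collect independent per-persona, per-turn ratings and average them."""
    scores: Dict[str, List[float]] = {dim: [] for dim in DIMENSIONS}
    for persona in personas:
        prefix = persona.prompt_prefix()
        for turn in turns:
            for dim in DIMENSIONS:
                scores[dim].append(score_fn(prefix, turn, dim))
    # Simple mean aggregation; a pluralistic framework may instead keep the
    # full distribution of persona judgments rather than collapsing it.
    return {dim: mean(vals) for dim, vals in scores.items()}

if __name__ == "__main__":
    personas = [
        Persona("Evaluator A", "female", "analytical", "introverted"),
        Persona("Evaluator B", "male", "intuitive", "extroverted"),
    ]
    # Stub scorer standing in for a real LLM query.
    stub = lambda prefix, turn, dim: 3.0
    print(evaluate_dialogue(["Hi!", "How can I help you today?"], personas, stub))
```

In practice, `score_fn` would wrap a call to a language model conditioned on the persona prompt, and the aggregation step could preserve per-persona scores to reflect pluralistic rather than averaged judgments.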