Journal article
arXiv.org, 2024
APA
Zou, H., Wang, P., Yan, Z., Sun, T., & Xiao, Z. (2024). Can LLM "Self-report"?: Evaluating the Validity of Self-report Scales in Measuring Personality Design in LLM-based Chatbots. ArXiv.org.
Chicago/Turabian
Zou, Huiqi, Pengda Wang, Zihan Yan, Tianjun Sun, and Ziang Xiao. “Can LLM ‘Self-Report’?: Evaluating the Validity of Self-Report Scales in Measuring Personality Design in LLM-Based Chatbots.” arXiv.org (2024).
MLA
Zou, Huiqi, et al. “Can LLM ‘Self-Report’?: Evaluating the Validity of Self-Report Scales in Measuring Personality Design in LLM-Based Chatbots.” arXiv.org, 2024.
BibTeX
@article{huiqi2024a,
title = {Can LLM "Self-report"?: Evaluating the Validity of Self-report Scales in Measuring Personality Design in LLM-based Chatbots},
year = {2024},
journal = {arXiv.org},
author = {Zou, Huiqi and Wang, Pengda and Yan, Zihan and Sun, Tianjun and Xiao, Ziang}
}
A chatbot's personality design is key to interaction quality. As chatbots have evolved from rule-based systems to those powered by large language models (LLMs), evaluating the effectiveness of their personality design has become increasingly complex, particularly due to the open-ended nature of interactions. A recent and widely adopted method for assessing the personality design of LLM-based chatbots is the use of self-report questionnaires. These questionnaires, often borrowed from established human personality inventories, ask the chatbot to rate itself on various personality traits. Can LLM-based chatbots meaningfully "self-report" their personality? We created 500 chatbots with distinct personality designs and evaluated the validity of their self-report personality scores by examining human perceptions formed during interactions with these chatbots. Our findings indicate that the chatbot's answers on human personality scales exhibit weak correlations with both human-perceived personality traits and the overall interaction quality. These findings raise concerns about both the criterion validity and the predictive validity of self-report methods in this context. Further analysis revealed the role of task context and interaction in assessing a chatbot's personality design. We further discuss design implications for creating more contextualized and interactive evaluations.