
Can Large Language Models Evaluate Personality from Asynchronous Video Interviews?

  • Amsterdam Leadership Lab (MF-D134), Van der Boechorststraat 7, 1081 BT Amsterdam, Netherlands

The swift advancement of Large Language Models (LLMs), with their strong zero-shot performance on language tasks, has significantly lowered the cost and technical barriers to developing AI systems for automatic personality and interview performance evaluation. However, our understanding remains limited regarding whether LLM-based evaluation adheres to the psychometric standards that govern the assessment methodologies employed by human evaluators. In this talk, I will present a comprehensive assessment of the validity, reliability, fairness, and rating patterns of GPT-3.5 and GPT-4 (the backend models for ChatGPT) for automatic personality and interview performance evaluation. Our study addresses two research questions: Performance (can LLMs provide valid, reliable, and fair predictions?) and Interpretability (do LLMs follow rating patterns similar to those of human annotators?). Exploring these two questions can help us understand the potential response motivations and reasoning modes of LLMs, thereby facilitating the development of more trustworthy and human-friendly LLMs for human-related applications.
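For a flavor of what such a psychometric evaluation can involve, here is a minimal sketch, not the study's actual code, of how convergent validity, inter-model agreement, and a simple leniency check might be computed. The rating arrays are hypothetical placeholders invented for illustration.

```python
# Minimal sketch of psychometric checks on LLM ratings (illustrative only).
# The rating arrays below are hypothetical; the actual study's data differ.
import numpy as np
from scipy.stats import pearsonr

# Hypothetical extraversion ratings for 10 interviewees on a 1-5 scale.
human_ratings = np.array([3.5, 2.0, 4.0, 3.0, 4.5, 2.5, 3.5, 4.0, 2.0, 3.0])
gpt35_ratings = np.array([3.0, 2.5, 4.0, 3.5, 4.0, 2.0, 3.0, 4.5, 2.5, 3.5])
gpt4_ratings  = np.array([3.5, 2.0, 4.5, 3.0, 4.0, 2.5, 3.5, 4.0, 2.0, 3.5])

# Convergent validity: does the model rank interviewees like humans do?
r_gpt4, p_gpt4 = pearsonr(human_ratings, gpt4_ratings)
print(f"GPT-4 vs. human: r = {r_gpt4:.2f} (p = {p_gpt4:.3f})")

# Inter-model agreement: do the two backend models rate consistently?
r_models, _ = pearsonr(gpt35_ratings, gpt4_ratings)
print(f"GPT-3.5 vs. GPT-4: r = {r_models:.2f}")

# Rating-pattern check: systematic leniency or severity relative to humans.
print(f"GPT-4 mean shift vs. humans: {np.mean(gpt4_ratings - human_ratings):+.2f}")
```

A full assessment would extend this with, for example, intraclass correlations for reliability and subgroup comparisons for fairness, but the correlation-and-bias pattern above captures the basic logic.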

Tianyi Zhang is a postdoctoral researcher in the Organizational Psychology section at Vrije Universiteit Amsterdam. He received his PhD from the Faculty of Electrical Engineering, Mathematics & Computer Science (EEMCS) at Delft University of Technology. He was also affiliated with the Distributed & Interactive Systems (DIS) group at Centrum Wiskunde & Informatica (CWI), the national research institute for mathematics and computer science in the Netherlands. His research interests lie in machine learning and deep learning for human-computer interaction, affective computing, and personality recognition.