Story
The Bots of Persuasion: Examining How Conversational Agents' Linguistic Expressions of Personality Affect User Perceptions and Decisions
Key takeaway
LLM-powered chatbots can express distinct personalities through language, which may shape how people perceive them and the decisions they make when interacting with them.
Quick Explainer
The study explored how the language used by conversational agents (CAs) can shape user perceptions and decisions, particularly in the context of charitable giving. The researchers systematically varied key aspects of CA personality, such as attitude, authority, and reasoning style, and observed how these linguistic expressions affected participants' trust, emotional responses, and donation behavior. The findings suggest that even subtle differences in how CAs communicate can have complex and counterintuitive effects, highlighting the need for the HCI community to develop frameworks for identifying and mitigating potential manipulation in conversational AI systems.
Deep Dive
Technical Deep Dive: The Bots of Persuasion
Overview
This technical deep dive examines work by Genç et al. investigating how linguistic expressions of personality in conversational agents (CAs) influence user perceptions and decisions, focusing on the context of charitable giving.
Problem & Context
- Conversational agents (CAs) powered by large language models are becoming increasingly prevalent, with the potential to significantly influence human perceptions, emotions, and decisions.
- However, the specific mechanisms by which CA personalities expressed through language can shape user outcomes remain unclear.
- The study explored this issue in the context of charitable giving, as it involves a complex blend of emotional, logical, and interpersonal factors.
Methodology
- The researchers conducted a 2×2×2 between-subjects factorial study with 360 participants.
- They created eight distinct CA conditions by systematically varying three linguistic aspects of personality (a condition-assignment sketch follows this list):
- Attitude (optimistic vs. pessimistic)
- Authority (authoritative vs. submissive)
- Reasoning (emotional vs. rational)
- Participants interacted with one of the eight CAs and then completed measures of donation behavior, perceptions of the CA, and emotional relatedness.
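As a concrete illustration of the design, the snippet below sketches one way to implement balanced random assignment across the eight cells of a 2×2×2 factorial. This is a minimal sketch, not the authors' code; the names (`LEVELS`, `assign_conditions`) and the seed are illustrative.

```python
import random
from itertools import product

# The three binary linguistic aspects manipulated in the study.
LEVELS = {
    "attitude": ["optimistic", "pessimistic"],
    "authority": ["authoritative", "submissive"],
    "reasoning": ["emotional", "rational"],
}

def assign_conditions(n_participants: int, seed: int = 42) -> list[dict]:
    """Assign participants to the eight factorial cells in balanced random order."""
    # Cross the factor levels to enumerate the 2 x 2 x 2 = 8 conditions.
    cells = [dict(zip(LEVELS, combo)) for combo in product(*LEVELS.values())]
    # Repeat each cell equally often (360 / 8 = 45 participants per cell).
    schedule = cells * (n_participants // len(cells))
    random.Random(seed).shuffle(schedule)
    return schedule

conditions = assign_conditions(360)
print(len(conditions), conditions[0])
```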
Data & Experimental Setup
- Participants were recruited from Prolific, a crowdsourcing platform, with criteria such as English proficiency, access to a computer, and prior experience with charitable donations.
- The CAs were implemented using the GPT-4o language model and interacted with participants via a custom chat application (a persona-prompt sketch follows this list).
- Participants were asked to donate to a fictional "Wildlife Protection Foundation" charity represented by the CA.
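The paper's actual prompts are not reproduced here, but a persona-conditioned CA of this kind is typically built by injecting the condition's traits into a system prompt. The sketch below uses the OpenAI Python SDK with GPT-4o, the model named in the study; the `PERSONA_TEMPLATE` wording and the `chat_turn` helper are assumptions for illustration.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical persona instructions; the study's real prompts are not shown here.
PERSONA_TEMPLATE = (
    "You are a conversational agent representing the Wildlife Protection "
    "Foundation, a charity. Express a {attitude} attitude, adopt an "
    "{authority} stance, and justify your appeals using {reasoning} arguments."
)

def chat_turn(condition: dict, history: list[dict], user_message: str) -> str:
    """Send one user turn to the persona-conditioned model and return its reply."""
    messages = (
        [{"role": "system", "content": PERSONA_TEMPLATE.format(**condition)}]
        + history
        + [{"role": "user", "content": user_message}]
    )
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content
```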
Results
- The eight CA conditions did not significantly affect the amount donated to the charity.
- However, CAs with a pessimistic attitude solicited marginally higher donations than those with an optimistic attitude.
- Participants' perceptions of the CA's trustworthiness, competence, and closeness, as well as their emotional valence and arousal, were stronger predictors of donation behavior.
- CAs expressing a pessimistic attitude were perceived as less trustworthy, less competent, and riskier, and elicited lower emotional states in participants.
- Interactions between the linguistic aspects revealed complex combinatorial effects; for example, pessimistic-rational CAs were perceived as riskier than optimistic-rational CAs (a factorial-analysis sketch follows this list).
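The write-up does not spell out the exact analysis pipeline; a standard approach for a 2×2×2 between-subjects design is a factorial ANOVA over a linear model, plus a regression using the perception measures as predictors. The sketch below assumes a tidy data file and column names (`donation`, `trustworthiness`, ...) that are not taken from the paper.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Assumed layout: one row per participant, condition labels plus outcome measures.
df = pd.read_csv("donations.csv")  # hypothetical file name

# Full factorial model: three main effects and all two- and three-way interactions.
model = smf.ols("donation ~ attitude * authority * reasoning", data=df).fit()
print(anova_lm(model, typ=2))  # Type II ANOVA table for the 2x2x2 design

# Do perception measures predict donations beyond the manipulations themselves?
percep = smf.ols("donation ~ trustworthiness + competence + closeness", data=df).fit()
print(percep.summary())
```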
Interpretation
- The counterintuitive finding that pessimistic CAs solicited higher donations despite being perceived more negatively suggests a concerning "affective dark pattern" where negative emotional states can be leveraged to increase compliance.
- The complex interactions between linguistic aspects of CA personalities highlight the importance of examining combinatorial effects rather than singular traits.
- These results underscore the need for the HCI community to develop frameworks for identifying and mitigating manipulative mechanisms in conversational AI systems.
Limitations & Uncertainties
- The study used a single, brief interaction and a fictional charity context, limiting ecological validity.
- Participant characteristics and cultural factors may influence how linguistic cues are interpreted and affect persuasive outcomes.
- While the study was sufficiently powered to detect medium effects, the observed effects were generally small, warranting cautious interpretation (a power-analysis sketch follows this list).
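For context on the power claim, the snippet below sketches an a priori power calculation for an eight-cell between-subjects layout using statsmodels. Cohen's f = 0.25 ("medium"), α = .05, and 80% power are conventional defaults, not values reported from the paper.

```python
from statsmodels.stats.power import FTestAnovaPower

# Total N needed to detect a medium effect (Cohen's f = 0.25) across 8 groups
# at alpha = .05 with 80% power; conventional inputs, not the paper's own.
n_total = FTestAnovaPower().solve_power(
    effect_size=0.25, alpha=0.05, power=0.80, k_groups=8
)
print(round(n_total), "participants in total; compare with the study's N = 360")
```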
What Comes Next
- Future research should explore more naturalistic, long-term interactions, as well as cross-cultural differences in how linguistic expressions are construed.
- Formal modeling of affective mechanisms and mediation analyses could help unpack the complex pathways by which CA personalities shape user perceptions and decisions (see the mediation sketch after this list).
- Developing design frameworks and interventions to identify and mitigate manipulative "affective dark patterns" in conversational AI is a critical next step.
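As a pointer toward the mediation analyses mentioned above, the sketch below walks through a simple two-regression (Baron-Kenny-style) decomposition of an indirect effect with statsmodels; current practice would add bootstrapped confidence intervals for the indirect path. The variable and file names are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("donations.csv")  # hypothetical file; one row per participant

# Path a: does the attitude manipulation shift the candidate mediator (valence)?
m_path = smf.ols("valence ~ attitude", data=df).fit()

# Path b: does the mediator predict donations, controlling for the manipulation?
y_path = smf.ols("donation ~ valence + attitude", data=df).fit()

# Rough point estimate of the indirect effect (a * b); bootstrap for inference.
a = m_path.params["attitude[T.pessimistic]"]
b = y_path.params["valence"]
print("indirect effect (a * b):", a * b)
```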
