The Psychology of Gen AI Series
PG VS R: The Psychology of Prompted Thought
Can sanitized AI tools truly capture the nuance required for advertising and brand research? Is a less restrained one more likely to produce skewed results? This comparative deep dive, from the ARF and MSI, explores how two popular large language models—ChatGPT-4o and Grok 3—respond when prompted with complex topics. The findings highlight how content moderation affects not only tone and specificity but the very boundaries of inquiry. For advertising researchers navigating sensitive brand perception topics, understanding these model tradeoffs is essential.
Acting the Part: Can AI Think Like a CFO? Using Personas to Test Generative AI’s Strategic Reasoning
The ARF tested whether generative AI can adopt executive personas and provide credible, role-specific strategies. This experiment highlights how AI performs when "thinking like" organizational leaders, where it falls short on institutional logic and feasibility, and how human-in-the-loop feedback can refine outputs into more nuanced, worthwhile results.
The Bias Beneath the Average: What Loss Aversion Reveals About How AI Thinks
This experiment reveals that models like ChatGPT not only replicate human cognitive biases, such as loss aversion, but also compress natural variability into uniform response patterns. This raises concerns for advertising researchers who rely on authentic insights into consumer behavior.
Steering AI Bias: How Persona Prompts Unlock Nuance in Gen AI Responses
Large language models mirror human cognitive biases—but can those biases be guided? New ARF and MSI research reveals that while loss aversion remains deeply ingrained in AI responses, introducing persona information, such as demographics or personality traits, can increase variability and make outputs more nuanced. For advertisers and researchers, this opens the door to designing strategic prompts that spark richer, more human-like responses.