The Psychology of Gen AI Series

When Language Becomes Targeting: How Gender Cues Shape AI Recommendations
Summary

As generative AI tools increasingly influence product discovery and decision-making, subtle cues in user language can shape what consumers are shown—and how options are framed. This research examines how implicit and explicit gender signals affect AI-generated product recommendations, revealing systematic differences in categories, brand repetition, descriptive language and price information. The findings raise important questions for advertisers and researchers about bias, brand visibility and the growing cultural role of AI in shaping consumer norms.

When Style Becomes Signal: How Gendered Language Shapes Generative AI Output

As generative AI tools become embedded in advertising and marketing research workflows, questions about bias increasingly extend beyond outputs to the interaction itself. This study examines whether gendered patterns can enter AI through subtle differences in how prompts are phrased. By systematically varying linguistic styles using psychologically grounded traits, the research shows that implicit, style-based gender cues shape AI prompt construction more strongly than explicit gender labels, with important implications for how bias may propagate upstream in AI-assisted marketing and research applications.

Why Synthetic Respondents Flatten Consumer Sentiment

A new ARF Psych of GenAI experiment reveals that large language models apply a rigid, rule-driven logic when evaluating privacy scenarios—even when humans typically shift their reasoning based on framing, emotion and social context. Unlike consumers, who blend intuition, feeling and social perspective into their judgments, GPT-4o relied on a single internal rule across all testing conditions: data use is acceptable only with explicit consent. This consistency offers value for certain analytic tasks but exposes limits for advertising research that depends on emotional nuance and context-sensitive consumer insight.

Steering AI Bias: How Persona Prompts Unlock Nuance in Gen AI Responses

Large language models mirror human cognitive biases—but can those biases be guided? New ARF and MSI research reveals that while loss aversion remains deeply ingrained in AI responses, introducing persona information, such as demographics or personality traits, can increase variability and make outputs more nuanced. For advertisers and researchers, this opens the door to designing strategic prompts that elicit richer, more human-like responses.

The Bias Beneath the Average: What Loss Aversion Reveals About How AI Thinks

This experiment reveals how models like ChatGPT not only replicate human cognitive biases, such as loss aversion, but also compress variability into uniform patterns. This raises concerns for advertising researchers who rely on authentic insights into consumer behavior.

Acting the Part: Can AI Think Like a CFO? Using Personas to Test Generative AI’s Strategic Reasoning

The ARF tested whether generative AI can adopt executive personas and provide credible, role-specific strategies. This experiment highlights how AI performs when “thinking like” organizational leaders, where it falls short on institutional logic and feasibility, and how human-in-the-loop feedback can refine outputs into more nuanced, useful results.

PG VS R: The Psychology of Prompted Thought

Can sanitized AI tools truly capture the nuance required for advertising and brand research? Is a less restrained one more likely to produce skewed results? This comparative deep dive from ARF and MSI explores how two popular large language models—ChatGPT-4o and Grok 3—respond when prompted with complex topics. The findings highlight how content moderation affects not only tone and specificity, but the very boundaries of inquiry. For advertising researchers navigating sensitive brand perception topics, understanding these model tradeoffs is essential.

Alternative Explanations: Can AI Rethink Its Own Reasoning?

Can AI challenge its own conclusions rather than merely reinforcing them? In this ARF experiment, researchers explored whether large language models (LLMs) like ChatGPT can go beyond efficiency and exhibit deeper critical thinking skills. By prompting AI to evaluate and compare hypotheses—including its own—this study reveals how LLMs can serve as interpretive collaborators in research and theoretical reasoning.
