Webinars
MSI Analytics Conference Rebroadcast: Applying “Explainable” AI: Using Theory to Understand AI Emotion Models
This session is a rebroadcast from MSI’s 2023 Analytics Conference.
Images, music, speech, and text often elicit emotional responses in customers. Many machine learning models have been proposed to predict these emotional responses for subsequent downstream tasks. Deep learning models in particular often achieve good predictive accuracy. However, these models are also frequently viewed as opaque black boxes, raising concerns about their generalizability. Incorporating theory about emotions into AI can help alleviate these concerns by making such models more explainable while maintaining good predictive accuracy.
Speaker
Hortense Fong is an Assistant Professor of Marketing at Columbia Business School. She uses machine learning, econometric, and experimental methods to study how emotions impact consumer behavior. A distinguishing feature of her interests is going beyond ML's use in prediction to study how to incorporate domain-specific theoretical and managerial knowledge into ML systems and make them more interpretable. She also has a broader interest in questions at the interface of marketing and society (e.g., fairness). Professor Fong received a Ph.D. in Quantitative Marketing from Yale University.