
Should AI Insights Be Shared With Consumers?

Tomas Chamorro-Premuzic, a psychology professor at University College London, argues in a column for Harvard Business Review that while artificial intelligence (AI) technology is helping brands understand their customers better, it can also help customers understand themselves better.

“Your choice of music reveals the degree to which you are extraverted, curious, and neurotic; your choice of movies reveals the degree to which you are intelligent, conscientious, and agreeable; your Facebook data reveals whether you are conservative or liberal, sociable or introverted, optimistic or pessimistic; your tweets reveal whether you are narcissistic or not, and so on,” he writes.

Prof. Chamorro-Premuzic posits that AI could be used to discover, interpret and share these patterns. The algorithmic insights, he adds, can offer “granular and personalized” reads into preferences and personality traits in real time to help individuals “become more self-aware.”

“Just as wearables can translate physiological signals into actionable feedback on our fitness, energy, sleepiness, or stress levels, AI could detect changing patterns to our habits to alert us about increases in negative or positive affect, curiosity, or aggression,” he writes.

AI’s potential for therapy – at least as a complementary tool – is being increasingly examined, particularly given the financial and logistical barriers facing human-centered treatment.

In the UK, an AI chatbot, Limbic Access, was recently approved for medical use. It promises to predict mental health disorders with 93 percent accuracy.

A recent NPR article stated, “Advances in artificial intelligence — such as ChatGPT — are increasingly being looked to as a way to help screen for, or support, people who are dealing with isolation, or mild depression or anxiety. Human emotions are tracked, analyzed and responded to, using machine learning that tries to monitor a patient’s mood, or mimic a human therapist’s interactions with a patient.”

Prof. Chamorro-Premuzic said brands sharing AI-driven insights into their consumers’ personalities could help overcome the “creepy” feeling that comes with personalized targeting. He wrote, “Brands will enhance their ethical reputation and trustworthiness if they share this understanding with consumers, persuading them that there is no conflict between knowing them well, and helping them know themselves well, when done in an ethical and transparent way.”

BrainTrust

"Sure, you can derive some basic things by looking at habits, but personality is deeply personal and complex and can't be assessed by superficialities."

Neil Saunders

Managing Director, GlobalData


"With all due respect to Professor Chamorro-Premuzic – at the present time, his thesis is absurd. AI can’t generate “insights” in a therapeutic sense."

Ryan Mathews

Founder, CEO, Black Monk Consulting


"No. AI-based “insights” have a high likelihood today to be wrong. And we do not yet understand when and how those errors occur."

Doug Garnett

President, Protonik


Discussion Questions

Should brands or retailers be sharing AI-driven insights with their customers for self-actualization and other personal development reasons? Do you see a way to do it, and would such sharing reduce the “creepiness” of personalization?


10 Comments
Neil Saunders
Famed Member
11 months ago

I find much of this ridiculous and typically academic. Choice of movies does not reveal all that much about personality. Any given movie is attended by a whole variety of people with different backgrounds, traits, beliefs and approaches to life. The same goes for music and every other everyday consumption experience. Sure, you can derive some basic things by looking at habits, but personality is deeply personal and complex and can’t be assessed by superficialities. I also think retailers have to be extremely careful in how they use this. Imagine getting an email from Target saying you are depressed because you’re buying candy!

Scott Norris
Active Member
Reply to  Neil Saunders
11 months ago

We remember how well those “congratulations, you’re expecting!” mailings did a few years ago!

Mark Ryski
Noble Member
11 months ago

This domain is moving so fast that most retailers (and businesses in general) are hanging on, trying to determine how to use and apply AI-driven insights. As we have seen in recent months, concerns about strange and disturbing answers from generative AI bots are common. When you consider how unsophisticated promotional targeting still is, I would be very concerned about what these brand/retailer AI insights might reveal. Retailers should be very cautious about how they use this technology and what they share with their customers. From my perspective, this increases the “creepiness” factor.

Doug Garnett
Active Member
11 months ago

No. AI-based “insights” have a high likelihood today to be wrong. And we do not yet understand when and how those errors occur.

Retailers run major risks of offending customers by integrating any AI-based recommendations.

Richard Hernandez
Active Member
Reply to  Doug Garnett
11 months ago

Yes, from what I have seen so far, the insights are off. It’s too early to take them seriously. I see “Minority Report” written all over this.

David Spear
Active Member
11 months ago

Insights can be derived through a variety of means (basic descriptive analytics or fancy AI-driven analytics). Regardless of method, the retailer ought to leverage these new learnings in prudent ways and always adhere to current privacy and governance guidelines/laws. I agree with Mark Ryski on the pace at which AI is moving. Of course this can be good, but I foresee a retailer of significant size getting into trouble because several algorithms arrived at the wrong conclusion, weren’t validated, or sent insights in real time, causing harm to the consumer. Yes, today’s buzz is all about AI, but retailers must be wary and take a slow, methodical, data-aware approach to it; otherwise, it could become highly problematic.

Ryan Mathews
Trusted Member
11 months ago

With all due respect to Professor Chamorro-Premuzic – at the present time, his thesis is absurd. AI can’t generate “insights” in a therapeutic sense. All it can do is indicate the degree of deviation from a mean established by surveying an artificially manufactured community. Watching rom-coms doesn’t mean you are intelligent. It means that, within a specific, limited, and artificially constructed selection of intelligent people, a majority enjoy romantic comedies. Haptics (wearables) measure actual, standardized physical responses. Data from them is not totally “black and white” either, but it is significantly more objective. So brands should absolutely not share “insights” generated by tragically imperfect software. Not only do I think this is actually “creepier” than so-called – but not actual – personalization, it is frankly dangerous. We will see what Chamorro-Premuzic says when the first few unstable consumers leap off a bridge based on what they interpret as a negative generative AI diagnosis. These systems are in their (relative) infancy. We need to stop thinking of them as mature and complete.

Brandon Rael
Active Member
11 months ago

Personalization continues to be the Holy Grail for retailers and brands. In exchange for personalized experiences, consumers have been willing to share their personal data, shopping history, and browsing data. This has been an evolving yet, for the most part, workable operating model, especially when there is a trusting relationship between retailers and consumers, with clear and transparent policies on data privacy and governance.

However, what Prof. Chamorro-Premuzic proposes in the form of brands and retailers sharing AI-driven insights with consumers goes against the creepiness principle. An enormous amount of data is being stored and transferred to power these AI insights. Sharing this level of AI insight with consumers would be very concerning, as it essentially shows consumers how the “sausage is made,” which is never a good thing.

Mohamed Amer, PhD
Active Member
11 months ago

Imagine if your favorite brand sent you a personalized insight report thoroughly couched in “ethical and transparent” terms. That notion would effectively change my relationship with a brand from one in which I am the subject, deriving tangible value and self-identity through my choices, to one in which I become the object, at the mercy of AI-generated algorithms using me as a guinea pig. No thanks, Prof. Chamorro-Premuzic. We don’t need a bunch of mini-big brothers to enlighten us.

Roland Gossage
Member
11 months ago

Rather than discuss whether to share AI-driven insights with customers, I think it would be more valuable to discuss sharing with them the purpose of data capture and how, when combined with AI-driven analysis, it benefits their shopping experience.

A Salesforce survey (https://c1.sfdcstatic.com/content/dam/web/en_us/www/documents/e-books/state-of-the-connected-customer-report-second-edition2018.pdf) found 86% of customers are more likely to trust companies with their information if they explain how it provides a better experience, and 78% of customers are more likely to do so if companies use their data to fully personalize the customer experience. If you look at it in reverse, 86% of people get upset when you’re using their data and they’re not seeing a benefit.

Data collection becomes an issue when the customer feels that their info is being used to serve someone else’s cause, rather than to customize their shopping experience.