Are AI-generated ads worth the risks?

A university study determined that only half of artificial intelligence-generated ads disclose that they were created by bots, potentially leaving consumers with false beliefs or causing confusion and dissatisfaction.

Researchers from the University of Kansas and Florida State University analyzed 1,375 AI-generated ads found on social media, news sites, search engines and video platforms, exploring how heavily such content is used in programmatic advertising.

The primary problem with the lack of transparency is that humans must follow guidelines set forth by agencies such as the FCC and FTC when creating advertising content, but AI is not bound by such restrictions so far.

“AI is not just a passive technology anymore,” said Vaibhav Diwanji, lead author of the study and a professor of journalism & mass communications at the University of Kansas, in a press release. “It’s actively being engaged in what we think — and in a way, how we make our decisions. The process has become more automated and is taking over the role of creative content online.”

The arrival of ChatGPT, DALL-E and other advanced AI tools has sparked speculation on how the technology may impact search advertising as well as certain jobs in sectors such as journalism, higher education and software design.

Some have called out the technologies’ shortcomings as replacements for humans. CNET issued corrections for mistakes in over half of the AI-written articles recently attributed to its CNET Money team, including instances of plagiarism.

AI-generated ad content should significantly reduce production costs, leading to speculation on whether marketing costs overall could be coming down.

According to an Adweek article, brands would likely employ AI-generated ads if engagement, ratings and reach prove compelling. On the downside, marketers will likely be busy cleaning up errors and will need to filter through a flood of inferior ad content hitting the marketplace.

“The internet has no shortage of mediocre content at scale,” Jon Roberts, the chief innovation officer at Dotdash Meredith, told Adweek. “This is a new version of an age-old temptation.”

Discussion Questions

Will AI-generated advertising content drive more benefits or chaos for marketers? What advice would you have for retailers or brands exploring AI-generated ad content?

14 Comments

DeAnn Campbell
Active Member
1 year ago

Without a human review before release, AI generated content is a risk to brands and retailers. At the very least AI ads could be out of tune with brand voice — at worst an AI ad could ruin customer trust and loyalty. AI content needs guardrails, and only increases the importance of human involvement in the creation process.

John Lietsch
Active Member
1 year ago

AI generated content does not exempt us from our responsibilities or guidelines. We can’t simply blame the “bot.” Retailers should continue to play with AI generated “everything” but apply the same editorial review processes they apply to their human generated content. At this stage, AI generated content is only speeding up ideation. Accept and leverage those gains and wait for the rest of the process to be improved by technology; it won’t be tomorrow but it won’t be long.

Mark Ryski
Noble Member
1 year ago

AI-generated content is already prevalent, and this will only continue. The fact is, in many cases AI-generated content is good enough, and let’s not forget that there’s a lot of human-generated ad content that’s pretty bad. I can absolutely see retailers using AI-generated ad content as a first version that can then be edited by humans. The ability to create countless versions of an ad with AI gives it a production edge over humans, and retailers should consider using it.

Bob Amster
Trusted Member
1 year ago

What a great question! I would not take a chance on AI-generated ads. The day one goes awry it will cause a huge embarrassment, like the Southwest Airlines glitch. And this comes from an advocate of using technology to improve many (but not all) functions.

Dion Kenney
1 year ago

Anyone familiar with “The Uncanny Valley” will understand the risk of trusting AI-generated content without a human editor reviewing/revising before release. The “something is eerily not right” phenomenon with artificial humans is uncannily perceptive at an unconscious level. AI can empower us to make better informed decisions faster, but it is still not capable of human cognition and perception.

Mark Self
Noble Member
1 year ago

This is an easy one: avoid embracing this. No authenticity here, and customers will see through it.

Gene Detroyer
Noble Member
1 year ago

Within the last week, I read an explanation of the problems that surfaced with ChatGPT and DALL-E. The fellow explained that the AI interface was designed to reflect humans’ thoughts and words. He noted that is exactly what they got: unfiltered humans.

Dave Bruno
Active Member
1 year ago

AI-generated content can be a compelling first step in the content production process, but it’s certainly not the only step. Human oversight is mandatory (and also an opportunity for us to provide differentiated value from the machines!).

Cathy Hotka
Trusted Member
1 year ago

“Danger, Will Robinson!” (Sorry for the ’60s reference.) AI-generated content is in chapter one of a hundred-chapter story. This is no time to go fully operational without serious oversight.

David Spear
Active Member
1 year ago

AI-generated content can be a useful tool, but never, ever use it without reviewing the content before publication. We’re starting to see some horror stories bubble to the surface from individuals relying solely on these tools. It’s a huge mistake to do this.

Melissa Minkow
Active Member
1 year ago

There are so many smart use cases for AI. Creative content is not one of them. Right now, the focus should be on using AI for low-creativity tasks that are risk-free from a moral standpoint if there’s a mistake.

Craig Sundstrom
Noble Member
1 year ago

“…But AI is not bound by such restrictions so far.” Huh? Companies are what are bound by restrictions, so unless we’ve reached the stage where AI is creating corporations — and presumably paying all the costs related to them — I don’t see how this statement makes any sense.

My advice is simple: we seem to be in Dotcom 2.0 (3.0? 4.0?), where a buzzword is mindlessly attached to … everything. When you see it used, run.

Gwen Morrison
1 year ago

The best advertising reflects the relationships consumers have with categories, products and brands. AI-generated content can’t deliver the emotive insights that connect human situations and brand imagery. AI can select human-developed content to match up with interests, preferences and purchases. That’s simply delivering personalization.

Oliver Guy
Member
1 year ago

Disclosure – I work for Microsoft. When Microsoft first highlighted OpenAI’s offerings late in 2022, it was positioned as a “co-pilot” to aid coders writing code. At that point it seemed logical that its use would likely expand to other areas, and with it the point about being a “co-pilot” or assistant remained relevant.

A few years ago, Gartner talked about AI standing not for Artificial Intelligence but for “Augmented Intelligence,” and this is how these developments must be thought of: something to assist the human, to augment and to help, to make us faster but not to replace us. With this in mind, the need to put careful guardrails around how AI-driven content is used is critical: it must be reviewed by a human, and the human must be ultimately responsible for its execution, publication and use.
