
August 1, 2023
Watermarks Are Set To Appear in All AI-Generated Content
Reuters reported that President Joe Biden announced that “AI companies including OpenAI, Alphabet (GOOGL.O) and Meta Platforms (META.O) have made voluntary commitments to the White House to implement measures such as watermarking AI-generated content to help make the technology safer.”
The White House held a meeting with representatives from seven companies in total, including “Anthropic, Inflection, Amazon.com (AMZN.O) and OpenAI partner Microsoft (MSFT.O),” all of which voiced their commitment to “developing a system to ‘watermark’ all forms of content, from text, images, audios, to videos generated by AI so that users will know when the technology has been used.”
Citing concerns about the disruptions AI could cause, the companies pledged to thoroughly test systems before release, share information on risk reduction, and invest in cybersecurity measures.
With ChatGPT growing rapidly in popularity since its public release in November 2022, lawmakers quickly moved to put together a set of regulations that would help mitigate the potential dangers AI could pose to the general public, the economy, and national security.
For example, “Congress is considering a bill that would require political ads to disclose whether AI was used to create imagery or other content.” This precaution is meant to prevent political parties from creating AI-generated propaganda about their opponents that might lead the public to believe things that aren’t necessarily true. Beyond that, there are also other political entities and movements that could use AI to create harmful or hateful material.
China has already moved quickly, with the Cyberspace Administration of China (CAC) requiring all AI-generated outputs to contain a watermark, according to the Georgetown Journal of International Affairs (GJIA). The CAC has also gone so far as to make it illegal to delete, alter, or hide any AI watermarks. Additionally, Reuters commented on how the European Union is likewise moving full speed ahead with regulating artificial intelligence.
But in the United States, this type of regulation is easier said than done. The GJIA discussed some specific concerns, such as that “watermarking content and providing imperfect tools to detect AI-generated content can encourage institutions to create real harms when detection tools falsely flag human-generated content as AI-generated.”
Another concern noted is the possibility of nefarious use when “adversaries purposefully mimic a watermark,” such as to “block a company’s product in its country, to accuse a company of election interference, etc. It could mimic a watermark and then use it for its disinformation campaigns, creating a false trail.”
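The mimicry risk is why some proposals pair watermarks with cryptographic signing: a plain label can be copied by anyone, but a keyed tag cannot be forged without the provider's secret. Here is a minimal Python sketch of that idea (the key and the label format are hypothetical illustrations, not any company's actual scheme):

```python
import hmac
import hashlib

# Hypothetical signing key held only by the AI provider.
SECRET_KEY = b"provider-signing-key"

def watermark(content: str) -> str:
    """Append a keyed HMAC tag so the mark can't be forged without the key."""
    tag = hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()
    return f"{content}\n[AI-GENERATED:{tag}]"

def verify(marked: str) -> bool:
    """Check that the tag actually matches the content under the provider's key."""
    try:
        content, footer = marked.rsplit("\n[AI-GENERATED:", 1)
        tag = footer.rstrip("]")
    except ValueError:
        return False  # no watermark footer present
    expected = hmac.new(SECRET_KEY, content.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

text = watermark("An AI-written product description.")
print(verify(text))  # genuine mark verifies: True
print(verify("Human text\n[AI-GENERATED:deadbeef]"))  # mimicked mark fails: False
```

Note that this only shifts the problem: verification now depends on key management and on every provider exposing a trusted verifier, which is part of why critics call watermarking a partial fix.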
Other critics suggest skipping watermarks altogether and instead “labeling AI-generated content prominently. This might be viewed like Article 17 of the CAC regulations––akin to food labels or health warnings on tobacco products.”
Ultimately, there is cause for concern because AI technology is rapidly developing and will look very different two, five, and eight years from now. This means that AI-detecting tools and watermarks will need to be constantly updated, making them prone to bugs, glitches, and other unforeseen issues.
The companies involved in the meeting with the White House also emphasized that they plan to protect users’ privacy as AI develops and ensure that AI technology remains free of bias and is not used in any way to discriminate against targeted groups.
Discussion Questions
Will watermarking AI in general make a difference? Does watermarking AI have its place in the retail landscape?
BrainTrust
Zel Bianco
President, founder and CEO Interactive Edge
Lucille DeHart
Principal, MKT Marketing Services/Columbus Consulting
John Lietsch
CEO/Founder, Align Business Consulting
Labelling all AI content seems unnecessary. So long as it’s accurate, I am not really interested in whether a product description has been generated by a human or AI. The same applies to things like language translations and basic content. The biggest problem is with imitation and impersonation of people, for example, creating a video or audio of a politician saying something that they never actually said. Here, warnings and labels are needed (although people committing fraud will not use them) – but so too is a revision of laws to protect people against such harm and ensure they have the right to redress. It’s a complex area, but existing frameworks of law can be adapted to manage it.
At some point, AI will find a way around our attempts to identify it. We should be more focused on our abilities to navigate what will be a superior intelligence. The bigger discussion should be about content ownership, legal implications and AI going up against other AI. It is a complicated new world.
These are all genuinely well-intended efforts. However, we have seen that anything can be faked in the digital world except (so far) cryptocurrency, and blockchain does a good job of that. The AI genie is out of the lamp, and we can’t make it go back in. AI is a curse we brought on ourselves, and I fear that it’s going to take close to a police state to keep it in check and catch the bad guys. I have never been so scared of anything in my lifetime, including an atomic war, because I consider that less likely than really bad people doing really bad things with AI. Hell is the limit.
Photoshopping gave us a pretty good clue about people’s willingness to use technology to alter and enhance depictions of real life. Or to commit outright fraud. AI will take this to whole new levels. Honest people with honest intent will use watermarks. And dishonest people will be dishonest people. The level of fraud is going to be breathtaking. And so is the level of creativity. I’m going to enjoy the creativity but I will also be on the lookout for fraud. It’s unfortunate that this situation will introduce a level of skepticism into daily life that didn’t exist before.
I believe you feel the way I do…
when so many Americans cannot agree on what is fact and what is fiction, layering on AI will make things even worse until there is a common standard that most people will accept as reality. What we do with AI will either help to bridge this gap or widen it. AI has so many positives, but we must be diligent to make sure guidelines are in place and updated on a regular basis; the technology is moving too fast not to.
Watermarks? That solution is laughable. How long would it take for the nefarious to fake a watermark?
Congress is concerned? There are also other political entities and movements that could use AI to create harmful or hateful material. What is happening daily on Twitter, Facebook, et al.?
We have a huge problem.
Right on, Jeff! I do want to know when content is created by AI. As a writer, I believe using AI to create or tweak content is right up there with plagiarism. If you didn’t create it, don’t put your name on it. As a consumer, I want to know which companies are heavily dependent on AI so I can avoid them.
I do know that AI is here to stay and that it can be helpful, but yes, I would like to see something used to identify AI-created materials.
Plagiarism comes to my mind as well. Have to start somewhere with recognition of AI-created content…
We can’t even begin to imagine the damage that phony images can do in politics and commerce. It’s important that governments address this right away to ensure that individuals can trust the content they see.
Adding watermarking to AI content is like buying a Tesla – they both feel like you’re making a difference but the reality is that both are off the mark (pun intended). The good news is that China is moving fast on this one – shouldn’t that tell us something? Fraud and misinformation have been around since before the digital age. This isn’t a new problem and we shouldn’t let the over-hype around this type of AI make us think differently. Remember when we used to say that things were true if one read them on the internet? Thankfully, AI has apparently proven that the internet was wrong; things are only true if AI says they’re true. This problem is not going away with watermarks because this isn’t a technology problem. However, it’s a nice attempt and we need to continue the dialogue. Hopefully, we will do so on a far greater and broader scale.
Generative AI is an extremely dynamic emerging capability. Our society is struggling to keep up with the necessary safeguards and guardrails to ensure that creative professionals’ intellectual property is secure. One of the biggest challenges is that Generative AI is adapting at an alarming rate. There has to be some regulation to ensure that content creators receive attribution for their work and compensation for Generative AI content based on their original concepts.
Watermarking Generative AI-produced content will only serve as a temporary solution for a highly complex situation that will impact every aspect of our lives. There will have to be government regulations to ensure that Generative AI-produced content does not veer too far toward manipulating humanity in the wrong direction, and that content creators continue to have viable careers.
While watermarks sound like a good solution, fakes will immediately appear, and we will go down a rabbit hole chasing ‘truth.’