
How Much of a Security Risk is ChatGPT?

ChatGPT remembers everything. That’s a lesson Samsung employees learned last week after accidentally leaking sensitive information to the AI-powered chatbot.

ChatGPT can help with writing emails, scheduling meetings, and analyzing data. Engineers and programmers across many industries have also found it helpful in refining code or double-checking their work. 

The information users provide, however, is retained to train the AI behind the bot, posing a security threat. If a later user asks ChatGPT to write a similar piece of code, the AI could reproduce elements of the first developer’s code and share them with that user.

OpenAI, ChatGPT’s developer, explicitly warns users against sharing sensitive information because it is “not able to delete specific prompts”; users who do not want their conversations used for training must opt out.

Samsung reportedly discovered three instances in which confidential data was revealed, according to Economist Korea. Workers revealed restricted equipment data to the chatbot on two separate occasions, and one sent the chatbot an excerpt from a corporate meeting.

Amazon.com and Walmart have issued internal warnings on AI use. 

Amazon’s warning in January came after it found text generated by ChatGPT that “closely” resembled internal company data, according to internal Slack messages leaked to Business Insider.

Walmart, in a late February memo, warned staffers to “not input any information about Walmart’s business — including business process, policy, or strategy — into these tools,” according to a separate Business Insider article.

Walmart stated that while generative AI tools “can enhance efficiency and innovation,” they must be used “appropriately.”

Citigroup, Goldman Sachs, Verizon, and Wells Fargo are among a growing number of companies restricting employees’ use of ChatGPT amid mounting concerns about the ability to regulate and shape the use of such tools.

A recent analysis by cybersecurity company Cyberhaven found that 3.1 percent of workers who used the AI had at some point pasted confidential company data into the system. Cyberhaven wrote in the report, “There are many legitimate uses of ChatGPT in the workplace, and companies that navigate ways to leverage it to improve productivity without risking their sensitive data are poised to benefit.”
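One common safeguard companies adopt along these lines is a redaction pre-filter that scrubs obviously sensitive strings from a prompt before it ever leaves the corporate network. The sketch below is purely illustrative; the pattern list is a hypothetical minimal set that a real deployment would replace with patterns tuned to its own data.

```python
import re

# Hypothetical starter patterns; a real deployment would tune these
# to the organization's own identifiers, codenames, and data formats.
SENSITIVE_PATTERNS = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),    # US SSN-style identifiers
    re.compile(r"\b\d{13,16}\b"),            # payment-card-length digit runs
    re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),  # email addresses
]

def redact(prompt: str, mask: str = "[REDACTED]") -> str:
    """Replace anything matching a sensitive pattern before the
    prompt is forwarded to an external AI service."""
    for pattern in SENSITIVE_PATTERNS:
        prompt = pattern.sub(mask, prompt)
    return prompt
```

A filter like this catches only well-formed patterns, so it complements rather than replaces usage policies: free-text trade secrets, such as the meeting notes in the Samsung incident, would pass straight through.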

Discussion Questions

Should retailers be restricting or banning the use of ChatGPT and similar generative AI tools by employees due to the security threats they pose? What guidelines should retailers set around their use?

11 Comments
Dion Kenney
1 year ago

What we are learning about ChatGPT is that it is not the malicious, sentient tech overlord we had been warned about. It is a clever algorithm that is only as good as the person providing the prompts. However, we’ve also learned that intentional efforts to circumvent the protective measures OpenAI put in place can be successful in the hands of a skilled and malicious prompter. Combined with the recent news of foreign governments using apps and social media platforms to collect and leverage personal information, we should be hyper-aware that “the walls have ears” is no longer just a metaphor.

John Lietsch
Active Member
1 year ago

Shadow IT has always been a challenge for companies, and generative AI tools just made the shadows smarter and more dangerous. Companies should embrace technologies that employees use to be more productive and efficient, but should balance that with the immense security risk they pose, especially with remote workforces.

David Spear
Active Member
1 year ago

It is not a surprise that ChatGPT has already exposed some secret sauce from companies that have engaged with it, which is why senior leaders need to put prudent guardrails around this new technology. Otherwise, ChatGPT will become a hacker’s haven.

Brandon Rael
Active Member
1 year ago

There are clear advantages and business value that could be derived from leveraging ChatGPT’s technological capabilities. Search and discovery will be evolving over the months and years to come as AI’s capabilities are integrated into how we work, shop and engage with each other.

However, algorithms and technologies are advancing at an alarming rate, and we must proceed with guardrails and caution to ensure our personal data is protected and secured. Apple has been at the forefront of enabling users to have a control tower, letting them decide what personal information they share with the apps they engage with. The challenge with AI is that the technologies and capabilities are gaining momentum, the rules are somewhat dynamic, and privacy and protection laws are struggling to keep up.

Neil Saunders
Famed Member
1 year ago

The danger is that information fed into an AI system is no longer under the complete control of a retailer or business. That means anything confidential or sensitive – customer details, proprietary processes, secret initiatives – should be kept well away from AI systems, especially at this early stage of their development. AI will be of great benefit to retailers, but the mantra right now should be to proceed slowly and with caution.

Georganne Bender
Noble Member
1 year ago

As a writer, I am not a fan of ChatGPT. Sure, I can see some of the fun in using it, but I also see the downside. Feeding it information can be dangerous because that information becomes part of its canon.

Last week a law professor found out that ChatGPT said he had committed a sexual assault during a trip that he never took. The AI backed it up with a reference to a Washington Post article that did not exist either.

If Steve Wozniak, Elon Musk and other tech leaders think we need to stop and re-evaluate where ChatGPT and other generative AI is going then I think that’s a good idea. And I certainly wouldn’t want my employees using it.

Neil Saunders
Famed Member
Reply to  Georganne Bender
1 year ago

And an Australian politician is suing OpenAI because ChatGPT claimed he was a convicted criminal when he isn’t! The technology is so interesting and has great potential, but it is also full of lies, inaccuracies and distortions. Proceed with care!

James Tenser
Active Member
1 year ago

It frustrates me that so much of the dialogue surrounding artificial intelligence has focused on generative AI. Yes, it’s amusing/chilling to imagine legions of schoolkids using ChatGPT to write their homework. Or a generation of journalists being displaced. Or a computer intelligence taking control of the planet. https://en.wikipedia.org/wiki/Colossus:_The_Forbin_Project
So let the popular media giggle about these scenarios. Responsible BrainTrusters like us should focus on how to wield AI tools to improve customer experiences and business outcomes with sound information security practices.
AI entities are amoral by design. They are born without any sense of culture and they have nothing to lose. It’s up to their users to set up the guardrails and adopt sound practices. As usual with any new and powerful technology, the know-how lags way behind.
Meanwhile, think twice before you allow employees to upload proprietary business information to a cloud that other AIs may access or try to access. Oops – too late. We’ve already shared so much.

Georges Mirza
Member
Reply to  James Tenser
1 year ago

It’s human nature to fear the unknown; we have seen several cycles, and #AI is no different. You are right, James; we should focus more on using the tech to solve #retail problems and improve the shopper experience.

Georges Mirza
Member
1 year ago

This is an opportunity for someone to create a business interface/version that companies can confidently adopt and not introduce risk.

Oliver Guy
Member
1 year ago

Full disclosure – I work for Microsoft.
This topic comes up a lot. A personal friend of mine told me that at his company someone pasted a sales proposal into ChatGPT and asked for a summary. When management found out, they suspended access for the entire organisation due to security concerns. My advice to him: talk to Microsoft about having their own instance inside their own Azure tenant, thus preserving their confidentiality.
GPT-3/4 capability is incredible. CarMax in the US has had OpenAI inside Azure write car reviews for the cars it sells, something that would have taken 11 years without this technology.
As for ChatGPT, if use is not restricted, guidelines need to be made clear – in the same way as they are made for social media.
