There is growing concern in the business world about the potential for sensitive company information to be leaked through ChatGPT and other generative AI platforms. In March 2023, The Economist Korea reported three instances of Samsung employees feeding sensitive data to ChatGPT in an effort to automate portions of their jobs. In two cases, employees submitted code to be checked; in the third, an employee shared meeting notes to be summarised. These leaks came just three weeks after Samsung lifted a previous ban on employees using ChatGPT. The company is now developing its own in-house AI. Apple has restricted employees from using AI tools over fears that confidential information entered into these systems will be leaked or collected, and Google has warned staff not to enter confidential material into AI chatbots. Microsoft has announced a more secure version of its AI-powered Bing designed for businesses, intended to assure professionals that they can safely share potentially sensitive information with a chatbot. Self-imposed bans and limitations on ChatGPT are becoming increasingly common among companies.
Why is it important?
While generative AI platforms could mean efficiency gains and cost savings for businesses, they could also lead to the leaking of company secrets. The potential negative consequences should not be underestimated: they include financial loss, reputational damage, the exposure of personal data such as passwords and IDs, and legal action. The technology is in its infancy and requires executives to proceed with an abundance of caution. Joe Payne, CEO of insider risk software solutions provider Code42, says: “Specifically, banning ChatGPT might feel like a good response, but it will not solve the larger problem. ChatGPT is just one of many generative AI tools that will be introduced to the workplace in the coming years.”
What can businesses do about it?
Banning or restricting the use of these AI tools, as many companies have been doing, may be wise in the short term. However, this technology is here to stay, and it would be advisable for companies to devise a comprehensive AI policy. “Educate employees on what information is highly sensitive, how to treat that data in regards to humans or computer systems, and consider investing in a non-public model to use for your intellectual property’s protection,” advises Melissa Bischoping, Director of Endpoint Security Research at Tanium. Leaders should prioritise education, Q&A, and a crystal-clear understanding of the risks and limitations of this technology. In response to this challenge, tools are being developed to protect businesses from data leaks. Serial tech entrepreneur Wayne Chang has rolled out an AI tool that blocks leaks by preventing chatbots and large language models from ingesting company secrets. LLM Shield uses “technology to fight technology” by scanning everything that is downloaded or transmitted by a worker and blocking any sensitive data from being entered into the AI tools.
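LLM Shield’s internals are proprietary, but the general approach it describes — scanning outbound text for sensitive material before it reaches a chatbot — can be illustrated with a minimal sketch. The patterns and function names below are hypothetical assumptions for illustration; a real data-loss-prevention tool would use far richer detection (document fingerprinting, named-entity recognition, ML classifiers) rather than a handful of regexes.

```python
import re

# Hypothetical patterns for illustration only; real tools maintain
# much larger, customer-specific rule sets.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "internal_marker": re.compile(r"\b(?:CONFIDENTIAL|INTERNAL ONLY)\b", re.I),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in outbound text."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def gatekeep(text: str) -> str:
    """Pass the text through unchanged, or block it if anything sensitive is detected."""
    hits = scan_prompt(text)
    if hits:
        raise PermissionError(
            f"Blocked: sensitive data detected ({', '.join(hits)})"
        )
    return text
```

In practice such a filter would sit between the employee’s browser or clipboard and the AI service, deciding per request whether the text may leave the company network.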
By Faeeza Khan
Struggling to change systems, incentives and mindsets to embrace, drive and direct change? The Business of Transformation Workshop is for you!
The Business of Transformation Workshop is an intensive two-day module presented by Bronwyn Williams. It focuses on identifying potential problems and opportunities emerging from the trends already in play around and among us, while looking at the world as it could be – and what you could and should be doing to design and create a more sustainable, prosperous future.
Book this workshop for your team or clients!
Contact Cloud on firstname.lastname@example.org to book this workshop.
CPD points and level: 8 CPD points at CMSA level for Designated Members
(AMSA & MPSA Designated Members can attend and claim these CPD points as well)
CPD Approval Number: MA FT 23001
Certificate of completion to be loaded onto MarkEdonline to claim CPD points.
Image Credit: Natasha Grabovac