
Never forget that anything you share with ChatGPT is retained and may be used to train the model further. Samsung employees learned this the hard way after top-secret Samsung data was accidentally leaked.
Samsung employees accidentally shared confidential information while using ChatGPT for help at work. Samsung's semiconductor division had allowed engineers to use ChatGPT to check source code.
But The Economist Korea reported three separate cases of Samsung employees inadvertently leaking sensitive information to ChatGPT. In one case, an employee pasted confidential source code into the chat to check for errors. In another, an employee shared code with ChatGPT and requested code optimization. A third shared a recording of a meeting to convert into notes for a presentation. That information is now out in the wild for ChatGPT to feed on.
The leak is a real-world example of hypothetical scenarios that privacy experts have long worried about. Other scenarios include sharing confidential legal documents or medical information to summarize or analyze lengthy text, which could then be used to improve the model. Experts warn that this may violate GDPR compliance, which is why Italy recently banned ChatGPT.
Samsung took immediate action by limiting the ChatGPT upload capacity to 1024 bytes per person, and it is investigating the people involved in the leak. It is also considering building its own internal AI chatbot to prevent similar embarrassing incidents in the future. But Samsung is unlikely to get any of its leaked data back: ChatGPT's data policy says it uses users' data to train its models unless they request to opt out. And ChatGPT's usage guide explicitly warns users not to share sensitive information in conversations.
Consider this a cautionary tale to remember the next time you turn to ChatGPT for help. Samsung certainly will.