

TL;DR
- Samsung lifted the ban that prevented employees from using ChatGPT for work.
- Three weeks later, Samsung executives discovered that employees had been leaking company secrets to a chatbot.
- Samsung has now implemented an emergency measure limiting ChatGPT prompts to 1,024 bytes.
What's the biggest mistake you've ever made at work? Whatever it is, you can take comfort in knowing that it probably doesn't compare to the one Samsung employees made recently.
According to local Korean media, Samsung is currently doing damage control after executives learned that employees had been willingly feeding company secrets into ChatGPT. Specifically, three separate incidents appear to have been discovered.
The first incident involved an employee who pasted source code from a faulty semiconductor database into ChatGPT, reportedly to ask the chatbot for help fixing the code. The second case involved another employee who entered code while trying to find a fix for defective equipment. The third involved an employee who pasted the contents of a confidential meeting and asked the chatbot to generate minutes.
The problem here is that ChatGPT does not delete the queries submitted to it. OpenAI warns users not to enter sensitive information because these prompts are stored and may be used to improve its AI models.
To add insult to injury, Samsung had previously banned its employees from using ChatGPT at work. It decided to lift that ban just three weeks before these incidents occurred. The manufacturer is now trying to contain the problem by capping ChatGPT prompts at 1,024 bytes.
While this looks bad for Samsung, it isn't the only company to run into this problem. According to Axios, companies like Walmart and Amazon have reportedly gone through something similar.