ChatGPT linked to alleged leak of confidential information at Samsung
Less than a month after electronics giant Samsung introduced ChatGPT for its employees, the artificial intelligence (AI) model has been linked to an alleged leak of confidential information. Sensitive data, including the company’s semiconductor equipment measurement information, has reportedly become part of ChatGPT’s learning database.
The conversational chatbot ChatGPT hasn’t wowed individual users alone. Giant corporations, too, have been blown away by its ability to summarize and paraphrase content and have been looking to aid the productivity of their employees by giving them access to its computational prowess.
While this is a great initiative, it can go horribly wrong if the employees are not trained sufficiently on what information can or cannot be shared with the AI model, as Samsung is learning the hard way.
What went wrong at Samsung?
According to a DigiTimes report, Samsung employees were granted ChatGPT access no more than three weeks ago. As employees used the tool, their questions and ChatGPT’s replies were retained, since the tool treats conversations as learning data to continually improve its performance.
The Samsung employees were likely not fully briefed on how the AI tool works; had they been, they would surely not have pasted sensitive information into its text prompt. This assumption is supported by the fact that the leak reportedly happened on three separate occasions in the short period since ChatGPT was made available.
The first incident reportedly occurred inside the Semiconductor and Device Solutions department, where a staff member executing a semiconductor equipment measurement database (DB) download software faced an issue.
Since it appeared that the issue surfaced from the source code of the software, the employee simply pasted the source code into ChatGPT and asked the AI tool to find the solution to the problem, effectively making it part of ChatGPT’s internal training data.
Another employee, who had difficulty understanding device yield and other data, simply pasted the related code into ChatGPT and asked for it to be optimized, while a third employee tasked ChatGPT with generating minutes of a meeting.
The company has now put some safeguards in place, such as limiting the capacity of each question to not more than 1,024 bytes. Samsung has also instructed its employees to exercise caution since data entered into ChatGPT is transmitted to external servers, and the company cannot recover outflow data.
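Such a byte cap is straightforward to enforce before a prompt ever leaves the company network. The sketch below is purely illustrative (the function and constant names are assumptions, not Samsung’s actual implementation) and shows one way a 1,024-byte limit could be checked:

```python
# Hypothetical sketch of a per-prompt byte cap like the 1,024-byte
# limit Samsung reportedly imposed. Names here are illustrative only.

MAX_PROMPT_BYTES = 1024

def within_byte_limit(prompt: str, limit: int = MAX_PROMPT_BYTES) -> bool:
    """Return True if the prompt's UTF-8 encoding fits under the cap.

    The check uses encoded byte length rather than character count,
    since multi-byte characters make the two differ.
    """
    return len(prompt.encode("utf-8")) <= limit
```

Note that the check measures encoded bytes, not characters: a 1,024-character prompt containing multi-byte text could still exceed a 1,024-byte cap.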
Last month, Interesting Engineering reported how a bug in an open-source library was leaking users’ private conversations with the bot to random individuals. While that bug has been fixed, with the right queries, users might still be able to probe what information Samsung employees actually shared with ChatGPT.