Samsung employees have reportedly leaked sensitive company data by sharing it with the popular chatbot service ChatGPT. The leaked data includes internal documents, meeting notes, and source code. Samsung engineers used ChatGPT to optimize test sequences for identifying faults in the chips they were designing, a practice that led to three separate data leaks in under a month.
In another case, an employee used the chatbot to convert meeting notes into a presentation, exposing sensitive information that Samsung would not have wanted shared with third parties.
In response to these leaks, Samsung Electronics has warned its employees about the risks of using ChatGPT, noting that there is no way to prevent the leakage of data once it has been provided to OpenAI’s chatbot service.
The company has decided to develop its own AI for internal use to mitigate such risks.
Furthermore, it is unclear whether Samsung has asked OpenAI to delete the data submitted by its workers. Separately, the Italian Data Protection Authority, Garante Privacy, recently imposed a temporary ban on ChatGPT, citing the unlawful collection of personal data and the absence of systems for verifying the age of minors.
The Authority highlighted that OpenAI does not alert users that it is collecting their data and that there is no legal basis for the massive collection and processing of personal data to train the algorithms on which the platform relies.
These incidents underscore the importance of ensuring the security and privacy of company data. Companies must educate their employees about the potential risks of using third-party services, especially those that collect data. They must also establish policies and procedures for handling sensitive information and invest in developing their own AI solutions to minimize the risk of data leakage.
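One concrete way to enforce such a policy is to filter outbound prompts before they reach a third-party chatbot API. The sketch below is purely illustrative (it is not Samsung's actual tooling, and the pattern names and rules are assumptions for demonstration): it masks a few common sensitive patterns, such as email addresses, API keys, and internal IP addresses, with labeled placeholders.

```python
import re

# Illustrative outbound-prompt filter (assumed patterns, for demonstration only):
# each regex below masks one category of sensitive data before a prompt is sent
# to an external service.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "IP_ADDRESS": re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
}

def redact(text: str) -> str:
    """Replace each matched pattern with a [REDACTED:<label>] placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

prompt = "Contact jane.doe@example.com about server 10.0.0.12"
print(redact(prompt))
```

A real deployment would need far broader coverage (source code, document fragments, customer records), which is why dedicated data-loss-prevention tooling, rather than a handful of regexes, is the usual choice.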