
Samsung bans AI chatbots after leak

Companies fear internal data shared with AI chatbots could leak to public

Samsung Electronics' office building in Seoul (Yonhap)

Samsung Electronics has joined other tech companies in banning the use of ChatGPT and other AI-powered chatbots by its employees, after discovering that its engineers had leaked sensitive internal code.

In addition to banning access to generative AI tools on company-owned computers, tablets and phones, Samsung is also reportedly building its own tools to support translation, document summarization and software development.

The tech giant confirmed Wednesday that it issued a memo last week to the staff of its Device eXperience (DX) division, which oversees consumer appliances and mobile devices, banning the use of generative AI tools.

"Using generative AI tools on company PCs will be banned temporarily from May 1," the note said, also asking employees to refrain from uploading anything related to the company, themselves or other employees on the AI chatbots while using their personal devices.

"We ask that you diligently adhere to our security guideline and failure to do so may result in a breach or compromise of company information, resulting in disciplinary action up to and including termination of employment," Samsung added.

Earlier in April, the company's Device Solutions division, which oversees its semiconductor business, found three cases of misuse in which engineers uploaded sensitive company information, including meeting minutes and source code, to ChatGPT for work.

While it is unclear whether the sensitive information was leaked to other users of the chatbot, the company immediately issued a notice to the division's staff instructing them not to use generative AI tools for work.

Acknowledging that staff had been using external AI chatbots to work more efficiently, Samsung said it is developing its own AI tools for translation, document summarization and source code development.

The fear that ChatGPT and similar chatbots operated by companies such as Microsoft and Google could leak sensitive company information to the public has prompted many companies to ban the use of AI tools.

South Korean companies SK hynix and Posco have also prohibited the use of AI services at work. Amazon issued a similar warning to its employees, and several major US banks, including JPMorgan Chase, Bank of America and Citigroup, have introduced similar measures.

In a survey Samsung conducted of its DX division staff last month, 65 percent of respondents said they believed using ChatGPT for work could pose security risks.

In its default mode, ChatGPT saves the user's conversation history, and those inputs can be used to improve, or "train," the AI, which draws on the stored data when generating responses to other users' inquiries.

In response to the concerns, OpenAI, the company behind ChatGPT, said last month that it had added an "incognito" mode that lets users block their chats from being used for training.

Other companies have continued using generative AI tools for work despite the security concerns. Goldman Sachs, which restricts its employees from using ChatGPT at work, said its software developers still use generative AI tools to write and test code, though it did not reveal which service they use.

IBM CEO Arvind Krishna also said in a recent interview that the company will suspend hiring for jobs that could be replaced by AI tools in the coming years.



By Jo He-rim (herim@heraldcorp.com)