The Italian data protection watchdog has blocked access to the Chinese artificial intelligence (AI) application DeepSeek due to concerns over its handling of user data.
The regulator also asked DeepSeek about the legal basis for its data processing and whether user data is stored on servers located in China. DeepSeek and its affiliated companies were given 20 days to respond to the inquiry.
“Contrary to the authority’s findings, the companies declared that they do not operate in Italy, and that European legislation does not apply to them,” the regulator said in a Jan. 30 statement.
The regulator said DeepSeek’s claims contradict its findings and that it has launched an investigation into the chatbot service.
The AI application became unavailable on Apple’s and Google’s app stores in Italy following the request for information.
The Epoch Times has reached out to DeepSeek for comment but did not hear back by publication time.
The Hangzhou-based startup, founded by entrepreneur Liang Wenfeng in 2023, launched its open-source large language model, DeepSeek-R1, last week, claiming it is on par with OpenAI’s reasoning model, o1.
Data collected will be stored “in secure servers located in the People’s Republic of China” in accordance with “the requirements of applicable data protection laws,” according to DeepSeek’s privacy policy page.
DeepSeek states that it uses data to “comply with our legal obligations, or as necessary to perform tasks in the public interest, or to protect the vital interests of our users and other people.”
The privacy policy page also states that user information may be disclosed to third parties if the company believes in “good faith” that such disclosure is necessary “to comply with applicable law, legal process or government requests, as consistent with internationally recognised standards.”
Regulators from Ireland and South Korea have also sought information from DeepSeek over its handling of personal data.
“The president said that he believes that this is a wake-up call to the American AI industry. The last administration sat on their hands and allowed China to rapidly develop this AI program,” White House press secretary Karoline Leavitt said.
“We would just urge Australians to exercise real caution about the personal information that they’re giving away. It’s fine to talk to the app, but perhaps don’t give it personal information that you don’t want the rest of the world to know about you,” O’Neil said.

“So what our national security agencies will be doing at the moment is having a look at the settings of the app and understanding more about how it works before it issues some formal guidance to Australians about care that they need to take.”