EU Privacy Watchdog Probes Google’s AI Model

Ireland’s Data Protection Commission has raised concerns about Google’s Pathways Language Model 2.
The Google logo at the VivaTech show in Paris on June 15, 2023. The Canadian Press/AP, Michel Euler
Katabella Roberts

The EU’s privacy watchdog is investigating how Google is using personal data in the development of one of its artificial intelligence models.

Ireland’s Data Protection Commission (DPC)—the bloc’s regulatory body for companies headquartered in Ireland—announced on Sept. 12 that it was investigating compliance with General Data Protection Regulation (GDPR) rules by Google’s Pathways Language Model 2, also known as PaLM2.

The regulator said it is working with partners in the European Economic Area to regulate the processing of EU users’ personal data in the development of AI models and systems.

Its inquiry will examine whether Google has assessed if PaLM2’s data processing is likely to result in a “high risk to the rights and freedoms of individuals” in the EU, the commission said.

According to Google, PaLM2 is a “next-generation language model with improved multilingual, reasoning and coding capabilities” that builds on the company’s previous research in machine learning and AI.

The model, which was pre-trained on a “large quantity of webpage, source code, and other datasets,” according to Google, can translate between languages, conduct math tasks, answer questions, and write computer code, among other things.

Back in May, the tech giant said PaLM2 would be built into more than 25 new products and features, including its email and Google Docs services, amid the ongoing race to adopt and expand AI use.

Other Firms Pause Plans to Train AI on User Data

The EU watchdog has been raising concerns with various Big Tech platforms about the use of EU user data in the training of generative AI models.

Earlier this month, it said it had concluded proceedings against Elon Musk’s social media platform X after the company agreed to permanently stop processing European user data for its generative AI chatbot, Grok.

In a Sept. 4 statement, the DPC said that, prior to reaching the agreement with X, it had “significant concerns” about the processing of personal data of EU users to train Grok. The watchdog said the processing “gave rise to a risk to the fundamental rights and freedoms of individuals.”

It marked the first time that the DPC had taken such action, exercising its powers under Section 134 of the Data Protection Act 2018, it said.

In the same statement, the DPC said it is currently working to address various issues arising from the use of personal data in AI models across the industry and had requested an opinion from the European Data Protection Board (EDPB) to trigger a discussion on the matter, in the hopes of bringing some “much-needed clarity” to this “complex area.”

The request invites the EDPB to consider, among other things, the extent to which personal data—including first-party and third-party data—is processed at various stages during the training and operation of an AI model.

In June, the DPC said Meta Platforms had also paused its plans to use content posted by European users to train the latest version of its large language model following “intensive engagement” between the regulator and the social media giant.

A Google spokesperson told The Epoch Times: “We take seriously our obligations under the GDPR and will work constructively with the DPC to answer their questions.”

Reuters and the Associated Press contributed to this report.
This report was updated with Google’s response. 
Katabella Roberts is a news writer for The Epoch Times, focusing primarily on the United States, world, and business news.