AI’s Integration With Consumer Tech Raises Ethical Concerns

People walk past an AI sign at the All In artificial intelligence conference in Montreal on Sept. 28, 2023. (The Canadian Press/Ryan Remiorz)
Raven Wu
Sean Tseng
6/22/2024
Updated: 6/26/2024
Commentary

As the digital era advances, the integration of artificial intelligence (AI) with consumer technology is raising significant ethical concerns. Tech giants, having nearly exhausted publicly available English data sources, are now turning to personal electronic devices and social media to train AI models. This shift has sparked public discomfort, with fears of privacy invasion gaining traction.

In a recent example of the trend, Apple unveiled significant advancements in AI, particularly with regard to its digital assistant, Siri.

At Apple’s 2024 Worldwide Developers Conference on June 10, the tech giant said the updated Siri now understands natural language in a manner akin to ChatGPT, enabling functions such as rapid photo editing, email composition, and the generation of emojis and images via simple voice commands. The enhancements are exclusive to newer devices such as the iPhone 15 Pro, iPads with M-series chips, and Macs with Apple silicon.

At the conference, Apple announced that it has also partnered with OpenAI to incorporate ChatGPT directly into Siri, leveraging GPT-4 as the underlying AI engine.

In the conference’s keynote address, Craig Federighi, Apple’s senior vice president of software engineering, stressed the importance of data privacy in the context of AI. Traditionally, Apple has protected that privacy with on-device data processing, which keeps data on the device rather than transmitting it to external servers.

Mr. Federighi noted that many of Apple’s generative AI models run entirely on-device, maintaining data privacy without reliance on cloud processing. However, he said, more complex requests sometimes require server-based models. In those instances, he said, Apple’s new Private Cloud Compute will ensure that data processed on its cloud servers is protected in a transparent, independently verifiable way.

Public Reaction and High-Profile Criticism

Despite Apple’s assurances, skeptics have expressed concerns that personal information could be exploited by corporations for AI training or undisclosed experiments.

One of the most high-profile critics has been Tesla CEO Elon Musk, who vocally criticized the move on X, formerly known as Twitter, following the conference. He expressed concerns about security breaches, even suggesting a ban on Apple devices at his companies’ facilities amid fears that integrating OpenAI into Apple’s ecosystem could lead to misuse of sensitive data.

Mr. Musk’s critique extended to the transparency of data handling between Apple and OpenAI. He dismissed the notion that Apple could monitor how OpenAI uses the data once transferred.

“The problem with ‘agreeing’ to share your data: nobody actually reads the terms & conditions,” Mr. Musk said on X.

Such concerns were highlighted by a recent incident involving actress Scarlett Johansson. In May, she threatened legal action against OpenAI, alleging that a voice named Sky in its ChatGPT product sounded strikingly similar to hers.

OpenAI had allegedly approached Ms. Johansson seeking to use her voice for the AI, an offer she declined. Months later, Ms. Johansson and others noticed the resemblance; she said the discovery shocked and angered her, prompting her to seek legal counsel.

OpenAI has since paused the use of the Sky voice and denied that it was intended to mimic the actress’s voice, citing privacy reasons for not disclosing the actual voice actor’s identity. The incident sparked discussions about ethical standards in AI voice replication.

A photo shows a frame of a video generated by a new artificial intelligence tool, dubbed Sora, unveiled by OpenAI in Paris on Feb. 16, 2024. (Stefano Rellandini/AFP via Getty Images)

Broader Implications and Legal Challenges

The debate extends beyond Apple and OpenAI, with concerns about social media data being used as a resource for AI training. Recently, Meta, the parent company of Facebook and Instagram, announced that, starting on June 26, user data from Facebook and Instagram in the UK and Europe would be used to train Meta’s Llama AI language model.

Meta asserted that the training data would include publicly posted content, photos, and interactions with AI chatbots but would exclude users’ private messages.

Users would be able to opt out of having their data used to train Llama, Meta stated.

In an email to Facebook and Instagram users, the company said: “You have the right to object to how your information is used for these purposes. If your objection is honoured, it will be applied from then on.”

Concerns about these practices prompted NOYB, a European digital rights advocacy group, to file complaints with 11 national privacy watchdogs about Meta’s AI training plans, according to a June 6 announcement from the nonprofit organization.

As a result of the complaints, the Irish Data Protection Commission, Meta’s lead regulator in the EU, asked the tech company to pause its plan to train Llama on social media content.

In an update on June 14, Meta expressed disappointment and called the pause “a step backwards for European innovation.”

Attendees visit the Meta booth at the Game Developers Conference 2023 in San Francisco on March 22, 2023. (Jeff Chiu/AP Photo)

Risks of AI in the Hands of Authoritarian Regimes

As public concern mounts over the misuse of AI by large technology companies, there is an equally pressing worry about authoritarian regimes and unethical actors using AI to propagate harmful ideologies, create misinformation, and manipulate public perception.

On June 6, three U.S. lawmakers expressed serious concerns about the news aggregation app NewsBreak: Sen. Mark Warner (D-Va.), chairman of the Senate Intelligence Committee; Rep. Raja Krishnamoorthi (D-Ill.), ranking member of the House Select Committee on the Chinese Communist Party; and Rep. Elise Stefanik (R-N.Y.), chair of the House Republican Conference.

Satoru Ogino, a Japanese electronics engineer, said that the Chinese Communist Party (CCP) uses “AI to generate misinformation and shape public opinion.”

“Without strong critical thinking skills, individuals are vulnerable to these sophisticated disinformation campaigns. Governments must eliminate software and media connected to the CCP to protect public discourse and maintain the integrity of information,” he said.

Exploiting AI to Shape Global Perceptions

In a May 30 blog post, OpenAI detailed the misuse of AI technology by state actors and private entities to manipulate global narratives. According to the company, its investigations over the preceding three months revealed five clandestine operations that used its technology to control public discourse and sway international opinion without disclosing their true origins or goals.

The clandestine operations spanned several countries, including Russia, China, and Iran, and even involved a private Israeli company. They leveraged OpenAI’s advanced language models for a variety of purposes: generating fake reviews and articles, creating social media profiles, assisting with programming and debugging robotics, and translating and proofreading texts.

The report specifically highlighted China’s Spamouflage campaign, which employed AI to monitor public social media activity. The operation generated counterfeit messages in multiple languages, including Chinese, English, Japanese, and Korean, and spread them across platforms such as X, Medium, and Blogspot. Its activities extended to managing databases and manipulating website code, exemplified by its use of the obscure domain revescum[.]com.

Russia’s Bad Grammar group and Doppelganger operations, along with the International Union of Virtual Media from Iran, were also named for their misuse of AI to disseminate false news and extremist content in various languages. Their propaganda efforts spanned several digital platforms, including Telegram, X, and Facebook.

Also in May, the Australian Strategic Policy Institute published a report titled “Truth and Reality with Chinese Characteristics.”

The comprehensive study outlined how the CCP’s propaganda machine uses privately owned Chinese companies in sectors such as mobile gaming, AI, virtual reality, and overseas online retail platforms. These entities collect extensive data on individuals both within China and globally. That data is then used to tailor and propagate CCP-aligned narratives.

Kane Zhang and Ellen Wan contributed to this report.
Views expressed in this article are opinions of the author and do not necessarily reflect the views of The Epoch Times.