As the digital era advances, the integration of artificial intelligence (AI) with consumer technology is raising significant ethical concerns. Tech giants, having nearly exhausted publicly available English data sources, are now turning to personal electronic devices and social media to train AI models. This shift has sparked public discomfort, with fears of privacy invasion gaining traction.
In a recent example of the trend, Apple unveiled significant AI advancements at its annual Worldwide Developers Conference, particularly with regard to its digital assistant, Siri.
At the conference, Apple also announced a partnership with OpenAI to incorporate ChatGPT directly into Siri, leveraging GPT-4 as the underlying AI engine.
In the conference’s keynote address, Craig Federighi, Apple’s senior vice president of software engineering, stressed the importance of data privacy in the context of AI. Traditionally, Apple has protected that privacy with on-device data processing, which keeps data on the device rather than transmitting it to external servers.
Public Reaction and High-Profile Criticism
Despite Apple’s assurances, skeptics have expressed concerns that personal information could be exploited by corporations for AI training or undisclosed experiments.
Elon Musk’s critique extended to the transparency of data handling between Apple and OpenAI: he dismissed the notion that Apple could monitor how OpenAI uses the data once it has been transferred.
The controversy also drew in the actress Scarlett Johansson. OpenAI had allegedly approached Ms. Johansson seeking to use her voice for its AI, an offer she declined. Months later, Ms. Johansson and others noticed that ChatGPT’s “Sky” voice closely resembled her own, which she said shocked and angered her, leading her to seek legal counsel.
OpenAI has since paused the use of the Sky voice and denied that it was intended to mimic the actress’s voice, citing privacy reasons for not disclosing the actual voice actor’s identity. The incident sparked discussions about ethical standards in AI voice replication.
Broader Implications and Legal Challenges
The debate extends beyond Apple and OpenAI, with concerns about social media data being used as a resource for AI training. Recently, Meta, the parent company of Facebook and Instagram, announced that, starting on June 26, user data from Facebook and Instagram in the UK and Europe would be used to train Meta’s Llama AI language model.
Meta stated that users would be able to opt out of having their data used to train Llama.
In an email to Facebook and Instagram users, the company said: “You have the right to object to how your information is used for these purposes. If your objection is honoured, it will be applied from then on.”
Concerns about these practices prompted NOYB, a European digital rights advocacy group, to file complaints with 11 national privacy watchdogs about Meta’s AI training plans, according to a June 6 announcement from the nonprofit organization.
As a result of the complaints, the Irish Data Protection Commission, Meta’s lead regulator, asked the tech company to pause the scheme to train Llama on social media content.
In an update on June 14, Meta expressed disappointment and called the pause “a step backwards for European innovation.”
Risks of AI in the Hands of Authoritarian Regimes
As public concern mounts over the misuse of AI by large technology companies, there is an equally pressing worry about authoritarian regimes and unethical actors using AI to propagate harmful ideologies, create misinformation, and manipulate public perception.
Satoru Ogino, a Japanese electronics engineer, said that the Chinese Communist Party (CCP) uses “AI to generate misinformation and shape public opinion.”
Exploiting AI to Shape Global Perceptions
In a May 30 blog post, OpenAI detailed the misuse of AI technology by state actors and private entities to manipulate global narratives. Over the past three months, according to OpenAI, its investigations revealed five clandestine operations using the company’s technology. The operations were aimed at controlling public discourse and swaying international opinion without disclosing their true origins or goals.
The clandestine operations spanned several countries, including Russia, China, and Iran, and even involved a private Israeli company. They leveraged the capabilities of OpenAI’s advanced language models for a variety of purposes: generating fake reviews and articles, creating social media profiles, assisting with programming and debugging robotics, and translating and proofreading texts.
The report specifically highlighted China’s Spamouflage campaign, which employed AI to monitor public social media activity. The operation generated counterfeit messages in multiple languages, including Chinese, English, Japanese, and Korean, and spread them across platforms such as X, Medium, and Blogspot. Its activities extended to managing databases and manipulating website code, exemplified by its use of the obscure domain revescum[.]com.
Russia’s Bad Grammar group and Doppelganger operations, along with the International Union of Virtual Media from Iran, were also named for their misuse of AI to disseminate false news and extremist content in various languages. Their propaganda efforts spanned several digital platforms, including Telegram, X, and Facebook.
The comprehensive study outlined how the CCP’s propaganda machine uses privately owned Chinese companies in sectors such as mobile gaming, AI, virtual reality, and overseas online retail platforms. These entities collect extensive data on individuals both within China and globally. That data is then used to tailor and propagate CCP-aligned narratives.