As the research laboratory OpenAI pushes artificial intelligence (AI) forward with ChatGPT, issues such as data leaks and privacy concerns have surfaced, further fueling unease over what the technology can do.
The Italian watchdog, which is investigating the chatbot, gave OpenAI 20 days to propose measures for protecting users’ data or else face a fine of €20 million (US$21.8 million) or up to 4 percent of its annual global turnover.
There are two main reasons for the suspension and investigation.
The first is a bug, disclosed on March 20, that allowed some users to see the titles of other users’ chat histories. The bug was patched, but there were other issues.
“Upon deeper investigation, we also discovered that the same bug may have caused the unintentional visibility of payment-related information of 1.2% of the ChatGPT Plus subscribers who were active during a specific nine-hour window,” OpenAI said.
Second, OpenAI says its service targets people aged 13 and older, but no age verification mechanism exists.
In this regard, the Italian watchdog believes that ChatGPT may expose minors to answers unsuitable for their degree of development and self-awareness.
Leaked Data in South Korea
In early April, the South Korean press reported that confidential information at Samsung Electronics had been leaked through employees’ use of ChatGPT. In response, Samsung said it would consider banning the use of ChatGPT.

The concerns of companies and governments regarding AI are not limited to data leaks, false information, and age verification. Some fear the technology could also affect the global economy and security.
Open Letter
In late March, Elon Musk and other tech leaders signed an open letter, released by the nonprofit Future of Life Institute, calling for a six-month moratorium on the development of AI systems more powerful than GPT-4, the model behind the latest version of ChatGPT. The letter asked: “Should we let machines flood our information channels with propaganda and untruth?”

Credibility Concerns
In April, The Guardian found an unpublished article that closely mimicked the newspaper’s style. An investigation into the article’s source confirmed that it had been fabricated by ChatGPT. The Guardian said it was deeply troubled by the incident and feared that AI could damage the newspaper’s credibility.

On the subject of false information, Japanese electronics engineer Li Jixin told The Epoch Times in early April: “While AI brings convenience to humans, it also provides criminals with new tools, just like what the internet does. We need better morals for humans first, and then we need to strengthen the regulatory measures of AI.”
Potential Chaos Caused by AI
In addition to its potential to become a hotbed of false information and crime, AI is one of many advanced technologies that, if misused or abused, could bring further chaos to society.

“Our democracy, our society runs on language,” Harris said. “Code is language, law is language, contracts are language, media is language. When I can synthesize anyone saying anything else and then flood a democracy with untruths, … this is going to exponentiate a lot of the things that we saw with social media.”
AI-Generated Photos of Trump
On March 18, former U.S. President Donald Trump said on Truth Social that he would be indicted by a grand jury in New York. Although the indictment had not yet happened, realistic AI-generated photos of Trump “trying to escape” and “being arrested by police” were widely posted online on March 21. The photos fooled some of Trump’s opponents and supporters alike, causing confusion.

“The AI being developed now certainly has many strengths and weaknesses, and it could be abused to bring chaos and disaster to human society,” Kiyohara Hitoshi, a computer engineer from Japan, told The Epoch Times on April 9.
“However, without ethical practices and standards, even the best tools can harm society,” Kiyohara said.