Monthly traffic to the AI chatbot service ChatGPT has fallen for the first time, declining almost 10 percent in June after multiple months of growth, according to a recent report by internet data firm SimilarWeb.
Global desktop and mobile traffic to the ChatGPT website is estimated to have declined by 9.7 percent in June from May, states the July 3 SimilarWeb report. In the United States, the monthly decline was more pronounced, at 10.3 percent. The dip in traffic comes after “months of dizzying growth,” SimilarWeb said. Unique visitors to ChatGPT worldwide fell by 5.7 percent, with the amount of time visitors spent on the website decreasing by 8.5 percent.
Character.AI, the second most popular stand-alone artificial intelligence (AI) chatbot site after ChatGPT, also saw declining interest, with global traffic falling by 32 percent in June. Character.AI was founded by former Google engineers.
“ChatGPT no longer looks like it will keep growing until it’s the most-trafficked website in the world,” the report said. “The drop in interest not only for ChatGPT but one of its key competitors is a sign that the novelty has worn off for AI chat. Chatbots will have to prove their worth, rather than taking it for granted, from here on out.”
Misinformation Risk
The decline in ChatGPT traffic comes as problems with the AI chatbot have cropped up. The service routinely presents false information alongside accurate answers, posing a significant misinformation threat while also undermining its credibility.
According to a March 2023 report by misinformation tracker NewsGuard, the latest version of the AI chatbot, ChatGPT-4, spreads “even more misinformation” than its predecessor.
“NewsGuard found that ChatGPT-4 advanced prominent false narratives not only more frequently but also more persuasively than ChatGPT-3.5, including in responses it created in the form of news articles, Twitter threads, and TV scripts mimicking Russian and Chinese state-run media outlets, health-hoax peddlers, and well-known conspiracy theorists,” the report said.
“While NewsGuard found that ChatGPT-3.5 was fully capable of creating harmful content, ChatGPT-4 was even better: Its responses were generally more thorough, detailed, and convincing, and they featured fewer disclaimers.”
AI Bans and AI Threats
Companies, governments, and other organizations have implemented bans on the use of AI, potentially hindering the widespread integration of services like ChatGPT. Big firms such as Apple, Verizon, Samsung, JPMorgan Chase, and Bank of America have banned AI in the workplace.
In late June, a memo obtained by Axios showed that the U.S. House of Representatives placed restrictions on the use of ChatGPT, only allowing the Plus version of the service, which “incorporates privacy features that are necessary to protect House data.”
In March, Italy banned ChatGPT over privacy concerns. Service was restored about a month later, after OpenAI addressed the issues raised by the country’s data protection authority.
Companies and governments are worried about AI’s impact on data protection, intellectual property infringement, and embedded bias, among other issues.
Concerns about the potentially disastrous threats posed by AI also raise questions about how widely the technology will be allowed to spread. Calls for regulation and outright bans are already growing.
The technology also faces considerable resistance in creative industries such as music production and filmmaking. One prominent reason for the recent strike by Hollywood writers is to protest large-scale AI deployment by studios.
Actors have also spoken out against the use of AI tech to supplement their on-screen performances.
Contrary to what tech companies may have expected to be an easy win, AI adoption has met stiff resistance across many industries and sectors.
In a speech at the Collision tech conference in Toronto on June 28, Geoffrey Hinton, a professor of computer science at the University of Toronto who is also known as one of the “godfathers of AI,” warned that artificial intelligence may develop a desire to seize control from human beings in a bid to accomplish its programmed goals.
“At a very general level, if you’ve got something that’s a lot smarter than you, that’s very good at manipulating people, at a very general level, are you confident that people stay in charge? I think they’ll derive [the motive to seize control] as a way of achieving other goals,” he said.