‘Dehumanizing’ Tech: New AI System Converts Thoughts Into Text

An AI robot titled "Alter 3: Offloaded Agency" is pictured during a photocall to promote the forthcoming exhibition "AI: More than Human" at the Barbican Centre in London on May 15, 2019. Ben Stansall/AFP via Getty Images
Naveen Athrappully

Scientists have developed an artificial intelligence system capable of reading people’s thoughts by measuring brain activity and converting it into text—a development that triggers worries about privacy and freedom.

The study, published in the journal Nature Neuroscience on May 1, used a transformer model, similar to the one that powers OpenAI’s ChatGPT artificial intelligence chatbot, to decode people’s thoughts. A transformer model is a neural network that learns context, and thus meaning. Test subjects first listened to hours of podcasts while their brain activity was recorded in an fMRI scanner, and the researchers then trained the decoder on those recordings. Later, the subjects listened to a new story or imagined telling a story, and the decoder generated corresponding text by analyzing their brain activity.
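The study itself isn’t reproduced here, but the basic decoding loop is easy to sketch. The toy Python below uses hypothetical names, tiny dimensions, and a made-up linear “encoding model” standing in for the real learned components; it shows the general idea only: predict what brain response each candidate word sequence should evoke, and keep, via beam search, the sequences whose predictions best match the actual recording.

```python
# A minimal, illustrative sketch of the decoding loop described above.
# All names and shapes are hypothetical simplifications, not the authors'
# actual code: a random linear "encoding model" maps text features to
# predicted fMRI responses, and a beam search keeps the candidate word
# sequences whose predictions best match the recorded response.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["the", "dog", "ran", "home", "fast"]   # toy vocabulary
DIM, VOXELS = 8, 16                             # toy feature/voxel sizes

# Hypothetical stand-ins for components learned from the fMRI data.
embed = {w: rng.normal(size=DIM) for w in VOCAB}   # semantic word features
W = rng.normal(size=(DIM, VOXELS))                 # encoding-model weights

def predict_response(words):
    """Predicted fMRI response for a word sequence (mean of word features)."""
    feats = np.mean([embed[w] for w in words], axis=0)
    return feats @ W

def decode(recorded, n_words=4, beam_width=3):
    """Extend candidates word by word; keep the sequences whose predicted
    brain responses are closest to the recorded one."""
    beams = [([], 0.0)]
    for _ in range(n_words):
        candidates = []
        for words, _ in beams:
            for w in VOCAB:
                seq = words + [w]
                err = np.linalg.norm(predict_response(seq) - recorded)
                candidates.append((seq, err))
        beams = sorted(candidates, key=lambda c: c[1])[:beam_width]
    return beams[0][0]

# Simulate a recording evoked by a "true" sequence, then decode it.
true_seq = ["the", "dog", "ran", "home"]
recording = predict_response(true_seq) + rng.normal(scale=0.1, size=VOXELS)
print(decode(recording))
```

In the actual system, a transformer language model proposes likely continuations and the semantic features are learned from the subjects’ recordings; the toy above swaps both for random matrices purely to keep the sketch self-contained.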

Researchers trained the decoders on three subjects. “Because our decoder represents language using semantic features rather than motor or auditory features, the decoder predictions should capture the meaning of the stimuli,” said the study.

“Results show that the decoded word sequences captured not only the meaning of the stimuli but often even exact words and phrases.”

Technologies capable of reading people’s thoughts can be beneficial for individuals who have lost their ability to communicate physically. However, they raise concerns about privacy and loss of freedom.

In a March 17 interview with MIT Technology Review, Nita Farahany, a futurist and legal ethicist at Duke University in Durham, North Carolina, warned that brain data collection can be used by governments and other powers for nefarious purposes.

“An authoritarian government having access to it could use it to try to identify people who don’t show political adherence, for example. That’s a pretty quick and serious misuse of the data. Or trying to identify people who are neuroatypical, and discriminate against or segregate them,” Farahany said.

In the workplace, the technology could be used to “dehumanize” employees by forcing them to submit to neural surveillance.

“The problem comes if it’s used as a mandatory tool, and employers gather data to make decisions about hiring, firing, and promotions. They turn it into a kind of productivity score. Then I think it becomes really insidious and problematic. It undermines trust … and can make the workplace dehumanizing.”

Non-Invasive Tech, Addressing Privacy Issues

Unlike other language-decoding systems currently in development, the one described in the May 1 study is non-invasive and does not require subjects to receive surgical implants.

Alex Huth, an assistant professor of neuroscience and computer science at UT Austin who led the study, called the results a “real leap forward” for non-invasive brain readings.

“We’re getting the model to decode continuous language for extended periods of time with complicated ideas,” said Huth, according to a May 1 news release. The decoded results are not word-for-word transcripts. Instead, they capture the gist of what a subject is thinking.

To address worries regarding privacy, the researchers tested whether the decoders could be trained without a person’s cooperation. The team tried to decode perceived speech from test subjects using decoders trained on data from other subjects.

“Decoders trained on cross-subject data performed barely above chance and significantly worse than decoders trained on within-subject data. This suggests that subject cooperation remains necessary for decoder training,” the study said.
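A toy version of that cross-subject check makes the intuition concrete. In the sketch below (again with hypothetical, randomly initialized stand-ins for the learned mappings), each subject has a different feature-to-voxel mapping, so a decoder fit to one subject recovers almost nothing from another subject’s recording:

```python
# A toy illustration of the cross-subject test described above (hypothetical
# setup, not the study's code): each "subject" has a different mapping from
# semantic features to voxel responses, so a model fit to one subject's
# mapping should do poorly on another subject's recordings.
import numpy as np

rng = np.random.default_rng(1)
DIM, VOXELS = 8, 16

W_subject_a = rng.normal(size=(DIM, VOXELS))  # subject A's learned mapping
W_subject_b = rng.normal(size=(DIM, VOXELS))  # subject B's mapping differs

stimulus = rng.normal(size=DIM)               # features of a heard sentence
recording_b = stimulus @ W_subject_b          # what subject B's scan shows

# Decode B's recording with A's model vs. B's own model: the within-subject
# fit recovers the stimulus; the cross-subject fit is near chance.
est_cross = np.linalg.lstsq(W_subject_a.T, recording_b, rcond=None)[0]
est_within = np.linalg.lstsq(W_subject_b.T, recording_b, rcond=None)[0]

def similarity(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print("cross-subject similarity: ", similarity(est_cross, stimulus))
print("within-subject similarity:", similarity(est_within, stimulus))
```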

Researchers also confirmed that a decoder trained with a person’s cooperation cannot read that individual’s thoughts if the person consciously resists. Tactics such as thinking about animals or quietly imagining stories can prevent the system from reading thoughts.

“A person needs to spend up to 15 hours lying in an MRI scanner, being perfectly still, and paying good attention to stories that they’re listening to before this really works well on them,” Huth said.

Jerry Tang, a co-author of the study, believes that while the technology is in its early stages, governments should seek to enact policies that protect people and their privacy. “Regulating what these devices can be used for is also very important.”

Naveen Athrappully is a news reporter covering business and world events at The Epoch Times.