When the dating-simulation app Invisible Boyfriend was first released, many users assumed they were talking to a computer program. After all, the app has users design the boyfriend’s personal attributes much as one would build an avatar in a video game. But users quickly figured out that the responses were too sophisticated for a bot.
The app later revealed that the boyfriends were not bots but real people working for the freelancing company Crowdsource, and it now discloses that information in an inconspicuous FAQ on its website.
“That’s the beauty of the service: you get to practice texting with real humans without worrying about them judging or rejecting you,” the FAQ states.
The turn to human labor wasn’t born of a Luddite’s aversion to technology but of a pragmatist’s frustration with AI’s stupidity. The company experimented with bots, only to watch them crash and burn at basic human interaction.
“You would text, ‘I’m at the movies,’ and it would respond, ‘I love dogs,’” the company’s co-founder Kyle Tabor told Fusion.
Not only were the invisible boyfriends not bots; many, perhaps most, were not even men. The person authorized by Crowdsource to discuss the experience of working as an invisible boyfriend is Laura Harper, a 44-year-old widow who works as a freelance writer in Houston.
From an economic perspective, it’s not surprising for women to work as invisible boyfriends. The freelance industry tends to skew female; 55 percent of the workers on Amazon’s Mechanical Turk, another crowdsourcing platform, are women.
The existence of Invisible Boyfriend raises a host of concerns: about the state of dating, the nature of romance, the privacy risks of disclosing personal information to strangers, and, least discussed of all, the abysmal progress of artificial intelligence (AI).
Nearly 20 years after Deep Blue beat world champion Garry Kasparov at chess, AI has made little progress in emulating the human capacity to think rather than merely perform brute calculations. When it does try to mimic human behavior, it does so in a way that is totally alien to actual thought processes.
In 2011, the computer program Watson emerged victorious against former champions on the quiz show “Jeopardy!” Rather than demonstrating machine intelligence, its performance highlighted just how far genuine AI has to go. In one round, given the category “US Cities” and the clue “Its largest airport is named for a World War II hero,” Watson answered “Toronto.”
Watson’s data-crunching could muddle along in the low-complexity arena of trivia fact-finding, but it would flounder in high-stakes activities like dating, where a single misstep can ruin an entire conversation.
Luminaries like Bill Gates, Elon Musk, and Stephen Hawking have all recently voiced concerns about the existential risk that AI poses to humanity. Musk said that AI was “potentially more dangerous than nukes,” and Gates said he doesn’t “understand why some people are not concerned.”
To answer Gates’s question, I would offer the freelance labor behind Invisible Boyfriend as Exhibit A: when computers can’t simulate even basic romantic affection with any degree of competence, it seems far-fetched to worry that they’ll take over the world.
In the dystopian sci-fi flick “Soylent Green,” environmental degradation and overpopulation have created a perpetual scarcity of food, requiring the direct recycling of human remains into the food supply: as the famous line reveals, “Soylent Green is people!” The inhabitants of that alternate timeline are probably not worried about an obesity epidemic; likewise, in a world where fake boyfriends are made of people, few will worry about Skynet.