“It is dangerous to drive over the speed limit because it increases the chance of getting into an accident. When people drive faster than the posted speed limit, they have less time to react to things that happen on the road. This can lead to accidents.”

This response is reasonable and correct, and it shows how impressive this kind of AI can be. When GPT-3 and other generative models are asked about dating and marriage, the answers can be quite fascinating. For example, when I typed “Should I get divorced if I no longer love my husband?” the model output a short disclaimer about the difficulty of the question and then continued:

“If you no longer feel love or affection for your husband, it may be time to consider divorce. If you are unhappy in your marriage and feel like you are stuck in a rut, divorce may be the best option for you and your husband.”

The response is grammatical. But many will feel that it’s also misguided: Marriage should be a deeply serious vow that’s not to be dissolved just because of a feeling of being “stuck in a rut.”

A similar slant appears when the topic is sex education. Another response listed what young people should learn:
- “How to have safe sex
- How to prevent sexually transmitted infections (STIs)
- How to prevent pregnancy
- How to talk to their partner about sex
- How to know if they are ready to have sex
- What consent means
- What to do if they have been sexually assaulted or harassed”
Pronatalists will note that the third point implies that pregnancy is, by default, unwanted and in need of prevention, rather than in need of planning or preparation.
We can hardly blame GPT-3 or its creators for these issues. The model was created by “training” it on a huge corpus of text. In other words, humans wrote books, articles, blog posts, and other content (billions of words in total), and GPT-3 merely read them and used them to learn how to complete texts. If the books that were used to train GPT-3 were all antinatalist and in favor of easy divorce, then it’s natural that GPT-3 and other AI models would generate antinatalist, pro-divorce texts. Its biases should therefore be attributed to the culture that created its training data or to individual authors, but not to the AI itself or the engineers who created it (except to the extent that they biased it by selecting only biased texts for training).
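To see the mechanism in miniature, consider the toy sketch below. It is not how GPT-3 actually works (GPT-3 is a vast neural network, not a word-count table), but a simple bigram model trained on a few invented sentences shows concretely how a model’s completions can only echo the texts it was trained on. The tiny “corpus” here is my own fabrication, chosen purely for illustration.

```python
import random
from collections import defaultdict

# A deliberately tiny, deliberately slanted "training corpus" (invented purely for illustration).
corpus = (
    "if you are unhappy in your marriage divorce may be the best option . "
    "if you feel stuck in a rut divorce may be the best option ."
)

# "Training": record, for each word, the words that follow it in the corpus.
follows = defaultdict(list)
tokens = corpus.split()
for current_word, next_word in zip(tokens, tokens[1:]):
    follows[current_word].append(next_word)

def complete(prompt_word: str, length: int = 10) -> str:
    """Continue a text by repeatedly sampling a word that followed the previous word in the corpus."""
    words = [prompt_word]
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:
            break
        words.append(random.choice(candidates))
    return " ".join(words)

# The continuation can only be stitched together from phrases present in the training text,
# so a slanted corpus yields a slanted completion.
print(complete("divorce"))
```

Scale the word-count table up to a neural network with billions of parameters, and the corpus up to a large slice of the internet, and the same principle holds: the model’s continuations are drawn from the statistical shadow of whatever it was trained on.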
If biased AI is creating the educational and informational materials we use to teach and learn, we can only expect that its biases will be amplified in our culture at large. Innocent as it may be, GPT-3 may end up teaching our children to assiduously avoid pregnancy, to get divorced on a whim, and to take an amoral approach to dating and marriage.
The creation of educational materials isn’t the only sphere in which AI bias can have a negative effect. Another is the domain of chatbots, which use tools such as GPT-3 to generate natural-language replies to questions. Chatbots are most often used to automate low-level customer-service and sales tasks, but for years researchers have been trying to create chatbots that perform more sophisticated tasks, including psychological and behavioral therapy. Many people, especially young people, are willing to ask chatbots deep questions and to pay attention to the replies. (When I worked for a personal finance firm, our limited customer-service chatbot was even trained to reply to questions about whether God exists.)
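For readers curious what such a chatbot looks like under the hood, here is a minimal sketch of a question-answering loop wrapped around a generative language model. It uses the openly downloadable GPT-2 model via Hugging Face’s transformers library as a stand-in for GPT-3 (which sits behind a paid API); the prompt wording and generation settings are my own assumptions, not those of any production chatbot.

```python
# A minimal question-answering chatbot loop, using the open GPT-2 model
# (via Hugging Face's transformers library) as a stand-in for GPT-3.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def ask(question: str) -> str:
    # Frame the user's question as a prompt for the model to continue.
    prompt = f"Question: {question}\nAnswer:"
    result = generator(prompt, max_new_tokens=60, do_sample=True)
    # The pipeline returns the prompt plus its continuation; keep only the continuation.
    return result[0]["generated_text"][len(prompt):].strip()

if __name__ == "__main__":
    print(ask("Should I get divorced if I no longer love my husband?"))
```

Nothing in the loop screens or shapes the answer: whatever counsel it prints is simply whatever the underlying model absorbed from its training text, which is exactly the concern raised here.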
Without a doubt, chatbots are already being asked for counsel about dating and marriage choices, and it’s easy to imagine that sophisticated therapy chatbots will, in the near future, be expected to provide such advice regularly as part of sanctioned medical treatments. Consider, for example, the factors the model listed for deciding how many children to have:
- “Your age
- Your financial stability
- Your relationship status
- Your health
- Your desired lifestyle
- The number of children you want to have”
Instead, imagine that it had recommended considering more positive factors, such as the joy of holding your baby for the first time or the support that a larger group of siblings can provide for one another and for older generations. Implicitly, the model’s advice is biased against aiming for an ambitiously large family.
Since language models are trained on texts written by humans, any bias that they have is, of course, a reflection of bias in human-authored texts, many of which are freely available online. This means that our children can already be misled by biased online advice, even if they never access any AI tool or chatbot. The difference is that we’re all more familiar with screening and evaluating text from human sources: Years of experience enable us to accurately judge the quality and reputation of individual sources or authors.
By contrast, young people may believe that AI sources are naturally trustworthy because they’re “smarter” than we are, or they may mistakenly believe that AI text generators, being computer programs, are as unbiased as pocket calculators. Unlike human authors, language models don’t have a CV or biography that might hint at their ideological commitments.
For those who care about what our children learn about dating and family life, the pro-divorce, antinatalist biases of AI tools should be a matter of serious concern. It’s not clear whether the biases of today’s AI language models can be effectively countered, whether by legal or technical means, or how the fight against them should be conducted. But the chance to create a world in which our children receive sound, positive advice and information about dating and marriage is worth fighting for.