AI Will Eventually Be ‘As Good a Tutor as Any Human’: Bill Gates

Bill Gates speaks onstage at the TIME100 Summit 2022 in New York City on June 7, 2022. Jemal Countess/Getty Images for TIME
Samantha Flom

Artificial intelligence (AI) may not be advanced enough to replace teachers now, but according to Bill Gates, that time is not far off.

“The AIs will get to that ability to be as good a tutor as any human ever could,” the Microsoft co-founder said at the ASU+GSV Summit in San Diego on April 18.

“We have enough sample sets of those things being done well that the training can be done,” he added. “So, I’d say that is a very worthwhile milestone, is to engage in a dialogue where you’re helping to understand what they’re missing. And we’re not that far.”

Gates’ comments came within the context of a larger conversation about the future role of technology in education with Jessie Woolley-Wilson, CEO of DreamBox Learning.

“AI has, ever since the focus became machine learning, it’s achieved some unbelievable milestones,” Gates told Woolley-Wilson. “You know, it can listen to speech and recognize speech better than humans. It can recognize images and videos better than humans. The area that it was essentially useless in was in reading and writing. You could not take, say, a biology textbook and read it and pass the AP exam.”

However, Gates noted that where previous iterations of AI were incapable of replicating human understanding, emerging systems such as Microsoft-backed OpenAI’s GPT-4 had begun to bridge the gap.

“The breakthrough we have now, which is very recent, is more to do with reading and writing—this incredible fluency to say, ‘Write a letter … like Einstein or Shakespeare would have written this thing,’ and to be at least 80 percent of the time very stunned by it.”

Threat to Humanity

In truth, industry experts have been not only stunned but, in many cases, unnerved by recent advances in AI, fearing the technology’s ripple effects on society.

“AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs,” warns a March 22 open letter that has gathered more than 27,500 signatures, dozens of AI experts among them.

Accusing AI creators of engaging in an “out-of-control race” to develop “ever more powerful digital minds that no one—not even their creators—can understand, predict, or reliably control,” the letter’s signatories called for an immediate six-month pause in the training of more advanced AI systems as society grapples with how to ensure their safety.

One of those signatories was Tesla CEO Elon Musk, another tech tycoon who has been outspoken about his concerns regarding the capabilities of AI.

“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production in the sense that it has the potential—however small one may regard that probability, but it is non-trivial—it has the potential of civilizational destruction,” Musk told Fox News’ Tucker Carlson in a recent interview.

Gates, however, has pushed back against such claims, holding that a global pause in AI development would be both difficult and impractical and that the challenges the technology presents could be managed.

“Clearly, there’s huge benefits to these things. … What we need to do is identify the tricky areas,” he told Reuters on April 4.

Implementation

While Gates expressed optimism at the ASU+GSV Summit over the potential benefits of AI in education, he added that current systems will need more work before those benefits can take shape.

“It [AI] doesn’t have any sense of how hard to work on something,” he noted. “It spends exactly the same amount of computation on every token it generates, and it doesn’t know that a problem’s important, unimportant. And … that sort of meta-model of reasoning is what, over the next year, the leading AI implementers will be adding.”

Gates also stressed that his philanthropic interest in AI is in ensuring the technology is used “on an equitable basis” not only for the purposes of education but also in the medical field.

“Over the last six months, I’ve been to so many long meetings where we brainstorm, ‘OK, what does this mean for drug discovery for diseases of the poor? What does this mean for health consultations in Africa, where most people live their entire life without meeting a doctor?’ … So many different conditions simply can’t be diagnosed, and we can revolutionize that.”

Samantha Flom is a reporter for The Epoch Times covering U.S. politics and news. A graduate of Syracuse University, she has a background in journalism and nonprofit communications. Contact her at [email protected].