Commentary
Several months back, Patrick Hrdlicka, a professor of chemistry at the University of Idaho, tried an experiment. He asked the world’s most famous artificial intelligence tool, ChatGPT, to produce a sample diversity statement.
Required as part of application packages for professorial appointments at Canadian universities, these statements ask candidates to affirm the principles of diversity, equity, and inclusion (DEI) in higher education. Those keenly committed to DEI regard the statements as indispensable. Cynics (including some of the same people) view them as make-work projects evocative of moral purity tests.
Given its reputation for producing passable prose, it’s not surprising that ChatGPT delivered a highly readable diversity document capturing all the familiar catchphrases: diversity is key to success, innovation depends on hearing every voice, DEI is a way of life, and more.
A supporter of DEI initiatives himself, Professor Hrdlicka concludes his brief blog post on the matter by wondering whether new ways of measuring DEI commitment may be necessary in an AI-saturated world. But he stops well short of asking whether the easy reproduction of DEI talking points indicates a growing lack of nuance in the very discourse increasingly dominating the university landscape.
Concern over AI saturation in higher education has become ubiquitous. AI boosters celebrate the new technology for its learning applications. Anti-AI “Cold Warriors” see it as one more devastating blow to higher education. Both sides are trying to determine appropriate methods of evaluation in a world where the production of AI-generated assignments by stressed, confused, or uninterested students may soon expand from a trickle to a flood.
But while the debate over AI use in higher education has practical urgency, it often misses the extent to which AI is, more fundamentally, a symptom of well-established trends in Canadian education: the largely unreflective accommodation of new technologies (even when they demonstrably undermine habits of learning) and the growing bureaucratic newspeak shaping the culture of campus conversation.
The unreflective accommodation of modern technologies is well attested by mounting evidence of the move to embrace AI in higher education. This development follows the widespread and increasingly unchallenged assimilation of smartphones and laptops into the classroom, despite the preponderance of evidence highlighting the deleterious consequences for learning.
A stream of articles has now appeared in University Affairs (Canada’s flagship journal of higher education) advocating the thoroughgoing incorporation of AI into teaching, class preparation, and more. Despite the evidence against these forms of technological mediation, the same institutions that allied themselves with the federal and provincial governments’ authoritarian response to COVID-19, consistently exhorting faculty, staff, and students to “follow the science,” have been almost entirely unwilling to do the same regarding technologies on campus.
The growing bureaucratic technocracy of the modern university compounds the problem. The narrowing of campus debate has been largely confirmed by the Macdonald-Laurier Institute’s recent study on viewpoint diversity in higher education, as well as by the Heterodox Academy’s annual student surveys on campus self-censorship.
The administrative university’s assertion of low-resolution political commitments that increasingly determine pedagogical priorities creates a climate in which the work of education easily feels oriented, implicitly or explicitly, to the production of activism well-suited to an age of online rage.
Indeed, interventions like ChatGPT are adept at offering what the modern university often demands (research keyed primarily to patterns of oppression) and at reducing thinking to the passive consumption of prepackaged ideas, outsourcing human memory to search engines and human thinking to chatbots.
Perhaps more worrisome, current technological mediations induce (almost imperceptibly) patterns of scanning that easily become a normative mode of attending not just to online content, but to books and even to people. Such habits are familiar to anyone who has scrolled on an iPhone or surfed the web, their attention neither wholly present nor entirely absent.
In academia, the end result is clear enough: a proliferation of what one scholar refers to as the “already formulaic and essentially meaningless” deployment of language, abundantly evident in campus branding initiatives and sloganeering.
University administrations now face the urgent question of whether their complicity in allowing habits of attention, curation of information, and cultural production to be mediated by a handful of technology corporations will eventually compromise their ability to attract students and convince their families that the investment is worth it. Though the credentialling power of higher education continues to carry significant weight, there is reason to be skeptical of institutions whose pedagogical promises are undermined by practices both unsupportive of deep learning and politically over-determined.
If humanities departments are significant places for the cultivation of wisdom and reflection in a world of speed and distraction, Canadians may be forgiven for wondering where wisdom is to be found if even these spaces become complicit in larger technological developments that, as the writer Matthew B. Crawford notes, reduce us to “raw material for a kind of social cybernetics.”