
C-Level View | Feature

Can Artificial Intelligence Expand Our Capacity for Human Learning?

A conversation with Gardner Campbell


As educators we've all experienced the rise of new technologies along with the process of sorting out how each one may impact our work and our lives. Is the coming of AI any different? If so, how can we approach AI wisely?

Here, Gardner Campbell, associate professor of English at Virginia Commonwealth University and a widely known technology thought leader, considers issues and concerns surrounding AI, identifies helpful resources, and offers some grounding thoughts on human learning as we embark on our AI journey in education.

Mary Grush: Does the shift to AI bring up radically new questions that we've never had to ask before, especially in the education context?

Gardner Campbell: The short answer is yes! But my answer requires some clarification. I'll try to provide a high-level overview here, but that means I'll probably be oversimplifying some things or raising questions that would need at least another Q&A to address.


Grush: I think most of our readers will understand your emphatic "yes" answer, but of course, please give us some background.

Campbell: Throughout history, general intelligence — meaning primarily the ability to reason, but viewed by many as also including qualities like imagination or creativity — has been considered the thing that distinguishes human beings as a species. Psychologists call this array of traits and capabilities "g" for short. It follows, then, that if computers can be said to be intelligent — to be described with values akin to reason, imagination, or creativity — then that "human" distinction collapses. And if that distinction collapses, any use of the word "human", any appellation tied to our uniqueness as a species, has to be re-examined.

Throughout history, general intelligence has been considered the thing that distinguishes human beings as a species.

The next question, then, is whether ChatGPT, Bing, Bard, Caktus.ai, Poe, et al. are intelligent in ways that involve reason, imagination, or creativity. My own view, shared by many experts in the field, is that they are not. They are not — or not yet — capable of what researchers call AGI, or artificial general intelligence, which is comparable to human intelligence in the ways I just mentioned: possessing reason, imagination, or creativity. That's why it's more accurate to call ChatGPT et al. "generative AI", as a way of distinguishing what these affordances can do from "AI" in the full sense, or AGI, which is not what they can do.

Grush: So if ChatGPT and other so-called "AI" platforms aren't really performing along the lines of human-variety general intelligence, why do we call them AI at all?

Campbell: Aside from sheer hype, I'd point to two main reasons. First, the large-language-model design of generative AI, while in many respects little more than autocomplete on steroids, is the first computing technology that stimulates, to this potentially dangerous degree, what cognitive psychologists call overattribution. To put it simply, when one interacts with one of these "bots", there is the strong impression, even the unshakable conviction at times, that one is talking to someone, someone who is in fact intelligent in the human sense.
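To make the "autocomplete on steroids" point concrete, here is a minimal sketch in Python of next-word prediction, the core task a large language model performs at vastly greater scale. The tiny training text and the `complete` function are illustrative assumptions, not anything drawn from ChatGPT or its kin; real systems predict tokens with transformer networks trained on enormous corpora, but the underlying job is the same: given the text so far, guess what comes next.

```python
from collections import Counter, defaultdict

# Toy "autocomplete": learn, from a tiny text, which word tends to
# follow each word, then extend a prompt one most-likely word at a time.
# (Illustrative only; a real LLM predicts tokens with a large neural
# network, but the task is the same: predict what comes next.)
training_text = (
    "the cat sat on the mat the cat ate the fish the dog sat on the rug"
).split()

follows = defaultdict(Counter)
for current, nxt in zip(training_text, training_text[1:]):
    follows[current][nxt] += 1  # count each observed word pair

def complete(prompt: str, length: int = 4) -> str:
    """Greedily append the most frequent next word, `length` times."""
    words = prompt.split()
    for _ in range(length):
        candidates = follows.get(words[-1])
        if not candidates:  # never saw this word; nothing to predict
            break
        words.append(max(candidates, key=candidates.get))
    return " ".join(words)

print(complete("the cat"))  # -> "the cat sat on the cat"
```

Scale that frequency table up to trillions of word pairs and swap the counts for a deep network, and you begin to see why the outputs feel fluent even though nothing in the mechanism reasons or intends, which is precisely what makes overattribution so tempting.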

Overattribution means more than just anthropomorphizing, say, our automobiles by giving them cute names. It means ascribing motivations, intentions, reason, creativity, and more to things that do not possess those attributes.

