Reclaiming Intelligence: Why “Artificial Intelligence” Needs Reframing

Hopare’s “Paréidolie” installed at Terreiro do Paço in Lisbon, Portugal (Photo by Ana-Paula Correia).

A question raised by a colleague during a dissertation oral examination, combined with my own reflections, prompted me to think more carefully about what “artificial intelligence” actually means. In recent years, the term has been used widely and often loosely, with little effort to establish a shared understanding of what it signifies. When foundational terms are assumed rather than examined, confusion follows quickly. For this reason, I believe such terms deserve careful, deliberate attention.

A second impetus for this essay is the term’s overuse. “Artificial intelligence” appears everywhere today: in marketing, in media, and in the ubiquitous “AI-powered” label attached to everything from spam filters to photo editors. This inflation of meaning is not merely linguistic excess. It carries real consequences. In education, those consequences are amplified. Introducing the term into learning contexts without conceptual clarity risks miseducating the very learners we ask to use these tools responsibly.

The Problem of Naming: Rethinking “Artificial Intelligence” in Education

If educators are to guide learners in the safe, ethical, and effective use of these systems, a clear understanding and honest discussion of what “artificial intelligence” is, and, equally important, what it is not, becomes indispensable. Without this work, we risk building pedagogical practices on unstable conceptual ground.

In educational contexts, the stakes of imprecise language are particularly high. When we describe a tool as “intelligent,” we implicitly suggest that it can teach, that it understands the learner, that it exercises judgment, and that it possesses some form of pedagogical authority. Each of these implications is highly debatable.

As Pierre Lévy (2023, 2025a, 2025b, 2026) argues, intelligence emerges from a dynamic interplay between collective memory, individual cognition, and dialogical exchange. It follows, then, that the richer a learner’s own knowledge base, the more meaningfully they can interact with computational tools. Human intelligence is therefore the prerequisite for productive use of these systems, not their product. To call the tool itself “intelligent” inverts this relationship and, in doing so, misrepresents where the real cognitive work takes place.

This leads to a central question: What does “artificial intelligence” actually mean, and is there more precise terminology that better reflects its nature? To answer this, we must begin with the origins of the term itself.

Where Everything Begins: The Meaning of Intelligence

The word intelligence derives from the Latin intelligentia, itself rooted in intelligere, meaning “to understand” or “to perceive.” This term combines inter (“between”) and legere (“to choose” or “to read”), suggesting that intelligence is fundamentally about discerning between things, making connections, and interpreting meaning.

Historically, this concept was never applied to tools or mechanisms. Intelligentia was reserved for conscious, reasoning beings, first divine and later human. It implied awareness, intentionality, and judgment. When we appropriate this term to describe machines, we are not making a neutral linguistic move. We are importing centuries of philosophical meaning into a contemporary domain where those assumptions may not apply.

Pierre Lévy (2023) provides a critical lens for examining this shift. He argues that the expression “artificial intelligence” is inherently misleading because it suggests autonomy and agency where none exists. The term carries connotations of consciousness and independent reasoning that machines do not possess.

Kate Crawford (2021) extends this critique by challenging both components of the term. She argues that these systems are neither truly “artificial,” since they depend on material resources extracted from the earth and human labor, nor truly “intelligent,” since they rely entirely on human-generated data. Her analysis reframes “artificial intelligence” as a socio-technical system grounded in human activity rather than machine autonomy.

Both Lévy and Crawford arrive at the same critical insight from different directions: what we call “artificial intelligence” is fundamentally rooted in human intelligence, and the term, as commonly used, obscures precisely that fact.

A Brief History of the Term “Artificial Intelligence”

The term “artificial intelligence” was coined by John McCarthy in 1955, in the proposal he co-authored with Marvin Minsky, Nathaniel Rochester, and Claude Shannon to organize what would become the Dartmouth Conference, a summer workshop held at Dartmouth College in Hanover, New Hampshire, in 1956. At the time, the term was aspirational. It captured the possibility that machines might one day perform tasks that, in a human, would be taken as evidence of intelligence.

Early research focused on symbolic reasoning, rule-based systems, and attempts to replicate human cognition. These efforts assumed that intelligence could be formalized and reproduced through logical operations. However, progress proved more difficult than anticipated, leading to the periods of reduced funding and interest now known as “AI winters.”

From the 1980s onward, the field shifted toward statistical methods and machine learning. Rather than attempting to replicate reasoning directly, researchers focused on pattern recognition and data-driven approaches. This shift intensified in the 2010s with advances in neural networks and large-scale data processing.

The recent surge of interest, particularly since 2022, is largely due to the public release of large language models and generative systems: computational tools trained on vast amounts of human-generated text and data, capable of producing fluent written responses, images, and other content that many users find remarkably human-like. These tools drove widespread adoption and a renewed cultural fascination with “artificial intelligence.” However, as the machine learning researcher Michael I. Jordan noted in discussions at the 2025 “AI, Science, and Society” conference in Palaiseau, France, these systems are fundamentally extensions of earlier predictive architectures rather than a realization of human-like intelligence.

This historical perspective reveals an important point: the meaning of “artificial intelligence” has shifted significantly over time. What began as a theoretical aspiration has become a broad and ambiguous label applied to a wide range of technologies.

What “Artificial Intelligence” Actually Means and Does Not Mean

Lévy (2023, 2025a) offers a compelling reframing. Rather than an autonomous intelligence, contemporary “artificial intelligence” can be understood as a statistical compression of collective digital memory. It mobilizes vast amounts of human-generated data and makes it accessible in new ways.

In this sense, it functions as an interface between collective intelligence and individual users. It does not think, understand, or intend. It retrieves, recombines, and predicts.
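For readers who want to see how fluent-sounding output can emerge from pure statistics, the deliberately naive sketch below makes the point concrete. It is purely illustrative (the toy corpus and every name in it are invented for this example, and real large language models are vastly more sophisticated), but the principle is the same: the program composes plausible-looking sentences solely by recombining fragments of human-written text, with no understanding of any kind.

```python
import random
from collections import defaultdict

# A minimal, illustrative sketch (the corpus and all names here are invented
# for this example): a bigram model that produces text purely by counting
# which word follows which in human-written sentences. It retrieves,
# recombines, and predicts; it does not understand.

corpus = (
    "learners build knowledge through dialogue . "
    "teachers guide learners through questions . "
    "learners interpret knowledge through experience ."
).split()

# Record, for every word, the words that have followed it in the corpus.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

def generate(start, length=8):
    """Emit a fluent-looking word sequence by sampling recorded continuations."""
    words = [start]
    for _ in range(length):
        followers = transitions.get(words[-1])
        if not followers:  # no recorded continuation: the "memory" runs out
            break
        words.append(random.choice(followers))
    return " ".join(words)

print(generate("learners"))
# Possible output: "learners build knowledge through experience ." -- plausible
# prose, produced entirely by redistributing human-authored fragments.
```

Every word the sketch “generates” was first written by a person; the program only redistributes it. Scaled up by many orders of magnitude, this is the sense in which Lévy’s “statistical compression of collective digital memory” is a more honest description than “intelligence.”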

This perspective aligns with Hubert Dreyfus’s (1992) phenomenological critique of computational intelligence. Dreyfus argued that human intelligence is embodied. It emerges from lived experience, physical presence, and contextual awareness. Humans do not simply process information; they interpret it within a meaningful world shaped by experience.

Dreyfus (1992) distinguishes between “knowing-that” and “knowing-how.” While machines can process explicit rules and patterns, they lack the intuitive, situational understanding that characterizes human expertise. This limitation remains evident in contemporary “artificial intelligence” systems.

Thus, both Lévy and Dreyfus converge on a shared conclusion: what we call “artificial intelligence” lacks the essential features of intelligence as traditionally understood, including consciousness, embodiment, intentionality, and meaning-making.

The Problem with “Co-Intelligence” and Similar Framings

Recent terminology, such as “co-intelligence” and “co-creation,” suggests a partnership between humans and machines. While appealing, this framing introduces conceptual problems.

The prefix co- implies symmetry. It suggests equivalence between human and machine contributions. However, this equivalence does not hold. Humans bring experience, emotion, ethical judgment, and intentionality. Machines operate through statistical pattern recognition.

Lévy (2025b) emphasizes that these systems function as intermediaries between collective and individual intelligence. They do not originate meaning; they redistribute it. The human remains both the source and the interpreter.

Sherry Turkle (2026) adds another dimension to this discussion through her critique of what she calls “artificial intimacy.” She argues that conversational systems can simulate empathy without experiencing it, and that humans, in turn, may attribute understanding to these systems where none actually exists. This confusion, she warns, carries a social cost: as people increasingly turn to machines for connection, genuine human relationships risk being quietly eroded, giving way to isolation and loneliness. Her central question is particularly striking: “What do we forget when we talk to machines?” Her answer is equally direct: we risk forgetting what is idiosyncratic about being human.

Where Turkle draws attention to what is lost in human terms, Kate Crawford (2021) shifts the focus to what is concealed in material terms. Her analysis reveals that the very infrastructure of “artificial intelligence” depends on human input at every stage, from the creation of training data to the labor of model development and refinement. Far from autonomous, these systems are built upon and sustained by human work that largely remains invisible. To frame them as equal partners in any collaborative endeavor, whether creative, intellectual, or educational, is not only philosophically inaccurate but also politically misleading, as it obscures the profound dependency at the heart of every “artificial intelligence” system.

In summary, the prefix co- implies symmetry and mutual agency, that is, the equal contribution of two parties toward a shared goal. As the scholars examined in this essay make clear, human intelligence and machine processing are not equivalent and cannot be treated as such.

Humans bring embodied experience, emotional depth, moral agency, consciousness, and the capacity to generate meaning. These systems bring statistical pattern recognition, operating without consciousness, intention, or understanding. To describe this relationship as a partnership is not a harmless metaphor. It normalizes a fiction that obscures where genuine intelligence resides and gradually erodes our sense of responsibility for the knowledge we produce, the decisions we make, and the learners we educate.

Proposing Alternative Terms for “Artificial Intelligence”

If language shapes understanding, then reconsidering terminology becomes an ethical imperative. Instead of asking “How do we teach with ‘artificial intelligence’?” we might ask, “How do we teach learners to engage critically with statistical pattern recognition systems?”

Several alternative terms may offer greater precision:

  • Collective Memory Interface: Drawing directly from Lévy (2023), this term emphasizes that “artificial intelligence” systems provide access to accumulated human knowledge rather than generate intelligence of their own. In an educational context, a collective memory interface is a tool that makes the recorded knowledge of humanity accessible, but whose meaning is always activated by the living mind of the learner. The term makes no false claims to autonomy, consciousness, or intelligence.

  • Cognitive Amplifier: This framing highlights augmentation rather than replacement. It aligns with Lévy’s (2025a) argument that these systems synergistically augment both individual and collective intelligence without replacing either. In educational contexts, it keeps the teacher and learner at the center and the tool in its proper supporting role, resisting what Turkle (2026) identifies as the drift toward tools that substitute for human relationships rather than enhance them.

  • Knowledge Assistant: This term places agency firmly with the learner. Drawing on Crawford (2021), who dismantles the myth of these systems as autonomous, and Dreyfus (1992), who reminds us that genuine intelligence is situated, embodied, and contextual, the term captures what a learner actually does with these tools: navigate a vast landscape of human knowledge. The system assists, but the navigation and the meaning-making remain entirely human activities.

  • Thinking Support Tool: Inspired by Dreyfus (1992) and Turkle (2026), this term underlines the supplementary nature of these systems. Rather than framing the tool as a partner or collaborator, it positions it plainly as a support for human thinking, one that depends on the learner’s own cognitive engagement to be meaningful. The term directly addresses Turkle’s concern that learners may increasingly outsource not just tasks but thinking itself to machines. By naming the tool a support rather than an intelligence, it keeps the responsibility for thought where it belongs: with the human learner.

None of these alternatives is perfect, and no single term is likely to resolve a debate as complex as this one. But they move in a meaningful direction: toward conceptual clarity, critical reflection, and a more honest account of what these tools are and what they are not. In educational contexts above all, the language we use to introduce and discuss these systems matters. It shapes how learners understand them, how educators frame them, and how institutions govern them. Replacing “artificial intelligence” with more precise terminology is not a merely academic exercise. It is an invitation to think carefully about the tools we have adopted, the assumptions embedded in the words we use to describe them, and the kind of education we wish to build around them.

Conclusion: Toward a More Honest Terminology

Language is never neutral. The words we use to describe technology shape how we understand it, how we govern it, and how we integrate it into education.

The term “artificial intelligence,” rooted in the deeply human concept of intelligere, carries assumptions that distort our understanding of these systems. Despite their practical success, there remains a gap between what these systems do and what the term implies.

The scholars discussed here converge on a shared insight. Lévy reframes these systems as interfaces for collective intelligence. Dreyfus reminds us that true intelligence is embodied. Turkle warns against confusing simulation with experience. Crawford exposes the material and human foundations of these technologies.

Together, they call for greater accuracy and intellectual honesty.

This need is especially critical in education. When we describe these systems as “intelligent,” we risk granting them authority they do not possess. We risk encouraging learners to outsource thinking rather than develop it.

The goal is not to diminish these tools. They are powerful and valuable. The goal is to describe them accurately, in a way that preserves human agency, responsibility, and understanding.

The terminology we choose should not attribute to these tools a status they do not possess. It should instead clarify their function, acknowledge their limitations, and reaffirm what no computational system can replace: the living, thinking, feeling human being at the center of every meaningful act of learning.

References

Please cite the content of this blog as:

Correia, A.-P. (2026, April 25). Reclaiming Intelligence: Why “Artificial Intelligence” Needs Reframing [Blog post]. Ana-Paula Correia’s Blog. https://www.ana-paulacorreia.com/blog/reclaiming-intelligence-why-artificial-intelligence-needs-reframing
