Building AI Literacy in K–12 and Higher Education: Where Do We Stand Now?
Photo by Jannes Glas on Unsplash
Imagine the dawn of the 20th century, when the world stood on the brink of a technological revolution: the invention of the automobile. Roads were being paved, cars were rolling off assembly lines, and society was entering a bold new era. But with this groundbreaking innovation came a host of new challenges. People had to learn how to cross streets safely, interpret traffic signs, and most importantly, operate these new machines responsibly. It was a time of immense transformation that demanded a new kind of education focused on safety, efficiency, and human advancement.
History, it seems, is repeating itself with the rise of generative artificial intelligence (AI).
Just as the automobile reshaped society, generative AI is reshaping society, and education in particular. Students now use it to brainstorm essays, teachers use it to personalize instruction, and organizations are racing to adapt. Much like we educated an entire population to use cars in safe, ethical, and responsible ways, we must now do the same with AI. This brings us to a central theme: AI literacy.
UNESCO (2025, para. 2) defines literacy as “a lifelong continuum of learning and proficiency in reading, writing, and numeracy skills that increasingly include digital fluency, media literacy, education for sustainable development, global citizenship, and job-specific competencies.”
In the last year, the call for AI literacy has evolved from a future aspiration to a present-day necessity. We find ourselves in a moment of rapid technological integration, and even faster questioning. What does it mean to be literate in a world where algorithms write, recommend, and decide alongside us? And how do we prepare students to navigate this world with agency, responsibility, and understanding?
AI is no longer happening around education; it is happening within it. The emergence of generative AI demands a new kind of literacy. We are no longer debating whether to teach AI literacy; we’re asking how, to whom, and toward what end.
Defining AI Literacy: Still in Flux
Despite growing interest, there is no universally accepted definition of AI literacy that works across all educational contexts. Broadly, scholars such as Long and Magerko (2020, p. 2) describe it as “a set of competencies that enables individuals to critically evaluate AI technologies, communicate and collaborate with AI systems, and use AI tools in informed ways.” However, how institutions interpret and apply this definition varies, shaped by whether the focus is on technical skill development, ethical reasoning, or practical tool fluency.
In K–12 education, AI literacy is frequently embedded within broader STEM initiatives, with emerging programs emphasizing computational thinking, data fluency, and ethical awareness. In higher education, particularly within non-STEM disciplines, AI literacy tends to center more on fostering awareness and critical reflection than on teaching coding or model development.
Core elements of AI literacy often involve:
Understanding what AI is—and what it is not
Recognizing algorithmic bias and navigating ethical dilemmas
Critically evaluating AI-generated content and outputs
This multidimensional approach is both a strength and a challenge. It allows AI literacy efforts to be context-sensitive and responsive to different learner needs. At the same time, its complexity can make integration into curricula more difficult to design and implement.
Not Just About Tech Skills
When we talk about AI literacy, the conversation often centers on tools and technical know-how. But the concept goes much deeper. AI literacy is part critical thinking, part ethical reasoning, and part practical fluency. In many ways, it echoes what we once called media literacy, but today, the systems are more complex, and the stakes are less clear.
This isn’t just a curricular conversation. It’s a cultural one. It’s about how we prepare learners not only to use AI, but also to ask difficult questions about it.
Janne Fagerlund and colleagues (2024) interviewed 13 early childhood and primary school teachers in Finland who were participating in national AI-themed projects. They found that teachers recognized the value of helping students understand AI not just as a tool, but as a concept. One of the study’s most compelling contributions was its emphasis on subjectification, an often overlooked yet essential purpose of education (Biesta, 2020). It’s about helping students become authors of their own lives, encouraging them to reflect on how AI affects them personally and to make choices aligned with their own values.
Models of AI Literacy from Classrooms to Campuses
Across classrooms in the U.S. and beyond, teachers are finding creative ways to introduce students to AI concepts. Middle schoolers are training image recognition models using free platforms like Teachable Machine. Fourth graders are building simple “if-then” chatbots to explain recycling to younger students. And through initiatives like AI4K12, more schools are anchoring this work in thoughtful, research-informed frameworks.
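For readers wondering what an "if-then" chatbot looks like under the hood, here is a minimal, hypothetical sketch in Python: a handful of keyword rules mapped to canned answers. The keywords and responses are invented for illustration and are not drawn from any specific classroom project; in practice, younger students might build something equivalent in a block-based tool rather than in typed code.

```python
# A minimal sketch of a rule-based "if-then" chatbot about recycling.
# All keywords and answers below are illustrative examples, not taken
# from any particular curriculum or classroom activity.

RULES = {
    "plastic": "Rinse plastic bottles and put them in the recycling bin.",
    "paper": "Flatten cardboard and keep paper dry before recycling it.",
    "battery": "Batteries go to a special drop-off point, not the regular bin.",
}

def recycling_bot(question: str) -> str:
    """Return a canned answer if the question mentions a known keyword."""
    question = question.lower()
    for keyword, answer in RULES.items():
        if keyword in question:   # the "if" part of an if-then rule
            return answer         # the "then" part
    return "I'm not sure. Try asking about plastic, paper, or batteries."

if __name__ == "__main__":
    print(recycling_bot("Where do I put this plastic bottle?"))
```

Even a sketch this small surfaces the bigger ideas: the bot only "knows" what its author wrote into the rules, which opens a natural conversation about where a chatbot's answers come from and what it cannot handle.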
The Five Big Ideas in AI—perception, representation and reasoning, learning, natural interaction, and societal impact—help educators see beyond the code. These foundational concepts raise big questions: Can machines learn the way we do? Should they?
For K–12 students, developing AI literacy might mean exploring how facial recognition works or asking who gets excluded when algorithms make decisions. For undergraduates, AI literacy may mean recognizing how AI is already shaping the fields they’re entering, such as journalism, healthcare, and education.
At the university level, the response has largely been reactive. Generative AI tools like ChatGPT and Copilot emerged suddenly, disrupting long-held assumptions around authorship, assessment, and academic integrity. Instructors had to pivot mid-semester, often only to find that students were already several steps ahead.
Some institutions responded with innovation. Faculty development programs took shape. New courses addressed AI and society or introduced AI to non-coders. At places like MIT, Georgia Tech, and the University of Toronto, interdisciplinary microcredentials began to emerge, giving students from all majors a meaningful entry point into the conversation.
Still, the terrain in higher education remains uneven. While engineering students may work hands-on with AI models, their peers in education or business often encounter AI only through ethics debates or policy discussions. Many faculty report feeling unprepared to teach or guide conversations about AI. Without adequate support, even the best-designed tools risk being misunderstood or misused.
AI literacy should not stand apart as a siloed subject. It must be thoughtfully integrated across disciplines and sustained over time. This work is challenging, especially given the ethical complexity, but it is also essential. AI reflects our collective values and biases. Ignoring that reality doesn’t protect students; it leaves them more vulnerable.
What’s Next?
So, where are we now? We’re in motion, but not yet in alignment. The field is buzzing with experiments, pilot programs, and early prototypes. It’s an exciting phase full of exploration and possibility. However, to ensure a lasting impact, we must transition from excitement to strategy.
That means:
Building shared frameworks that are flexible yet clear
Integrating AI learning into teacher preparation programs, not just computer science courses
Fostering cross-sector collaboration among educators, developers, researchers, and policymakers
And above all, it means listening to students, not just to understand how they are using AI, but to learn how they envision its role in their futures. Their questions, concerns, and creativity must shape the conversation. After all, they aren’t just AI users; they are future designers, decision-makers, and citizens in an AI-augmented world. If we want AI education to be meaningful, it must start by taking their perspectives seriously.
Looking Ahead: Turning AI Literacy into Strategy
AI literacy isn’t just a technical competency: it’s a mindset. It’s a way of asking good questions, navigating complexity, and making decisions in an increasingly algorithmic world. As educators, we must be intentional and serious about how we build that literacy across grade levels, disciplines, and contexts.
Just as people at the dawn of the automobile had to learn how to navigate streets safely, interpret traffic signals, and, most importantly, drive these vehicles responsibly, we now face a powerful new technology that promises advancement but also raises complex questions. And once again, we require an educational response that goes beyond merely catching up. We need one that is strategic.
It’s May 2025. The tools are here. The urgency is clear. The opportunity is ours to shape.
Let’s not wait to teach students how to catch up to AI. Let’s teach them how to lead with it.
That means designing AI literacy programs not just around tools, but around purpose. Before we ask which apps to use or what modules to build, we need to ask why we’re teaching AI in the first place.
Is it to prepare students for jobs that don’t yet exist? To help them navigate ethical dilemmas posed by opaque systems? To support them in critiquing, questioning, and co-creating the digital world around them?
The answer, of course, is yes, to all of it. It is our responsibility as educators, designers, and leaders to ensure that we are not simply reacting to technological change; instead, we are shaping it thoughtfully, with purpose at the center.
References
AI4K12. (2020). The Five Big Ideas in AI. https://ai4k12.org
Biesta, G. (2020). Risking ourselves in education: Qualification, socialization, and subjectification revisited. Educational Theory, 70(1), 89–104. https://doi.org/10.1111/edth.12411
Fagerlund, J., Mertala, P., Lehtoranta, J., Mattila, E., Salo, L., & Korhonen, T. (2024). What’s the purpose of AI education? Studying K–9 teachers’ views of educational goals. University of Jyväskylä.
Long, D., & Magerko, B. (2020). What is AI literacy? Competencies and design considerations. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10.1145/3313831.3376727
UNESCO. (2025, February 11). What you need to know about literacy.
Please cite the content of this blog:
Correia, A.-P. (2025, May 29). Building AI Literacy in K–12 and Higher Education: Where Do We Stand Now? Ana-Paula Correia’s Blog. https://www.ana-paulacorreia.com/blog/building-ai-literacy-in-k12-and-higher-education-where-do-we-stand-now