AI literacy for teachers is now a school operations issue
For many school principals, AI literacy still sounds like a professional development topic. It belongs in a workshop, a staff meeting or a future digital strategy. That view is now too narrow. In Europe, AI literacy is no longer just about helping teachers feel more confident with new tools. Under Article 4 of the EU AI Act, organisations that provide or deploy AI systems must ensure a sufficient level of AI literacy among staff and others using those systems on their behalf. The European Commission’s own explanation makes clear that this obligation is already in force and that it must reflect staff roles, experience, the context of use and the people affected by the system. In other words, once teachers use AI in school work, AI literacy becomes a leadership and operations issue, not just a training topic. (Digital Strategy)
That shift matters because schools are no longer dealing with AI as a contained pilot. The OECD’s Digital Education Outlook 2026 notes that generative AI is “freely accessible and largely used beyond institutional control” because of how intuitive and versatile it is. The same OECD summary reports that 37% of lower secondary teachers used AI for their job in 2024, that 57% say AI helps them write or improve lesson plans and that 72% believe AI can harm academic integrity by enabling students to pass off work as their own. This combination is exactly why principals cannot afford to treat AI use as a private matter between an individual teacher and a chatbot. AI is already affecting lesson planning, assessment, communication and school risk exposure whether leadership has formalised a response or not. (OECD)
Recent European policy signals point in the same direction. In March 2026, the European Commission published updated digital education guidance for teachers, including materials on the ethical use of AI and data in teaching and learning. The surrounding guidance does not frame AI as a niche EdTech topic. It places AI alongside disinformation, informatics and digital content quality, and explicitly says that teachers and school leaders must be able to guide pupils in using technology safely and responsibly. Another Commission update notes that 81% of citizens say teachers should have the skills to use and understand AI, while almost 90% say teachers should be able to help students recognise disinformation online. Public expectation, regulatory expectation and day-to-day school reality are moving in the same direction. (school-education.ec.europa.eu)
For principals, the practical implication is simple. AI literacy is not mainly about whether every teacher can explain how a large language model works. It is about whether the school has enough shared understanding to make sound decisions in real situations. Can teachers judge when an AI output is useful and when it is misleading? Do they know what student information should never be pasted into a public tool? Can middle leaders distinguish between a productivity aid and a system that changes professional judgment, assessment or safeguarding risk? If a parent asks how AI is being used in homework feedback, behaviour analysis or communication, is the school able to answer clearly and consistently? The Commission’s Article 4 guidance is flexible, but that flexibility should not be mistaken for vagueness. It places the burden on each organisation to define what “sufficient” means in its own context and risk profile. (Digital Strategy)
This is why a single INSET day or a short webinar will not be enough. The Commission states that organisations should adapt their AI literacy approach according to their role and the risks associated with the AI systems they provide or deploy. It also says there is no strict requirement to formally measure staff knowledge, but schools still need to account for differences in technical knowledge, education, training and use context. That is much closer to an operational model than to a one-off awareness session. Schools need ongoing routines for deciding which tools are acceptable, what kinds of use are allowed, what data can be entered, when human review is required and how staff will escalate concerns. (Digital Strategy)
The most important point for principals is that the real risk does not usually arrive in the form of a dramatic AI project. It arrives through small everyday decisions. A teacher asks a chatbot to draft feedback using student examples. A department head uses AI to summarise observation notes. A member of staff uploads a pastoral email thread to an external tool to save time. A school buys an AI-enabled platform without asking what happens to the data, what bias controls exist or how outputs should be reviewed. None of these actions may feel like “deploying AI” in the abstract, but operationally they are exactly the kinds of practices that determine whether a school’s AI use is safe, proportionate and professionally defensible. UNESCO’s guidance has been consistent on this point: human capacity, governance and data protection need to be designed into AI use in education from the start, not added later. (school-education.ec.europa.eu)
There is also a workload reason to act now. One of the biggest temptations of AI in schools is the promise of time savings. In some cases that promise is real. The OECD says generative AI can support teaching and even streamline system and school management. Yet the same OECD work is equally clear that AI use without pedagogical guidance can improve task performance without improving learning, and that offloading cognitive work to general-purpose tools can create disengagement and weak learning habits. This matters for principals because time-saving tools often spread fastest precisely when staff are overloaded. If leadership does not set expectations, the school may end up swapping visible administrative burden for invisible quality, privacy and consistency problems. (OECD)
Teacher workload data makes that risk more concrete. TALIS 2024 found that, across OECD education systems, around half of teachers report general administrative work as a source of stress, the largest share of any reported demand. In Romania, OECD country notes say the most commonly reported stressors include responsibility for student achievement, classroom discipline and too much administrative work, the last reported by 37% of teachers. When staff are under pressure, they will naturally reach for tools that promise relief. Principals therefore need a response that is both realistic and humane: reduce unnecessary admin where possible, but do not let AI become an unmanaged workaround that introduces new problems in assessment, equity, privacy or trust. (OECD)
Another reason this belongs in school operations is that AI literacy must be differentiated. The Commission’s Article 4 explanation explicitly says schools and other organisations should consider differences in staff knowledge, experience, education and training, as well as the context in which the AI system is used and the people on whom it is used. In a school, that means the literacy required of a classroom teacher, safeguarding lead, SENCO, deputy principal, IT lead and principal will not be identical. A teacher may need confidence in prompting, checking outputs and protecting student data. A middle leader may need to evaluate whether a use case is pedagogically sound across a department. A principal may need to make decisions about procurement, policy, parent communication, accountability and risk. A school that gives everybody the same generic AI session may create awareness, but it does not create operational readiness. (Digital Strategy)
Seen this way, AI literacy becomes part of the same leadership architecture that already exists for safeguarding, assessment, data protection and staff development. It needs ownership, routines and evidence. It should show up in induction for new staff, in platform review, in procurement questions, in acceptable-use guidance, in curriculum conversations and in how incidents are discussed. The OECD argues that trustworthy use of generative AI depends on clear expectations around privacy, safety, bias testing, age-appropriateness, transparency and alignment with educational objectives. The European Commission’s ethical AI guidance for educators is trying to equip schools for exactly this sort of practical judgment. The message to principals is not that every school needs a large AI strategy document tomorrow. It is that schools need enough operational clarity to make good decisions repeatedly, under everyday conditions. (OECD)
There is a further strategic benefit. When AI literacy is treated only as a teacher skill, schools tend to focus on individual confidence. When it is treated as an operational issue, schools can connect it to broader priorities: consistent assessment practice, responsible innovation, student wellbeing, staff wellbeing and parent trust. The European Education Area continues to emphasise the value of a whole-school approach in supporting children and young people. That logic applies here too. AI decisions do not stay neatly inside one classroom. They shape how students experience feedback, how families understand fairness and how safely a school handles sensitive information. A principal-led, whole-school approach is therefore not bureaucracy. It is how schools turn a fast-moving technology into something coherent, explainable and aligned with their educational values. (education.ec.europa.eu)
So what should principals take from this moment? First, the regulatory and policy environment has already changed. AI literacy obligations under the EU AI Act started to apply on 2 February 2025, and the wider framework continues to phase in. Second, actual school practice has already changed, because teachers are using AI tools now, often faster than institutions can formalise policy. Third, the right response is not fear or prohibition by default. It is operational maturity: role-based literacy, clear boundaries, careful procurement, humane workload thinking and visible human oversight. Schools do not need every answer before they begin, but they do need leadership that recognises the issue for what it now is. AI literacy for teachers is no longer an optional digital extra. It is part of how a school is run. (Digital Strategy)
Sources used
- European Commission, AI Literacy - Questions & Answers (Digital Strategy)
- European Commission, AI Act application timeline (Digital Strategy)
- European Commission, Guidelines on the ethical use of artificial intelligence and data in teaching and learning (education.ec.europa.eu)
- European Commission, Commission publishes guidelines to support teachers in key digital education priorities (Digital Strategy)
- European School Education Platform, Digital education guidelines to support schools across Europe (school-education.ec.europa.eu)
- European Education Area, New guidelines to help teachers lead Europe’s digital education (education.ec.europa.eu)
- OECD, Digital Education Outlook 2026 (OECD)
- OECD, The demands of teaching: Results from TALIS 2024 (OECD)
- OECD, Results from TALIS 2024: Country notes Romania (OECD)
- UNESCO, AI and education: guidance for policy-makers (UNESCO)
- UNESCO, Guidance for generative AI in education and research (school-education.ec.europa.eu)
- European Data Protection Board, EDPB opinion on AI models: GDPR principles support responsible AI (EDPB)