A Washington Post opinion piece this month argued that schools are teaching AI all wrong — and the core of the argument is about framing. Most schools, when they address AI at all, treat it primarily as a threat to be managed: a cheating tool, a source of misinformation, something students need to be protected from or taught to resist. The better approach, the piece argues, is to treat AI as a medium students need to learn to think with and about — understanding how it works, what it's good for, where it fails, and what it means for how we know things. The distinction matters because the defensive framing produces students who are cautious about AI use; the literacy framing produces students who are capable of using it well.
While public school systems debate the question at the policy level, the technology companies that build AI have moved aggressively into the educator training space. Google for Education announced a landmark initiative to provide free Gemini training to all six million K-12 and higher education faculty in the United States, developed in partnership with ISTE and ASCD, the two major teacher professional development organisations. Microsoft launched an Elevate Educator Credential — also free, also developed with ISTE and ASCD — aligned to an AI Literacy Framework. Anthropic partnered with Teach For All, the global teacher training network, to run an AI Fluency Learning Series; over 530 educators attended the first series in late 2025. OpenAI launched Education for Countries, a programme to support national governments in building AI education infrastructure at scale.
The pattern is striking: Google, Microsoft, Anthropic, and OpenAI have each independently decided that teacher training is a strategic priority, and each is offering its programmes free of charge. The reasons are not purely altruistic. Teachers who are trained on a particular company's tools will naturally integrate those tools into their classrooms. Students who grow up learning AI through Gemini, Copilot, or Claude will develop familiarity with, and a preference for, those platforms. The education market is, among other things, a long-duration customer acquisition channel, and the companies that shape how the next generation thinks about AI are building themselves a durable position.
The research picture on whether any of this is working is genuinely mixed. OpenAI's learning outcomes measurement work, developed with Stanford's Accelerator for Learning and the University of Tartu, is attempting to produce rigorous longitudinal data on how AI-assisted learning affects outcomes, but that data will take years to accumulate. What survey data exists shows that 80% of US K-12 teachers have used AI at least once or twice, that one in five report daily use, and that 58% expect AI use at their school to increase in the coming year. What is less clear is whether that use is improving learning or primarily saving teacher preparation time, a meaningful distinction.
The deeper problem the Washington Post piece gestures at is structural. AI literacy is not a subject with an established curriculum, a certification pathway for teachers, or a clear place in the scope and sequence of K-12 education. It is being grafted onto existing subjects — English classes address AI and writing, computer science classes address AI and coding — but there is no coherent framework for what a student should understand about AI by the time they graduate, or how that understanding should deepen across grade levels. Tech companies are filling that gap with their own frameworks, which is better than nothing but also means the curriculum is being written by the people who sell the technology. That is a conflict of interest that the education system has not yet seriously grappled with.
The international dimension adds pressure. OpenAI's Education for Countries programme, and similar national AI education investments in China, South Korea, and the UAE, reflect a growing recognition that AI literacy is becoming a dimension of national economic competitiveness. Countries that produce workers who can effectively direct and evaluate AI systems will have structural advantages over those that produce workers who are merely AI-adjacent. The US education system's fragmentation — 50 states, thousands of districts, no national curriculum — makes coordinated response slow. The technology companies, operating nationally and globally, are coordinating much faster than the institutions they're trying to help.