
Every Major AI Lab Now Wants to Educate Your Kids. That Should Prompt Some Questions.

By AI Daily Editorial • March 19, 2026

The race to embed AI into education has quietly become one of the more consequential competitive dynamics in the industry, and one of the least scrutinised. In the past few months alone: Anthropic launched a Higher Education Advisory Board and released three AI Fluency courses co-created with educators; Google announced a partnership with Oxford University to provide Gemini and NotebookLM access to students and faculty, and committed to training all six million US K-12 and higher education teachers; OpenAI published a Learning Outcomes Measurement Suite developed with Stanford and Estonia's University of Tartu, and launched an "Education for Countries" programme that has already deployed ChatGPT Edu to 30,000 students and educators across Estonia. Microsoft, not to be left out, announced new AI-powered teaching tools and began expanding its education partnerships.

Each of these initiatives is, in isolation, easy to defend. AI tools have genuine pedagogical uses. Students will use them regardless of whether schools endorse them. Training teachers is better than leaving them to figure it out alone. Measuring learning outcomes is exactly the kind of rigorous evaluation that responsible deployment requires. None of that is wrong. What's worth thinking about is the aggregate picture: the companies whose business models depend on AI adoption are simultaneously the ones defining what AI literacy means, designing the curricula that teach it, and measuring whether their own tools are producing good outcomes.

The Oxford-Google partnership is a case in point. Google providing Gemini and NotebookLM to one of the world's most prestigious universities is genuinely useful for researchers and students. It is also a distribution deal dressed as an academic initiative. When Oxford's students and faculty build their research workflows around Google's tools, they become future customers, future advocates, and future employees who arrive already trained on a particular ecosystem. The same logic applies to Anthropic's higher education board and OpenAI's country-level education deployments.

OpenAI's Learning Outcomes research is the most interesting case because it at least attempts to introduce independent measurement. Working with Stanford's SCALE Initiative and Estonian academics, the framework tries to track whether AI use in education actually improves learning, rather than just engagement or satisfaction scores, which are easier to game. Results from Estonia are described as promising, but the work is preliminary and the sample size is small. More importantly, the research is funded and published by OpenAI, which has an obvious interest in finding positive results. That does not make the findings wrong, but it does make independent replication important.

A Washington Post op-ed published last week argued that schools are "teaching AI all wrong": treating AI as a subject to understand rather than as a tool to use critically. The distinction matters. A curriculum that teaches students to use Claude or ChatGPT fluently is different from one that teaches them to evaluate AI outputs, understand where models fail, and make informed choices about when AI assistance helps and when it hinders. The former is useful; the latter is what education has always been for. Most of the current programmes, shaped as they are by the companies whose tools they teach, lean toward the former.

None of this is a reason to block AI from education; that ship has long since sailed. It is a reason to want more voices shaping the frameworks: teachers' organisations, independent researchers, students themselves, and governments whose interests extend beyond any single company's growth metrics. The question of what AI-literate education looks like is too important to be answered primarily by the companies selling the AI.
