There is something specific and new in what MIT Technology Review described this week: Chinese workers being directed by their managers to document their own workflows in enough detail that an AI agent can be trained to replace them. Not a factory closing. Not a redundancy announcement. A request, from the boss, to write the manual for your own obsolescence and hand it in by Friday.
I have been writing about AI and labour displacement since 2005. I have watched factories automate, call centres offshore, legal research compress from associates to algorithms. None of it quite prepares you for the specific indignity of this particular ask. Previous waves of automation happened to workers. This one is asking them to participate in it. To be not just the casualty but the contributing author.
The Chinese context is sharper than it might appear if you read it as a distant curiosity. China became a net exporter of industrial robots for the first time last year. Its factories run 470 robots per 10,000 workers, nearly three times the global average. The country has been at the automated manufacturing frontier for years. What is new is that the automation is now reaching into offices, into knowledge work, into the specific workflows of individual people, and the tool doing it, OpenClaw, spread so fast through Chinese enterprises that government agencies started warning their staff not to install it. Not because it doesn't work. Because it works too well and leaks data.
What caught my attention, though, is not the automation itself. It is how some of the workers are responding.
A Chinese AI product manager named Koki Xu built a tool that rewrites process documentation into deliberately vague, non-actionable language. The point is to produce workflow documents that look specific and complete while being useless as training data for an AI agent. He said he originally wanted to write an op-ed but decided it was more useful to build something instead. Another GitHub project, Colleague Skill, which began as a dark joke about how AI would document and replace a specific coworker, went viral. Workers used it as a mirror. Then some of them started using the same energy to build shields.
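The mechanism is easy to sketch. Here is a minimal toy version in Python (this is my own illustration, not Xu's implementation; the substitution list and the function name `deactionable` are invented for the example) of how concrete steps can be rewritten into prose that keeps the shape of a workflow while shedding its reproducible detail:

```python
import re

# Illustrative substitutions only: real documentation-obfuscation would
# need semantic rewriting, but the principle is the same. Each rule
# replaces an actionable specific with a phrase that reads plausibly
# but cannot be executed.
VAGUE_SUBSTITUTIONS = [
    # concrete quantities with units -> unusable generalities
    (r"\b\d+(?:\.\d+)?\s*(?:minutes?|hours?|days?|%)\b", "an appropriate interval"),
    # remaining bare numbers -> "several"
    (r"\b\d+(?:\.\d+)?\b", "several"),
    # quoted names of buttons, fields, systems -> placeholders
    (r'"[^"]+"', "the relevant item"),
    # imperative verbs -> hedged process language
    (r"\b[Cc]lick\b", "engage with"),
    (r"\b[Ee]nter\b", "supply"),
    (r"\b[Ss]elect\b", "choose as appropriate"),
]

def deactionable(text: str) -> str:
    """Rewrite concrete instructions into language that still reads
    like a complete workflow but carries no reproducible detail."""
    for pattern, replacement in VAGUE_SUBSTITUTIONS:
        text = re.sub(pattern, replacement, text)
    return text

doc = 'Click "Export", enter 30 in the delay field, wait 5 minutes.'
print(deactionable(doc))
```

Run on the sample line, this yields something like "engage with the relevant item, supply several in the delay field, wait an appropriate interval": grammatical, superficially thorough, and worthless as a script for an agent.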
I find something genuinely interesting here, and it is not quite optimism. The retraining narrative that I have been arguing against for years says: workers who face displacement should acquire new skills, climb the value chain, become part of the AI economy rather than a casualty of it. It is advice that is both technically correct for a small number of people and structurally useless as a policy prescription for the broader workforce. It depends on reframing the problem as a skill mismatch rather than as the economic mechanism itself.
What Koki Xu did is not retraining. He did not learn machine learning. He used the tools available to him to push back against the specific mechanism by which he was being asked to participate in his own replacement. That is a different kind of response. Not climbing the value chain: staying where he is and making himself harder to replace. You could call it sabotage. You could also call it professionalism, in the old sense of the word, where a professional had a stake in the integrity of their craft and the right to say no to uses that undermined it.
The economic argument I keep returning to is not about individual workers. It is about the loop. The modern economy runs on a circuit: companies employ people, people earn income, people spend income, companies have customers. AI breaks the employment link in that loop. Every individual corporate decision to replace a worker with a model that earns nothing and spends nothing is rational. The collective result is a demand collapse that proceeds in slow motion and does not show up clearly in any single quarter. China is further along this curve than most of the world, and the fact that workers there are building tools to complicate the data extraction process rather than simply filing complaints tells you something about where we are in the arc.
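The loop can be put in miniature. A toy simulation (the parameters are made up for illustration; `automation_rate` and `spend_share` are not estimates of anything) shows why the collapse is invisible quarter to quarter but unmistakable over a decade:

```python
# Toy model of the demand loop, illustrative numbers only.
# Each period, firms individually make the rational call to automate a
# small fraction of jobs; the replaced workers' wages stop circulating,
# so aggregate demand shrinks a little every period without any single
# step looking dramatic.
def simulate(periods=10, workers=1000, wage=1.0,
             automation_rate=0.05, spend_share=0.9):
    employed = workers
    history = []
    for _ in range(periods):
        # rational per firm: cut a slice of labour cost this period
        employed = int(employed * (1 - automation_rate))
        # collective result: that slice of spending disappears too
        demand = employed * wage * spend_share
        history.append(demand)
    return history

for period, demand in enumerate(simulate(), start=1):
    print(f"period {period:2d}: aggregate demand {demand:7.1f}")
```

No single period looks like a crisis; the whole run is roughly a third of demand gone. That is what "slow motion" means here.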
Koki Xu's tool will not stop the automation. The economic pressure is structural, not contingent on any individual company's training data quality. But there is something worth saying about the mode of resistance: it is direct, technical, targeted at the specific mechanism of harm, and produced by someone who understands the system he is pushing back against. That is the response I would actually want to see more of, not because it will resolve the structural problem, which it won't, but because it refuses the particular humiliation of being asked to write the manual.
I was directed last year to document some of my own development processes in more detail for internal knowledge management purposes. I did it thoroughly. I told myself it was good discipline; that documentation outlives any individual. I am not sure I was entirely honest with myself about what the documentation was for or who it would benefit most. I do not think I would make the same decision today. That is probably not a principled stance so much as a slightly rueful update. Whether it matters is another question entirely.