Somewhere on the hard drive of almost every Chrome user is a nearly 4GB file called weights.bin, sitting in a folder named OptGuideOnDeviceModel. You didn't download it. You didn't agree to it. And if you delete it, Chrome will quietly put it back the next time you open the browser.
The discovery came from security researcher Alexander Hanff, who goes by "That Privacy Guy" online and spotted the file during a routine storage audit. What he found wasn't an errant cache file or a broken update: it was Gemini Nano, Google's on-device large language model, installed silently across billions of Chrome installations worldwide since at least 2024. Hanff posted his findings this week, and the backlash has been immediate.
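If you want to check your own machine, a sketch like the following will do it. It is a minimal Node/TypeScript script that looks in the usual Chrome data directories; the exact paths, and whether weights.bin sits directly in the folder or inside a versioned subdirectory, vary by OS, channel, and Chrome version, so treat everything here as typical defaults rather than a guaranteed layout.

```ts
// check-gemini-nano.ts — minimal sketch; run with a TS-capable Node runtime.
// Paths and directory layout are assumptions based on Chrome's usual
// user-data locations, not a guaranteed spec. Verify against your system.
import { existsSync, readdirSync, statSync } from "node:fs";
import { join } from "node:path";
import { homedir } from "node:os";

const home = homedir();
const candidates = [
  join(home, "Library/Application Support/Google/Chrome"),         // macOS (typical)
  join(home, ".config/google-chrome"),                             // Linux (typical)
  join(process.env.LOCALAPPDATA ?? "", "Google/Chrome/User Data"), // Windows (typical)
];

for (const base of candidates) {
  const dir = join(base, "OptGuideOnDeviceModel");
  if (!existsSync(dir)) continue;
  // weights.bin may sit directly in the folder or in a versioned subdirectory.
  const spots = ["", ...readdirSync(dir)].map((v) => join(dir, v, "weights.bin"));
  for (const weights of spots.filter(existsSync)) {
    const gb = statSync(weights).size / 1024 ** 3;
    console.log(`${weights}: ${gb.toFixed(2)} GB`);
  }
}
```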
Google's explanation is that Gemini Nano powers local-processing features: scam detection, help for developers building AI-powered web apps, and other capabilities that Google says benefit from running on the device rather than sending data to a server. The privacy argument is, on its face, reasonable. Processing locally does mean your browsing data stays on your machine. The problem is everything about how this was deployed.
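The developer-facing piece is Chrome's experimental built-in AI surface, usually referred to as the Prompt API, which lets page scripts run inference against the local model instead of calling a cloud endpoint. A minimal sketch of how that looks is below; the API is gated behind flags and origin trials and its shape has changed between releases, so the declared `LanguageModel` global is an assumption drawn from Chrome's current documentation rather than a stable contract.

```ts
// Sketch of Chrome's experimental Prompt API (shape assumed from Chrome's
// origin-trial docs; gated behind flags and subject to change).
declare const LanguageModel: {
  availability(): Promise<"unavailable" | "downloadable" | "downloading" | "available">;
  create(): Promise<{ prompt(input: string): Promise<string> }>;
};

async function summarizeLocally(text: string): Promise<string | null> {
  if (typeof LanguageModel === "undefined") return null; // API not exposed
  if ((await LanguageModel.availability()) !== "available") return null;
  const session = await LanguageModel.create();
  // Inference runs against the on-device weights; nothing leaves the machine.
  return session.prompt(`Summarize in one sentence: ${text}`);
}
```

The availability check matters because the model may not yet be downloaded; on machines where it is, the nearly 4GB weights.bin is exactly what this call runs against.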
Users were not told. There was no opt-in prompt, no settings disclosure, no mention in update notes. The model simply appeared. When users discovered it and deleted it, Chrome treated the deletion as a mistake and restored the file. The behaviour fits what critics call a dark pattern: exploiting the trust users extend to software updates to install things those users would never have chosen if asked.
For users on metered internet connections, Google's unrequested 4GB transfer had a direct monetary cost. Scale it up: with roughly three billion Chrome users worldwide, even a tenth of them receiving the download amounts to more than an exabyte of bandwidth and storage consumed unilaterally. For users with limited storage, the file silently ate space that belonged to them. And for anyone who has spent time cleaning up a slow machine, finding a persistent 4GB ghost that resurrects itself is not reassuring.
Google has since added a toggle in Chrome's Settings under System to disable Gemini Nano. That is better than nothing. But the opt-out-after-the-fact approach reveals the underlying logic: Google decided this model belonged on your machine, installed it, and only provided a way to remove it after the story broke. The default was always "on," and the mechanism for changing that default required users to know to look for it.
This is part of a broader pattern in how AI capabilities are being deployed. Hanff noted that it mirrors other recent controversies: Claude Desktop was separately criticised for installing browser integration components without clear disclosure. The competitive pressure to get AI features deployed quickly is running ahead of any industry norm about what requires explicit consent. A 4GB persistent installation is not a cookie. It is a significant alteration to how a device operates, and users have a reasonable expectation that changes at that scale require their agreement.
The episode does not make Google's technology bad, and the on-device privacy rationale is genuinely not nothing. But the gap between a good privacy argument and a non-consensual installation is wide enough to drive a truck through. If the feature is worth having, it is worth asking for.