Both OpenAI and Anthropic announced dedicated healthcare initiatives this year, and both are betting that AI's biggest near-term value in medicine isn't answering patients' questions — it's doing the grinding cognitive work that currently sits between a clinician and a good decision. The two companies are arriving at broadly similar conclusions from different directions, and the doctors surveyed by TechCrunch in January largely agree with the diagnosis: AI belongs in the workflow, not in the waiting room.
OpenAI's "AI for Healthcare" platform focuses on clinical decision support, documentation, and improving patient access to clinical trials — the last of which is a particularly underappreciated bottleneck. Matching patients to appropriate trials currently requires clinicians to manually scan through hundreds of active studies, check eligibility criteria against patient records, and make judgement calls that most don't have time to make carefully. OpenAI's partnership with Paradigm aims to automate this matching process, which could meaningfully increase the diversity and speed of clinical trial enrolment. At the moment, trials frequently struggle to recruit enough qualified patients, and the patients most likely to benefit from experimental treatments often have the least access to them.
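At its core, the eligibility-screening step described above is a filtering problem: structured trial criteria checked against a structured patient record. A minimal sketch of that idea — the `Patient` and `Trial` fields and the criteria here are illustrative inventions, not Paradigm's or OpenAI's actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class Patient:
    age: int
    conditions: set[str]
    medications: set[str] = field(default_factory=set)

@dataclass
class Trial:
    name: str
    min_age: int
    max_age: int
    required_condition: str
    excluded_medications: set[str] = field(default_factory=set)

def eligible_trials(patient: Patient, trials: list[Trial]) -> list[str]:
    """Return names of trials whose hard criteria the patient satisfies."""
    matches = []
    for trial in trials:
        # Age window check.
        if not (trial.min_age <= patient.age <= trial.max_age):
            continue
        # Patient must have the condition the trial is studying.
        if trial.required_condition not in patient.conditions:
            continue
        # Any overlap with excluded medications disqualifies the patient.
        if trial.excluded_medications & patient.medications:
            continue
        matches.append(trial.name)
    return matches
```

Real eligibility criteria are written in free text and full of soft judgement calls ("adequate organ function", "no recent serious illness"), which is exactly why the task currently falls to clinicians — and why language models, rather than hard-coded filters like this one, are the proposed automation.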
Anthropic's healthcare push, announced alongside life sciences partnerships, takes a similar orientation: AI as a tool for reducing cognitive load on clinicians rather than as a patient-facing interface. The company's emphasis is on tasks like summarising complex patient histories, flagging potential drug interactions, and drafting clinical documentation — all of which consume significant clinician time and are known contributors to clinical error. A study cited in the coverage found that clinicians using AI consultation had a 16% relative reduction in diagnostic errors and 13% fewer treatment errors compared to those without it. Numbers like those, if they replicate across settings, would represent genuinely significant outcomes.
The TechCrunch survey of doctors captures the nuance well. Physicians are broadly positive about AI as a background tool — one that surfaces information, catches things they might miss, reduces administrative burden — and broadly sceptical about AI as a conversational agent that patients interact with directly. The concern isn't that the technology doesn't work. It's that the conversational interface creates an impression of medical authority that AI doesn't reliably merit, and that patients who've had a reassuring AI interaction may present later with worse outcomes than they would have if they'd been told to see a doctor sooner. The distinction between "AI as clinical instrument" and "AI as accessible health information" turns out to matter quite a lot.
The healthcare AI market is moving fast regardless of these concerns, partly because the financial incentives are enormous — reducing diagnostic errors, cutting documentation time, and improving trial enrolment all have direct bottom-line implications for health systems. The question of appropriate deployment is one that regulators, clinicians, and AI companies are going to be negotiating in parallel with commercial rollout, which is not an ideal sequence but appears to be the one that's actually happening.