
AI medical diagnostics race intensifies as OpenAI, Google, and Anthropic launch competing healthcare tools



OpenAI, Google, and Anthropic announced specialised medical AI capabilities within days of each other this month, a clustering that suggests competitive pressure rather than coincidental timing. Yet none of the releases is cleared as a medical device, approved for clinical use, or available for direct patient diagnosis, despite marketing language emphasising healthcare transformation.

OpenAI launched ChatGPT Health on January 7, allowing US users to connect medical records through partnerships with b.well, Apple Health, Function, and MyFitnessPal.

Google released MedGemma 1.5 on January 13, expanding its open medical AI model to interpret three-dimensional CT and MRI scans alongside whole-slide histopathology images.

Anthropic followed on January 11 with Claude for Healthcare, offering HIPAA-compliant connectors to CMS coverage databases, ICD-10 coding systems, and the National Provider Identifier Registry.
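Of the three data sources Anthropic names, the National Provider Identifier Registry is the one with a long-standing public lookup API (CMS's NPPES), which gives a rough sense of what such a connector wraps. A minimal sketch against that public endpoint, not Anthropic's connector interface, which is not documented in the announcement:

```python
import requests

# Public NPPES NPI Registry API run by CMS. This queries the registry directly;
# it is not Anthropic's Claude for Healthcare connector, whose interface is not public.
NPI_API = "https://npiregistry.cms.hhs.gov/api/"

def lookup_provider(npi_number: str) -> dict:
    """Fetch basic registry details for a single NPI number."""
    resp = requests.get(
        NPI_API,
        params={"version": "2.1", "number": npi_number},
        timeout=10,
    )
    resp.raise_for_status()
    results = resp.json().get("results", [])
    if not results:
        return {}
    record = results[0]
    basic = record.get("basic", {})
    name = basic.get("organization_name") or " ".join(
        filter(None, [basic.get("first_name"), basic.get("last_name")])
    )
    primary = next(
        (t.get("desc") for t in record.get("taxonomies", []) if t.get("primary")),
        None,
    )
    return {"npi": record.get("number"), "name": name, "primary_taxonomy": primary}

# Hypothetical NPI number, for illustration only:
# print(lookup_provider("1234567893"))
```

Anthropic's connectors presumably layer authentication, HIPAA-grade audit logging, and tool-calling schemas on top of lookups like this.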

All three companies are targeting the same workflow pain points—prior authorisation reviews, claims processing, clinical documentation—with similar technical approaches but different go-to-market strategies.

Developer platforms, not diagnostic products

The architectural similarities are notable. Each system uses multimodal large language models fine-tuned on medical literature and clinical datasets. Each emphasises privacy protections and regulatory disclaimers. Each positions itself as supporting rather than replacing clinical judgment.


The differences lie in deployment and access models. OpenAI’s ChatGPT Health operates as a consumer-facing service with a waitlist for ChatGPT Free, Plus, and Pro subscribers outside the EEA, Switzerland, and the UK.

Google’s MedGemma 1.5 releases as an open model through its Health AI Developer Foundations program, available for download via Hugging Face or deployment through Google Cloud’s Vertex AI.
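For developers, the Hugging Face route is the lowest-friction entry point. A minimal sketch using the transformers library and the earlier, publicly listed google/medgemma-4b-it checkpoint as a stand-in, since the announcement does not give the MedGemma 1.5 repository name or its input format for three-dimensional scans:

```python
# Sketch only: the model ID below is the earlier MedGemma release, used as a
# stand-in for MedGemma 1.5. The checkpoint is gated, so the Hugging Face licence
# must be accepted and an access token configured before this will run.
from PIL import Image
from transformers import pipeline

pipe = pipeline("image-text-to-text", model="google/medgemma-4b-it")

messages = [
    {
        "role": "user",
        "content": [
            # Hypothetical local file; earlier MedGemma releases accept 2D images.
            {"type": "image", "image": Image.open("chest_xray.png")},
            {"type": "text", "text": "Describe the notable findings in this radiograph."},
        ],
    }
]

output = pipe(text=messages, max_new_tokens=200)
print(output[0]["generated_text"][-1]["content"])
```

On the Vertex AI path, the same weights sit behind a managed endpoint rather than being downloaded locally.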

Anthropic’s Claude for Healthcare integrates into existing enterprise workflows through Claude for Enterprise, targeting institutional buyers rather than individual consumers.

The regulatory positioning is consistent across all three.

OpenAI states explicitly that Health “is not intended for diagnosis or treatment.”

Google positions MedGemma as “starting points for developers to evaluate and adapt to their medical use cases.” Anthropic emphasises that outputs “are not intended to directly inform clinical diagnosis, patient management decisions, treatment recommendations, or any other direct clinical practice applications.”


Benchmark performance vs clinical validation

Medical AI benchmark results improved substantially across all three releases, though the gap between test performance and clinical deployment remains significant.

Google reports that MedGemma 1.5 achieved 92.3% accuracy on MedAgentBench, Stanford’s medical agent task completion benchmark, compared to 69.6% for the previous Sonnet 3.5 baseline.

The model improved by 14 percentage points on MRI disease classification and 3 percentage points on CT findings in internal testing.

Anthropic’s Claude Opus 4.5 scored 61.3% on MedCalc medical calculation accuracy tests with Python code execution enabled, and 92.3% on MedAgentBench.

The company also claims improvements in “honesty evaluations” related to factual hallucinations, though specific metrics were not disclosed. 

OpenAI has not published benchmark comparisons for ChatGPT Health specifically, noting instead that “over 230 million people globally ask health and wellness-related questions on ChatGPT every week” based on de-identified analysis of existing usage patterns.

These benchmarks measure performance on curated test datasets, not clinical outcomes in practice. Medical errors can have life-threatening consequences, which makes translating benchmark accuracy into clinical utility more complex than in other AI application domains.

Regulatory pathway remains unclear

The regulatory framework for these medical AI tools remains ambiguous. In the US, the FDA’s oversight depends on intended use. Software that “supports or provides recommendations to a health care professional about prevention, diagnosis, or treatment of a disease” may require premarket review as a medical device. None of the announced tools has FDA clearance.

Liability questions are similarly unresolved. When Banner Health’s CTO Mike Reagin states that the health system was “drawn to Anthropic’s focus on AI safety,” this addresses technology selection criteria, not legal liability frameworks. 

If a clinician relies on Claude’s prior authorisation analysis and a patient suffers harm from delayed care, existing case law provides limited guidance on responsibility allocation.

Regulatory approaches vary significantly across markets. While the FDA and Europe’s Medical Device Regulation provide established frameworks for software as a medical device, many APAC regulators have not issued specific guidance on generative AI diagnostic tools. 

This regulatory ambiguity affects adoption timelines in markets where healthcare infrastructure gaps might otherwise accelerate implementation—creating a tension between clinical need and regulatory caution.

Administrative workflows, not clinical decisions

Real deployments remain carefully scoped. Novo Nordisk’s Louise Lind Skov, Director of Content Digitalisation, described using Claude for “document and content automation in pharma development,” focused on regulatory submission documents rather than patient diagnosis. 

Taiwan’s National Health Insurance Administration applied MedGemma to extract data from 30,000 pathology reports for policy analysis, not treatment decisions.
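That kind of report mining is a structured-extraction pattern rather than a diagnostic one. A rough sketch of the general approach; the field names, prompt, and model ID are illustrative assumptions, not details of the NHIA pipeline:

```python
# Illustrative only: structured field extraction from free-text pathology reports
# with an open medical model. Fields, prompt, and model choice are assumptions,
# not the Taiwan NHIA's actual workflow.
import json
from transformers import pipeline

extractor = pipeline("text-generation", model="google/medgemma-27b-text-it")

PROMPT = (
    "Extract these fields from the pathology report and reply with JSON only: "
    "specimen_site, diagnosis, tumour_grade, margins. Use null if not stated.\n\n"
    "Report:\n{report}"
)

def extract_fields(report_text: str) -> dict:
    messages = [{"role": "user", "content": PROMPT.format(report=report_text)}]
    out = extractor(messages, max_new_tokens=256)
    raw = out[0]["generated_text"][-1]["content"]
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Models do not always emit valid JSON; flag for manual review downstream.
        return {"error": "unparseable_output", "raw": raw}
```

Scaled to tens of thousands of reports, most of the engineering effort sits in batching, schema validation, and spot-checking outputs by hand rather than in the model call itself.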

The pattern suggests institutional adoption is concentrating on administrative workflows where errors are less immediately dangerous—billing, documentation, protocol drafting—rather than direct clinical decision support where medical AI capabilities would have the most dramatic impact on patient outcomes.

Medical AI capabilities are advancing faster than the institutions deploying them can navigate regulatory, liability, and workflow integration complexities. The technology exists. A US$20 monthly subscription provides access to sophisticated medical reasoning tools.

Whether that translates to transformed healthcare delivery depends on questions these clustered announcements leave unaddressed.

