AI Outperforms Doctors in Diagnostic Study

Harvard researchers found OpenAI's o1-preview model correctly diagnosed 67% of emergency cases, versus roughly 50% for attending physicians, suggesting AI could complement rather than replace clinical judgment.

Apr 30th 2026 · United Kingdom

Reid Hoffman, the billionaire cofounder of LinkedIn, is now pushing for artificial intelligence to play a central role in healthcare, arguing at WIRED Health in London that doctors who do not use frontier AI models as a second opinion are "bordering on committing malpractice." His startup, Manas AI, cofounded and led by cancer physician Siddhartha Mukherjee, is working to compress cancer drug discovery from a decade-long process to a matter of years. Hoffman believes that AI models, despite not being trained specifically for medicine, have ingested enough information to serve as valuable diagnostic aids, helping to prevent misdiagnosis and to ease the NHS's shortage of doctors.

While Hoffman's vision emphasizes AI as a collaborative tool, a new study published in Science suggests AI systems are already outperforming human doctors in diagnostic accuracy. Researchers at Harvard and Beth Israel Deaconess Medical Center tested OpenAI's o1-preview reasoning model against two attending physicians on 76 real emergency department cases. The AI reached the correct diagnosis in 67.1 percent of cases, compared with 55.3 percent and 50.0 percent for the two physicians, and blinded reviewers could not distinguish AI-generated diagnoses from those made by the doctors. The researchers stressed that their findings point to collaboration rather than replacement, noting that AI systems still struggle with multimodal inputs and can hallucinate incorrect information.

Separately, wealthy clients are increasingly turning to AI chatbots such as ChatGPT for legal and tax advice, creating complications for their attorneys. Lawyers report that clients share sensitive documents with AI systems, which can void attorney-client privilege protections, and that attorneys then spend hours addressing AI-generated recommendations that do not fit their clients' situations. One lawyer recounted a client who wanted to create a community property trust after his wife had died, an option available only to married couples. Experts warn that AI tools often make mistakes on complex topics such as international taxes and may not be current with new legislation, emphasizing that wealth-transfer decisions require nuanced discussions that AI systems are not equipped to handle properly.