The Promise and the Peril of AI in Medicine
AI is increasingly integrated into healthcare, from interpreting radiology scans and generating clinical notes to assisting with diagnosis and patient scheduling. While these tools offer undeniable advantages, including increased efficiency, reduced clinician burnout, and improved decision-making, they also introduce new types of risk. AI models can be biased, opaque, and difficult to audit. And when AI makes mistakes, the consequences can be devastating.
Errors stemming from AI-driven decision support tools may lead to:
- Missed or delayed diagnoses
- Inappropriate treatment recommendations
- Patient harm or death
- Difficulty tracing the root cause of an error
Because AI decisions are often layered beneath complex software and datasets, medical professionals may not even recognize that an error originated with an algorithm, which makes accountability and prevention all the more difficult.
Lack of Oversight in New York Hospitals
Despite AI’s widespread use, many New York healthcare institutions lack standardized policies governing its implementation. A national survey found that only 16% of hospital systems had a system-wide governance framework for AI use and data access. This means that in most cases, there are few checks in place to evaluate whether an AI system is safe, effective, or equitable.
The absence of oversight opens the door to:
- Disparate impacts on patients based on race, socioeconomic status, or gender
- Noncompliance with federal and state laws
- Heightened exposure to malpractice liability
Legal Implications: Who Is Responsible When AI Harms a Patient?
In a malpractice case involving AI, key legal questions arise: Who is liable? The physician who relied on the AI? The hospital that implemented the technology? Or the company that developed the algorithm? Where no governance policy exists, responsibility can be difficult to apportion among these parties.
For patients harmed by AI-assisted medical decisions, proving negligence can be particularly challenging without clear documentation of how and why the AI recommendation was followed. In many instances, staff are not adequately trained on how to report or even recognize AI-related errors.
Best Practices for Preventing AI-Related Medical Errors
ECRI and other safety organizations recommend the following steps to reduce AI-related patient harm:
- Establish clear governance and oversight policies for AI usage
- Include patient safety, clinical engineering, and legal experts in AI oversight committees
- Regularly evaluate AI systems for accuracy, bias, and safety outcomes
- Disclose AI use to patients and obtain informed consent where appropriate
- Train staff to identify, report, and escalate AI-associated incidents
These safeguards should be standard practice in every healthcare facility in New York. Unfortunately, many hospitals have yet to catch up with the technology they have already deployed.
Protecting New York Patients in the Age of AI
At Gair, Gair, Conason, Rubinowitz, Bloom, Hershenhorn, Steigman & Mackauf, we are committed to holding healthcare providers accountable when technology—no matter how advanced—fails to meet the standard of care. If you or a loved one suffered harm due to a medical error involving artificial intelligence, you may have grounds for a malpractice claim.
Contact our New York medical malpractice attorneys at 212-943-1090 to schedule a free consultation.