
Inside Medical Liability

Second Quarter 2020


The Algorithm Will See You Now

How AI’s healthcare potential outweighs its risk

By Richard Anderson, MD


In the second quarter of 2019, start-up companies applying artificial intelligence (AI) to healthcare raised a record $864 million across 75 deals.1 This signals strong confidence in the future of AI in healthcare.

Today, the sweeping benefits of AI in medicine are still emerging. Improved accuracy of diagnosis, precision medicine, early detection, personalized treatment, and inexpensive, reproducible drugs are just a few of the ways the healthcare community can expect AI to make the practice of medicine safer and more efficient. If AI tools can consistently detect minute anomalies that are imperceptible even to the most experienced physicians, they hold the potential to reduce defensive medicine and improve patient health.

AI could also reduce liability. An analysis of more than 25,000 claims and suits revealed that in cases involving incorrect diagnoses, inadequate patient assessments were the most common contributing factor. AI tools could support physicians with a second opinion or a deeper layer of understanding.

AI benefits and risk

A third of U.S. physicians are already using AI in their practices. There is widespread hope that AI can help address diagnostic errors, the largest cause of medical liability claims.2 AI technology is still in the early stages of deployment in U.S. clinical practice, but the number of users is likely to rise in the coming years.3

Potential benefits from healthcare AI include:

  • Assistance with case triage
  • Enhanced image scanning and segmentation
  • Improved detection (speed and accuracy)
  • Supported decision making
  • Integration and improvement of workflow
  • Personalized care
  • Automatic tumor tracking
  • Disease development prediction
  • Disease risk prediction
  • Patient appointment and treatment tracking
  • Easing workload to prevent physician burnout and distractions that compromise doctor-led diagnosis
  • Making healthcare delivery more accessible, humane, and equitable
  • Increasing physician competency to enable patient-physician trust4

With all these potential benefits, AI also presents some foreseeable risks, including:

  • False positives/negatives
  • System errors
  • Overreliance
  • Unexplainable results
  • Unclear lines of accountability
  • New skill requirements
  • Network systems vulnerable to malicious attack
  • Seeing things that do not exist (“AI hallucination”)
  • Augmenting biased or unorthodox behavior

Initial wins from healthcare AI

There are areas where AI has already shown great promise for improving both how conditions are diagnosed and how medicine is practiced.

Reading diagnostic images—Of all medical specialties, radiology is likely to be the most directly affected by initial applications of AI. In a study of claims closed between 2013 and 2018 conducted by The Doctors Company, diagnosis-related claims accounted for 67% of all diagnostic radiology claims. In interventional radiology claims, the second most common case type was “improper management of treatment course.”

In diagnosis-related radiology claims, patient assessment was a contributing factor in 85% of the claims, including misinterpretation of diagnostic studies and failure to appreciate and reconcile relevant signs, symptoms, and test results. The top injury in diagnosis-related cases was undiagnosed malignancy, occurring in 35% of claims. AI may offer a way to significantly reduce the incidence of failure-to-diagnose and the misinterpretation of diagnostic studies.

The advent of systems that can quickly and accurately read diagnostic images will undoubtedly redefine the work of radiologists and assist in the prevention of misdiagnoses. The majority of AI healthcare applications use machine learning algorithms that train on historical patient data to recognize the patterns and indicators that point to a particular condition. Although the best machine learning systems are possibly only on a par with humans for accuracy in making medical diagnoses based on images,5 experts are confident that this will improve over time as developers train AI systems on millions-strong databanks of labeled images showing fractures, embolisms, tumors, etc. Eventually these systems will be able to recognize the most subtle of abnormalities in patient image data.
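Most of these systems rest on the supervised learning loop described above: fit a model to labeled examples, then classify new inputs by the patterns it has learned. The following toy sketch shows the shape of that loop with a nearest-centroid classifier over invented two-number “image” feature vectors; nothing here comes from a real diagnostic system.

```python
# Toy illustration of supervised learning on labeled examples.
# All data, labels, and features below are invented for illustration.

def train_centroids(examples):
    """'Training': average the feature vectors seen for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        acc = sums.setdefault(label, [0.0] * len(features))
        for i, value in enumerate(features):
            acc[i] += value
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in acc]
            for label, acc in sums.items()}

def classify(centroids, features):
    """Assign the label whose centroid is closest (squared distance)."""
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(features, centroid))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical labeled training data: (feature vector, label) pairs.
labeled = [
    ([0.9, 0.1], "abnormal"), ([0.8, 0.2], "abnormal"),
    ([0.1, 0.9], "normal"),   ([0.2, 0.8], "normal"),
]
model = train_centroids(labeled)
print(classify(model, [0.85, 0.15]))  # prints: abnormal
```

Production systems replace the centroid averaging with deep neural networks trained on millions of labeled images, but the train-then-classify structure is the same.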

Initial research of AI applications in radiology shows success in:

  • Performing automatic segmentation of various structures on CT or MR images, potentially providing higher accuracy and reproducibility6
  • Automatically detecting polyps during colonoscopy, which assists in increasing adenoma detection, especially diminutive adenomas and hyperplastic polyps7
  • Facilitating improved diagnostic decisions through a radiologist-trained tool that provides a summary view of patient information in the electronic health record (EHR), helping uncover relevant underlying issues8
  • Prioritizing interpretation of critical findings that a radiologist might otherwise be unaware of, allowing for faster reading of cases with high suspicion for significant abnormalities

Detecting and predicting cancer—AI systems are also yielding promising results in the diagnosis, and even the treatment, of a range of cancer types. A recent closed claims study by The Doctors Company determined that “undiagnosed malignancies” were the third most common alleged injury in medical and surgical oncology claims. In 29% of oncology claims, patients alleged a failure or delay in diagnosing their illness, and inadequate patient assessments were a contributing factor in 46% of claims, suggesting an opportunity for AI to assist in cancer diagnosis.

Oncology-related AI is showing success in:

  • Detecting metastatic breast cancer. Google AI reported a 99% success rate in detecting this form of cancer.9
  • Diagnosing the two most common types of lung cancer, which can be challenging even for experienced physicians. In 2018, a team of computational researchers reported a 97% accuracy rate from a system trained to diagnose these types of cancer.10
  • Predicting the development of a variety of diseases with 93% accuracy overall, including cancers of the prostate, rectum, and liver,11 using natural language processing techniques
  • Predicting a woman’s future risk of breast cancer12
  • Detecting skin cancers. While AI systems to detect skin cancer are still in their early stages,13 a study showed that AI misdiagnosed malignant melanomas less often than a group of 58 dermatologists.14

Alleviating physician burnout—Nearly half of all physicians identify the documentation burden or workload as the leading cause of burnout, according to the Future of Healthcare survey by The Doctors Company.15 Easing that burden is a potentially exciting use of AI.

Among the AI tools that are helping lessen the pressures on practicing physicians are those that can:

  • Manage workflow
  • Provide a second opinion
  • Help with preliminary triage
  • Allow remote examination
  • Assist with treatment management and dosage
  • Allow voice control

Risks of healthcare AI will emerge

AI-driven technologies will almost inevitably introduce new risks for patients and clinicians, and with them new reasons to sue. In anticipation of these risks, the medical community must make important decisions about how AI is regulated and how physicians are educated. Clearly, current laws and boundaries must be reassessed.

Some of the inherent risks in healthcare AI are already surfacing. For example:

  • Models trained on partial or skewed data sets can show bias toward the demographics represented most fully in the data (e.g., Caucasian patients), raising the likelihood of poor recommendations, such as false positives, for everyone else. It is critical that system builders be able to explain and qualify their training data.
  • Misdiagnosis is possible even in a well-trained system; however high the accuracy rate a manufacturer reports, there will inevitably be times when the AI gets it wrong. This is why it is important to keep a human expert in the loop.
  • Overreliance on AI recommendations could become problematic in the long run. As AI improves, there is a danger that health workers will stop challenging AI results.
  • Black box algorithms can generate suggestions without being able to provide justification for them, which creates problems for the chain of accountability.
  • Cybersecurity issues will likely develop, as they have with other technologies. Cybercriminals, for example, could manipulate machine learning–based systems into misclassifying medical predictions.17

As AI develops and its use proliferates, these risks will be supplemented and expanded, a pattern that played out during the arc of electronic health record adoption.
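The first two risks above, skewed training data and errors hiding behind a strong headline number, can be made concrete with simple arithmetic. Every count below is invented for illustration: a model can report better than 90% overall accuracy while performing far worse on an underrepresented group.

```python
# Sketch: why a high headline accuracy can mask biased performance.
# All counts are hypothetical.

# (group, correct predictions, total cases); the minority group is
# underrepresented in the (hypothetical) training data.
results = [
    ("majority group", 960, 1000),  # 96.0% accurate
    ("minority group", 60, 100),    # only 60.0% accurate
]

total_correct = sum(correct for _, correct, _ in results)
total_cases = sum(n for _, _, n in results)
print(f"overall accuracy: {total_correct / total_cases:.1%}")  # prints: overall accuracy: 92.7%

for group, correct, n in results:
    print(f"{group}: {correct / n:.1%}")
```

This is why reported accuracy figures should always be qualified by the composition of the data on which they were measured.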

Before wholesale deployment, know the risks

Clearly, AI has the potential to reduce the frequency of medical liability litigation by improving the speed and accuracy of diagnoses. Nonetheless, the healthcare industry must know the risks well before embracing wholesale deployment.

Errors are not always preventable, and physicians who choose to augment their practice with machine intelligence need a clear understanding of the liability implications. U.S. law remains ambiguous, and legal scholars are studying how incidents of AI-related malpractice should be handled. Their suggestions range from creating a new status of “AI personhood,” with a requirement that the technology be insured, to extending common enterprise liability to hold all parties involved in its use liable.18

Physician-led bodies such as the American Medical Association have also called for oversight and regulation of healthcare AI systems. Physicians must seek training in the use of AI and adhere to the standards provided by device manufacturers. Training will also enable physicians to fully and clearly articulate potential harms to patients19 in order to obtain true informed consent.20

Thoughtful physicians need to anticipate not only the exciting potential for AI to improve patient care, but also the dangerous unintended consequences that may arise.

The medical professional liability insurance industry faces distinct challenges with the implementation of AI in healthcare. What are the risks? How do we ensure that healthcare providers have the necessary insurance coverage? How do we fight for healthcare providers in court when they are threatened by frivolous claims involving AI? Together, as an industry, we can work on tackling these challenges and co-creating workable solutions.


1 Taylor NP. “Healthcare AI funding hits new high as sector matures.” MedTechDive. Published August 7, 2019. Accessed September 26, 2019.

2 Survey of physicians conducted by The Doctors Company in July 2019 via Twitter and outreach to members: 1,786 respondents to question 1; 734 respondents to question 2; 755 respondents to question 3; and 643 respondents to question 4.

3 Park A. “Nearly 90% of healthcare orgs are experimenting with emerging tech: AI, VR, blockchain.” Becker’s Health IT & CIO Report. Published June 5, 2019. Accessed September 26, 2019.

4 Nundy S, Montgomery T, Wachter RM. “Promoting trust between patients and physicians in the era of artificial intelligence.” JAMA. Published online July 15, 2019;322(6):497–498. doi:10.1001/jama.2018.20563.

5 Digital health top concern of leading healthcare institutions [news release]. Napa, CA: The Doctors Company; August 8, 2019. Accessed September 26, 2019.

6 Cuocolo R, Ugga L. “Imaging applications of artificial intelligence.” HealthManagement. 2018;18(6):484. Accessed September 27, 2019.

7 Wang P, et al. “AI colonoscopy system may detect clues physicians ‘not tuned in to recognize.’” Healio Gastroenterology. Published March 15, 2019. Accessed September 27, 2019.

8 IBM Watson Health. “IBM Watson imaging patient synopsis.” Published May 2019. Accessed September 27, 2019.

9 Wiggers, K. “Google AI claims 99% accuracy in metastatic breast cancer detection.” VentureBeat. Published October 12, 2018. Accessed September 30, 2019.

10 National Cancer Institute. “Using artificial intelligence to classify lung cancer types, predict mutations.” Published October 10, 2018. Accessed September 30, 2019.

11 Kann B, Thompson R, Thomas C, Dicker A, Aneja S. “Artificial intelligence in oncology: Current applications and future directions.” Oncology (Williston Park) 2019 February;33(2): 46-53. Accessed September 30, 2019.

12 “New AI tool predicts breast cancer risk.” Accessed October 1, 2019.

13 “Artificial intelligence shows promise for skin cancer detection” [news release]. Washington: American Academy of Dermatology; March 1, 2019. Accessed October 1, 2019.

14 “Man against machine: AI is better than dermatologists at diagnosing skin cancer” [news release]. European Society for Medical Oncology; May 28, 2018. Accessed October 1, 2019.

15 The Doctors Company. “The future of healthcare: A national survey of physicians—2018.” Published September 2018. Accessed October 1, 2019.

16 Topol E. “Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again.” New York, NY: Hachette Book Group; 2019:285-286.

17 Polyakov A, Forbes Technology Council. “How AI-driven systems can be hacked.” Forbes. Published February 20, 2018. Accessed October 1, 2019.

18 Sullivan H, Schweikart S. “Are current tort liability doctrines adequate for addressing injury caused by AI?” AMA Journal of Ethics. 2019 February;21(2): E160-166. Accessed October 1, 2019.

19 Schiff D, Borenstein J. “How should clinicians communicate with patients about the roles of artificially intelligent team members?” AMA Journal of Ethics. 2019 February;21(2):E138-145. Accessed October 1, 2019.

20 Sullivan H, Schweikart S. “Are current tort liability doctrines adequate for addressing injury caused by AI?” AMA Journal of Ethics. 2019 February;21(2):E160-166. Accessed October 1, 2019.




Richard Anderson, MD, is the Chairman and CEO of The Doctors Company.