Table 4 ELSIs discussed in the nine alternative perspectives articles

From: Reporting of screening and diagnostic AI rarely acknowledges ethical, legal, and social implications: a mass media frame analysis

| Article no. (ref); title | Short description of ELSIs |
|---|---|
| A143 [50]; Medical AI can now predict survival rates—but it's not ready to unleash on patients | Historical bias: algorithms that use historical data may produce biased outputs (e.g. algorithms may find a relationship between a disease and a minority group that has historically had worse access to healthcare). Black box systems: problems arise when doctors cannot access information about the features algorithms use to produce outputs. Physician deskilling: doctors may become over-reliant on algorithms to make decisions and lose the skills to make those decisions without algorithmic aid. |
| A22 [46]; Paging Doctor AI: Artificial intelligence promises all sorts of advances for medicine. And all sorts of concerns | Harm to patients: if AI fails to integrate into clinical workflows or is poorly validated for clinical use, it may lead to worse patient outcomes. Value tension between health and for-profit enterprise: AI is proprietary, and its commercial values collide with those of the bedside clinician. Impact on clinician workflow: AI may be given authority over clinician workflow (e.g. patients' insurers may reimburse only the treatments an algorithm recommends, meaning clinicians lose the ability to exercise their own discretion in treating patients). Exacerbation of human bias: when algorithms are not designed to take structural inequalities into account, they will produce flawed results. |
| A93 [49]; Genetic Testing Companies Take DNA Tests To A Whole New Level | Concerns about data privacy: routine use of AI tools will raise the need for better data protection regulations. |
| A91 [47]; From suicide prevention to genetic testing, there's a widening disconnect between Silicon Valley health-tech and outside experts who see red flags | Lack of involvement with medical research: concerns that AI developers are not using the normal channels for testing and disseminating algorithms; the claims they make to consumers are unvalidated and the safety of their innovations is not regulated. Poor transparency protocols in tech companies. Value tension between health and for-profit enterprise: tech emphasises disruption and convenience, whereas healthcare emphasises safety; the values behind AI development conflict with the Hippocratic oath. Harm to patients: poorly implemented algorithms may lead to iatrogenic health impacts. |
| A3 [45]; The AI governance challenge | Need for better data protection regulations. Value tension between public and for-profit values. |
| A113 [51]; How A.I. Can Save Your Life | Concerns about data privacy. |
| A117 [52]; How tech giants like Google are targeting the seismic NHS data goldmine | Concerns about data privacy: private companies requesting access to public healthcare data. |
| A8 [53]; Addressing Cyber Security Healthcare and Data Integrity | Concerns about data privacy. |
| A260 [48]; Vietnam: AI for early warning about liver cancer | Inaccuracy of AI techniques. |