What’s in a name? A comparison of attitudes towards artificial intelligence (AI) versus augmented human intelligence (AHI)

Abstract

Background

“Artificial intelligence” (AI) is often referred to as “augmented human intelligence” (AHI). The latter term implies that computers support—rather than replace—human decision-making. It is unclear whether the terminology used affects attitudes and perceptions in practice.

Methods

In the context of a quality improvement project implementing AI/AHI-based decision support in a regional health system, we surveyed staff’s attitudes about AI/AHI, randomizing question prompts to refer to either AI or AHI.

Results

Ninety-three staff completed surveys. With a power of 0.95 to detect a difference larger than 0.8 points on a 5-point scale, we did not detect a significant difference in responses to six questions regarding attitudes when respondents were alternatively asked about AI versus AHI (mean difference range: 0.04–0.22 points; p > 0.05).

Conclusion

Although findings may be setting-specific, we observed that use of the terms “AI” and “AHI” in a survey on attitudes of clinical staff elicited similar responses.

Background

In 2018, the American Medical Association released a policy statement on augmented intelligence in medicine [1]. The wording “augmented intelligence” was carefully chosen in contradistinction to the more colloquial term “artificial intelligence” (AI) to emphasize that while computing systems have the capability to augment human medical decision making, these systems are not a replacement for rational human thought.

Popular culture and science fiction are replete with examples of AI as a competing intelligence to be feared, and a recent survey of attitudes about AI among the general public found that only a minority supported the development of AI [2]. In fact, most Americans believed that automation and AI would result in a net destruction of jobs [2]. Perhaps in response to these public perceptions, major cloud computing vendors, including IBM [3] and Microsoft [4], have stated a preference for referring to the technology as “augmented” rather than “artificial” intelligence. Although the idea of “augmented human intelligence” (AHI) has been around for over 50 years [5], “artificial intelligence” remains the prevailing term. Our own institution, Mayo Clinic, has established “augmented human intelligence” (AHI) as the preferred term [6], yet our experience has been that in practice the two terms are often used interchangeably.

Perceptions of and attitudes toward a technology are key elements that can affect the uptake and success of its implementation. However, there is a lack of evidence as to whether health care staff’s perceptions and attitudes differ measurably when the technology is referred to as “augmented human intelligence” versus “artificial intelligence.” Therefore, in this study we sought to understand whether use of the less common but institutionally preferred term “augmented human intelligence” led to more favorable staff perceptions and attitudes about the technology.

Methods

Ethics review

This study was reviewed by the Mayo Clinic Institutional Review Board and deemed “exempt.” No patient identifiable information was used.

Survey participants and clinical context

The study was performed in three regional health system primary care clinics and a 250-bed hospital in a medium-sized city in the American Midwest where AI/AHI was not routinely used as part of clinical care. An electronic survey was emailed in the context of a quality improvement project that included implementation of two decision support systems powered by AI/AHI. One system aimed to identify outpatients with diabetes mellitus who were at risk for poor glycemic control and intervene to reduce that risk (“diabetes-related AI/AHI”). The other system aimed to identify inpatients who were at high risk for hospital readmission and to intervene to reduce that risk (“hospital readmission-related AI/AHI”). Survey participants were clinical staff (physicians, nursing staff, and other allied health staff) in the clinics that had been selected to participate in the pilot project. There was a pre-implementation survey (not reported herein) in which the technology was referred to using the common term, “artificial intelligence.” During the initial presentations of the project to frontline clinical staff, neither of the two terms (i.e., AI or AHI) was used; instead, the technology was primarily referred to as “predictive analytics” or “cognitive computing.” The following subsection describes the survey of attitudes.

Survey questions

The survey included six questions, which respondents answered using a 5-point Likert scale:

  • I am generally familiar with (AI/AHI).

  • I routinely use (AI/AHI) support in my job.

  • I am excited about how (AI/AHI) can help me with my job.

  • I am worried that (AI/AHI) will make my job more complicated.

  • I am worried that (AI/AHI) will make my job obsolete.

  • I believe (AI/AHI) will not be able to understand my job well enough to help.

Survey participants were randomized to have questions worded with the term “artificial intelligence” or “augmented human intelligence.” Participants randomized into each group were emailed an invitation to complete the survey. In addition to the above questions, participants were asked to self-report their role (e.g., physician, registered nurse, other allied health staff role).
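The randomization described above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the authors’ actual code: the function names, the fixed seed, and the use of a `(AI/AHI)` placeholder in the question templates are all assumptions made for demonstration.

```python
import random

# The two survey wordings participants could be randomized to.
TERMS = ("artificial intelligence", "augmented human intelligence")

def assign_survey_arms(participant_ids, seed=42):
    """Randomly assign each participant to the AI or AHI wording.

    A fixed seed makes the assignment reproducible; in a live survey
    the assignment would typically be drawn once and stored.
    """
    rng = random.Random(seed)
    return {pid: rng.choice(TERMS) for pid in participant_ids}

def render_question(template, term):
    """Fill the (AI/AHI) placeholder with the participant's assigned term."""
    return template.replace("(AI/AHI)", term)

arms = assign_survey_arms(range(10))
example = render_question(
    "I am excited about how (AI/AHI) can help me with my job.", arms[0]
)
```

Each participant then receives every question worded with the single term assigned to them, so the two arms differ only in terminology.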

Statistical analysis

Statistical analysis was performed using JMP (SAS Institute, Cary, North Carolina) and Microsoft Excel (Microsoft, Redmond, Washington). Student’s t-test was used to assess for differences in Likert scale responses [7]. P-values < 0.05 were considered statistically significant. Power analysis revealed that combined analysis of the diabetes-related and hospital readmission-related AI/AHI questions had a power of 0.95 to detect a difference larger than 0.8 points on a 5-point scale, and a power of 0.8 to detect a difference of 0.65 points on a 5-point scale.
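For readers who want to reproduce the comparison outside JMP, a pooled two-sample Student’s t-test can be computed directly from the two arms’ Likert responses. The sketch below uses only the Python standard library; the response values are fabricated for illustration and do not come from the study data.

```python
import math
from statistics import mean, variance

def students_t(a, b):
    """Return (t statistic, degrees of freedom) for a pooled-variance
    two-sample Student's t-test, as used to compare the AI and AHI arms."""
    na, nb = len(a), len(b)
    # Pooled sample variance across the two groups.
    pooled_var = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    # Standard error of the difference in means.
    se = math.sqrt(pooled_var * (1 / na + 1 / nb))
    return (mean(a) - mean(b)) / se, na + nb - 2

# Fabricated 5-point Likert responses for demonstration only.
ai_arm  = [4, 3, 5, 4, 2, 4, 3, 5, 4, 3]
ahi_arm = [3, 4, 4, 5, 3, 4, 2, 5, 4, 4]
t, df = students_t(ai_arm, ahi_arm)
# |t| here falls well below the ~2.0 threshold for significance at
# alpha = 0.05, i.e., no detectable difference between the wordings.
```

The resulting t statistic would be compared against the t distribution with `df` degrees of freedom to obtain the p-value.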

Results

Thirty-seven participants of the diabetes-related AI/AHI pilot and 56 of the hospital readmission-related AI/AHI pilot completed the survey (Table 1). The response rate was 46% for staff involved with diabetes-related AI/AHI and 38% for staff involved with hospital readmission-related AI/AHI, yielding an overall response rate of 41% (Table 1).

Table 1 Survey respondent demographics

Survey responses are shown numerically in Table 2 and graphically in Fig. 1. Mean response score differences ranged from 0.04 to 0.22 points out of 5. No statistically significant difference was observed when comparing responses between respondents who were asked about AI versus AHI within each group (i.e., diabetes-related vs. hospital readmission-related) or when combining the two groups (i.e., p > 0.05). At least 60% of respondents in each group did not think that AI or AHI would make their jobs obsolete. Respondents were largely ambivalent about the ability of AI/AHI to understand their jobs.

Table 2 Summary of survey responses
Fig. 1 Survey responses

Discussion

At our institution, survey responses about attitudes toward the technology did not appear to vary depending on whether respondents were asked about AI or AHI. We were reassured by this, as we had worried that staff might misunderstand questions if asked using the institutionally preferred term “AHI” rather than the more common term “AI.”

The finding that survey responses did not vary based on wording also suggests that staff perceived these two terms similarly. Although we did not directly assess perceptions of the two terms (e.g., by asking “How are AI and AHI different?”), this finding suggests that use of the term AHI does not necessarily “soften” attitudes or yield more favorable attitudes. This is similar to the findings of a survey of attitudes toward AI conducted in the general population [2], which compared attitudes toward “AI” and “machine learning” and found similar results for both terms.

One of the possible limitations of any negative study is whether it was powered to detect a difference that would be practically significant. Based on our sample size, we had a power of 95% to detect a difference greater than 0.8 points in our 5-point Likert scale, and an 80% power to detect a difference greater than 0.65 points. Although there is a possibility that a smaller difference existed but was not detected, we deemed differences less than one point to likely be of little practical significance.
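The quoted power figures can be sanity-checked with a quick Monte Carlo simulation of the pooled t-test. In the sketch below, the response standard deviation (1.1 points) and the per-arm sample size (46, roughly half of the 93 respondents) are assumptions made for illustration; the paper's power analysis was computed analytically, not by simulation.

```python
import math
import random
from statistics import mean, variance

def simulated_power(true_diff, sd=1.1, n=46, reps=2000, seed=0):
    """Estimate the power of a two-sided pooled t-test (alpha = 0.05)
    to detect a true mean difference of `true_diff` points."""
    rng = random.Random(seed)
    crit = 1.99  # approximate two-sided critical t for df = 2n - 2 = 90
    hits = 0
    for _ in range(reps):
        a = [rng.gauss(0.0, sd) for _ in range(n)]
        b = [rng.gauss(true_diff, sd) for _ in range(n)]
        pooled = ((n - 1) * variance(a) + (n - 1) * variance(b)) / (2 * n - 2)
        t = (mean(b) - mean(a)) / math.sqrt(pooled * 2 / n)
        hits += abs(t) > crit
    return hits / reps

# Power rises steeply with the size of the true difference, so an
# undetected difference of < 0.65 points remains plausible.
```

Under the assumed standard deviation, a true 0.8-point difference is detected in the large majority of simulated trials, while under the null (no true difference) rejections occur at roughly the nominal 5% rate.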

However, we may have failed to detect variation in different attitudinal dimensions, and our study was not sufficiently powered to conduct sub-group analyses to determine whether attitudes differed between clinical work roles. We also only assessed the attitudes of staff at a regional health system clinic where a specific AI/AHI-based pilot project was being implemented, leaving the possibility that attitudes may vary in settings where AI/AHI is not being implemented. In an initial survey, the common term “AI” was used. We cannot rule out the possibility that use of this term led to anchoring and contributed to the lack of an observed difference in responses between the groups. However, our observation that approximately 40% of respondents in each group did not report being familiar with AI/AHI suggests that this may not have been the case for some staff. Indeed, it is possible that lack of familiarity and deep understanding of the terms (AI/AHI) accounts for the observation that there was no difference in responses when the two terms were used. During the project, we avoided using the term “AHI.” When this term was introduced in the survey presented in this manuscript, it did not seem to soften staff attitudes toward the technology, as some have hypothesized it would [3, 4, 6]. Attitudes of clinical staff elsewhere and of the general population (i.e., patients) may differ. Finally, because we targeted a convenience sample of staff participating in a quality improvement project, we must consider the degree to which selection bias may have influenced participant responses.

Our finding that attitudes did not appear to differ when the technology was referred to as “AI” versus “AHI” can yield different interpretations. One interpretation is that the terms are perceived equivalently and can be used interchangeably. An alternative interpretation is that, because the choice of term does not meaningfully influence staff’s perceptions about the work domain, other considerations should dictate which term is preferred. Our institutional leadership has chosen “augmented human intelligence” (“AHI”) as the institutionally preferred term, rather than “artificial intelligence” (“AI”). To maintain consistency, we will use the institutionally preferred term in future pilot projects with clinical staff.

Conclusions

Although findings may be setting-specific, we observed that use of the terms “AI” and “AHI” in a staff survey on attitudes towards machine learning-based clinical decision support elicited similar responses.

Availability of data and materials

The dataset is not publicly available because the Institutional Review Board application did not include provisions for sharing data with external parties.

Abbreviations

AI: Artificial Intelligence

AHI: Augmented Human Intelligence

References

  1. American Medical Association. Augmented intelligence in health care. https://www.ama-assn.org/system/files/2019-01/augmented-intelligence-policy-report.pdf. Accessed 1 July 2019.

  2. Zhang B, Dafoe A. Artificial intelligence: American attitudes and trends (preprint); 2019. https://ssrn.com/abstract=3312874. Accessed 1 July 2019.

  3. IBM. Augmented intelligence, NOT artificial intelligence; 2017. https://www.ibm.com/blogs/collaboration-solutions/2017/01/31/augmented-intelligence-not-artificial-intelligence/. Accessed 20 June 2019.

  4. Microsoft Cloud Perspectives Blog Team. Starting the AI journey: scenarios for augmented intelligence in banking; 2017. https://cloudblogs.microsoft.com/2017/10/16/starting-the-ai-journey-scenarios-for-augmented-intelligence-in-banking/. Accessed 1 Jul 2019.

  5. Engelbart DC. Augmenting human intellect: a conceptual framework. Menlo Park: Stanford Research Institute; 1962.

  6. Decker W. Augmented Human Intelligence & Digital Strategy (internal Mayo Clinic presentation); 2018.

  7. De Winter JF, Dodou D. Five-Point Likert Items: t test versus Mann-Whitney-Wilcoxon (Addendum added October 2012). Practical Assessment, Research, and Evaluation. 2010;15(1):11.

Acknowledgements

None.

Funding

The project and survey were funded with our institution’s internal research and practice improvement funds. No external funding body participated in the study design, collection, analysis or interpretation of data or in writing the manuscript.

Author information

Authors and Affiliations

Authors

Contributions

SRB and KDW performed the data analysis and the data interpretation, and co-authored the first draft of the manuscript. SRB conceived of the work. SRB, MMo, PB and CCR designed the study and conducted the work. PB, MMo, MMi and CCR provided critical revisions for important intellectual content. All authors have read and approve of this version to be published and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Corresponding author

Correspondence to Santiago Romero-Brufau.

Ethics declarations

Ethics approval and consent to participate

This study was reviewed by the Mayo Clinic Institutional Review Board and deemed “exempt” (Application 19–006162). No patient identifiable information was used, and specific consent procedures were not required.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Romero-Brufau, S., Wyatt, K.D., Boyum, P. et al. What’s in a name? A comparison of attitudes towards artificial intelligence (AI) versus augmented human intelligence (AHI). BMC Med Inform Decis Mak 20, 167 (2020). https://0-doi-org.brum.beds.ac.uk/10.1186/s12911-020-01158-2
