
Evaluating users’ experiences of electronic prescribing systems in relation to patient safety: a mixed methods study

Abstract

Background

User interface (UI) design features such as screen layout, density of information, and use of colour may affect the usability of electronic prescribing (EP) systems, with usability problems previously associated with medication errors. To identify how to improve existing systems, our aim was to explore prescribers’ perspectives of UI features of a commercially available EP system, and how these may affect patient safety.

Methods

Two studies were conducted, each including ten participants prescribing a penicillin for a test patient with a penicillin allergy. In study 1, eye-gaze tracking was used to explore visual attention and behaviour during prescribing, followed by a self-reported EP system usability scale. In study 2, a think-aloud method and semi-structured interview were used to explore participants’ thoughts and views on prescribing, with a focus on UI design and patient safety.

Results

Study 1 showed high visual attention toward information on allergies and patient information, allergy pop-up alerts, and medication order review and confirmation, with less visual attention on adding medication. The system’s usability was rated ‘below average’. In study 2, participants highlighted EP design features and workflow, including screen layout and information overload as being important for patient safety, benefits of EP systems such as keeping a record of relevant information, and suggestions for improvement in relation to system design (colour, fonts, customization) and patient interaction.

Conclusions

Specific UI design factors were identified that may improve the usability and/or safety of EP systems. It is suggested that eye-gaze tracking and think-aloud methods are used in future experimental research in this area. Limitations include the small sample size; further work should include similar studies on other EP systems.


Background

Electronic prescribing (EP), also known as computerised provider order entry, has become increasingly common in UK hospitals [1] and in many countries worldwide [2]. One reason for this is its capability to incorporate various levels of computerised decision support (CDS), which may include providing users with on-screen information about specific risks, and/or pop-up alerts to highlight drug-drug, drug-allergy or drug-disease interactions, or duplicate therapy [3]. EP systems can also result in more complete medication orders [4, 5], and have been shown to reduce medication errors [6].

To support patient safety, EP systems are advised to have an appropriate user interface (UI) design, which allows users to interact with the system in a structured, error-mitigating and yet intuitive way [7,8,9]. Such UI features include, among others, screen layout, density of information, position of messages on the screen, and use of colour. Within the clinical setting, previous studies have particularly emphasised the importance of alert placement and visibility, screen layout, and information prioritisation, including the timing and format of decision support alerts [10, 11]. For example, Peikari et al. [12] demonstrated that self-reported ease of use and information quality (e.g. user interface consistency) can significantly reduce prescribing errors, while Kushniruk et al. [13] highlighted that the risk of medication errors is likely to be increased by a lack of display visibility, if prescribers cannot see the required information on the screen. These studies provide some evidence for a link between medication errors and an EP system’s usability in relation to UI design. However, more research is needed to fully understand the risks and benefits of UI design and its impact on patient safety.

Two methods for exploring how users interact with computer screens are eye-gaze tracking and think-aloud methods [14,15,16,17,18,19]. Eye-gaze tracking is a non-invasive method that gathers data on users’ gaze behaviour and fixation points, allowing interpretation of attentional shifts and cognitive workload (i.e. information processing); think-aloud methods enable researchers to explore the cognitive processes involved and to identify areas for improvement. To our knowledge, neither has yet been used to study EP systems.

Our aim was therefore to conduct a preliminary study to explore users’ perspectives of the on-screen design features of a hospital EP system and how these may affect patient safety. Specific objectives were to (1) evaluate the use of eye-gaze tracking to study users’ visual attention and behaviour when interacting with the screen interface; (2) explore prescribers’ experiences of on-screen EP design and how they perceived this to affect patient safety; and, (3) make recommendations for EP system screen design to support patient safety.

Methods

Setting

The studies took place at a London teaching hospital organisation using a commercially available EP system; participants prescribed for test patients in a simulation setting. The system had some CDS in operation, including pre-specified order sentences and pop-up alerts for medication to which the patient had a documented allergy.

Study design

We conducted a mixed methods study, which took place in two parts. Study 1 was quantitative in nature and applied an eye-gaze tracking method to explore prescribers’ visual attention and behaviour during the prescribing process. Study 2 was qualitative and evaluated the on-screen design of the EP system through a think-aloud technique, followed by a semi-structured interview focusing on the on-screen design of EP systems and their impact on patient safety.

Recruitment

Participants were recruited via an advert on the organisation’s intranet; any prescriber with experience of the organisation’s EP system was eligible to participate. An information leaflet was given, and consent obtained from each participant prior to the study. No payment was given for participation.

Procedure

In both studies, participants were asked to complete a prescribing task for a test patient on the EP system, which included prescribing penicillin for an allergic patient. The prescribing task was limited to one medication to keep the process of ordering as standardised as possible.

Study 1

In study 1, gaze patterns were recorded during the prescribing process using a Tobii Pro X3–120 screen-based eye tracker with a sampling frequency of 250 Hz (i.e. 250 recorded data samples per second) [20]. The screen resolution was 1600 × 1200. Participants were then asked to complete the 10-item System Usability Scale (SUS). The SUS is a widely applied, standardised questionnaire used to gauge a system’s “user-friendliness” [21]. Example items are “I found this system very cumbersome to use” and “I felt very confident using the system”; each item is rated on a 5-point Likert scale (1 = “Strongly disagree” to 5 = “Strongly agree”). Converted into percentiles, a score of 68 (equivalent to the 50th percentile) has been defined as “average” usability [21,22,23], with lower scores suggesting a need for UI redesign.
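
For reference, the standard SUS scoring procedure is illustrated below in R, the language used for our quantitative analysis. This is a minimal sketch rather than the study’s analysis code; the object names (score_sus, sus_responses) are illustrative only.

```r
# Minimal sketch of standard SUS scoring (not the study's analysis script).
# Assumes one row per participant with ten columns holding responses 1-5.
score_sus <- function(responses) {
  odd  <- responses[c(1, 3, 5, 7, 9)] - 1    # positively worded items: response - 1
  even <- 5 - responses[c(2, 4, 6, 8, 10)]   # negatively worded items: 5 - response
  sum(odd, even) * 2.5                       # rescale the 0-40 total to 0-100
}

# Illustrative data: 10 participants x 10 items
sus_responses <- as.data.frame(matrix(sample(1:5, 100, replace = TRUE), ncol = 10))
sus_scores    <- apply(sus_responses, 1, score_sus)
c(mean = mean(sus_scores), sd = sd(sus_scores))
```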

Study 2

In study 2, participants were asked to comment on any aspect of the UI whilst prescribing (e.g. how they interpret on-screen design features and what they expect to see and do during the process). This was followed by a semi-structured interview (cf. supplementary material) that explored participants’ views on the UI and the EP system’s usability in relation to medication safety. In particular, we explored (1) how participants viewed the screen and worked through the medical scenario designed for this study; (2) how various design factors influenced their understanding and uptake of information from the screen; (3) what influence these factors may have had on their prescribing behaviour (e.g. navigation, consideration of pop-up alerts, medication selection, etc.); and, (4) what impact (1) and (2) might have on their perception of how the system affects medication safety.

Data analysis

Study 1

Each eye-gaze tracking video was segmented into steps based on the specific tasks and changes in UI features during the prescribing process:

  • Step 1: Reviewing patient data (e.g. identity, allergy status) and navigating to the medication page

  • Step 2: Reviewing new pop-up window and selecting option to add new medication

  • Step 3: Entering the drug name in the search field and specifying the dosage for the medication

  • Step 4: Appearance of a pop-up alert and acknowledging it by clicking “OK”

  • Step 5: Reviewing the medication order screen and signing

  • Step 6: Reviewing final prescription

The data were exported from the Tobii software into the statistical software R [24], which was used to process the data for each segment, exploring gaze behaviour both for the full screen and for each of its quadrants (Fig. 1). By convention, we set the upper-left corner of the screen as the origin (0, 0), the top edge of the screen as the X-axis, and the left edge as the Y-axis.
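
To illustrate this quadrant-based processing, the sketch below shows how exported fixation coordinates could be assigned to quadrants and counted per step. It is a simplified example rather than our actual processing script; the column names (fixation_x, fixation_y, step) and file name are assumptions about the export format.

```r
# Illustrative sketch of the quadrant-based processing (not the study's script).
# Assumes a per-participant export with pixel coordinates fixation_x, fixation_y
# (origin at the top-left corner of the 1600 x 1200 screen) and a column step
# (1-6) from the video segmentation.
screen_w <- 1600
screen_h <- 1200

assign_quadrant <- function(x, y) {
  horiz <- ifelse(x < screen_w / 2, "left", "right")
  vert  <- ifelse(y < screen_h / 2, "top", "bottom")
  paste(vert, horiz)
}

fixations <- read.csv("participant_01_fixations.csv")   # hypothetical file name
fixations$quadrant <- assign_quadrant(fixations$fixation_x, fixations$fixation_y)

# Number of fixations (NoF) per step and quadrant for this participant
fixations$one <- 1
nof <- aggregate(one ~ step + quadrant, data = fixations, FUN = sum)
names(nof)[3] <- "nof"
```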

Fig. 1 Division of quadrants (created using ProcessOn and PowerPoint)

Our main outcome measure was the scan path, summarised as the mean number of fixation points across all participants for the full screen and for each quadrant, within each segment. Coded as the sequence of items fixated, this measure provides insight into the cognitive state of the user during evaluation tasks, with more and longer fixations indicating increased focus, attention, and information processing [17]. Because of the small sample, we used descriptive analysis to explore both prescribers’ gaze patterns and their perceived usability of the EP system on the SUS.
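
A sketch of this descriptive aggregation, under the same assumptions as the previous example (per-participant fixation counts stacked into a single data frame, here called nof_all, with columns participant, step, quadrant and nof), might look as follows; it is illustrative only.

```r
# Illustrative sketch of the descriptive analysis behind Table 1 (not the
# study's actual script). Assumes nof_all stacks the per-participant counts
# produced above, with columns participant, step, quadrant and nof.
mean_nof <- aggregate(nof ~ step + quadrant, data = nof_all, FUN = mean)
sd_nof   <- aggregate(nof ~ step + quadrant, data = nof_all, FUN = sd)
table1   <- merge(mean_nof, sd_nof, by = c("step", "quadrant"),
                  suffixes = c("_mean", "_sd"))

# Full-screen totals per participant and step, then averaged across participants
full_screen <- aggregate(nof ~ participant + step, data = nof_all, FUN = sum)
aggregate(nof ~ step, data = full_screen,
          FUN = function(x) c(mean = mean(x), sd = sd(x)))
```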

Study 2

For study 2, interviews were audio recorded, transcribed verbatim by an external company, and analysed with NVivo v12 [25], using an inductive thematic approach. Thematic analysis is an exploratory approach and particularly suited for rich yet complex data [26]. Researchers LA and NA familiarised themselves with the data by reading and re-reading the transcripts, before identifying emerging and recurrent themes and key ideas in context relating to the research objectives. The quantitative and qualitative findings from studies 1 and 2 were then synthesised narratively [27] to address the study’s aim.

Results

Study 1

The sample for study 1 comprised ten medical prescribers: five registrars, four foundation year 2 doctors, and one foundation year 1 doctor. Participants had, on average, 3.9 (SD = 2.4) years of experience with the EP system concerned. Their backgrounds were: general surgery (n = 1), medicine (n = 1), renal (n = 2), orthopaedics (n = 1), urology (n = 1), plastic surgery (n = 1), cardiovascular (n = 2), and stroke (n = 1). Each eye-gaze tracking session took a mean of 3.28 min (SD = 0.24).

During steps 1–3, the highest numbers of fixation points were observed in the top left and bottom left quadrants (Table 1). These three steps involved reviewing patient details, and searching for and ordering the appropriate medication. For step 4, in which the allergy pop-up alert was displayed, most fixations were observed in the top left and the top and bottom right quadrants of the screen, suggesting that prescribers (re-)evaluated the patient data in relation to the pop-up alert. In steps 5 and 6, large numbers of fixations occurred in the top right and bottom right quadrants, respectively. In both of these steps, prescribers reviewed the medication order before confirming it, the latter via an electronic signature request positioned in the bottom right quadrant of the screen. Overall, for the full screen, the highest numbers of fixations were observed for steps 3, 4 and 5, indicating that medication selection, review of the medication order, and the pop-up alert led to the greatest numbers of fixations (Fig. 2) and therefore the highest cognitive load.

Table 1 Mean number of fixations (NoF) and standard deviation (SD) across participants for each step: partial and full screen. For the quadrants, green denotes the lowest values, yellow the midpoints (50%) between low and high values, and red the highest values; for the full screen, low to high NoF are coloured from light to dark blue. The table also provides the mean duration of each video segment in milliseconds for each step
Fig. 2 Example of scan path for reviewing medication (created using R version 3.6.1)
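
Fig. 2 was produced in R; a minimal base-R sketch of such a scan-path plot is shown below. It assumes a data frame, here called fix, holding one participant’s fixations for one step in temporal order, with the pixel coordinate columns used in the earlier sketches; it is not the code used to generate the published figure.

```r
# Minimal sketch of a scan-path plot similar to Fig. 2 (illustrative only).
# Assumes fix holds one participant's fixations for one step, in temporal
# order, with pixel coordinates fixation_x and fixation_y (origin top-left).
plot(fix$fixation_x, fix$fixation_y, type = "o", pch = 16,
     xlim = c(0, 1600), ylim = c(1200, 0),   # reversed y-axis: origin at top-left
     xlab = "x (pixels)", ylab = "y (pixels)",
     main = "Scan path: reviewing the medication order")
text(fix$fixation_x, fix$fixation_y, labels = seq_len(nrow(fix)),
     pos = 3, cex = 0.7)                     # number the fixations in order
```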

In terms of the EP system’s usability, the mean raw SUS score was 39 (SD = 4.7) out of 100, corresponding to a percentile rank of 5% in cross-industry comparisons, which is considered ‘below average’ usability. These cross-industry comparisons are based on data from research using the SUS on a wide range of systems and technologies in different contexts and settings [22].

Study 2

Study 2 comprised ten prescribers: three registrars, two foundation year 2 doctors, three foundation year 1 doctors, and two pharmacist prescribers. Participants had, on average, 3.3 (SD = 2.3) years of experience in using the EP system. Their areas of expertise were: general surgery (n = 1), medicine (n = 1), neurosurgery (n = 1), renal (n = 2), orthopaedics (n = 2), urology (n = 1), and stroke (n = 2). Data analysis from the think-aloud sessions and semi-structured interviews revealed the following themes: (1) EP design features; (2) usefulness of EP systems; and (3) suggestions for improvement. Subthemes and example quotes for each theme are presented in Tables 2, 3 and 4.

Table 2 Sub-themes and example quotes relating to the theme ‘Electronic Prescribing design features’
Table 3 Sub-themes and example quotes relating to the theme ‘Usefulness of Electronic Prescribing systems’
Table 4 Sub-themes and example quotes relating to the theme ‘Suggestions for improvement’

EP design features

Participants referred to design features relating to pop-up alerts, the process of prescribing, the screen layout and design, and the visibility of the allergy section. Pop-up alerts were discussed in terms of both frequency and specificity; in particular, alerts for mild or redundant interactions were described as a major obstacle and a contributor to alert fatigue. The process of prescribing was also discussed in relation to the number of clicks required and the lack of automation (e.g. having to enter fields manually). Regarding navigation and allergy visibility, the screen layout and design were considered too ‘busy’, making the system less intuitive and, thus, a risk to patient safety.

Usefulness of the EP system

The usefulness of the EP system was highlighted in association with its accessibility, safety, standardisation, and record-keeping of relevant information, which reduced the need for transcribing. For instance, the fact that all information is clearly documented, auditable, and provided in real time was perceived as a facilitator of patient safety.

Suggestions for improvement

Lastly, participants made a variety of suggestions for improvement, including customisation, incorporating links to relevant guidelines, and the use of more salient colours. For example, participants wished to have a default list of medication options based on their speciality, pop-up alerts that offer alternatives to drug-drug interactions, and important information highlighted in colour, bold, or larger fonts. Participants also acknowledged the need to enhance the system’s accessibility to facilitate greater patient interaction; this reflected concerns that prescribing electronically can lead to prescribing remotely, thus precluding patient involvement.

Discussion

Key findings

This study aimed to evaluate the UI and on-screen design of a hospital EP system and to make recommendations for such systems. Findings from study 1 show that, during the overall prescribing process, the highest average numbers of fixation points were observed during review of patient data, searching for and specifying the dose of the medication, ordering and reviewing the medication, and reviewing the allergy pop-up alert. According to the SUS, the EP system was perceived to be more usable than only 5% of cross-industry solutions. Study 2 revealed three main themes: EP design features, usefulness of EP systems, and participants’ suggestions for improvement. Design features were discussed in relation to their impact on process flow, including aspects such as screen features and layout, as well as information overload. The usefulness of EP systems was expressed in terms of standardization and safety measures that reduce the likelihood of medication errors, while suggestions for improvement related to embedded prescribing guidelines and changes in system design (e.g. colour, fonts, customization) to enhance information visibility and overall attention.

Comparison with existing literature

There is wide variation across healthcare organisations and EP system vendors in how on-screen safety information is presented to prescribers, with no clear guidelines derived from studies or best practice recommendations. Previous recommendations indicate that the placement, visibility, and prioritisation of information in the UI should be customised to the system [28], that the colour of alerts or general text should function as a risk indicator [29, 30], and that UI consistency (e.g. buttons with the same purpose performing the same action) is advised to help avoid medication errors [10, 12]. Our findings echo these observations, while also highlighting integrated local guidelines, system customisation by profession (e.g. a default list of medications commonly used in each speciality) and preference (e.g. layout, font size), and enabling greater patient interaction. In addition, and similar to Eghdam et al. [31], our findings showed that participants focused on certain screen elements during the prescribing process, reflecting attention on the specific content displayed. However, while Eghdam et al. conducted eye tracking on a prototype aimed at supporting antibiotic use in intensive care, we evaluated a fully functioning, operational EP system with a particular focus on UI design features and how these may affect patient safety.

Strengths and limitations

A strength of this study is the novel use of eye-gaze tracking to explore a fully functioning EP system with a focus on patient safety; we have shown this to be a feasible method for studying user interactions with EP. Another strength is the qualitative evaluation, which allowed us to explore the limitations and benefits of the EP system and to develop recommendations on how to make the system more efficient. Contributing to the evidence on on-screen design and usability, these findings are an asset for system developers and organisations planning to implement EP and CDS systems, as well as for those refining and adapting existing systems. Limitations include the small sample size, the use of only one usability measure, and the inclusion of only one EP system at one hospital.

Implication for research and practice

Our findings have several implications for research and practice. Future studies should collect data from a larger sample and apply additional usability measures based on aspects such as effectiveness (e.g. success/failure in conducting the task safely) and efficiency (e.g. time on task; click flows), and should evaluate how these affect the usability, perceived satisfaction, and safe use of the system. Studies are also recommended to focus on pop-up alerts and workflow, including aspects such as screen layout and appearance (e.g. colour, fonts, customization of on-screen appearance), and to evaluate their impact on information visibility and overall attention. This will allow further exploration of the degree to which these UI features influence prescribers’ understanding and uptake of information from the screen, leading to the evaluation of interventions to improve prescribing behaviour and decision making.

We found eye-gaze tracking to be a potentially useful method for evaluating UI features and their impact on patient safety. Eye-gaze patterns reflect prescribers’ visual behaviour during the prescribing task and, if applied to a large sample, could be used in training, assessment, and feedback. Future studies are therefore advised to use eye-gaze data to compare visual patterns between expert and novice prescribers, and to explore its use in improving task performance in relation to safe prescribing. Studies should also assess the extent to which visual attention during prescribing is a surrogate marker of the desired clinical outcome. Lastly, our findings suggest the importance of working with system vendors to conduct usability studies, in order to evaluate key EP system UI design features with a variety of users, with the goal of contributing to evidence-based usability standards for EP systems.

Conclusion

Meaningful user-centred design in EP systems allows users to interact and work efficiently and safely, and supports a robust and standardised process of prescribing. System vendors and organisations should recognise the importance of end-user testing and of involving end users in all stages of the design, development and implementation of EP systems. The findings of this study provide information about specific UI design features that may relate to the usability of EP systems and improve the accuracy and safety of prescribing.

Availability of data and materials

The data that support the findings of this study are not publicly available. Because of the nature of the informed consent and ethical restrictions, data distribution is not permitted.

Abbreviations

UI: User interface

EP: Electronic prescribing

MS: Milliseconds

NoF: Number of fixations

SD: Standard deviation

Hz: Hertz

References

  1. Puaar SJ, Franklin BD. Impact of an inpatient electronic prescribing system on prescribing error causation: a qualitative evaluation in an English hospital. BMJ Qual Saf. 2018;27:529–38.

  2. Miller RA, Gardner RM, Johnson KB, Hripcsak G. Clinical decision support and electronic prescribing systems: a time for responsible thought and action. J Am Med Inform Assoc. 2005;12(4):403–9.

  3. Radley DC, Wasserman MR, Olsho LE, Shoemaker SJ, Spranca MD, Bradshaw B. Reduction in medication errors in hospitals due to adoption of computerized provider order entry systems. J Am Med Inform Assoc. 2013;20(3):470–6.

  4. Bates DW, Teich JM, Lee J, Seger D, Kuperman GJ, Ma'Luf N, et al. The impact of computerized physician order entry on medication error prevention. J Am Med Inform Assoc. 1999;6(4):313–21.

  5. Kaushal R, Shojania KG, Bates DW. Effects of computerized physician order entry and clinical decision support systems on medication safety: a systematic review. Arch Intern Med. 2003;163:1409–16.

  6. Reckmann MH, Westbrook JI, Koh Y, Lo C, Day RO. Does computerized provider order entry reduce prescribing errors for hospital inpatients? A systematic review. J Am Med Inform Assoc. 2009;16(5):613–23.

  7. Fairbanks RJ, Caplan S. Poor interface design and lack of usability testing facilitate medical error. Jt Comm J Qual Saf. 2004;30(10):579–84.

  8. Middleton B, Bloomrosen M, Dente MA, Hashmat B, Koppel R, Overhage JM, et al. Enhancing patient safety and quality of care by improving the usability of electronic health record systems: recommendations from AMIA. J Am Med Inform Assoc. 2013;20(e1):e2–8.

  9. Chan J, Shojania KG, Easty AC, Etchells EE. Does user-centred design affect the efficiency, usability and safety of CPOE order sets? J Am Med Inform Assoc. 2011;18(3):276–81.

  10. Payne TH, Hines LE, Chan RC, Hartman S, Kapusnik-Uner J, Russ AL, et al. Recommendations to improve the usability of drug-drug interaction clinical decision support alerts. J Am Med Inform Assoc. 2015;22(6):1243–50.

  11. Phansalkar S, Edworthy J, Hellier E, Seger DL, Schedlbauer A, Avery AJ, et al. A review of human factors principles for the design and implementation of medication safety alerts in clinical information systems. J Am Med Inform Assoc. 2010;17(5):493–501.

  12. Peikari HR, Zakaria MS, Yasin NM, Shah MH, Elhissi A. Role of computerized physician order entry usability in the reduction of prescribing errors. Healthc Inform Res. 2013;19(2):93–101.

  13. Kushniruk AW, Triola MM, Borycki EM, Stein B, Kannry JL. Technology induced error and usability: the relationship between usability problems and prescription errors when using a handheld application. Int J Med Inform. 2005;74(7–8):519–26.

  14. Jaspers MW. A comparison of usability methods for testing interactive health technologies: methodological aspects and empirical evidence. Int J Med Inform. 2009;78(5):340–53.

  15. Li AC, Kannry JL, Kushniruk A, Chrimes D, McGinn TG, Edonyabo D, et al. Integrating usability testing and think-aloud protocol analysis with "near-live" clinical simulations in evaluating clinical decision support. Int J Med Inform. 2012;81(11):761–72.

  16. Jacob RJK, Karn KS. Commentary on Section 4 – Eye tracking in human-computer interaction and usability research: ready to deliver the promises. In: Hyönä J, Radach R, Deubel H, editors. The Mind's Eye. Amsterdam: North-Holland; 2003. p. 573–605.

  17. Raschke M, Blascheck T, Burch M. Visual analysis of eye tracking data. In: Huang W, editor. Handbook of human centric visualization. New York: Springer; 2014.

  18. Blascheck T, Kurzhals K, Raschke M, Burch M, Weiskopf D, Ertl T. State-of-the-art of visualization for eye tracking data. In: Borgo R, Maciejewski R, Viola I, editors. EuroVis – STARs. The Eurographics Association; 2014.

  19. Horsky J, Aarts J, Verheul L, Seger DL, van der Sijs H, Bates DW. Clinical reasoning in the context of active decision support during medication prescribing. Int J Med Inform. 2017;97:1–11.

  20. Morgante JD, Zolfaghari R, Johnson SP. A critical test of temporal and spatial accuracy of the Tobii T60XL eye tracker. Infancy. 2012;17(1):9–32.

  21. Brooke J. SUS: a retrospective. J Usability Stud. 2013;8(2):29–40.

  22. Lewis JR. The System Usability Scale: past, present, and future. Int J Hum Comput Interact. 2018;34(7):577–90.

  23. Bangor A, Kortum P, Miller J. Determining what individual SUS scores mean: adding an adjective rating scale. J Usability Stud. 2009;4(3):114–23.

  24. R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2004.

  25. Welsh E. Dealing with data: using NVivo in the qualitative data analysis process. Forum Qual Soc Res. 2002;3(2).

  26. Braun V, Clarke V. Using thematic analysis in psychology. Qual Res Psychol. 2006;3(2):77–101.

  27. Barbour RS. The case for combining qualitative and quantitative approaches in health services research. J Health Serv Res Policy. 1999;4(1):39–43.

  28. Coleman JJ, van der Sijs H, Haefeli WE, Slight SP, McDowell SE, Seidling HM, et al. On the alert: future priorities for alerts in clinical decision support for computerized physician order entry identified from a European workshop. BMC Med Inform Decis Mak. 2013;13:111.

  29. Luna DR, Rizzato Lede DA, Otero CM, Risk MR, González Bernaldo de Quirós F. User-centered design improves the usability of drug-drug interaction alerts: experimental comparison of interfaces. J Biomed Inform. 2017;66:204–13.

  30. Luna DR, Rizzato Lede DA, Rubin L, Otero CM, Ortiz JM, García MG, et al. User-centered design improves the usability of drug-drug interaction alerts: a validation study in the real scenario. Stud Health Technol Inform. 2017;245:1085–9.

  31. Eghdam A, Forsman J, Falkenhav M, Lind M, Koch S. Combining usability testing with eye-tracking technology: evaluation of a visualization support for antibiotic use in intensive care. Stud Health Technol Inform. 2011;169:945–9.


Acknowledgements

We thank all participants for their contribution to and participation in this study.

Funding

This article represents independent research supported by the NIHR Imperial Patient Safety Translational Research Centre and the NIHR Health Protection Research Unit in Healthcare Associated Infections and Antimicrobial Resistance at Imperial College. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health and Social Care. The funder had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Author information


Contributions

LA, NS, and BDF conceptualised this research. LA and NS collected the data and LA, NS, and SC analysed them. The manuscript was written by LA with contributions from all authors. All authors have read and approved the manuscript.

Corresponding author

Correspondence to Lisa Aufegger.

Ethics declarations

Ethics approval and consent to participate

Study 1, using eye-gaze tracking, was classified as a service evaluation (ID: 283) and registered as such within the Imperial College London NHS Trust; Study 2 received approval from the Health Research Authority (19/HRA/0517), which is an executive non-departmental public body of the Department of Health in the UK. Informed written consent was obtained from all participants, and no payment was given in exchange for participation.

Consent for publication

Not applicable.

Competing interests

We declare no conflict(s) of interest associated with this research.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Additional file 1.

Participant interview.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


Cite this article

Aufegger, L., Serou, N., Chen, S. et al. Evaluating users’ experiences of electronic prescribing systems in relation to patient safety: a mixed methods study. BMC Med Inform Decis Mak 20, 62 (2020). https://0-doi-org.brum.beds.ac.uk/10.1186/s12911-020-1080-9
