
Review paper

Computerized clinical decision support for prescribing: provision does not guarantee uptake

Annette Moxey, Jane Robertson, David Newby, Isla Hains, Margaret Williamson, Sallie-Anne Pearson
DOI: http://dx.doi.org/10.1197/jamia.M3170; pages 25–33; first published online 1 January 2010

Abstract

There is wide variability in the use and adoption of recommendations generated by computerized clinical decision support systems (CDSSs) despite the benefits they may bring to clinical practice. We conducted a systematic review to explore the barriers to, and facilitators of, CDSS uptake by physicians to guide prescribing decisions. We identified 58 studies by searching electronic databases (1990–2007). Factors impacting on CDSS use included: the availability of hardware, technical support and training; integration of the system into workflows; and the relevance and timeliness of the clinical messages. Further, systems that were endorsed by colleagues, minimized perceived threats to professional autonomy, and did not compromise doctor-patient interactions were accepted by users. Despite advances in technology and CDSS sophistication, most factors were consistently reported over time and across ambulatory and institutional settings. Such factors must be addressed when deploying CDSSs so that improvements in uptake, practice and patient outcomes may be achieved.

Keywords
  • Clinical decision support systems
  • medication systems
  • drug prescriptions
  • drug utilization
  • physician practice patterns

Over the last two decades there have been rapid advances in information technology, increased acceptance of computers in healthcare and widespread interest in developing evidence-based computerized clinical decision support systems (CDSSs). There has been not only a greater infiltration of CDSSs in clinical practice over time but also increased levels of technical sophistication and portability.1,2 The simplest systems present narrative text requiring further processing and analysis by clinicians before decision-making, while the more sophisticated systems are “interactive advisors”, integrating patient-specific information, such as laboratory results and active orders, with guidelines or protocols, and presenting derived information for decision making.3

CDSSs have been developed for a range of clinical circumstances and play an important role in guiding prescribing practices such as assisting in drug selection and dosing suggestions, flagging potential adverse drug reactions and drug allergies, and identifying duplication of therapy.4 Systematic reviews have demonstrated that use of CDSSs for prescribing can reduce toxic drug levels and time to therapeutic control,5,6 reduce medication errors7,8 and change prescribing in accordance with guideline recommendations.9 Further, there is some evidence pointing to greater impacts in institutional compared with ambulatory care and for fine-tuning therapy (eg, recommendations to improve patient safety, adjust the dose, duration or form of prescribed drugs, or increase laboratory testing for patients on long-term therapy) rather than influencing initial drug choices.10

Systematic reviews of electronic and paper-based clinical decision support across a range of clinical domains have demonstrated its effectiveness in changing clinicians' practices such as screening, test ordering and guideline adherence,5,7–9,11–13 but there is no consistent translation into patient outcomes.11 Some of these reviews have also attempted to evaluate whether particular system or organizational features predict successful CDSS implementation and changes in practice and patient outcomes.10–12,14 However, insufficient attention to the reporting of these features in intervention studies has hampered these approaches.14 Despite these limitations, some reviews have demonstrated the benefits of computer- over paper-based decision support,12 system- over user-initiated tools,10,11 and integrated over stand-alone systems,10 and the advantage of providing specific treatment recommendations or advice rather than a simple problem assessment requiring further consideration by end users.12 CDSS success has also been shown to be associated with higher levels of integration into the clinical workflow and advice presented at the time and location of decision making.12

Despite the growing evidence base in this field, there remains some level of inconsistency about the relative merits of CDSSs in influencing practice patterns and patient outcomes. Importantly, there is clear evidence that CDSS tools are not always used when available,15 with up to 96% of alerts being overridden or ignored by physicians.16–19 This variability in uptake (use and adoption of recommendations generated by CDSSs) and impact is most likely due to a range of inter-related factors including, but not limited to, the technical aspects of the CDSS. They also include the setting in which the system is deployed and the characteristics of system end users and the patients they treat. Given the limited detail in intervention studies about system-specific features and other potentially important factors impacting on the acceptability and uptake of CDSSs, there is likely to be merit in examining data from studies beyond those evaluated in the strict confines of systematic reviews of intervention studies. These include information generated from studies investigating the factors influencing the uptake of specific decision support systems as well as those exploring attitudes and practices toward CDSSs in general.

Therefore, the objective of this study was to review systematically the peer-reviewed literature to better understand the barriers to, and facilitators of, the use of CDSSs for prescribing. In particular, we examined whether the factors impacting on CDSS uptake varied over time and by study setting.

Methods

Literature search and included studies (figure 1)

We searched Medline (1990 to November Week 3, 2007), PreMedline (November 30, 2007), Embase (1990 to Week 47, 2007), CINAHL (1990 to November Week 4, 2007) and PsycINFO (1990 to November Week 4, 2007). We restricted the review to English-language studies published since 1990. We combined keywords and/or subject headings to identify computer-based decision support (eg, decision support systems clinical, decision making computer assisted) with the area of prescribing and medicines use (eg, prescription drug, drug utilization) and medical practice (eg, physicians' practice patterns, medical practice). We also searched INSPEC (1990 to November 2007) and the Cochrane Database of Systematic Reviews (November 2007), including reviews and protocols published under the Effective Practice and Organisation of Care Group. Finally, we hand searched reference lists of retrieved articles and reviews.
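For illustration only, the sketch below shows how the three concept blocks described above can be combined with Boolean operators into a single query; the term lists are the examples given in the text, not the authors' full search strategy or database-specific syntax.

```python
# Illustrative sketch only (not the authors' exact search syntax): combining the
# three concept blocks described above with Boolean operators. The term lists are
# the examples given in the text, not the full search strategy.
cdss_terms = ["decision support systems, clinical", "decision making, computer assisted"]
medicines_terms = ["prescription drug", "drug utilization"]
practice_terms = ["physicians' practice patterns", "medical practice"]

def or_block(terms):
    # OR together the synonyms/subject headings for a single concept
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# AND the three concept blocks together to form the combined query
query = " AND ".join(or_block(block) for block in (cdss_terms, medicines_terms, practice_terms))
print(query)
```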

Figure 1

Process by which studies were identified for review. In cases where one study was published across multiple manuscripts,35,49,50,56,67,68 we combined data from the manuscripts to form one set of data per study. In addition one manuscript provided data on two separate studies.69 Thus 58 studies, from 60 manuscripts, were reported in the review.

We reviewed the titles and abstracts of studies captured in the search strategy for relevance to the study aims. The full-text versions of potentially relevant articles were retrieved and considered for inclusion if they met the following criteria:

  • examined any type of decision support or evidence-based information presented electronically (eg, alerts, dose calculators, electronic guidelines);

  • the decision support provided guidance on prescribing-related issues (eg, drug interactions, drug monitoring, treatment recommendations);

  • primarily targeted physicians but were not necessarily exclusive to this clinical group; and

  • provided information on the barriers to, and facilitators of, the uptake of CDSSs for prescribing based on primary data collection methods (eg, surveys, interviews, focus groups).

Editorials or studies reporting the views of individuals or speculation as to why a specific CDSS was or was not used were excluded from the review.

Data extraction

Data were extracted from eligible studies on:

  1. Study characteristics—year of publication, year study was conducted, objectives, setting, clinical focus, clinical setting (ambulatory versus institutional care), study design and participant numbers.

  2. CDSS features—type of decision support presented to users and system features identified in previous literature reviews.10–12 These details could only be ascertained for studies evaluating a specific CDSS. We extracted information on:

    • Whether systems used prompts, guidelines, calculators or risk assessment tools or were information retrieval systems.

    • How the CDSS was accessed, that is, system-initiated (eg, alerts or reminders) versus user-initiated support (eg, online information retrieval systems such as Medline or electronic guidelines).

    • Whether the system was integrated into existing programs (eg, electronic medical records) or stand-alone.

    • The type of advice given to clinicians. The system may have provided an overall assessment requiring further consideration by the user or specific recommendations for action.

  3. Barriers to, and facilitators of, CDSS use were recorded exactly as described in the individual studies. We classified these into four domains using a previously published schema20:

    • Organizational (eg, resource use, access to computers, organizational support)

    • Provider (eg, computer skills, knowledge and training)

    • Patient (eg, patient characteristics and interaction during consultation)

    • Specifics of the CDSS (eg, presentation format and usability).

Further, within each domain we developed a hierarchical theme and subtheme structure and noted the studies reporting specific themes and subthemes within this framework.
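As a minimal illustration of this coding framework (the domain names follow the schema above; the theme and subtheme labels are examples drawn from tables 3–6, and the data structures are hypothetical rather than the authors' actual tooling):

```python
# Minimal sketch of the four-domain coding framework with a hierarchical
# theme -> subtheme structure. Labels are examples drawn from tables 3-6;
# the data structures are illustrative, not the authors' actual tooling.
from collections import defaultdict

framework = {
    "Organizational": {"Infrastructure": ["Access", "Technical problems"],
                       "Implementation": ["Technical support", "Endorsement"]},
    "Provider": {"Knowledge and training": [], "Current practice and preferences": []},
    "Patient": {"Patient characteristics": [], "Patient-doctor interactions": []},
    "Specifics of the CDSS": {"Integration with workflow": ["Perception of time"],
                              "Content": ["Relevance", "Quality of information"]},
}

# Verbatim barrier/facilitator extracts, keyed by (domain, theme, subtheme)
extracts = defaultdict(list)

def code_extract(study_id, domain, theme, subtheme, text):
    # Record a verbatim extract under its domain/theme/subtheme
    assert domain in framework and theme in framework[domain]
    extracts[(domain, theme, subtheme)].append((study_id, text))

# Example: an access-related barrier reported by study 26 (see table 3)
code_extract(26, "Organizational", "Infrastructure", "Access",
             "Access to computers is currently a problem...")
```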

Data extraction was undertaken independently by two reviewers (AM and IH). A third reviewer (SP) assessed a sample of studies to validate the extraction method and clarify any disagreements between the primary data extractors. All data were subsequently entered into an Excel spreadsheet to facilitate analysis (available from the authors on request).

Analysis and reporting

We undertook a thematic analysis of the “verbatim” data extracted from the studies according to the four domains described previously. The verbatim extracts were reviewed independently by pairs of reviewers (AM and IH, AM and SP) and the findings were analyzed, overall, and by time period (1990–1999 versus 2000–2007) and study setting (ambulatory versus institutional or inpatient care). Reviewers reached consensus around the interpretation of findings via group discussion.

For the time-based analysis, we required manuscripts to report the year in which the study was conducted. We chose this variable over year of publication because we felt it would more accurately capture any changes in the factors affecting CDSS use over time. However, 17 studies did not report when they were conducted. Of these, four were published prior to or during 2000 and were classified as having been conducted in the period 1990–1999; the remainder, published after 2000, were classified in the 2000–2007 period. To ensure that these assumptions did not affect the findings, we conducted the time-based analysis with and without the 17 studies that did not report year of study conduct. We found no difference in the outcomes, so we report our analysis according to the classification described above.
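A minimal sketch of this classification rule and the accompanying sensitivity check (field names and data structures are hypothetical):

```python
# Illustrative sketch of the time-period classification and sensitivity check
# described above; field names and data structures are hypothetical.
def classify_period(year_conducted, year_published):
    if year_conducted is not None:
        return "1990-1999" if year_conducted <= 1999 else "2000-2007"
    # Imputation: publication in or before 2000 is taken to imply conduct in 1990-1999
    return "1990-1999" if year_published <= 2000 else "2000-2007"

def theme_counts(studies, include_imputed=True):
    # Tally reported themes by period, optionally excluding studies whose period was imputed
    counts = {}
    for s in studies:
        if not include_imputed and s["year_conducted"] is None:
            continue
        period = classify_period(s["year_conducted"], s["year_published"])
        for theme in s["themes"]:
            counts[(period, theme)] = counts.get((period, theme), 0) + 1
    return counts

# Sensitivity check: compare theme frequencies with and without the imputed studies,
# eg, theme_counts(studies) versus theme_counts(studies, include_imputed=False).
```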

The overall findings are reported in summary tables that also detail the frequency with which studies report specific domains, themes and subthemes. However, these frequencies do not necessarily represent a ranking of the importance of a particular issue. As such, we do not report individual frequencies in the body of the results section. We also use examples and/or quotes from the original manuscripts to illustrate particular issues emerging from the data, and accompany each quote with our classification of the study period and the setting in which the study was conducted.

Results

Studies identified (figure 1)

Of 174 potentially relevant articles, 58 studies1,2,16,18–74 were included in the review. Eight studies reported the outcomes of randomized controlled trials but also provided additional primary data on the barriers and/or facilitators.22,27,33,40,51,60,66,70

Study characteristics (table 1)

A detailed description of individual study characteristics is outlined in table 1 and in the supplementary tables (online Appendix A).

Table 1

Summary of study characteristics (n=58)

Characteristic | n (%) | References
CDSS
   Specific system | 50 (86) | 1 2 16 18 19 20 21 22 23 26 27 29 31 32 33 34 35 36 37 38 40 41 42 43 44 45 46 47 48 49 50 51 52 53 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 72 73 74
   Opinions of CDSSs in general | 8 (14) | 24 25 28 30 39 54 69 71
Year study was conducted*
   1990–1999 | 15 (26) | 36 39 40 41 52 53 58 60 64 65 69 70 72 74
   2000–2007 | 43 (74) | 1 2 16 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 37 38 42 43 44 45 46 47 48 49 50 51 54 55 56 57 59 61 62 63 66 67 68 71 73
Setting
   Ambulatory care | 38 (66) | 2 16 18 19 20 21 22 23 24 25 26 27 29 32 33 34 35 40 42 44 45 47 48 49 50 51 52 55 56 60 61 62 63 65 66 67 68 69 71 72
   Institutional (inpatient) | 14 (24) | 1 28 30 36 38 39 41 43 54 59 64 70 73 74
   Both ambulatory and inpatient settings | 6 (10) | 31 37 46 53 57 58
Location
   North America | 35 (60) | 2 16 18 19 20 22 23 27 30 32 33 35 36 37 39 40 42 43 44 45 46 47 52 55 56 57 58 59 60 61 63 64 65 66 70 72
   Europe | 17 (29) | 24 25 26 29 31 34 41 48 49 50 51 53 62 67 68 69 71 74
   Australasia | 6 (10) | 1 21 28 38 54 73
Target
   Physicians | 29 (50) | 1 16 18 20 21 26 29 31 34 39 40 46 48 49 50 52 53 54 57 59 60 66 67 68 69 70 71 72 73 74
   Physicians and other professionals | 29 (50) | 2 19 22 23 24 25 27 28 30 32 33 35 36 37 38 41 42 43 44 45 47 51 55 56 58 61 62 63 64 65
Clinical focus
   Drug interactions or contraindications | 19 (33) | 16 18 19 20 25 26 31 36 37 42 43 45 46 48 57 59 60 64 70
   Cardiovascular disease | 12 (21) | 22 27 33 40 47 49 50 61 62 66 67 68 72 74
   Antibiotic prescribing | 5 (9) | 1 2 34 49 50 58
   Respiratory | 5 (9) | 49 50 51 53 62 72
   Vaccination | 3 (5) | 22 63 65
   Chemotherapy | 2 (3) | 29 41
   Other conditions | 13 (22) | 22 24 33 35 39 45 52 55 56 63 65 71 72 74
   No clinical focus | 11 (19) | 21 23 28 30 32 38 44 54 69 73
Study design
   Cross-sectional surveys | 30 (52) | 1 2 20 22 27 29 34 35 36 37 39 41 42 43 45 48 49 50 51 52 56 58 59 60 64 65 66 69 70 71 73
   Interviews | 21 (36) | 1 23 25 26 28 29 30 31 32 33 34 38 40 43 49 50 54 55 62 67 68 72 74
   Log-file analysis | 7 (12) | 16 18 19 46 47 57 61
   Focus groups | 6 (10) | 21 28 38 43 44 69
   Observation | 6 (10) | 23 30 34 43 55 63
   Other | 4 (7) | 24 53 62

CDSS, computerized clinical decision support system.

  • * Publication year was used if the year the study was conducted could not be ascertained.

  • Studies may have explored more than one clinical focus and/or study design.

Most studies explored clinicians' opinions of a specific CDSS (n=50); were classified as being conducted since 2000 (n=43); were conducted within ambulatory care (n=38); and were undertaken in North America (n=35). Twenty-nine studies focused solely on the opinions or behaviors of physicians.

A range of clinical areas were addressed, the most common being cardiovascular disease (n=12), respiratory conditions (n=5) and antibiotic prescribing (n=5). Nineteen studies focused on drug alerts (eg, drug interaction, drug allergy and drug age).

Studies employed a range of data collection methods including self-report questionnaires (n=30), interviews (n=21), analysis of computer log files detailing reasons for overriding alerts (n=7), focus groups (n=6) and observation (n=6); 13 studies employed more than one data collection method. Given the variety of study designs, we did not formally assess the quality of the individual studies in this review.

CDSS features (table 2)

Of the 50 studies focusing on a specific CDSS, 38 systems used prompts, such as alerts or reminders, within the computerized order entry system or electronic medical record. The majority of studies reported on systems that were integrated with existing software programs (n=33), were system-initiated (n=28) and provided an assessment and specific recommendations for treatment (n=35).

Table 2

Summary of computerized clinical decision support systems (CDSS) features (n=58)

Characteristic | n (%) | References
Type of decision support*
   Prompts | 38 (66) | 2 16 18 19 20 21 23 26 27 31 32 33 34 35 36 37 40 42 43 44 45 46 47 48 49 50 51 52 53 55 56 57 58 59 60 61 63 64 65 66
   Guidelines | 15 (26) | 20 21 22 26 31 32 33 34 59 61 62 69 70 72 74
   Calculators or risk assessment tools | 7 (12) | 1 2 29 41 51 53 67 68
   Information retrieval systems | 2 (3) | 38 73
   Not specified | 8 (14) | 24 25 28 30 39 54 69 71
Specific features (n=50)
   System-initiated | 22 (44) | 16 18 19 27 32 33 35 37 42 45 46 47 49 50 55 56 57 59 60 61 62 63 65 66
   User-initiated | 13 (26) | 1 2 22 38 40 41 51 53 64 67 68 72 73 74
   Both | 6 (12) | 20 26 29 31 36 58
   Unclear or not specified | 9 (18) | 21 23 34 43 44 48 52 69 70
   Integrated into existing programs | 33 (66) | 16 18 19 20 22 26 27 31 32 33 35 36 37 42 43 44 45 46 47 49 50 52 55 56 57 59 60 61 62 63 64 65 66 69 70
   Stand-alone | 12 (24) | 1 2 29 38 40 41 51 53 67 68 72 73 74
   Unclear or not specified | 5 (10) | 21 23 34 48 58
   Assessment only | 4 (8) | 16 36 42 46
   Assessment plus treatment recommendations | 35 (70) | 1 2 18 19 20 22 26 27 29 31 32 33 35 40 41 45 47 49 50 51 52 53 55 56 58 59 60 61 62 63 65 66 67 68 69 70 72 74
   Unclear or not specified | 11 (22) | 21 23 34 37 38 43 44 48 57 64 73

CDSS, computerized clinical decision support system.

  • * Studies may have explored more than one type of CDSS.

  • These studies did not focus on a specific CDSS tool (ie, explored providers' general view of CDSS).

Barriers to and facilitators of CDSS uptake

Tables 3–6 summarize the key factors reported in the studies according to four domains—organizational, provider-related, patient-related factors and specific issues relating to the CDSS. Given the overwhelming consistency in our findings when we compared themes across different time periods and settings, we first report our overall findings according to domain and then highlight pertinent issues that emerged when we compared themes by time and setting.

Table 3

Organizational factors impacting on computerized clinical decision support system (CDSS) uptake (31 studies)

Thematic areas and examples provided from the studies*
Infrastructure (17)
   Access (10)
      Access to computers is currently a problem… there are only two computers available for the physician group at the department which can at any given moment comprise as many as five physicians.26 [2000–07, ambulatory care]
      “Hardware problems related to the sitting of computers within practices. Since the system was not loadable onto a central file server, the workstation with the software installed was only available at one site, thereby limiting access to clinicians or nurses consulting with patients.”40 [1990–99, ambulatory care]
   Technical problems (15)
      The slow speed makes [me] 'uneasy': kind of high stress just sitting there waiting for stuff to come up.60 [1990–99, ambulatory care]
      The CPRS is slow today, the hourglass [is an indicator] when it starts slowing down we know that it might crash. We become paralyzed. It shuts everything down.63 [2000–07, ambulatory care]
Implementation (23)
   Technical support (6)
      “…when a technical problem occurs, rapid support must be available so that ongoing work does not suffer.”26 [2000–07, ambulatory care]
      Part of the success of CRs at [Site 3] is attributed to the feedback tool… At times, changes are made to the CRs based on this feedback.63 [2000–07, ambulatory care]
   Integration with existing systems (7)
      “…59% said that they would find the prescribing support useful if it was integrated with their practice computer…”72 [1990–99, ambulatory care]
      “…expressed the frustration that different software systems are difficult to integrate.”53 [1990–99, ambulatory care, institutional]
   Endorsement (7)
      “Two senior doctors had adopted the role of 'champions' and had facilitated this initiative. These champions were… explicit about encouraging information seeking in their departments.”38 [2000–07, institutional]
      You can have postgraduate education until the cows come home, it doesn't change attitudes. The only thing that I know that works is actually setting a target system, with financial carrots or financial sticks.62 [2000–07, ambulatory care]
   Roles and responsibilities (7)
      “…nurses and providers displayed substantial confusion regarding delegation of responsibility for satisfying CRs (removing them from the due list).”63 [2000–07, ambulatory care]
      “An observation that users made is that sometimes the alerts could be presented to someone else in the clinical workflow.”44 [2000–07, ambulatory care]
   Liability and privacy (6)
      “Many clinicians also pointed out that today's medical liability culture, alerts or warnings become even more desirable in managing risks and making the system more transparent.”28 [2000–07, institutional]
      CDSS considered a “…threat to professional privacy (35%)…”69 [1990–99, ambulatory care]

The numbers in brackets represent the number of individual studies addressing a particular thematic area.

CDSS, computerized clinical decision support system; CPRS, computerized patient record system; CR, clinical reminder.

  • * Italics indicate actual responses from study participants, non-italicized responses represent quotes from manuscript authors.

Table 4

Provider-related factors impacting on computerized clinical decision support system (CDSS) uptake (43 studies)

Thematic areas and examples provided from the studies*
Knowledge and training (26)
   I strongly believe these support systems are useful and definitely made to help you, not to do the work for you. In something as important as dealing with people's health and lives, you should learn to use as much as you can from what is available.39 [1990–99, institutional]
   I hope that I will learn more pharmacology. I see it as a strength in Janus that interaction information appears immediately and you can certainly learn a great deal yourself.26 [2000–07, ambulatory care]
   “…limited knowledge of how to use the clinical reminder software in general was a barrier.”55 [2000–07, ambulatory care]
   [I] never had any training on warning messages. It seems… these things just come out of system… there may be some more safety features available on the system, but [I] would not know about them because [I have] never received specific [safety] training for the system.25 [2000–07, ambulatory care]
Current practice and preferences (35)
   A CDSS reminder system insults the intelligence of the physician. Why train to be a doctor at all? Just have 'techs' plug in symptoms to a computer and out pops the answer complete with a plan. [CDSS] is completely and absolutely unnecessary and superfluous. This is another attempt to force a technological solution onto a nonexistent problem.39 [1990–99, institutional]
   Our society is geared so that if it's in the computer, it must be accurate and complete, and as we know, it just isn't so.30 [2000–07, institutional]
   “…35% worried that it might result in 'cookbook' medicine.”70 [1990–99, institutional]
   Guidelines can change. Doctors must be allowed to make individual decisions as to whether to follow or not.21 [2000–07, ambulatory care]
   “…doctors believed that the use of a clinical decision support system might reduce the extent of adverse drug reactions. Doctors also concluded that a CDSS would enhance the decision making process and thus reduce medication errors.”54 [2000–07, institutional]
   “Prescribers also agreed that DDI alerts increased their potential for prescribing medications safely…”42 [2000–07, ambulatory care]

The numbers in brackets represent the number of individual studies addressing a particular thematic area.

CDSS, clinical decision support system (as defined by authors); DDI, drug–drug interaction.

  • * Italics indicate actual responses from study participants, non-italicized responses represent quotes from manuscript authors.

Table 5

Patient-related factors impacting on computerized clinical decision support system (CDSS) uptake (26 studies)

Thematic areas and examples provided from the studies*
Patient characteristics (14)
   “…less likely to prescribe (override) an alerted medication if the patient had multiple medication allergies… substantially more likely to override an alert for a renewal of a current prescription than for a new prescription.”16 [2000–07, ambulatory care]
   Likelihood of accepting reminders for elderly patients (40%), >5 current medications (36%), >5 chronic conditions (36%), presenting with acute problem (9%)20 [2000–07, ambulatory care]
   “Has tolerated in past”, “not really allergic”46 [2000–07, ambulatory care, institutional]
Patient–doctor interactions (11)
   “The morbidity prediction function was perceived as being innovative and a stimulus for doctor and patient to work together to try to improve management.”53 [1990–99, ambulatory care, institutional]
   …The computer really takes your eye off the patient… If you have seven and a half minutes then you should spend probably seven of those minutes with eye-to-eye contact… [decision aids are] just hopeless for the personal bit.67 [2000–07, ambulatory care]
   My experience is that patients like technology. They like to see their doctors are up to date.67 [2000–07, ambulatory care]
Risks and benefits for patients (8)
   “76% of physicians thought that this clinical decision support system helps to improve quality care of patients.”66 [2000–07, ambulatory care]
   84% of physicians would override a drug interaction alert as “Treatment required is too important to change the drug”48 [2000–07, ambulatory care]
   “Prompts of this kind do more harm than good.”27 [2000–07, ambulatory care]

The numbers in brackets represent the number of individual studies addressing a particular thematic area.

  • * Italics indicate actual responses from study participants, non-italicized responses represent quotes from manuscript authors.

Table 6

Specifics of the computerized clinical decision support system (CDSS) impacting on uptake (51 studies)

Thematic areas and examples provided from the studies*
Integration with workflow (38)
   Ease of navigation and use (23)
      “…88% found the system easy to use…”72 [1990–99, ambulatory care]
      “…the need to be able to 'backtrack' when navigating through the system… Some specific problems with navigating the system were also highlighted, particularly switching between the display of clinical information for the current cycle and that for previous cycles.”29 [2000–07, ambulatory care]
      “More than half felt that their clerical work was easier…”64 [1990–99, institutional]
   Timing and frequency of prompts (25)
      I'm sorry to say that this software is driving me mad… it's also a nuisance when it comes up when everything has been done… it's so annoying that I always exit, but I would feel less antagonistic if I had some individual control.62 [2000–07, ambulatory care]
      Now we get alerts when we go to charting, which in my workflow is the last step. It's after the patient's gone…23 [2000–07, ambulatory care]
      “Too many messages will lead to them all being ignored.”39 [1990–99, institutional]
   Perception of time (30)
      One of the difficulties in the consultation is the severe time constraints that we have. Our consultations are conducted over 7 to 10 min and in that time we have to cover quite a lot of ground. And if you have to call upon a system that takes another 3 or 4 min, it means surgery is running late.67 [2000–07, ambulatory care]
      “The indiscriminant, excessive generation of clinical alerts by CPOE systems can also slow clinicians as they pause to decipher alerts, deliberate on whether and how to respond, and potentially document reasons for not complying with alerts.”30 [2000–07, institutional]
      “Users were disillusioned by the time wasting necessity to input information already available on the host database, only to lose it again when exiting the Primed system.”40 [1990–99, ambulatory care]
      “68% reported that patient encounters were either the same duration or slightly faster when using the CDSS tool compared to usual practice.”2 [2000–07, ambulatory care]
Presentation (19)
   “…77% of respondents to the questionnaire stated that the overall screen layout was adequate or better and 61% that the amount of information on screen was adequate or better.”69 [1990–99, ambulatory care]
   “42% of responding providers thought that the blinking icon was either too unobtrusive or unnoticeable.”65 [1990–99, ambulatory care]
   “It was suggested that it would be helpful if interactions could be graded according to severity.”28 [2000–07, institutional]
   “Alert does not allow for tailoring to providers' individual needs (eg, cannot turn alert function on/off) (24%)”37 [2000–07, ambulatory care, institutional]
   For serious allergy interactions, it should stop you from prescribing unless you've made a serious effort to override it.21 [2000–07, ambulatory care]
   “…some users commented that the presentation of information was dense and that some of the text was small.”29 [2000–07, ambulatory care]
Content (37)
   Relevance (sensitivity vs specificity) (24)
      “…79% agreed or strongly agreed that the basic HMR recommendations… were appropriate.”65 [1990–99, ambulatory care]
      “…prescribers found that DDI alerts often provided them with information that they already knew.”42 [2000–07, ambulatory care]
      “…GPs' opinion that the reminder was not suitable to the prescribing situation. Some reminders were valued as extensive and therefore caused disregarding of GPs.”49 [2000–07, ambulatory care]
      “…some clinicians suggested more information on allergy and atopy was desirable.”51 [2000–07, ambulatory care]
   Quality of information (20)
      “Concerns over the comprehensiveness, accuracy and evidence base of the information in alerts predominated”21 [2000–07, ambulatory care]
      “…clinical content being described as 'patronizing'. Clinicians were also concerned about the source and strength of the evidence behind the recommendations.”69 [1990–99, ambulatory care]
      “…75% agreed or strongly agreed that Couplers provides high-quality information.”22 [2000–07, ambulatory care]
      …Here is updated and systematized information: first a question, then an answer in full detail based on facts and references, and finally a conclusion.31 [2000–07, ambulatory care, institutional]
   Type of information (15)
      “Safety/drug interaction alerts viewed as most helpful and least annoying.”32 [2000–07, ambulatory care]
      “90% thought that additional patient advice leaflets would be of some use or invaluable.”69 [1990–99, ambulatory care]
   Links to supporting information (4)
      “Users appreciate the ability to seamlessly link other knowledge resources across the Intranet and Internet from within the application.”44 [2000–07, ambulatory care]
   Local constraints (2)
      “…there was conflict between what was advocated in the guideline and what was possible to achieve in the service setting.”74 [1990–99, institutional]
      “The UK 'Crystal Byte' was awkward for a Swiss clinician to adapt to a consulting style, which took a broader view of patient management. All participants shared the view that the core set of guidelines incorporated into decision support software should be GINA international guidelines.”53 [1990–99, ambulatory care, institutional]

The numbers in brackets represent the number of individual studies addressing a particular thematic area.

CDSS, clinical decision support system (as defined by authors); CPOE, computerized provider order entry; DDI, drug–drug interactions; GP, general practitioner; HMR, health maintenance reminder.

  • * Italics indicate actual responses from study participants, non-italicized responses represent quotes from manuscript authors.

Organizational factors (table 3)

The quality and quantity of infrastructure provided and the way in which the CDSSs were implemented were key factors impacting on the uptake of decision support. Studies reported consistently that limited computer availability at the point of care impeded CDSS use. Further, even when computer workstations were accessible, clinicians identified ongoing technical problems such as malfunctions, system failures and slow computer speeds as barriers to use. This often resulted in frustration for end users.

Technical assistance to address hardware and software issues was often limited. Studies also reported that CDSS use was compromised if the software itself could not be integrated with existing systems, and the roles and responsibilities of end users were not clearly delineated during the implementation phase (eg, who would be responsible for managing the clinical issues relating to an alert).

A key facilitator of CDSS uptake was the endorsement, demonstration and/or communication of the systems' benefits by management, administration or senior clinicians. Further, financial incentives for clinicians, as well as having adequate funds to support the introduction of the CDSS, were reported to facilitate uptake. While some studies reported that clinician concerns about professional liability and patient privacy may restrict the use of CDSSs, others highlighted that CDSS use may reduce risk, as the system recommendations were based on best clinical practice.

Provider-related factors (table 4)

The lack of training in the use of CDSSs and the limited computer skills of clinicians were flagged repeatedly as significant barriers to use. These impacted on providers' confidence in using the systems and, in some cases, clinicians reported anxiety about using the CDSS at the point of care. Clinicians emphasized the need for further computer training, but also highlighted a concern that up-skilling in this domain may lead to de-skilling in clinical decision making, resulting in over-dependence on technology. CDSS use was perceived by some to enhance knowledge, while others reported that using a CDSS was “admitting a personal inadequacy”.67

In some circumstances, providers preferred to use other information sources over CDSS (eg, in a complex case they may prefer to consult their colleagues). There was evidence of a general resistance to change existing practices, a strong belief that clinicians were already practicing in an evidence-based fashion, and the perception that introduction of CDSSs threatened professional autonomy (eg, one study referred to this issue as reverting to “assembly line medicine”56). Conversely, other studies indicated that a CDSS was more likely to be used when clinicians believed it enhanced decision making, and that such systems resulted in better prescribing practices.

Patient-related factors (table 5)

The primary factors identified within this domain related to patient characteristics, patient–doctor interactions and the perceived risks and benefits for patients. Patient factors such as age, clinical condition, tolerance to medications and patients' own preferences (eg, desire for treatment and compliance) impacted on CDSS use. These factors may be a barrier or facilitator to the uptake of CDSS depending on the clinical circumstances and the information provided by the system. For example, clinicians may accept drug allergy alerts in patients considered truly allergic, yet override the same alert in patients who had tolerated the medication in the past.

There was a range of views expressed about the benefits of CDSS within the consultation and its influence on the patient–doctor interaction. In some studies, clinicians felt it enhanced dialog with their patients, whereas in others, CDSS was seen to detract from the patient interaction (eg, loss of eye contact). These responses also appeared to be linked to the level of acceptance of computers within the consultation by patients and physicians. Similarly, there were divergent views about the benefits of CDSS—some studies reported that providers believed CDSS enhanced the quality of care and had a positive impact on patient outcomes while others reported that it may do “more harm than good”.27

Specific issues related to CDSSs (table 6)

A range of CDSS-specific factors, such as integration with clinical workflows and the content and presentation of messages, were identified as impacting on uptake. Not surprisingly, ease of use (eg, quick access, minimal mouse clicks and key strokes), simplicity and visibility of messages were key drivers of use. Importantly, CDSS tools were seen to be beneficial for providing physicians with reminders about patient safety and long-term management.

On the other hand, systems in which end-users had difficulty switching between displays or which required backtracking were less likely to be adopted. Information-dense messages with inconsistent vocabulary and the requirement to re-enter patient data to generate advice were also deterrents to use. The timing and frequency of prompts, such as alerts appearing at inappropriate times in the workflow, were key factors relating to use and acceptability. In addition, the high frequency of alerts was perceived by clinicians as annoying, irritating and intrusive to the consultation. As a consequence, providers felt they might become desensitized to alerts and miss important information. There was some suggestion that alerts should be graded by severity, and that alerts associated with potentially serious clinical consequences should be difficult for clinicians to override.

The importance of CDSS content, particularly its relevance to individual patients, was a recurring theme. Overall, clinicians communicated preferences for up-to-date evidence-based information. However, clinicians variously reported recommendations that were too extensive, too lengthy, too trivial or redundant. CDSS components valued by end users included drug interaction alerts, patient information sheets and links to supporting information. Notwithstanding these considerations about content and presentation, generic CDSSs that did not account for local constraints, such as the availability of specific drugs in that setting, would not be used.

Comparison of themes across different time periods and settings

Our analyses revealed some subtle differences in themes between studies conducted in the different time periods (1990–99 vs 2000–07) and settings (ambulatory versus institutional care), particularly across the organizational and patient domains.

Only studies conducted since 2000 reported the importance of endorsement and demonstration of CDSS benefits by management or senior clinicians. The role of financial incentives in facilitating system uptake was also a feature of more recent studies. While earlier studies highlighted that clinician concerns about professional liability and patient privacy may restrict the use of CDSSs, studies conducted since 2000 referred to the benefits of CDSS in terms of risk mitigation. Studies conducted in ambulatory care tended to report the need for technical assistance in relation to hardware and software issues and highlighted the minimal use of CDSS if software was not integrated with existing systems. In addition, patient-related factors were mostly reported in studies conducted within ambulatory care settings and in studies conducted since 2000. Although CDSS-specific factors were consistently reported over time and across different settings, studies conducted in ambulatory care often identified issues concerning the quality of CDSS content; data pertaining to studies conducted solely in inpatient settings were limited.

Discussion

This review identified a range of factors influencing CDSS use and demonstrated that simply providing the clinical information in electronic format does not guarantee uptake. Our overall findings suggest that there is no “one size fits all” approach to influencing prescribing via CDSSs,75 and that factors beyond software and content must be considered when developing CDSSs for prescribing. Fundamental issues include the availability and accessibility of hardware, sufficient technical support and training in the use of the system, the level of system integration into clinical workflow, and the relevance and timeliness of the clinical messages provided. Further, acceptance of the system by the various stakeholders (eg, management and end users), clear articulation and endorsement of the system's benefits in patient care, and minimizing the perceived threats to professional autonomy are important to the success of CDSSs.

Importantly, our review suggests that despite advances in technology and the likely increased sophistication of CDSSs, the issues influencing CDSS use for prescribing have not changed substantially over time. Key concerns relate to the usability of the system and the relevance of the content. The mention of these issues in more recent studies suggests there is still much to be done to make these systems work in routine clinical practice. There appeared to be some differences according to practice setting; problems due to lack of integration of prescribing tools with existing software tended to be mentioned in studies conducted in ambulatory care. However, these issues may be equally important in institutional settings, just more easily addressed in hospitals, where there are high levels of computerization for managing patient administration and a range of aspects of clinical care.

Not surprisingly, provider-related issues were reported consistently over time and irrespective of setting, which probably relates to the challenges of changing the knowledge, attitudes and behaviors of human beings. On the positive side, these issues are predictable, and those charged with the responsibility of CDSS implementation should be well prepared to counter some of the fundamental barriers to use. However, it would be unrealistic to expect that even best-practice system implementation will result in immediate and sustainable change across the entire target audience. Healthcare organizations need dedicated staff to champion and facilitate an appropriate environment for implementing CDSSs so that they may be used to their full potential.4 Further, we established a notable consistency in CDSS-specific issues over time. Some CDSSs are highly sophisticated, well developed and extensively evaluated; however, they tend to come from a small number of institutions recognized internationally for their work in medical informatics.19,20,30,32,33,44–46,66 The recurring themes related to CDSS-specific issues most likely reflect the range of systems and platforms being tested and implemented, and the heterogeneity of prescribing software deployed across many healthcare settings.

We highlighted a notable absence of studies reporting the impact of system endorsement before 2000. While many interventions targeting physician behavior change use endorsement and promotion by respected peer group members as a fundamental component of their implementation strategy,76 this may not have been seen as a key driver for change in the early studies. Thus, study designs may have omitted addressing this factor and/or respondents did not acknowledge its importance as a facilitator of uptake. This could also be true for the absence of reporting of patient factors in the earlier studies. With more widespread use of computers in clinical practice over time, the potential for interference in the doctor–patient interaction might be magnified. Interestingly, the earlier studies highlighted concerns about professional liability and patient privacy in relation to the use of CDSSs. However, greater acceptance of the technology on the part of end-users and the efforts of organizations such as the American Medical Informatics Association in overseeing and endorsing the introduction of guidelines and regulations75 are likely to have dispelled some of the early concerns.

Studies evaluating the impact of CDSSs for prescribing in ambulatory care highlighted a lack of technical support addressing day-to-day software and hardware issues and limited integration of CDSS with existing software as important barriers to uptake. In many ways this is not surprising given the greater diversity of clinic locations in community practice and the heterogeneity of systems used in this setting.77 In contrast, hospitals have their own information technology infrastructure and many CDSSs have been designed specifically to dovetail with their existing computerized physician order entry systems. Further, the influence of patient factors was a key feature of studies conducted in ambulatory care and effectively absent from studies conducted in institutional care. Again, this is likely to relate to the nature of ambulatory care and the conditions physicians treat in this setting. Previous systematic reviews have demonstrated the greater effectiveness of CDSSs in hospital compared with ambulatory care10 and for the management of acute rather than chronic conditions.13 It was postulated that these differences might be attributed to the stricter controls on healthcare professionals and a greater willingness to abide by externally imposed rules in institutional settings. However, this review suggests that patient factors may create an additional layer of complexity in healthcare professionals' decisions in community practice.

The strengths of this study lie in the systematic approach to identifying studies, the inclusion of a range of study designs, our attempts to capture CDSS features beyond content and functionality, and the stratification of our analysis by the time period in which the studies were conducted and the setting in which they were undertaken. Importantly, our key findings, generated from a diverse literature, support the opinions and recommendations of luminaries in the field who have written extensively about the key requirements for successful implementation of CDSS in clinical practice.3,4,75,78

The review, however, has a number of limitations. Despite our intensive efforts we may not have identified all relevant studies, as some may not be available in the public domain and others may be published outside the peer-reviewed academic literature. The included studies were heterogeneous in terms of design and data collection methods, so we did not conduct a comprehensive quality assessment of individual reports. Although time periods were defined somewhat arbitrarily, we believed that year of study conduct would more accurately capture any changes in the factors influencing CDSS uptake over time. We imputed year of study conduct from publication year when it was not reported; our “sensitivity” analysis confirmed that this classification did not change the study findings. We used an organizational framework adapted from previous research20 that may not necessarily reflect the level of interplay between the various factors, and we did not attempt to map these interrelationships or infer their relative importance. We also noted the frequency with which studies reported specific domains, themes and subthemes (tables 3–6). Importantly, these data may not necessarily indicate the significance of a particular issue. Rather, the relative weight of these factors should be determined in planning and implementing specific CDSSs.

The limited information available in many of the published manuscripts precluded stratifying results by specific CDSS features. Clearly, an important step for future research will be greater clarity and emphasis in the reporting of specific design features; journal editors may have a role in setting minimum standards for this purpose. With the advent of supplementary online information for manuscript publication, there is now a mechanism for making these details available in the public domain.

A number of important questions also remain unanswered. What are practitioners' perceived needs for prescribing decision support? Do these needs vary according to their clinical experience? How can needs be best met within the time constraints of a patient consultation? The complexities relating to CDSSs for prescribing and the state of current technology mean that most organizations will probably realize only moderate benefits from the implementation of such systems.4 However, substantial opportunities do exist for all stakeholders to collaborate and explore the potential of CDSSs to support medication use that is as safe and effective as possible.

Although there is widespread interest in CDSS development, worthwhile progress will come with attention to both computer system enhancements and the human factors influencing responsiveness to new systems and change. Further work with end-users is required to explore these issues before system implementation. Although widespread dissemination of appropriate CDSSs might be expected to improve clinical practice, providing the information in electronic format alone does not ensure uptake.

Funding

The project was funded by the Australian Department of Health and Ageing through the National Prescribing Service, as part of a research partnership with the University of Newcastle and the University of New South Wales.

Competing interests

None declared.

Contributors

All authors contributed to the design, implementation, data analysis and interpretation and production of the manuscript. We acknowledge the contributions of other Study Guidance Group Members: James Reeve, Bryn Lewis, Malcolm Gillies, Michelle Sweidan, Michelle Toms, Adi Smith and Jonathan Dartnell.

Provenance and peer review

Not commissioned; externally peer reviewed.

Footnotes

References
