
Review paper

Computerized clinical decision support for prescribing: provision does not guarantee uptake

Annette Moxey, Jane Robertson, David Newby, Isla Hains, Margaret Williamson, Sallie-Anne Pearson
DOI: http://dx.doi.org/10.1197/jamia.M3170. Pages 25–33. First published online: 1 January 2010.


There is wide variability in the use and adoption of recommendations generated by computerized clinical decision support systems (CDSSs) despite the benefits they may bring to clinical practice. We conducted a systematic review to explore the barriers to, and facilitators of, CDSS uptake by physicians to guide prescribing decisions. We identified 58 studies by searching electronic databases (1990–2007). Factors impacting on CDSS use included: the availability of hardware, technical support and training; integration of the system into workflows; and the relevance and timeliness of the clinical messages. Further, systems that were endorsed by colleagues, minimized perceived threats to professional autonomy, and did not compromise doctor-patient interactions were accepted by users. Despite advances in technology and CDSS sophistication, most factors were consistently reported over time and across ambulatory and institutional settings. Such factors must be addressed when deploying CDSSs so that improvements in uptake, practice and patient outcomes may be achieved.

  • Clinical decision support systems
  • medication systems
  • drug prescriptions
  • drug utilization
  • physician practice patterns

Over the last two decades there have been rapid advances in information technology, increased acceptance of computers in healthcare and widespread interest in developing evidence-based computerized clinical decision support systems (CDSSs). There has been not only a greater infiltration of CDSSs in clinical practice over time but also increased levels of technical sophistication and portability.1,2 The simplest systems present narrative text requiring further processing and analysis by clinicians before decision-making, while the more sophisticated systems are “interactive advisors”, integrating patient-specific information, such as laboratory results and active orders, with guidelines or protocols, and presenting derived information for decision making.3

CDSSs have been developed for a range of clinical circumstances and play an important role in guiding prescribing practices such as assisting in drug selection and dosing suggestions, flagging potential adverse drug reactions and drug allergies, and identifying duplication of therapy.4 Systematic reviews have demonstrated that use of CDSSs for prescribing can reduce toxic drug levels and time to therapeutic control,5,6 reduce medication errors7,8 and change prescribing in accordance with guideline recommendations.9 Further, there is some evidence pointing to greater impacts in institutional compared with ambulatory care and for fine-tuning therapy (eg, recommendations to improve patient safety, adjust the dose, duration or form of prescribed drugs, or increase laboratory testing for patients on long-term therapy) rather than influencing initial drug choices.10

Systematic reviews of electronic and paper-based clinical decision support across a range of clinical domains have demonstrated its effectiveness in changing clinicians' practices such as screening, test ordering and guideline adherence,5,7,8,9,11,12,13 but there is no consistent translation into patient outcomes.11 Some of these reviews have also attempted to evaluate whether particular system or organizational features predict successful CDSS implementation and changes in practice and patient outcomes.10,11,12,14 However, insufficient attention to the reporting of these features in intervention studies has hampered these approaches.14 Despite these limitations, some reviews have demonstrated the benefits of computer- over paper-based decision support,12 system- over user-initiated tools,10,11 and integrated over stand-alone systems,10 and the advantage of providing specific treatment recommendations or advice rather than a simple problem assessment requiring further consideration by end users.12 CDSS success has also been shown to be associated with higher levels of integration into the clinical workflow and advice presented at the time and location of decision making.12

Despite the growing evidence base in this field, there remains some inconsistency about the relative merits of CDSSs in influencing practice patterns and patient outcomes. Importantly, there is clear evidence that CDSS tools are not always used when available,15 with up to 96% of alerts being overridden or ignored by physicians.16,17,18,19 This variability in uptake (use and adoption of recommendations generated by CDSSs) and impact is most likely due to a range of inter-related factors including, but not limited to, the technical aspects of the CDSS. They also include the setting in which the system is deployed and the characteristics of system end users and the patients they treat. Given the limited detail in intervention studies about system-specific features and other potentially important factors impacting on the acceptability and uptake of CDSSs, there is likely to be merit in examining data from studies beyond those evaluated in the strict confines of systematic reviews of intervention studies. These include studies investigating the factors influencing the uptake of specific decision support systems as well as those exploring attitudes and practices toward CDSSs in general.

Therefore, the objective of this study was to review systematically the peer-reviewed literature to better understand the barriers to, and facilitators of, the use of CDSSs for prescribing. In particular, we examined whether the factors impacting on CDSS uptake varied over time and by study setting.


Literature search and included studies (figure 1)

We searched Medline (1990 to November Week 3, 2007), PreMedline (November 30, 2007), Embase (1990 to Week 47, 2007), CINAHL (1990 to November Week 4, 2007) and PsycINFO (1990 to November Week 4, 2007). We restricted the review to English-language studies published since 1990. We combined keywords and/or subject headings to identify computer-based decision support (eg, decision support systems clinical, decision making computer assisted) with the area of prescribing and medicines use (eg, prescription drug, drug utilization) and medical practice (eg, physicians' practice patterns, medical practice). We also searched INSPEC (1990 to November 2007) and the Cochrane Database of Systematic Reviews (November 2007), including reviews and protocols published under the Effective Practice and Organisation of Care Group. Finally, we hand searched reference lists of retrieved articles and reviews.
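The combination of concept sets described above can be sketched as a simple boolean query builder. This is an illustrative reconstruction of the search logic only; the exact subject headings and operator syntax differ across Medline, Embase, CINAHL and the other databases searched.

```python
# Illustrative sketch of the search strategy: three concept sets,
# OR-ed internally and AND-ed together. Terms are examples from the text,
# not the full search string used by the authors.
cdss_terms = ["decision support systems, clinical",
              "decision making, computer assisted"]
prescribing_terms = ["prescription drug", "drug utilization"]
practice_terms = ["physicians' practice patterns", "medical practice"]

def or_block(terms):
    """Join one concept set with OR, quoting each phrase."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = " AND ".join(or_block(s) for s in
                     (cdss_terms, prescribing_terms, practice_terms))
print(query)
```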

Figure 1

Process by which studies were identified for review. In cases where one study was published across multiple manuscripts35,49,50,56,67,68 we combined data from the manuscripts to form one set of data per study. In addition, one manuscript provided data on two separate studies.69 Thus 58 studies, from 60 manuscripts, were reported in the review.

We reviewed the titles and abstracts of studies captured in the search strategy for relevance to the study aims. The full-text versions of potentially relevant articles were retrieved and considered for inclusion if they met the following criteria:

  • examined any type of decision support or evidence based information presented electronically (eg, alerts, dose calculators, electronic guidelines);

  • the decision support provided guidance on prescribing-related issues (eg, drug interactions, drug monitoring, treatment recommendations);

  • primarily targeted physicians but were not necessarily exclusive to this clinical group; and

  • provided information on the barriers to, and facilitators of, the uptake of CDSSs for prescribing based on primary data collection methods (eg, surveys, interviews, focus groups).

Editorials or studies reporting the views of individuals or speculation as to why a specific CDSS was or was not used were excluded from the review.

Data extraction

Data were extracted from eligible studies on:

  1. Study characteristics—year of publication, year study was conducted, objectives, setting, clinical focus, clinical setting (ambulatory versus institutional care), study design and participant numbers.

  2. CDSS features—type of decision support presented to users and system features identified in previous literature reviews.10,11,12 These details could only be ascertained for studies evaluating a specific CDSS. We extracted information on:

    • Whether systems used prompts, guidelines, calculators or risk assessment tools or were information retrieval systems.

    • How the CDSS was accessed, that is, system-initiated (eg, alerts or reminders) versus user-initiated support (eg, online information retrieval systems such as Medline or electronic guidelines).

    • Whether the system was integrated into existing programs (eg, electronic medical records) or stand-alone.

    • The type of advice given to clinicians. The system may have provided an overall assessment requiring further consideration by the user or specific recommendations for action.

  3. Barriers to, and facilitators of, CDSS use were recorded exactly as described in the individual studies. We classified these into four domains using a previously published schema20:

    • Organizational (eg, resource use, access to computers, organizational support)

    • Provider (eg, computer skills, knowledge and training)

    • Patient (eg, patient characteristics and interaction during consultation)

    • Specifics of the CDSS (eg, presentation format and usability).

Further, within each domain we developed a hierarchical theme and subtheme structure and noted the studies reporting specific themes and subthemes within this framework.

Data extraction was undertaken independently by two reviewers (AM and IH). A third reviewer (SP) assessed a sample of studies to validate the extraction method and clarify any disagreements between the primary data extractors. All data were subsequently entered into an Excel spreadsheet to facilitate analysis (available from the authors on request).
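The extraction fields described above can be represented as one record per study. The structure below is a hypothetical sketch of such a record; the field names are illustrative and are not the authors' actual spreadsheet columns.

```python
# Hypothetical per-study extraction record mirroring the fields in the
# Data extraction section; names and types are illustrative assumptions.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExtractionRecord:
    study_id: str
    year_published: int
    year_conducted: Optional[int]     # often not reported in manuscripts
    setting: str                      # "ambulatory" or "institutional"
    cdss_specific: bool               # did the study evaluate a specific CDSS?
    system_initiated: Optional[bool]  # alerts/reminders vs user-initiated lookup
    integrated: Optional[bool]        # integrated with an EMR vs stand-alone
    # barriers/facilitators recorded verbatim, keyed by domain:
    # organizational, provider, patient, cdss
    factors: dict = field(default_factory=dict)

record = ExtractionRecord("study-01", 2004, None, "ambulatory",
                          True, True, False)
record.factors["provider"] = ["limited computer skills"]
```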

Analysis and reporting

We undertook a thematic analysis of the “verbatim” data extracted from the studies according to the four domains described previously. The verbatim extracts were reviewed independently by pairs of reviewers (AM and IH, AM and SP) and the findings were analyzed, overall, and by time period (1990–1999 versus 2000–2007) and study setting (ambulatory versus institutional or inpatient care). Reviewers reached consensus around the interpretation of findings via group discussion.

In order to undertake a time-based analysis, we required manuscripts to report the year in which the study was conducted. We chose to use this variable over year of publication as we felt it would more accurately capture any changes in factors affecting the use of CDSSs over time. However, 17 studies did not provide details of when studies were undertaken. Of these, four were published either prior to or during 2000 and were subsequently classified as having been conducted in the period 1990–1999. The remaining studies published after 2000 were classified in the 2000–2007 period. To ensure that these assumptions did not impact on study findings, we conducted the time-based analysis with and without the inclusion of the 17 studies that did not report year of study conduct. We found no difference in the outcomes so we report our analysis according to the classification described above.
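The classification rule above, including the publication-year fallback for the 17 studies that did not report a year of conduct, can be sketched as follows. The cut-offs follow the text (published in or before 2000 maps to the earlier period); the sample data are invented for illustration.

```python
# Sketch of the time-period classification with publication-year imputation.
def classify_period(year_conducted, year_published):
    """Assign a study to '1990-1999' or '2000-2007'. When the year of
    conduct was not reported, impute from the publication year:
    published in or before 2000 -> earlier period (per the text)."""
    if year_conducted is not None:
        return "1990-1999" if year_conducted < 2000 else "2000-2007"
    return "1990-1999" if year_published <= 2000 else "2000-2007"

# (study id, year conducted or None, year published) — invented examples
studies = [
    ("a", 1997, 1999),   # year of conduct reported
    ("b", None, 2003),   # imputed from publication year
]
# the sensitivity check: classify with and without the imputed studies
all_studies = {sid: classify_period(c, p) for sid, c, p in studies}
reported_only = {sid: classify_period(c, p)
                 for sid, c, p in studies if c is not None}
```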

The overall findings are reported in summary tables that also detail the frequency with which studies report specific domains, themes and subthemes. However, these data may not necessarily represent a ranking of importance of a particular issue. As such we do not report individual frequencies in the body of the results section. We also use examples and/or quotes from the original manuscripts to illustrate particular issues emerging from the data, and accompany each quote by our classification of study period and the setting in which the study was conducted.
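The frequency counts reported in the summary tables amount to tallying, per (domain, theme) pair, how many studies mentioned it. A minimal sketch with invented data:

```python
# Illustrative theme-frequency tally of the kind shown in tables 3-6;
# the study data here are hypothetical.
from collections import Counter

# each study lists the (domain, theme) pairs it reported
study_themes = {
    "study-01": [("organizational", "computer availability"),
                 ("cdss", "alert frequency")],
    "study-02": [("organizational", "computer availability")],
}
theme_counts = Counter(pair for themes in study_themes.values()
                       for pair in themes)
# a count of 2 means two studies reported the theme — as the text notes,
# frequency does not imply a ranking of importance
```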


Studies identified (figure 1)

Of 174 potentially relevant articles, 58 studies1,2,16,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74 were included in the review. Eight studies reported the outcomes of randomized controlled trials but also provided additional primary data on the barriers and/or facilitators.22,27,33,40,51,60,66,70

Study characteristics (table 1)

A detailed description of individual study characteristics is outlined in table 1 and in the supplementary tables (online Appendix A).

Table 1

Summary of study characteristics (n=58)

Most studies explored clinicians' opinions of a specific CDSS (n=50); were classified as being conducted since 2000 (n=43); were conducted within ambulatory care (n=38); and were undertaken in North America (n=35). Twenty-nine studies focused solely on the opinions or behaviors of physicians.

A range of clinical areas were addressed, the most common being cardiovascular disease (n=12), respiratory conditions (n=5) and antibiotic prescribing (n=5). Nineteen studies focused on drug alerts (eg, drug interaction, drug allergy and drug age).

Studies employed a range of data collection methods including self-report questionnaires (n=30), interviews (n=21), analysis of computer log files detailing reasons for overriding alerts (n=7), focus groups (n=6) and observation (n=6); 13 studies employed more than one data collection method. Given the variety of study designs, we did not formally assess the quality of the individual studies in this review.

CDSS features (table 2)

Of the 50 studies focusing on a specific CDSS, 38 reported systems that used prompts, such as alerts or reminders, within the computerized order entry system or electronic medical record. The majority of studies reported on systems that were integrated with existing software programs (n=33), were system-initiated (n=28) and provided an assessment and specific recommendations for treatment (n=35).

Table 2

Summary of computerized clinical decision support systems (CDSS) features (n=58)

Barriers to and facilitators of CDSS uptake

Tables 3–6 summarize the key factors reported in the studies according to four domains—organizational, provider-related, patient-related factors and specific issues relating to the CDSS. Given the overwhelming consistency in our findings when we compared themes across different time periods and settings, we first report our overall findings according to domain and then highlight pertinent issues that emerged when we compared themes by time and setting.

Table 3

Organizational factors impacting on computerized clinical decision support system (CDSS) uptake (31 studies)

Table 4

Provider-related factors impacting on computerized clinical decision support system (CDSS) uptake (43 studies)

Table 5

Patient-related factors impacting on computerized clinical decision support system (CDSS) uptake (26 studies)

Table 6

Specifics of the computerized clinical decision support system (CDSS) impacting on uptake (51 studies)

Organizational factors (table 3)

The quality and quantity of infrastructure provided and the way in which the CDSSs were implemented were key factors impacting on the uptake of decision support. Studies reported consistently that limited computer availability at the point of care impeded CDSS use. Further, even when computer workstations were accessible, clinicians identified ongoing technical problems such as malfunctions, system failures and slow computer speeds as barriers to use. This often resulted in frustration for end users.

Technical assistance to address hardware and software issues was often limited. Studies also reported that CDSS use was compromised if the software itself could not be integrated with existing systems, and the roles and responsibilities of end users were not clearly delineated during the implementation phase (eg, who would be responsible for managing the clinical issues relating to an alert).

A key facilitator of CDSS uptake was the endorsement, demonstration and/or communication of the systems' benefits by management, administration or senior clinicians. Further, financial incentives for clinicians, as well as having adequate funds to support the introduction of the CDSS, were reported to facilitate uptake. While some studies reported that clinician concerns about professional liability and patient privacy may restrict the use of CDSSs, others highlighted that CDSS use may reduce risk, as the system recommendations were based on best clinical practice.

Provider-related factors (table 4)

The lack of training in the use of CDSSs and the limited computer skills of clinicians were flagged repeatedly as a significant barrier to use. These impacted on providers' confidence in using the systems, and in some cases, clinicians reported anxiety about using the CDSS at the point of care. Clinicians emphasized the need for further computer training, but also highlighted a concern that up-skilling in this domain may lead to de-skilling in clinical decision making, resulting in over-dependence on technology. CDSS use was perceived by some to enhance knowledge, while others reported that using CDSS was “admitting a personal inadequacy”.67

In some circumstances, providers preferred to use other information sources over CDSS (eg, in a complex case they may prefer to consult their colleagues). There was evidence of a general resistance to change existing practices, a strong belief that clinicians were already practicing in an evidence-based fashion, and the perception that introduction of CDSSs threatened professional autonomy (eg, one study referred to this issue as reverting to “assembly line medicine”56). Conversely, other studies indicated that a CDSS was more likely to be used when clinicians believed it enhanced decision making, and that such systems resulted in better prescribing practices.

Patient-related factors (table 5)

The primary factors identified within this domain related to patient characteristics, patient–doctor interactions and the perceived risks and benefits for patients. Patient factors such as age, clinical condition, tolerance to medications and patients' own preferences (eg, desire for treatment and compliance) impacted on CDSS use. These factors may be a barrier or facilitator to the uptake of CDSS depending on the clinical circumstances and the information provided by the system. For example, clinicians may accept drug allergy alerts in patients considered truly allergic, yet override the same alert in patients who had tolerated the medication in the past.

There was a range of views expressed about the benefits of CDSS within the consultation and its influence on the patient–doctor interaction. In some studies, clinicians felt it enhanced dialog with their patients, whereas in others, CDSS was seen to detract from the patient interaction (eg, loss of eye contact). These responses also appeared to be linked to the level of acceptance of computers within the consultation by patients and physicians. Similarly, there were divergent views about the benefits of CDSS—some studies reported that providers believed CDSS enhanced the quality of care and had a positive impact on patient outcomes while others reported that it may do “more harm than good”.27

Specific issues related to CDSSs (table 6)

A range of CDSS-specific factors, such as integration with clinical workflows, and the content and its presentation, were identified as impacting on uptake. Not surprisingly, ease of use (eg, quick access, minimal mouse clicks and key strokes), simplicity and visibility of messages were key drivers of use. Importantly, CDSS tools were seen to be beneficial for providing physicians with reminders about patient safety and long-term management.

On the other hand, systems in which end users had difficulty switching between displays, or that required backtracking, were less likely to be adopted. Information-dense messages with inconsistent vocabulary and the requirement to re-enter patient data to generate advice were also deterrents to use. The timing and frequency of prompts, such as alerts appearing at inappropriate times in the workflow, were key factors relating to use and acceptability. In addition, the high frequency of alerts was perceived by clinicians as annoying, irritating and intrusive to the consultation. As a consequence, providers felt they may become desensitized to alerts and miss important information. There was some suggestion that alerts should be graded by severity, and that alerts associated with potentially serious clinical consequences should be difficult for clinicians to override.

The importance of CDSS content, particularly its relevance to individual patients, was a recurring theme. Overall, clinicians communicated preferences for up-to-date evidence-based information. However, clinicians variously reported recommendations that were too extensive, too lengthy, too trivial or redundant. CDSS components valued by end users included drug interaction alerts, patient information sheets and links to supporting information. Notwithstanding these considerations about content and presentation, generic CDSSs that did not account for local constraints, such as the availability of specific drugs in that setting, would not be used.

Comparison of themes across different time periods and settings

Our analyses revealed some subtle differences in themes between studies conducted in the different time periods (1990–99 vs 2000–07) and settings (ambulatory versus institutional care), particularly across the organizational and patient domains.

Only studies conducted since 2000 reported the importance of endorsement and demonstration of CDSS benefits by management or senior clinicians. The role of financial incentives in facilitating system uptake was also a feature of more recent studies. While earlier studies highlighted that clinician concerns about professional liability and patient privacy may restrict the use of CDSSs, studies conducted since 2000 referred to the benefits of CDSS in terms of risk mitigation. Studies conducted in ambulatory care tended to report the need for technical assistance in relation to hardware and software issues and highlighted the minimal use of CDSS if software was not integrated with existing systems. In addition, patient-related factors were mostly reported in studies conducted within ambulatory care settings and in studies conducted since 2000. Although CDSS-specific factors were consistently reported over time and across different settings, studies conducted in ambulatory care often identified issues concerning the quality of CDSS content; data pertaining to studies conducted solely in inpatient settings were limited.


This review identified a range of factors influencing CDSS use and demonstrated that simply providing the clinical information in electronic format does not guarantee uptake. Our overall findings suggest that there is no “one size fits all” approach to influencing prescribing via CDSSs,75 and factors beyond software and content must be considered when developing CDSSs for prescribing. Fundamental issues include the availability and accessibility of hardware, sufficient technical support and training in the use of the system, the level of system integration into clinical workflow and the relevance and timeliness of the clinical messages provided. Further, acceptance of the system by the various stakeholders (eg, management and end users), clear articulation and endorsement of the system's benefits in patient care, and minimizing the perceived threats to professional autonomy are important to the success of CDSSs.

Importantly, our review suggests that despite advances in technology and likely increased sophistication of CDSSs, issues influencing CDSS use for prescribing have not changed substantially over time. Key concerns relate to the usability of the system and relevance of the content. The mention of these issues in more recent studies suggests there is still much to be done to make these systems work in routine clinical practice. There appeared to be some differences according to the practice setting; problems due to lack of integration of prescribing tools with existing software tended to be mentioned in studies conducted in ambulatory care. However, these issues may be equally important in institutional settings, just more easily addressed in hospitals where there are high levels of computerization for managing patient administration and a range of aspects of clinical care.

Not surprisingly, provider-related issues were reported consistently over time and irrespective of setting, which probably relates to the challenges of changing the knowledge, attitudes and behaviors of human beings. On the positive side, these issues are predictable, and those charged with the responsibility of CDSS implementation should be well prepared to counter some of the fundamental barriers to use. However, it would be unrealistic to expect that even best practice system implementation will result in immediate and sustainable change across the entire target audience. Healthcare organizations need to have dedicated staff to champion and facilitate an appropriate environment for implementing CDSS so that it may be used to its full potential.4 Further, we established a notable consistency in CDSS-specific issues over time. Some CDSSs are highly sophisticated, well developed and evaluated extensively; however, they tend to come from a small number of institutions recognized internationally for their work in medical informatics.19,20,30,32,33,44,45,46,66 The recurring themes related to CDSS-specific issues most likely reflect the range of systems and platforms being tested and implemented, and the heterogeneity of prescribing software deployed across many healthcare settings.

We highlighted a notable absence of studies reporting the impact of system endorsement before 2000. While many interventions targeting physician behavior change use endorsement and promotion by respected peer group members as a fundamental component of their implementation strategy,76 this may not have been seen as a key driver for change in the early studies. Thus, study designs may have omitted addressing this factor and/or respondents did not acknowledge its importance as a facilitator of uptake. This could also be true for the absence of reporting patient factors in the earlier studies. With more widespread use of computers in clinical practice over time, the potential for interference in the doctor–patient interaction might be magnified. Interestingly, the earlier studies highlighted concerns about professional liability and patient privacy in relation to the use of CDSSs. However, greater acceptance of the technology on the part of end-users and the efforts of organizations such as the American Medical Informatics Association in overseeing and endorsing the introduction of guidelines and regulations75 are likely to have dispelled some of the early concerns.

Studies evaluating the impact of CDSSs for prescribing in ambulatory care highlighted a lack of technical support addressing day-to-day software and hardware issues and limited integration of CDSS with existing software as important barriers to uptake. In many ways this is not surprising given the greater diversity of clinic locations in community practice and the heterogeneity of systems used in this setting.77 In contrast, hospitals have their own information technology infrastructure and many CDSSs have been designed specifically to dovetail with their existing computerized physician order entry systems. Further, the influence of patient factors was a key feature of studies conducted in ambulatory care and effectively absent from studies conducted in institutional care. Again, this is likely to relate to the nature of ambulatory care and the conditions physicians treat in this setting. Previous systematic reviews have demonstrated the greater effectiveness of CDSSs in hospital compared with ambulatory care10 and for the management of acute rather than chronic conditions.13 It was postulated that these differences might be attributed to the stricter controls on healthcare professionals and a greater willingness to abide by externally imposed rules in institutional settings. However, this review suggests that patient factors may create an additional layer of complexity in healthcare professionals' decisions in community practice.

The strengths of this study lie in the systematic approach to identifying studies, the inclusion of a range of study designs, our attempts to capture CDSS features beyond content and functionality, and the stratification of our analysis by the time period in which the studies were conducted and the setting in which they were undertaken. Importantly, our key findings, generated from a diverse literature, support the opinions and recommendations of luminaries in the field who have written extensively about the key requirements for successful implementation of CDSS in clinical practice.3 ,4 ,75 ,78

The review, however, has a number of limitations. Despite our intensive efforts we may not have identified all relevant studies, as some may not be available in the public domain and others may be published outside the peer-reviewed academic literature. The studies in our review were heterogeneous in terms of design and data collection methods, so we did not conduct comprehensive quality assessment of individual reports. Although time periods were defined somewhat arbitrarily, we believed that year of study conduct would more accurately capture any changes in the factors influencing CDSS uptake over time. We imputed year of study conduct from publication year when it was absent, and our sensitivity analysis confirmed that this classification did not change the study findings. We used an organizational framework adapted from previous research20 that may not necessarily reflect the level of interplay between the various factors, and we did not attempt to map these interrelationships or infer their relative importance. We also noted the frequency with which studies reported specific domains, themes and subthemes (tables 3–6). Importantly, these data may not necessarily indicate the significance of a particular issue. Rather, the relative weight of these factors should be determined in planning and implementing specific CDSSs.

The limited information available in many of the published manuscripts precluded stratifying results by specific CDSS features. Clearly an important move for future research will be greater clarity and emphasis on reporting of specific design features; journal editors may have a role in setting minimum standards for this purpose. With the advent of supplementary online information for manuscript publication there is a mechanism for making these details available in the public domain.

A number of important questions also remain unanswered. What are practitioners' perceived needs for prescribing decision support? Do these needs vary according to their clinical experience? How can needs be best met within the time constraints of a patient consultation? The complexities relating to CDSSs for prescribing and the state of current technology means that most organizations will probably only realize moderate benefits from the implementation of such systems.4 However, substantial opportunities do exist for all stakeholders to collaborate and explore the potential of CDSSs to support medication use that is as safe and effective as possible.

Although there is widespread interest in CDSS development, worthwhile progress will come with attention to both computer system enhancements and the human factors influencing responsiveness to new systems and change. Further work with end-users is required to explore these issues before system implementation. Although widespread dissemination of appropriate CDSSs might be expected to improve clinical practice, simply providing the information in electronic format alone does not ensure uptake.


The project was funded by the Australian Department of Health and Ageing through the National Prescribing Service, as part of a research partnership with the University of Newcastle and the University of New South Wales. Other funders: Australian Department of Health and Ageing.

Competing interests

None declared.


All authors contributed to the design, implementation, data analysis and interpretation and production of the manuscript. We acknowledge the contributions of other Study Guidance Group Members: James Reeve, Bryn Lewis, Malcolm Gillies, Michelle Sweidan, Michelle Toms, Adi Smith and Jonathan Dartnell.

Provenance and peer review

Not commissioned; externally peer reviewed.


