Health information technology: fallacies and sober realities

Ben-Tzion Karsh, Matthew B Weinger, Patricia A Abbott, Robert L Wears
DOI: http://dx.doi.org/10.1136/jamia.2010.005637. Pages 617–623. First published online: 1 November 2010

Abstract

Current research suggests that the rate of adoption of health information technology (HIT) is low, and that HIT may not have the touted beneficial effects on quality of care or costs. The twin issues of the failure of HIT adoption and of HIT efficacy stem primarily from a series of fallacies about HIT. We discuss 12 HIT fallacies and their implications for design and implementation. These fallacies must be understood and addressed for HIT to yield better results. Foundational cognitive and human factors engineering research and development are essential to better inform HIT development, deployment, and use.

Introduction

Current research demonstrates that health information technology (HIT) can improve patient safety and healthcare quality, in certain circumstances.1–6 At the same time, other research shows that HIT adoption rates are low,7–10 and that HIT may not reliably improve care quality11, 12 or reduce costs.13 A recent National Research Council report14 provided a hypothesis to explain these observations: … current efforts aimed at the nationwide deployment of health care IT will not be sufficient to achieve the vision of 21st-century health care, and may even set back the cause if these efforts continue wholly without change from their present course. Specifically, success in this regard will require greater emphasis on providing cognitive support for health care providers and for patients and family caregivers … This point is the central conclusion of this report.

This is a stunning conclusion, especially in light of the new Meaningful Use rules.15 Yet, it is consistent with evidence of HIT failures and misuses.16–20 In this article, we argue that the twin issues of the failure of HIT adoption and of HIT efficacy can be understood by examining a series of misguided beliefs about HIT. The implications of these fallacies for HIT design and implementation need to be acknowledged and addressed for HIT use to attain its predicted benefits.

The ‘risk free HIT’ fallacy

Many designers and policymakers believe that the risks of HIT are minor and easily manageable. However, because HIT is designed, built, and implemented by humans, it will invariably have ‘bugs’ and latent failure modes.21, 22 The deployment of HIT in high-pressure environments with critically ill patients poses significant risk.17–19 Fallible humans have learned to build generally reliable complex physical systems (eg, bridges, buildings, cars), but it took more than a century to understand and mitigate the myriad hazards of these systems. In contrast, we cannot yet design and deploy complex software systems that are delivered on time and within budget, meet their specified requirements, satisfy their users, and are reliable (bug free and available), maintainable, and safe.23, 24 Edsger Dijkstra, a recognized leader in software engineering, lamented that: … most of our systems are much more complicated than can be considered healthy, and are too messy and chaotic to be used in comfort and confidence. The average customer of the computing industry has been served so poorly that he expects his system to crash all the time, and we witness a massive worldwide distribution of bug-ridden software for which we should be deeply ashamed.23

There are two additional reasons why HIT failures are particularly problematic. First, they are often opaque to users and system managers alike; it can be very challenging to understand exactly how a particular failure occurred. Envisioning paths to IT failure in advance, so they might be forestalled, is particularly difficult.16, 25, 26 Second, HIT systems tend to have a ‘magnifying’ property, wherein one exchanges a large number of small failures for a small number of large, potentially catastrophic failures. For example, instead of one pharmacist making a single transcription error that affects one patient, when a medication dispensing robot has a software failure it can produce thousands of errors an hour. Moreover, as different HIT systems become coupled (eg, when a CPOE system is directly linked to a pharmacy information system and that to an electronic medication administration record), errors early in the medication process can more quickly pass unscrutinized to the patient.

Currently, there are no regulatory requirements to evaluate HIT system safety even though these systems are known to directly affect patient care in both positive and negative ways.2, 17, 18, 27–34 Thus, current HIT may:

  • Have been developed from erroneous or incomplete design specifications;

  • Be dependent on unreliable hardware or software platforms;

  • Have programming errors and bugs;

  • Work well in one context or organization but be unsafe or even fail in another;

  • Change how clinicians do their daily work, thus introducing new potential failure modes.16, 18, 28, 35–39

Decades of experience with IT in other hazardous industries have emphasized the importance of these problems40, 41 and led to the development of methods for safety critical computing.42, 43 Healthcare has been slow to embrace safety critical computing,44 and HIT software has commonly been identified as being among the least reliable.45 A recent National Academy of Sciences report concluded that IT should be considered “guilty until proven innocent”, and that the burden of proof should fall on the vendor to demonstrate to an independent certifier or regulator that a system is safe, not on the customer to prove that it is not.41 No other hazardous industry deploys safety critical IT without some form of independent hazard analysis (eg, a ‘safety case’); it is unwise for healthcare to continue to do so.

The ‘HIT is not a device’ fallacy

An offshoot of the ‘risk free HIT’ fallacy is the belief that HIT can be created and deployed without the same level of oversight as medical devices. Currently, an FDA-approved drug (eg, an opioid) is delivered by an FDA-approved device (infusion pump) to a patient in pain. But none of the HIT that mediates and influences all of the critical steps between the clinician's determination that pain relief is needed and the start of the opioid infusion (eg, order entry with decision support, pharmacy checking and dispensing systems, robotic medication delivery and dispensing systems, and bedside medication management systems) is subject to any independent assessment of its safety or fitness for purpose. The complexity of HIT systems and the risk of potentiating serious error are sufficiently significant to demand effective regulatory oversight.46 The Office of the National Coordinator has recognized this risk, and held hearings on HIT safety on 25 February 2010,47 but it is unlikely that any effective process of independent review of safety will be in place in the time frame set by the HITECH Act.

The issue of regulation of HIT for safety and effectiveness is a difficult and contentious one. The need for some form of independent evaluation of HIT safety prior to market introduction has gained recent attention.47 Much has changed since the 1997 publication of Miller and Gardner's consensus recommendations,48 most notably the recent, relatively rapid and semi-compulsory implementation of complex HIT systems under the HITECH Act in organizations without much prior experience in rolling out or managing such products. Many in the HIT industry and academia argue that the institution of FDA-type regulation would be counterproductive, by, for example, slowing innovation, freezing improvement of current systems with risky configurations, and ‘freezing out’ small competitors. However, the current approach can no longer be justified. A passive monitoring approach, as currently suggested by the Office of the National Coordinator,49 seems likely to be both expensive and ultimately ineffective. We believe that a proactive approach is required.

An alternative to FDA-type regulation would be a pre-market requirement for a rigorous independent safety assessment. This approach has shown some promise in proactively identifying and mitigating risks without unduly degrading innovation and necessary product evolution. Such an approach has been endorsed by international standards organizations50, 51 and is beginning to be applied in Europe.52

The ‘learned intermediary’ fallacy

One of the drivers of the ‘risk free HIT’ fallacy is the ‘learned intermediary’ doctrine, the idea that HIT risks are negligible because ‘the human alone ultimately makes the decision’. It is believed that because a human operator monitors and must confirm HIT recommendations or actions, humans can be depended on to catch any system-induced hazards.53 Paradoxically, this fallacy stands a fundamental argument in favor of HIT on its head (ie, that HIT will help reduce human errors but we will rely on the human to catch the HIT errors). Moreover, this fallacy assumes that people are unaffected by the technology they use. However, it is well established that the way in which problems, information, or recommendations are presented to users by technology reframes them in ways that neither the users nor the designer may appreciate.39, 54 Data presentation format will affect what the user perceives and believes to be salient (or not) and therefore affects subsequent decisions and actions.55, 56 The clinician does not act or decide in a vacuum, but is necessarily influenced by the HIT. Users are inevitably and often unknowingly influenced by what many HIT designers might consider trivial design details—placement (information availability), font size (salience), information similarity and representativeness, perceived credibility (or authority), etc.57–59 For example, changing the order of medication options on a drop-down pick list will influence clinicians' ordering behavior.

Empirical studies have demonstrated that people will accept worse solutions from an external aid than they could have conceived of, unaided.60 Because information presentation profoundly affects user behavior and decision-making, it is critical that information displays be thoughtfully designed and rigorously tested to ensure they yield the best possible performance outcomes. These evaluations must consider the full complexity of the context in which the system is to be used.61

The ‘bad apple’ fallacy

It is widely believed that many healthcare problems are due primarily to human (especially clinician and middle manager) shortcomings. Thus, computerization is proposed as a way to make healthcare processes safer and more efficient. Further, when HIT is not used or does not perform as planned, designers and administrators ask, “Why won't those [uncooperative, error-prone] clinicians use the system?” or “Why are they resisting?” The fingers are pointed squarely at front-line users.

However, human factors engineers, social psychologists, and patient safety researchers long ago debunked the bad apple theory of human error, replacing it with the more accurate and useful systems view of error,21, 62, 63 which is supported by strong theory and evidence from safety science, industrial and systems engineering, and social psychology.62, 64, 65 Thus, bad outcomes are the result of interactions among system components, including the people, tools and technologies, physical environment, workplace culture, and the organizational, state, and federal policies which govern work. Poor HIT outcomes do not result from isolated acts of individuals, but from interactions of multiple latent and triggering factors in a field of practice.18, 20, 65, 66

The ‘use equals success’ fallacy

Equating HIT usage with design success can be misleading and may promulgate inappropriate policies to improve ‘use’.67 Humans are the most flexible and adaptable elements in any system, and will find ways to attain their goals, often despite the technology. However, the effort and resilience of front-line workers are finite resources, and consuming them in workarounds to make required HIT function reduces the overall ability of the system to cope with unexpected conditions and failures. Thus, the fact that people can use a technology to accomplish their work is not, in itself, an endorsement of that technology. Conversely, a lack of use is not evidence of a flawed system—clinicians may ignore features like reminders for legitimate reasons. Healthcare is a complex sociotechnical system in which simple metrics can mislead because they do not adequately consider the context of human decisions at the time they are made. Thus, the promulgation of ‘meaningful use’ may lead to undesirable consequences if such use is not contextually grounded and tied to improved efficiency, learning, ease of use, task and information flow, cognitive load, situation awareness, clinician and patient satisfaction, reduced errors, and faster error recovery.

The ‘messy desk’ fallacy

Much of the motivation for HIT stems from the belief that something is fundamentally wrong with existing clinical work: that it is too messy and disorganized and needs to be ‘rationalized’ into something nice, neat, and linear.68 However, healthcare delivery is a complex sociotechnical system, and many parts of it are messy and non-linear. That is not to say that waste does not exist, nor does it mean that standardization is unwise. There exist processes within clinical care that require linearity and benefit from standardization. But, in many clinical settings, multiple patients are managed simultaneously, with clinicians repeatedly switching among sets of goals and tasks, continuously reprioritizing and replanning their work.69, 70 In such settings, patient care is less an algorithmic sequence of choices and actions than an iterative process of sensing, probing, and reformulating intermediate goals negotiated among clinicians, patients, caregivers, and the clinical circumstances. Because of time constraints, many care goals, and the tasks or decisions needed to pursue those goals, are intentionally deferred until a future opportunity.

However, HIT designs often assume a rationalized model of healthcare delivery. Templates walk clinicians through a prescribed set of questions even though the questions and/or their order may not be relevant for a particular patient at that time. Similarly, some clinical decision support (CDS) rules force clinicians to stop and respond to the CDS, interrupting their work, substituting the designer's judgment for that of the clinician.71 This mismatch between the reality of clinical work and how it is rationalized by HIT leads clinicians to perceive that these systems are disruptive and inefficient. Accommodating the non-linearity of healthcare delivery will require new paradigms for effective HIT design. Consistent and appropriate data availability and quick access may need to supplant ‘integration into workflow’ as a key design goal.

The ‘father knows best’ fallacy

While HIT has been sold as a solution to healthcare's quality and efficiency problems, most of the benefits of current HIT systems accrue to entities upstream from direct patient care processes72—hospital administrators, quality improvement professionals, payors, regulators, and the government.73 In contrast, those who suffer the costs of poorly designed and inefficient HIT are front-line providers, clerks, and patients. Thus, most HIT has been designed to meet the needs of people who do not have to enter, interact with, or manage the primary (ie, raw) data. This mismatch between who benefits and who pays leads to incomplete or inaccurate data entry (‘garbage in—garbage out’), inefficiency, workarounds, and poor adoption.74 This fundamental principle has been expressed as Grudin's Law, one form of which is: “When those who benefit from a technology are not those who do the work, then the technology is likely to fail or be subverted.”75

As noted by Frisse,76 HIT that focuses too much on the administrative aspects of healthcare (eg, complete and accurate documentation to meet authorization rules or to improve revenue) rather than on care processes and outcomes (ie, the actual quality of disease management) will result in a missed opportunity to truly transform care. Healthcare does not exist to create documentation or generate revenue, it exists to promote good health, prevent illness, and help the sick and injured. Efforts currently underway to align incentives to enhance adoption of electronic health records (EHRs) are acknowledged and warranted. However, the definitions of ‘meaningful use’ and ‘certified systems’, and how these milestones are to be measured, must be considered carefully. Otherwise, unintended consequences, such as physicians and hospitals investing in HIT to the exclusion of what might actually be more effective local strategies (eg, use of nurse case managers or process redesign), may occur.

The ‘field of dreams’ fallacy and the ‘sit-stay’ fallacy

The ‘field of dreams’ fallacy suggests that if you provide HIT to clinicians, they will gladly use it, and use it as the designer intended. This fallacy is further reinforced by the belief that clinicians should rely on HIT because computers are, after all, smarter than humans (the ‘sit-stay’ fallacy, explained below).

The ‘field of dreams’ fallacy is well-known in other domains where it is also referred to as the ‘designer fallacy’ or ‘designer-centered design’.77 Here, if a system's designer thinks the system works, then any evidence to the contrary must mean the users are not using it appropriately. In fact, designers sometimes design for a world that does not actually exist78 (also called the ‘imagined world fallacy’). For healthcare, the imagined world may be a linear orderly work process used by every clinician.

Computers cannot be described as being inherently ‘smart’. Instead, computers are very good at repeatedly doing whatever they were told to do, just like a well-trained animal (ie, ‘sit-stay’). Computers implement human-derived rules with a degree of consistency much higher than that of human workers. This does not make them intelligent. Instead, computers are much more likely than humans to perform their clever and complex tricks at inappropriate times. Moreover, a computer, in its consistency, can perpetuate errors on a very large scale. People, on the other hand, are smart, creative, and context sensitive.79 Technology can be at its worst, and humans at their best, when novel and complex situations arise. Many catastrophes in complex sociotechnical systems (eg, Three Mile Island) occur in such situations, particularly when the technology does not communicate effectively with its human users.

Thus, HIT must support and extend the work of users,80, 81 not try to replace human intelligence. Cognitive support that offers “clinicians and patients assistance for thinking about and solving problems related to specific instances of healthcare”14 is the area where the power of IT should be focused.

The ‘one size fits all’ fallacy

HIT cannot be designed as if there is always a single user, such as a doctor, working with a single patient. The one doctor–one patient paradigm has largely been replaced by teams of physicians, nurses, pharmacists, other clinicians, and ancillary staff interacting with multiple patients and their families, often in different physical locations. HIT designed for single users, or for users doing discrete tasks in isolated ‘sessions’, is misconceived. There are tremendous differences in the HIT needs of different clinical roles (nurse vs physician), clinical situations (acute vs chronic care), clinical environments (intensive care unit vs ambulatory clinic, etc), and institutions. The interaction of HIT with multiple users will influence communication, coordination, and collaboration. Depending on how well the HIT supports the needs of the different users and of the team as a whole, these interactions may be improved or degraded.80–82 To succeed in today's team-based healthcare reality, HIT should be designed to: (a) facilitate the necessary collaboration between health professionals, patients, and families; (b) recognize that each member of the collaborative team may have different mental models and information needs; and (c) support both individual and team care needs across multiple diverse care environments and contexts. This will require more than just putting a new ‘front end’ on a standard core; it needs to inform the fundamental design of the system.

The ‘we computerized the paper, so we can go paperless’ fallacy

Taking the data elements in a paper-based healthcare system and computerizing them is unlikely to create an efficient and effective paperless system. This surprises and frustrates HIT designers and administrators. The reason, however, is that the designers do not fully understand how the paper actually supports users' cognitive needs. Moreover, computer displays are not yet as portable, flexible, or well-designed as paper.83

The paper persistence problem was recently explored at a large Veterans Affairs Medical Center84 where EHRs have existed for 10 years. Paper continues to be used extensively. Why? The paper forms are not simple data repositories that, once computerized, could be eliminated. Rather, such ‘scraps’ of paper are sophisticated cognitive artifacts that support memory, forecasting and planning, communication, coordination, and education. User-created paper artifacts typically support patient-specific cognition, situational awareness, task and information communication, and coordination, all essential to safe, high-quality patient care. Paper will persist, and should persist, if HIT is not able to provide similar support.

The ‘no one else understands healthcare’ fallacy

Designers of HIT need to have a deep, rich, and nuanced understanding of healthcare. However, it is misguided to believe that healthcare is unique or that no one outside of the domain could possibly understand it. This fallacy mistakes a condition that is necessary for success (ie, the design team must include clinicians in the design process) for one that is sufficient (ie, only clinicians can understand and solve complex HIT issues). Teams of well-intentioned clinicians and software engineers may believe that understanding of clinical processes coupled with clever programming can solve the challenges facing healthcare. But such teams typically will not have the requisite breadth and depth of theories, tools, and ideas to develop robust and usable systems. By seeing only what they know, such teams do not understand how clinical work is really carried out, what clinicians' real needs are, and where the potential hazards and leverage points lie. As a result, problems have been framed too narrowly, leading to impoverished designs and disappointing ‘solutions’.85

Understanding what would help people in their complex work is not as simple as asking them what they want86—an all too common approach in HIT design. People's ideas for what should be part of HIT design are hypotheses based on their perceptions of the world.87 Like all hypotheses, some or many could be wrong. Furthermore, most clinicians are not experts in device design, user interface design, or the relationship between HIT design and performance. What clinicians say they want may be limited by their own understanding of the complexity of their work or even their design vocabulary. Thus, simply asking clinicians (or any end-user, for that matter) what they want and giving it to them is not a wise approach. What clinicians want and what will actually improve their work may be quite different.

Clinicians need to be studied so that the designer is aware of the complexities of their work—the tasks, processes, contexts, contingencies, and constraints. The results of observations, interviews, and other user research can best be analyzed by trained usability engineers and human factors professionals to properly inform design. Similarly, it takes special training and skills to evaluate a human–computer interface, assess the usability of a system, or predict the changes in communication patterns and social structures that a design might induce. Furthermore, design decisions should be based on test results, not user preferences. The involvement of human factors engineering, cognitive engineering, interaction design, psychology, sociology, anthropology, etc, in all phases of HIT design and implementation will not be a panacea, but could substantially improve HIT usability, efficiency, safety, and user satisfaction.80

What should we do now?

HIT must be focused on transforming care and improving patient outcomes. HIT must be designed to support the needs of clinicians and their patients.62 As pointed out recently by Shavit,88 “It is health that people desire, and health technology utilization is merely the means to achieve it.” The needs of users and the complexities of clinical work must be analyzed first, followed by evaluation of the entire scope of potential solutions, rather than examining the current array of available products and characterizing the needs that they might meet.88 We must delineate the key questions (based on the critical problems) before we arrive at answers. Unfortunately, insufficient contextual research has been conducted to support effective HIT design and implementation.14 Exemplary research on relevant topics has been carried out for several decades,89–98 but it does not seem that commercial HIT has benefited adequately from these findings. Much more foundational work is needed. We applaud the recent funding of the Strategic Health IT Advanced Research Project on cognitive informatics and decision making in healthcare by the Office of the National Coordinator,99 and hope that this represents the beginning of a sustained research effort on the safe and effective design of HIT.

As stated earlier, appropriate metrics for HIT success should not be adoption or usage, but rather impact on population health. The ‘comparative effectiveness’ perspective must also be applied to HIT—what is the return-on-investment of each HIT initiative compared with alternative uses of these funds? Importantly, just as the structure of a single carbon group on a therapeutic molecule can make the difference between a ‘miracle cure’ and a toxic substance, the details of HIT design and implementation100 in a specific context can make a huge difference to its effectiveness, safety, and real cost (ie, not just the purchase price but training costs, lost productivity, user satisfaction, HIT-induced errors, workarounds, etc).

There are fundamental gaps in the awareness, recognition, and application of existing scientific knowledge bases, especially those related to human factors, systems engineering, and cognitive engineering, that could help address some of HIT's biggest problems. To that end, we recommend the following:

  • These challenges will only be overcome by collaborating substantively with those who can contribute unique and important expertise, such as human factors engineers, applied psychologists, medical sociologists, communication scientists, cognitive scientists, and interaction designers. Pilots did not improve aviation safety by themselves, nor did nuclear power operators improve nuclear safety on their own. Rather, they worked closely with experts in cognitive, social, and physical performance and safety. HIT stands to benefit in the same way.

  • Humans have very limited insight into their own performance, and even more limited ability to articulate what might improve it. We need substantial research on how clinical work is actually done and how it should be done. Methods to accomplish this include cognitive field analyses101 (eg, cognitive work analysis,78 cognitive task analysis102), workflow and task analyses103, 104 (eg, hierarchical task analysis, sequence diagrams), and human-centered design evaluations61, 105–108 (eg, usability testing). The latter take the results of domain studies and validate them. Validation of HIT cannot be achieved by asking clinicians whether they like the design. Validation requires thorough experimental testing of the design based on well-defined performance criteria.

  • Measurements of meaningful use15 are designed to facilitate payment of government incentives to physicians for adopting HIT. However, use may not be meaningful in a clinical sense until HIT truly supports users' needs. During HIT development, vendors and healthcare organizations must focus on more meaningful measures of design success: clinician and patient ease of learning, time to find information, time to solve relevant clinical problems, use errors, accuracy of found information, changes in task and information flow, workload, situation awareness, communication and coordination effectiveness, and patient and clinician satisfaction.65, 109–112 These measures should be applied to all members of the care team.

These steps alone will require a significant investment by vendors, healthcare organizations, and government funders. The path may seem daunting and the fruits of the investment distant, so a little perspective might help. In 1903, the first controlled powered airplane took flight. In 1947, Fitts113 published a paper in which he explained that: … up to the present time psychological data and research techniques have played an insignificant role in the field … Particularly in the field of aviation has the importance of human requirements in equipment design come to be recognized. There probably is no other engineering field in which the penalties for failure to suit the equipment to human requirements are so great. With present equipment, flying is so difficult that many individuals cannot learn to pilot an aircraft safely, and even with careful selection and extensive training of pilots, human errors account for a major proportion of aircraft accidents. The point has been reached where addition of new instruments and devices … on the cockpit instrument panel actually tends to decrease the over-all effectiveness of the pilot by increasing the complexity of a task that already is near the threshold of human ability. As aircraft become more complex and attain higher speeds, the necessity for designing the machine to suit the inherent characteristics of the human operators becomes increasingly apparent.

Substitute ‘clinician’ for ‘pilot’ and ‘patient room’ for ‘cockpit’ and the text feels current. In the more than 60 years since that publication, commercial aviation has become very safe. While it may not take 60 years for HIT to become as safe, if we do not change from our current course, it never will be.

Throughout human history, significant innovations have always been associated with new perils. This is as much the case for fire, the wheel, aviation, and nuclear power as it is for HIT. HIT affords real opportunities for improving quality and safety. However, at the same time, it creates substantial challenges, especially during everyday clinical work. This paper is not a Luddite call to cease HIT development and dissemination. Rather, it is a plea to accelerate and support the design and implementation of safer HIT so that we need not wait as long as did aviation to see the fruits of innovation.

We must also consider the likely undesirable consequences of current HIT deployment policies and regulations. The ‘hold harmless’ clauses53 found in many HIT contracts are anathema to organizational learning, innovation, and safety because they stifle reporting and sharing of experiences and data (‘risk free’, ‘field of dreams’, and ‘father knows best’ fallacies). Current meaningful use rules and deadlines leave little time for HIT product improvement and testing, incentivizing rapid implementation of whatever is available (‘one size fits all’ fallacy). Despite compelling evidence that HIT works best (and is safest) when it is customized to local circumstances and workflows, the government-sponsored push for meaningful use may leave clinicians trying to adapt their care practices to suboptimal systems (‘field of dreams’ and ‘sit-stay’ fallacies). Finally, the current functional usage measures of meaningful use will focus healthcare facilities and practices on meeting those measures (eg, a certain percentage of prescriptions must be generated by HIT systems) to the exclusion of others (eg, the incidence of inappropriate prescribing) that may be more important (‘use equals success’ fallacy). However, if put on the right path now, HIT will ultimately take its rightful place in healthcare, supporting and extending clinician and patient efforts to enhance human health and well-being.

Funding

The authors' time has been supported by grants R18SH017899 from AHRQ and R01LM008923-01A1 from NIH to BK; IAF06-085 from the Department of Veterans Affairs Health Services Research and Development Service (HSR&D) and HS016651 from AHRQ to MBW; and R18HS017902 from AHRQ to RLW.

Competing interests

None.

Provenance and peer review

Not commissioned; externally peer reviewed.

Footnotes

  • This paper stemmed from the authors' participation as external resources in a workshop sponsored by the Agency for Healthcare Research and Quality (AHRQ) entitled ‘Wicked Problems in Cutting Edge Computer-Based Decision Support’ held on March 26–27, 2009 at the Center for Better Health, Vanderbilt University, Nashville, Tennessee.

References
