
Developing and validating a model to predict the success of an IHCS implementation: the Readiness for Implementation Model

Kuang-Yi Wen, David H Gustafson, Robert P Hawkins, Patricia F Brennan, Susan Dinauer, Pauley R Johnson, Tracy Siegler
DOI: http://dx.doi.org/10.1136/jamia.2010.005546. Pages 707–713. First published online: 1 November 2010


Objective To develop and validate the Readiness for Implementation Model (RIM). This model predicts a healthcare organization's potential for success in implementing an interactive health communication system (IHCS). The model consists of seven weighted factors, with each factor containing five to seven elements.

Design Two decision-analytic approaches, self-explicated and conjoint analysis, were used to measure the weights of the RIM with a sample of 410 experts. The weighted RIM was then validated in a prospective study of 25 IHCS implementation cases.

Measurements Orthogonal main effects design was used to develop 700 conjoint-analysis profiles, which varied on seven factors. Each of the 410 experts rated the importance and desirability of the factors and their levels, as well as a set of 10 different profiles. For the prospective 25-case validation, three time-repeated measures of the RIM scores were collected for comparison with the implementation outcomes.

Results Two of the seven factors, ‘organizational motivation’ and ‘meeting user needs,’ were found to be most important in predicting implementation readiness. No statistically significant difference was found in the predictive validity of the two approaches (self-explicated and conjoint analysis). The RIM was a better predictor for the 1-year implementation outcome than the half-year outcome.

Limitations The expert sample, the order of the survey tasks, the additive model, and basing the RIM cut-off score on experience are possible limitations of the study.

Conclusion The RIM needs to be empirically evaluated in institutions adopting IHCS and sustaining the system in the long term.


More healthcare organizations are adopting interactive health communication systems (IHCS), making it important to understand the factors that predict a successful implementation. This paper describes two empirical studies used to formulate and validate such a predictive model, the Readiness for Implementation Model (RIM).


Interactive health communication systems (IHCS)

The major conceptualization of interactive health communication systems (IHCS) in the USA has been undertaken by the Science Panel on Interactive Communication and Health.1 The panel defines IHCS as “the operational software program or modules that interface with the end user, including health information and support web sites and clinical decision-support and risk assessment software.” ‘End user’ refers to patients and their families, and their use distinguishes an IHCS from other types of health information systems, such as those for clinicians and administrators. An IHCS supplies information, enables informed decision making, promotes healthful behaviors, encourages peer communication and emotional support, and helps manage the demand for health services.1 As IHCS become more available, they provide healthcare organizations with a valuable way to encourage patients to take a more active role in their healthcare.2–5 Several studies have reported that patients have widely accepted IHCS and benefited from a better quality of life, greater participation in healthcare, and reduced communication barriers and cost of care, regardless of race, education, or age.6–15

Implementation challenges with IHCS

Health information systems hold great promise for improving healthcare, but can also produce unwanted consequences.16–20 A failed implementation can be costly and raise cynicism about such innovations.21–24 Most research about implementing technology relates to administrative, financial, or clinical data, such as electronic medical records,25–27 computerized physician order entry,28–31 and expert clinical decision-support systems.32 Few studies have reported on implementing IHCS,33, 34 although introducing this tool poses unique challenges compared with introducing other types of technology.2 For example, when an IHCS is adopted in a cancer clinic, the end user (the patient and their family) is not an employee of the organization but a customer. The change therefore requires different communication and operational adjustments than a new technology used only by staff members.35–37 In addition, clinicians are especially influential in promoting the use of an IHCS to their patients and families, even though the system changes aspects of the doctor–patient relationship, such as communication between them. Thus, adopting an IHCS involves barriers and facilitators not examined previously in the literature on implementing health information technology. To our knowledge, no previous research has been designed to predict and guide IHCS implementation.

Theoretical framework

A theoretical approach to adopting IHCS can organize and guide activities in ways that increase the likelihood of a successful implementation. Three theories have guided our work on a RIM for IHCS. Diffusion of innovations is the “process by which an innovation is communicated through certain channels over time among the members of a social system.” Adopting an IHCS is an innovation for patients and the healthcare system. This theory organizes insights into adopting the innovation, such as stages of adoption, attributes of innovations and innovators, and social-structural constraints.38–42 Because we view implementing an IHCS as an organizational act and issue, organizational change theory relates to adopting IHCS as well. This theory suggests that readiness for change can ease adoption, which requires modifications in the behavior of organizational members and often involves realignments of departmental and personnel responsibilities.43–45 Finally, an organization's policies and practices need to support putting an IHCS into practice after the adoption decision has been made. Implementation theory describes the phases of the process, from a formal introduction to the institutionalization of the change. It also addresses the extent to which an innovation fits within the organization's infrastructure and climate.46, 47

The Readiness for Implementation Model (RIM)

Guided by the three theories, the RIM was conceptualized, developed, and validated in four phases (figure 1). Phases 1 and 2 are briefly described below (see online appendix A available at www.jamia.org for an in-depth description).48 Phases 3 and 4 are the main subject of this paper.

Figure 1

The RIM development process. The headings describe the phases of the process and beneath are descriptions of the key components in each phase. Phases 3 and 4 are the focus of this paper.

Phase 1: Advisor-panel model building

A panel of six advisors developed a straw model of elements likely to relate to implementation success, along with two or three descriptions, called ‘element levels,’ of how strongly each element influences implementation. The element levels represent a continuum from strong positive influence to minor influence to strong negative influence.

Phase 2: Exploratory case studies

We refined the straw model by interviewing key informants at five sites where IHCS had been adopted, asking about implementation barriers and facilitators. When the interviews at one site were finished, we modified the model and used the revision in the interviews at the next site.49 We concluded with a model that has seven higher-level factors and 42 elements without weights (table 1).

Table 1

The seven RIM factors with their definitions and elements

Model factor (definition) and elements:

Organizational environment (state of the institution)
  • Organizational history of innovation
  • Leader innovativeness
  • Internal turbulence
  • Within-department cooperation
  • Between-department cooperation
  • Influence of external healthcare environment

Organizational motivation (extent to which the innovation fits with institutional goals, resources, and support)
  • Fit with key organizational goals
  • Costs and savings from the technology
  • The technology's ability to solve a key problem
  • Patients' expressed needs for the technology
  • Support from corporate administrator
  • Resources for implementation

Meeting user needs (quality of the innovation and the availability of help)
  • Regularity of updates
  • Affordability
  • Convenience of access
  • Ease of patients finding what they need
  • Technical help for users and staff
  • Some efficacy data supporting use

Promotion (presence and influence of institutional champions and communication channels)
  • Promotion within the organization and to patients
  • Existence of corporate champion
  • Influence of corporate champion
  • Existence of department champion
  • Influence of department champion
  • Regular progress reports

Implementation (robustness of implementation strategies)
  • Technology is part of standard practice guidelines
  • Customizability
  • Processes to identify, refer, and support users
  • Implementation role training for staff
  • Feedback used to remove barriers and improve processes

Fit in department (extent to which the innovation fits with departmental processes)
  • Home department of technology respected
  • Implementation started in unit where it will likely be successful
  • Good fit with other services/procedures
  • Technical difficulties
  • Staff familiarity with the technology
  • Effect on staff workload
  • Effect on care provider role

Awareness and support (ongoing internal marketing and enthusiasm for the innovation)
  • Key opinion leader support of the technology
  • Department manager support
  • Key persons' understanding of implementation and use
  • Clinicians see that their patients benefit from technology
  • Clinicians advise patients to use technology
  • Powerful skeptics' concerns are addressed

Model formulation (phase 3)

In phase 3, 410 experts quantified the weights of the RIM factors using two approaches, self-explicated (SE) and conjoint analysis (CA). Both approaches are popular among marketers for measuring customer preferences, and both have been used increasingly to answer questions about healthcare preferences and resource allocation.50–61 We used the two approaches to produce decision-making information to guide organizational strategies for implementing IHCS.62–64 The validity, reliability, and predictive power of both approaches are well established.65–67

The SE approach asks respondents to rate (1) how desirable each factor is on a scale of 0–100, and (2) how important each factor is. To determine importance, respondents typically allocate 100 points across the factors. Overall preferences are obtained by multiplying each factor's importance weight by the desirability rating of its level.68 The SE approach is substantially easier to use than CA for both respondents and investigators.69
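As a concrete illustration, the SE scoring just described can be sketched in a few lines. The factor names, point allocations, and desirability ratings below are hypothetical, not the study's data.

```python
# Sketch of self-explicated (SE) scoring: importance-weighted desirability.
# All numbers are illustrative, not taken from the study.

# Importance: the respondent allocates 100 points across the factors
# (only three of the seven factors are shown here).
importance = {"organizational motivation": 22,
              "meeting user needs": 20,
              "promotion": 8}

# Desirability: the respondent rates the observed level of each factor (0-100).
desirability = {"organizational motivation": 90,
                "meeting user needs": 75,
                "promotion": 40}

# Overall preference: weighted sum of desirability ratings, rescaled so an
# organization at the most desired level on every factor would score 100.
total_points = sum(importance.values())
score = sum(importance[f] * desirability[f] for f in importance) / total_points
print(round(score, 1))  # prints 76.0
```

The rescaling step keeps scores comparable across respondents even if, as here, only a subset of factors is rated.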

Conjoint analysis was developed in mathematical psychology and has a strong theoretical basis.58, 59, 70, 71 Respondents react to a set of hypothetical profiles, each realistically describing a product, service, or situation, and rate or choose among them. The goal of CA is to determine which combination of factors most influences respondents' decisions. This approach yields more realistic information than the SE approach,50 but it requires creating and evaluating many hypothetical profiles, which can be overwhelming when there are many factors.

Phase 3 study methods

Self-explicated and conjoint analysis survey development procedures (four steps)

Identifying factors

The factors and elements developed in phases 1 and 2 were used to construct the SE and CA surveys. For the SE approach, experts rated the importance of the factors by allocating 100 points across the seven factors.

Assigning factor performance level

In phase 1, the advisory panel wrote two or three descriptions of the potential of each element to influence implementation. In phase 3, the seven factors were described in four levels, from the least desired condition to the most, by assigning to each level a different combination of element descriptions. In the least desired condition, all the elements in the factor have no influence or negative influence on implementation. In the most desired condition, all the elements have a positive influence on implementation. The other two factor levels (medium high and medium low) have elements of medium intensity between the most and the least desired conditions. The most desired level for the factor ‘organizational environment’ is the one in which the six elements are all positive: a past history of successful innovation, innovative leaders, and so on. For the SE approach, experts would rate the factor levels on a 0–100 desirability scale.

Developing hypothetical profiles for the conjoint analysis

This step involved designing profiles that described a hypothetical organization's implementation efforts by using different combinations of the factor levels. Respondents would rate the profiles by weighing the factors jointly. The number of profiles that can be constructed from a set of seven factors, each with four levels, is 4⁷, or 16 384. We reduced this to a more manageable 700 profiles using a fractional factorial design to generate an orthogonal array. We also produced a set of 120 profiles, which we called ‘holdout profiles,’ to use in assessing the validity of the estimated weights from both the SE and CA approaches (see online appendix B available at www.jamia.org for a sample profile). Five experts rated each of the 820 profiles (a within-profile design), and each expert was given a different set of 10 profiles (a within-subject design). This required a total of 410 experts and 4100 profile scores (5×820=4100 profile scores and 4100/10=410 experts).
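The profile arithmetic above can be checked mechanically. The snippet below enumerates the full factorial and then takes the fractional-design counts (700 calibration profiles, 120 holdouts) as given, rather than constructing a true orthogonal array.

```python
from itertools import product

# Check of the profile-design arithmetic described in the text.
factors, levels = 7, 4

# Full factorial: every combination of 4 levels across 7 factors.
full = list(product(range(levels), repeat=factors))
assert len(full) == 4 ** 7 == 16384

# The study used a fractional factorial (orthogonal) subset; here we take
# the counts as given rather than generate the orthogonal array itself.
calibration, holdout = 700, 120
profiles = calibration + holdout   # 820 distinct profiles
ratings = 5 * profiles             # each profile rated by 5 experts -> 4100
experts = ratings // 10            # each expert rates 10 profiles -> 410
print(profiles, ratings, experts)  # prints 820 4100 410
```

In practice an orthogonal array for this design would be produced by a statistical package rather than by hand; the point here is only that the counts are internally consistent.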

Deciding profile rating preference

To make the model useful for predicting sustainability, we asked each expert how likely the hypothetical organization was to continue using the technology after the initial implementation. They responded by giving a ‘% chance,’ such as a 70% chance. These responses were used to derive the factor weights for the CA (700 profiles) and to cross-validate both the SE and CA models (120 profiles).

Expert sample eligibility

Members of the American Medical Informatics Association and the Society of Behavioral Medicine were invited by mail to participate in the project because of their potential expertise in implementing IHCS. We sought individuals in these categories: (1) corporate executive involved in approving and/or securing resources for an IHCS; (2) department manager with overall responsibility for implementing an IHCS; (3) champion of an IHCS who has pushed to get it implemented; (4) front-line staff person who had some responsibility for implementing an IHCS; and (5) academic or consultant who has studied or offered advice on implementations.

Expert data analysis

Self-explicated approach analysis

Using Pearson's correlation, we first carried out a consistency check to confirm that the order of factor-level desirability ratings would correspond to the order of their theoretical readiness intensity levels.56 We computed the relative importance of each factor in percentage terms in order to compare SE results with CA results.

Conjoint approach analysis

Regression techniques are commonly used to analyze CA responses.56, 72 We used stepwise regression for parameter fitting, assuming a linear function of the readiness factors. This procedure has often been employed in healthcare research using the CA approach.73–76 Expert ratings for the 700 profiles were used as the dependent variable. Stepwise linear regression with dummy coding was performed to generate factor weights (p for entry <0.05). The β weight of the least desired level for each factor was set at zero; the remaining levels were estimated in contrast to zero. We computed the relative importance of each factor in percentage terms by taking the range of weights for the factor (highest minus lowest), dividing it by the sum of these ranges across all the factors, and multiplying by 100.⁷⁵
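A minimal sketch of the dummy-coded regression and range-based importance calculation follows. It substitutes plain ordinary least squares for the stepwise procedure the study used, and all ratings are simulated; none of these numbers come from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the CA data: 700 profiles, 7 factors, 4 levels each.
n_profiles, n_factors, n_levels = 700, 7, 4
levels = rng.integers(0, n_levels, size=(n_profiles, n_factors))

# Dummy coding: level 0 (least desired) of each factor is the reference,
# so each factor contributes 3 indicator columns.
X = np.hstack([(levels[:, f : f + 1] == l).astype(float)
               for f in range(n_factors) for l in range(1, n_levels)])
X = np.hstack([np.ones((n_profiles, 1)), X])  # intercept column

# Hypothetical 'true' part-worths generate noisy expert ratings (% chance).
true_beta = rng.uniform(0, 15, size=X.shape[1])
y = X @ true_beta + rng.normal(0, 5, size=n_profiles)

# Plain OLS instead of stepwise selection (a simplification).
beta = np.linalg.lstsq(X, y, rcond=None)[0][1:]     # drop the intercept
part_worths = beta.reshape(n_factors, n_levels - 1)

# Relative importance: range of each factor's weights (reference level = 0)
# divided by the sum of ranges, times 100 -- so the importances sum to 100.
with_ref = np.hstack([np.zeros((n_factors, 1)), part_worths])
ranges = with_ref.max(axis=1) - with_ref.min(axis=1)
importance = 100 * ranges / ranges.sum()
print(importance.round(1))
```

Setting the reference level's weight to zero before taking the range mirrors the contrast coding described in the text.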

Holdout profiles for cross-validation analyses

We used the 120 holdout profiles to carry out a preliminary evaluation of both approaches. Spearman's ρ correlations were computed to test the predictive validity of the model scores calculated from each approach.

Phase 3 study results

Expert responses

Each wave of invitations to experts had an average participation rate of 33%. Of the 314 individuals who returned opt-out cards, 67% reported that they did not fit into the expert categories and the rest did not have time to participate. We mailed 33 waves of invitations, reached 1490 individuals, and sent 2079 survey packets (some individuals were sent the same packet twice) to obtain responses from 410 experts. Experts whose surveys had missing values were contacted so that the data set would be complete.

Characteristics of experts

Most participants reported that they were implementation team members (73%) and champions (61%). Participants could give multiple responses. For example, one person could be both a champion and a member of the implementation team. As to their roles within organizations, participants' top three responses were clinician (50%), academic researcher (50%), and department manager (25%). Again, participants could give multiple responses.

Factor importance weights from the SE and CA approaches

The internal consistency check showed that the SE factor level ratings were consistent with their theoretical readiness intensity (r>0.90, p<0.001). In the CA approach, all factors were retained in the regression model (p<0.05). Factor importance weights separately derived by the SE and CA approaches are shown in table 2. Both approaches identified ‘organizational motivation’ and ‘meeting user needs’ as most important in predicting successful implementation. These factors were twice as important as some other factors. ‘Awareness and support’ was identified as a relatively more important factor by the CA. ‘Promotion’ was deemed relatively less influential.

Table 2

Relative importance of the RIM factors as percent of total weights

Factor: mean importance (%) | Self-explicated | Conjoint analysis
Organizational environment | 14 | 11
Organizational motivation | 16 | 22
Meeting user needs | 21 | 18
Promotion | 10 | 7
Fit in department | 11 | 14
Awareness and support | 13 | 19
Internal validity of 120 holdout profiles: Spearman's r correlation between predictive scores and observed ratings | 0.81 (p<0.001) | 0.85 (p<0.001)

Holdout profile cross-validation results

For both approaches, internal cross-validation showed that the observed ratings for the 120 holdout profiles and the calculated predictive model scores were highly correlated (r≥0.81, p<0.001; table 2). The CA model was constructed as a least-squares fit to the 700 profile ratings, while the SE model was developed independently, without profile information. We would expect, therefore, that the CA would perform better than the SE approach in this comparison using the same set of 120 profiles.
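The holdout check amounts to rank-correlating predicted model scores with observed expert ratings. The sketch below implements Spearman's ρ in plain Python, on made-up numbers, to keep the example self-contained.

```python
# Spearman's rho: Pearson correlation computed on ranks (ties get the
# average rank). The predicted/observed values below are illustrative,
# not the study's holdout data.

def ranks(xs):
    """Return 1-based ranks of xs, averaging ranks for ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1          # average rank across the tie run
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

predicted = [62, 55, 81, 40, 73, 68]   # hypothetical model scores
observed = [60, 50, 90, 35, 70, 75]    # hypothetical expert '% chance' ratings
print(round(spearman(predicted, observed), 2))  # prints 0.94
```

A library routine such as `scipy.stats.spearmanr` would normally be used instead; the hand-rolled version just makes the rank-then-correlate logic explicit.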

Model validation (phase 4)

To see how well the RIM reflects actual implementations, we conducted in phase 4 a 1-year longitudinal study in 25 sites that were implementing IHCS. We used the study to validate the weights developed from the SE and CA approaches and compare the effectiveness of the two analytic techniques.

Phase 4 study methods

Recruitment and data collection

Criteria for organizations to be included in the study were: (1) the IHCS would allow patients to be more actively involved in their healthcare; (2) organizations were providing the system to their patients; and (3) organizations were just starting to implement the system. At each site, we recruited representatives of a cross-section of roles (eg, administrators who pushed the project, implementation team members, front-line staff). To observe changes in implementation readiness over time, respondents were surveyed at three time points (0, 6, and 12 months). At each point, we asked participants to rate how their organization was functioning on the 42 RIM elements. Time 0 was the point when a formal decision to adopt an IHCS was made but actual implementation had not yet started.

Implementation outcome questions

We also asked respondents questions about the implementation at 6 and 12 months. We had learned through our interviews in phase 2 that organizations defined success in an implementation not simply by the number of patients using the IHCS, but by the attitude toward the technology and other factors. As a result, our six outcome questions asked, for example, whether respondents were glad the IHCS was available and if it was used in other parts of the organization. We also conducted a factor analysis to determine how the questions related to one another.

Site-level consistency analysis

Some of the sites recruited for this phase 4 study were different departments or locations in the same organization. Nonetheless, we treated each site as an individual case. We made this decision because sites within a single organization often differ greatly in their environment, thus producing different needs and circumstances. They may also differ widely in support from management, climate for implementation, and other factors. Sometimes different sites within one organization were adopting different IHCS. To estimate internal consistency within each site, we calculated the percentage of agreement across respondents.

Validation data analysis

To estimate how well the model captured implementation, we used Spearman's ρ non-parametric test to compare the RIM prediction and the outcome at 6 and 12 months. The closer the measures, the more accurate we considered the model to be. We were also interested in the long-term predictive power of the model. Hence we had three prediction–outcome combinations: prediction at the start versus outcomes at months 6 and 12; prediction at month 6 versus outcome at month 12; and prediction at the start plus month 6 versus outcome at month 12. Regression analysis was used to evaluate the predictive power of the model.

To judge an IHCS implementation as successful or not according to the model, we needed a cut-off level. The cut-off of the implementation outcomes, which were assessed on a 0–5 scale, was set at 3.5. For the RIM scores, 70 (out of a maximum of 100) was set as the cut-off. We chose this high figure to minimize the number of falsely identified successful IHCS adoptions. We examined the long-term success rates (12-month outcomes) between sites with low RIM scores and those with high RIM scores.
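Applying the two cut-offs is a simple classification rule. In the sketch below, only the cut-off values (RIM score ≥ 70, outcome ≥ 3.5) come from the text; the site scores are hypothetical.

```python
# Classify hypothetical sites with the cut-offs described in the text:
# RIM score >= 70 predicts success; outcome >= 3.5 (on a 0-5 scale) is an
# actual success. Site data are invented for illustration.

RIM_CUTOFF, OUTCOME_CUTOFF = 70, 3.5

sites = [  # (site, RIM score 0-100, 12-month outcome 0-5)
    ("A", 82, 4.2),   # predicted success, actual success
    ("B", 75, 3.1),   # predicted success, actual failure
    ("C", 55, 4.0),   # predicted failure, actual success (under-prediction)
    ("D", 48, 2.6),   # predicted failure, actual failure
]

for name, rim, outcome in sites:
    predicted = rim >= RIM_CUTOFF
    actual = outcome >= OUTCOME_CUTOFF
    print(name,
          "predicted success" if predicted else "predicted failure",
          "| actual:",
          "success" if actual else "failure")
```

Raising the RIM cut-off, as the authors did, trades missed successes (site C) for fewer false positives (site B).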

Phase 4 study results

Healthcare organization recruitment results

We contacted more than 50 healthcare organizations about the study and recruited 25 sites from 15 healthcare organizations. The organizations included managed-care organizations, social-service agencies, university medical centers, and inner-city clinics. The organizations were spread across the country.

Implementation outcome questions

Using data from months 6 and 12, an exploratory factor analysis of the six outcome questions revealed a single underlying construct. As a result, we used the average score of the six outcome questions as a composite measure for the rest of the outcome evaluation.

Site-level consistency results

The number of respondents per site ranged from 3 to 16. Across the 25 sites, the average percentage of element-rating agreement ranged from 65% to 80%. This supported aggregating individual responses to the site level for analysis. Aggregated site means were adopted for the rest of the analysis.
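The paper does not give its exact agreement formula, so the sketch below assumes one plausible definition: for each element, the share of a site's respondents who chose the modal response, averaged across elements. Both the formula and the data are assumptions for illustration.

```python
from collections import Counter

# Assumed site-level agreement measure: per-element share of respondents
# choosing the modal response level, averaged over elements. The study does
# not specify its formula; this is one plausible reading. Data are invented.

def element_agreement(responses):
    """responses: level choices for one RIM element at one site."""
    modal_count = Counter(responses).most_common(1)[0][1]
    return modal_count / len(responses)

# Hypothetical site with 4 respondents; 3 of the 42 elements shown.
site_responses = [
    [2, 2, 2, 3],   # 3 of 4 agree -> 0.75
    [1, 1, 1, 1],   # all agree    -> 1.00
    [3, 2, 3, 3],   # 3 of 4 agree -> 0.75
]
avg = sum(element_agreement(e) for e in site_responses) / len(site_responses)
print(round(avg, 3))  # prints 0.833
```

Agreement in the 65–80% range under a measure like this would support averaging individual responses into a single site-level score, as the authors did.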

Validation data analysis results

The correlation between the RIM predictive scores from both approaches and the concurrent outcomes ranged from 0.65 to 0.70 (p<0.001), demonstrating the accuracy of both models. As for long-term prediction, the model scores at month 6 were more accurate than those at month 0 in predicting outcomes at month 12. Using both month 0 and month 6 model scores (adjusting for baseline variability) to predict month 12 outcomes was virtually the same as using month 6 scores alone, without any significant R² change. The model is a slightly better predictor of 1-year outcomes than half-year outcomes, perhaps because it takes time for a picture of implementation to emerge. The predictive validity of the SE approach for month 12 was slightly better than that of the CA approach, but the difference was not statistically significant (table 3).

Table 3

Comparison of the RIM predictive score and organization self-reported outcome in 25 implementation cases

Spearman's r correlation for accuracy analysis | Self-explicated | Conjoint analysis
   M6 RIM predictive score versus M6 organization self-reported outcome | 0.65 (p<0.001) | 0.66 (p<0.001)
   M12 RIM predictive score versus M12 organization self-reported outcome | 0.68 (p<0.001) | 0.70 (p<0.001)
Regression R² for long-term prediction | Self-explicated | Conjoint analysis
   M0 RIM predictive score (independent variable) versus M6 organization self-reported outcome (dependent variable) | 0.51 | 0.50
   M0 RIM predictive score (independent variable) versus M12 organization self-reported outcome (dependent variable) | 0.51 | 0.50
   M6 RIM predictive score (independent variable) versus M12 organization self-reported outcome (dependent variable) | 0.55 | 0.54
   M0+M6 RIM predictive scores (independent variables) versus M12 organization self-reported outcome (dependent variable) | 0.57 | 0.56
M indicates month.

Because we found no significant difference between the two approaches, we compared only the SE baseline scores with the outcomes at month 12. The model correctly predicted 68% of the successful IHCS implementations and under-predicted the remaining 32% (figure 2). It also correctly identified 83% of the unsuccessful IHCS initiatives and falsely identified the remaining 17% as potentially successful.

Figure 2

RIM predictive scores compared with perceived implementation success. The bolded lines indicate the cut-off points. ○, Correctly predicted successful IHCS initiatives; ◊, under-predicted successful IHCS initiatives; +, correctly predicted unsuccessful IHCS initiatives; ×, falsely predicted unsuccessful IHCS initiatives.


Study findings validated the RIM. The conceptual development, mathematical foundation, and data collection that built the model appear to have produced the desired results.

Comparison of the two decision-analytic modeling techniques

Although CA has a theoretical advantage over SE in predictive validity,58 ,59 our analysis did not find that the two approaches produce different results. Sattler and Haensel-Borner conducted a comprehensive analysis of empirical studies of these approaches and failed to confirm the superiority of CA.50 The majority of the comparisons they studied found either non-significant differences between the methods or higher predictive validity for the SE approach.

Characteristics of the RIM

Several unique characteristics of the RIM make it a useful tool. First, it is quick and easy to use; respondents can complete the survey in about 15 min. Second, the response options for each question (element) in the RIM are exclusive descriptions rather than Likert-type scales. Exclusive descriptions force respondents to choose the answer that best describes their organization's readiness on each of the 42 model elements. We believe that this process also fosters in respondents a better understanding of their organization's readiness to change. Answering with a number from 1 to 10 on a Likert-type scale would be less likely to stimulate this understanding and could easily produce a ceiling effect. Third, the RIM allows the seven factors to be weighted differently. Stablein et al examined the readiness of 17 hospitals to implement computerized physician order entry and concluded after additional research that some of the readiness indicators should be weighted more heavily than others.29 A pre-calculated score conversion table makes the global RIM score, with the factor weights incorporated, easy to compute (see online appendix C available at www.jamia.org for the RIM survey and score conversion table).

Application of the RIM

The RIM clearly identifies factors critical to implementing an IHCS. The model provides a way to determine whether the likelihood of success warrants the effort required. An organization can also use the RIM to assess its own strengths and barriers to adoption and prepare for implementation. In addition, the model can be used to monitor progress over time to keep the effort on track and measure the effect of actions taken between evaluations. The factor weights can be used to help allocate limited resources to produce the greatest chance of success.

We suggest that at least five to seven individuals, including members of the implementation team and the project champion, complete the RIM survey to assess how their organization functions on each element. The RIM assessment should take place while the organization is deciding whether to adopt an IHCS, during the implementation planning, 6 months after implementation begins, and annually thereafter.

Study limitations

While the model validation is encouraging, the results should be considered in light of the study's limitations. First, 50% of the 410 external experts used in phase 3 identified themselves as working in academic settings, which may bias the findings. All experts were drawn from American Medical Informatics Association and Society of Behavioral Medicine members, possibly explaining the high percentage of academic respondents and suggesting potential bias in sample selection. Second, the order of the SE and CA tasks may have affected the results. Each expert first completed the SE task and then rated the profiles for the CA task. Participants might have rated the profiles cautiously, crafting answers consistent with their SE ratings; this behavior might also contribute to the small difference found between the two approaches. Completing two decisional tasks in one survey also increased the cognitive burden on participants and may have affected their ratings. Future studies should examine the effect of task order and minimize the cognitive burden on participants. Third, in theory, the decisional factors in CA experiments should be mutually independent: the level of one factor should not influence the level of another. In our study, some interaction between factors might be present; for instance, ‘awareness and support’ might be enhanced when ‘promotion’ of the IHCS is pushed. We acknowledge that an additive model such as ours cannot estimate how factors interact and may therefore be considered inaccurate, but an additive model works well in practice: using multilevel analysis would make data analysis and interpretation much more complicated while hardly improving the fit of the model.77 Fourth, we selected 70 as the RIM score cut-off (used to minimize falsely identified successful implementations) based on our experience in developing multi-attribute predictive models of the effort required for successful implementations. A ROC curve analysis is needed in future research.78

Future direction

First, the model would benefit from research with a larger sample for a longer time. Because of resource limits, we were able to follow 25 sites for just 1 year. Observing implementation over at least 2 years would be much better (our advisory panel said 2 years are needed to show sustainability). The sensitivity of the RIM to change over time also needs validation in a longer study. Second, although the RIM was assessed in 25 sites prospectively for its predictive validity, future studies are needed to examine the effectiveness of the RIM in guiding implementation and improving outcomes. Such studies will use baseline and process assessments by the RIM to develop tailored implementation strategies.


Installing an IHCS from adoption through institutional acceptance requires careful attention to implementation. The RIM has great potential for helping institutional planners measure an organization's readiness for IHCS adoption and implementation.


This research was supported by the Agency for Healthcare Research and Quality (AHRQ R01 HS10246).

Competing interests


Ethics approval

This study was conducted with the approval of the University of Wisconsin, Madison, Wisconsin.

Provenance and peer review

Not commissioned; not externally peer reviewed.


We thank Bobbie Johnson for her editorial assistance.

