People and Organizational Issues in Research Systems Implementation

Joan S. Ash, Nicholas R. Anderson, Peter Tarczy-Hornoch
DOI: http://dx.doi.org/10.1197/jamia.M2582. Pages 283-289. First published online: 1 May 2008


Knowledge about people and organizational issues pertinent to implementation and maintenance of clinical systems has grown steadily over the past fifteen years. Less is known about implementation of systems used for clinical and biomedical research. In conjunction with current National Institutes of Health Roadmap efforts that promote translational research, these issues should now be identified and addressed. During the 2007 American College of Medical Informatics Symposium, members discussed behavioral aspects of translational informatics. This article summarizes that discussion, which covered organizational issues, implications of how knowledge about clinical systems implementation can inform research systems implementation, and those issues unique to each kind of system.


Recent studies examining people and organizational issues regarding implementation of clinical information systems have generated a body of research about best practices.1,2 Organizations developing National Institutes of Health (NIH) Roadmap initiative-funded Clinical and Translational Science Award (CTSA) projects3,4 supporting translational research5 can benefit from collaboration and teamwork internally and across institutions. This article summarizes what is known about three types of information system implementations: those for biomedical research, clinical research, and clinical patient care. This article incorporates presentations developed by the authors, findings from ongoing research, and comments from the discussion at the American College of Medical Informatics (ACMI) symposium in 2007.

One might define translational research as encompassing a spectrum of biomedical applications, ranging from systems that assist in making biological discoveries, to systems that help to test and validate such discoveries in clinical trials, to systems that improve treatment of individual patients.6 In contrast, many academic researchers restrict the definition of translational medicine to only those bench-level scientific studies that lead to improvements in clinical medicine. The Clinical Research Roundtable at the Institute of Medicine outlined major challenges for clinical research, including organizational aspects. The ACMI members who were Roundtable participants identified two categories of impediments to smooth translation of research from bench to bedside. The barriers arise at two transitions: (1) from basic science discoveries to clinical studies, and (2) from clinical studies to medical practice. Focusing on the second barrier, Roundtable members identified informatics as both a challenge and an opportunity; they encouraged collaboration and cross training between informaticians and clinical researchers.7 Payne et al.8 conducted a study that used qualitative interviews to learn what factors might promote or inhibit integration of informatics and translational research, and identified six barriers. Five of the six present significant impediments: fragmentation of information technology (IT) resources, funding problems, lack of awareness of local IT resources, the immaturity of current electronic health record systems, and lack of appropriately trained IT staff. The sixth barrier, limited ability to conduct Web-based subject recruitment, no longer presents a challenge, as this practice is now more common.

From the informatics perspective, some important concepts merit clarification and elaboration. The implementation of clinical patient care information systems at both the local and the national (vendor/developer) levels is never-ending. The technology underlying such systems evolves continuously, the state of the art in clinical practice constantly advances, and the organizations that implement clinical systems also evolve. System evolution and organizational evolution intertwine as they move on a path motivated by quality improvement. Research system implementations involve information systems that support clinical, basic, or translational research. Clinical research focuses on patients and involves the collection and analysis of clinical data (e.g., a clinical trial evaluating the efficacy of a new drug for a targeted condition). In contrast, basic research typically is performed “at the wet bench” in laboratories and usually does not involve intact whole human patients or the types of data routinely collected in clinical settings. Translational research includes aspects of both clinical research and basic research. Research systems support improved information capture, storage, and analysis. They can foster reproducibility of results by other investigators. All types of research systems are increasingly being integrated with other systems to enhance collaboration at the institutional, international collaboratory, or national governmental levels.
For example, a standalone research laboratory information management system (LIMS)9,10 within an individual researcher's laboratory might be combined with systems from other domains, as has occurred in the Harvard–Massachusetts Institute of Technology i2b2 project, an NIH-funded initiative that is developing a scalable informatics framework for bridging clinical data with basic science data.11 Such integration in the research environment mirrors earlier clinical systems integration of the sort fostered by the National Library of Medicine's IAIMS initiative begun in 1983.

Growing numbers of health care sites have adopted electronic health record and clinical decision support systems as a means of improving patient care and safety. As a result, there is greater awareness (based on solid research) about what it takes to make implementations successful. Studies identifying success factors for implementing research systems, whether in support of clinical, basic, or translational research, are far less common. This article examines whether the success factors identified for clinical patient care systems may be applicable to research systems.

Organizational Issues

Prior research has identified a number of success factors for implementing clinical systems.12,13 Their potential relevance to the context of research systems implementation is discussed.

Time

Time has consistently been reported as one of the most important issues for clinical systems implementation.12,14 Time includes computer application code efficiency (the response time of the system), which must be extremely fast (less than a few seconds for pauses) to please busy clinicians. Time also includes human end-user efficiency, which includes the amount of effort required to navigate through the system to accomplish a goal (e.g., entering data directly, pulling down menus, selecting from pick-lists, or filling in templates), plus the time spent thinking about each step. These factors can be combined as an “elapsed time for overall task”, for example, from the time a clinician starts entering a patient's orders using a computer to the time the medication is delivered to the bedside and administered, or time from order generation until a laboratory test result is reported as available. Another time-related element is the implementation life cycle, which, as noted above, continuously evolves in an open-ended manner. Installation and implementation are no longer considered to be synonymous for clinical applications.

Temporal issues are important in research systems, but for different reasons. One concern is that delayed, prolonged implementations for research systems may delay production of research results (e.g., new discoveries) and thereby provide opportunities for other researchers to report the results or discoveries sooner.15 Another time-sensitive issue for research systems involves optimal management of consolidated biological data warehouses that accumulate data from LIMS or other instrumentation systems. These data must rapidly pass through various processing stages to be used, shortly thereafter or even simultaneously, for secondary research. Such data warehouses require policies and workflow arrangements that accommodate this high-speed processing, much as clinical systems do.16 Another concern is the lifespan of research system software, which may be limited when researchers have information management needs that change rapidly,15 making outdated software irrelevant to the original experimental questions.

Currently, researchers typically use a number of different systems, each for a relatively limited period of time in a particular phase of an experiment. For the next experiment in a sequence, different experimental approaches may require different application systems. For example, one experiment in a laboratory may involve microarray analysis using mRNA to evaluate the expression differences between diseased tissue and healthy tissue; the next experiment in that laboratory may build on the results of this experiment and involve single nucleotide polymorphism (SNP) data, tools for SNP analysis, and statistical tools for Genome Wide Association Studies (GWAS) in broader populations. This is in contrast to a typical inpatient or outpatient clinical setting, where often a single integrated system is in use. In the research environment, excessive time required for training to master multiple software applications can also reduce research productivity.

Motivation and Context

Motivation and context are important considerations in clinical systems implementation. If the motivation for installing a clinical system is other than improving patient care, and if clinicians are not the main source of the motivation (e.g., if administrators purchased the system for financial cost savings), success is less likely because clinicians might resist using the system.16,17 Context or environment can influence clinical installations. For example, if the geographic region includes high-tech industries, health care organizations in the area often display increased interest in using IT. Outside pressure at either the regional or state level, or at the national level, such as the U.S. Leapfrog Group's initiatives,18 can also stimulate implementation efforts. Policy pressures at the national level have clearly had significant impact in Europe and Australia. Even in the United States, the federal government's encouragement of electronic prescribing is beginning to have some impact.

For research systems, policy pressures and federal efforts have also had an impact on system deployments. Efforts at the national level in the United States, such as the National Cancer Institute's (NCI's) caBIG19 project for a comprehensive cancer research information system initiative, the Biomedical Informatics Research Network (BIRN),20 and CTSA,3 also provide motivation for implementation of research systems. There are now common requirements from the U.S. NIH that grant proposals describe how information will be managed and results data will be shared.21 Institutions conducting research have begun to develop policies regarding long-term data management for grant-supported research.

Motivation for acquiring an individual laboratory-level system typically comes from a project's principal investigator, who may believe that a specific research system will be superior for analysis of the type of data generated by the project. The individual investigators (or directors of informatics cores within larger center or program grant projects) may desire a standalone system for ownership purposes because they desire complete control over use of the system, because they want to be the sole owner of the data, or because of policies or regulations prescribing how the system must be acquired, maintained, or used within the guidelines of the funding source. In summary, a large difference exists between what is in general an organizational approach to clinical system implementation and what is usually an individualized approach in the research environment. Thus, the difference between organizational and individual motivation and context will be particularly important to consider when implementing research systems.

Integration

Multidimensional integration is important in clinical systems implementation.22 Tight system integration makes it possible for clinical users to log onto the system once, through a single interface, so that system usage is efficient and consistently easy. Clinicians require a complete overview of information about a given patient, or all patients cared for by the clinician. Such overviews often aggregate data from a large, diverse collection of individual clinical systems.

For research teams, integration of research information into individuals' workflows to provide high data availability is as important as it is in clinical settings. Nevertheless, the researcher's need for real-time data may be less critical than the clinician's. The requirement for a tight fit into team workflows may be different for research systems than for clinical systems. Sometimes only a limited number of persons in a laboratory will directly use the system attached to a particular set of instruments (for example, a specific-purpose LIMS), and only those individuals' workflows need be accommodated. Research data warehouses may require better workflow integration because they are often designed to be general-purpose research resources. Clinical systems may only require integration of a small number of core information resources (for example, transcription, laboratory, radiology, pathology, pharmacy, order entry). At present, few research systems have analogous, agreed-upon core resources. One possible exception is the desire to integrate biomedical research results with the evolving body of publications available through national resources such as PubMed. Approaches to multidimensional integration within the research context must be designed flexibly enough to support a broad range of sources, each of which may represent very large datasets. Such research datasets will in many cases be larger than those that a clinician may require, and the corresponding resources may include both custom systems developed by a laboratory or by a group of researchers (for example, mutation-specific databases) as well as more domain-level resources, such as epidemiological databases or other public resources.

Resources

A universally important success factor for clinical and research systems is adequate financial resources to cover not only original system purchases, but also ongoing hardware and software maintenance and support. Support for research systems or for their upgrades may have unique problems: if a system was purchased with grant funds, and the grant runs out, there may no longer be funding for support.

Organizations implementing clinical systems (e.g., clinics or hospitals) are generally more stable and long-lived than research laboratories. The clinical aim of providing care remains the same over time and often generates adequate income for system acquisition and maintenance. The aims of research must change over time in response to funding agency interests and scientific progress. Mechanisms to fund large research systems on an ongoing basis may be problematic. Because system needs vary more widely from one researcher to another, it is more difficult to pool resources across research groups or to transition “used” systems from one research team to another.

Although there are mechanisms from the National Center for Research Resources (NCRR) for shared instrumentation grants,23 it is anticipated that the advent of larger collaborative efforts such as the CTSA initiative may provide the national collaborative research and funding stimulus to develop distributed access to localized resources. Earlier federated authentication and authorization initiatives such as caGRID24 or the U.K.-based e-Science initiative25 have developed technical infrastructures to allow for large-scale virtual collaborations, but until such approaches become self-sustaining and part of the common fabric of research funding, they will likely need the support of broad translational research visions such as CTSA to drive the communications and policy making necessary to sustain distributed research.

Meeting Information Needs and Providing Value to Users

Clinical systems must meet the information needs of users, and this involves a number of technical issues. Successful clinical systems must be customized to meet the needs of both organizations and individuals. Commercial systems that are flexibly configurable to adapt to the workflows of an organization and of its individuals will likely be better received. Clinicians need all relevant available information for decision making about a patient, so comprehensiveness is a goal. All of these aspects of good clinical systems hold equally well for research systems. Each laboratory is somewhat different, so the ability to customize to fit local and current information management requirements as they evolve over time is particularly important. The more complete the analysis and results are, the better the science. The heterogeneity of researchers' information needs generally requires not only that each system be customizable but also that more systems be implemented per researcher than is typical in clinical settings.

End-users of all types of systems become familiar with the systems' strengths and weaknesses. Acceptance depends on the value a system offers its users and on the tradeoffs it imposes. For clinical system users, positive attributes such as legible clinical records and the ability to provide clinical decision support can offset drawbacks such as system rigidity (when the system inflexibly makes the user do things in an awkward manner) and the additional time that system use adds to busy workflows. Researchers require that their systems add a different value, that of promoting or facilitating better science. An issue here is who reaps the value: the investigator, the laboratory, or the broader organization. The relevant tradeoffs are not yet known for research systems in general. One known factor is the initial ramp-up cost, in terms of time and resources, for systems in research environments that are used infrequently. For such systems, maintaining user skills and proficiency is problematic.

Special People

The availability and dedication of special types of people who bridge the gap between the clinical and IT worlds is important for clinical systems implementation. Administrative leaders must be steadfast about implementation because it is not an easy process. Clinical leaders must be interpreters for clinicians and administrators about workflow and professional culture as well as about IT. There are others who bridge gaps by providing training and support, as well as continuously available help that is easy to obtain. Finally, system vendor representatives play an essential role in communicating.

There are parallel special types of people for research systems implementation. The influence of the principal investigator is significant, and his or her support is crucial to sustaining information management within a laboratory environment.26 Other support roles are still evolving, and may be filled by a core service unit within the organization, by a dedicated bioinformatics information service in the library,27 or by other researchers outside the laboratory, as well as by students, hired consultants, and new hires.28

Training and Support

Another key strategy for success is provision of adequate training and support. Clinical systems implementers often talk about using pizza to entice users to initial training sessions, to training updates, and to user feedback (complaints and new ideas) sessions. These sessions offer a social opportunity for clinicians to spend more time with the technical staff in a nonthreatening context. Training and support seem to be equally important for research systems, although rather than having everyone trained, only the small number of people critical to the research task and to the technology at hand are selected. The widespread existence of laboratory “happy hour” sessions might serve as a reasonable substitute for pizza (so long as the technology can be delivered and discussed in the happy hour setting) in incentivizing laboratory-based research system training.

Although training within research laboratories shares some themes with training in clinical environments, there are unique aspects as well. For example, it is more difficult to retain researchers trained on current technology within an environment of short-term postdoctoral appointments and grant cycles. Furthermore, the number of evolving and disparate systems being used in many modern research laboratories is larger than in clinical environments. The lack of a common integrated user interface across systems makes it necessary for researchers in the laboratory who do not use a given system for months at a time to undergo frequent retraining. A related issue is training of students as future professionals in health care or research. Initial training in informatics should occur at some level during their core predoctoral curricula, which will serve as a foundation but will require frequent updating as well.29,30

Foundational Underpinnings

Clinical system implementation requires certain foundational underpinnings to succeed. The organizational culture must be such that clinicians trust administrators; the latter should commit to the implementation and share their vision for what these systems can do for health care. Leadership must be open to feedback. The organization and the vendor need to be stable.

For research system implementation on a small scale within an individual laboratory, the foundational underpinnings may be quite different. The trust a researcher often needs is trust in someone else who has used the system and can recommend it based on scientific utility. There can also be an element of distrust of data and analyses produced by the system without direct personal involvement. Many biological researchers still have a hands-on attachment to generating research data, and tend to prefer situations in which they can confirm that all procedures and assumptions involved in data generation and analysis are valid. Large data repositories often have limited utility when researchers lack confidence in how the data were generated. Efforts are being made by publications such as Nature Methods,31 PLoS ONE,32 and others to ensure more consistency and trust in the data—in particular, the processes surrounding the developmental provenance of the data. The foundational underpinnings needed for research systems are likely similar to those for clinical systems when considering large-scale implementations either organization-wide or across organizations, but further research is needed in this area.

Collaboration and Trust

Collaboration and trust are critical to successful clinical system implementations. Professionals from disparate health care disciplines (medicine, nursing, laboratory, pharmacy, etc.) must collaborate closely during development, implementation, and clinical use, especially when decision support is involved. Not only is trust between individual administrators and clinicians necessary, but the organization must also be able to trust the vendor to deliver on promises and end-user requests. For research systems, different circumstances drive the need for collaboration. Collaboration is a larger issue across laboratories, at the institutional or domain levels. As previously stated, researchers must trust the scientific validity of automated data gathering and analysis. Some researchers depositing information into a shared repository express concerns that others may discover findings in the deposited data before the original contributors do. This fear may constitute a barrier to voluntary participation in such repositories. For translational research accomplished through collaborations between academia and industry, especially pharmaceutical companies, “such collaborations sometimes can be challenging due to differences between the cultures and priorities of the two parties.”33, p 1685

Project Management

All projects related to system implementation and maintenance require management expertise. Providing such expertise should be an institutional function that supports clinical, research IT, and administrative systems implementation groups. The three cornerstones of good project management are the management of resources, of the schedule, and of the quality of the outcome. Most importantly, good management depends on careful planning for each cornerstone. When one objective changes, plans for the others must be changed accordingly. Scope creep is common during clinical system implementations as demands for more features escalate. A key management function is to prioritize requests for additions as immediate needs for success, as important on an intermediate time frame, or as “nice to have” in some distant future, and to communicate such prioritization back to the original requesters.

Project management is equally critical for research systems implementation, although it is not common, especially in smaller laboratories. Many researchers adopt project management strategies to orchestrate experiments that require novel roles and capabilities not locally available to the laboratory. Policies of granting agencies often require that project management principles be used when purchasing systems with grant funds. The funding of CTSA projects may provide models of excellence for managing future research implementations. Even when resources exist for system implementation within a research laboratory, non-CTSA grants or project funding cycles may not reliably provide resources for ongoing project management during the maintenance phase of system use.

Evaluation and Learning

Continuous improvement through evaluation and learning is needed for clinical systems implementation. Such systems undergo constant modification, as noted previously. Technical and cultural success demands a high level of clinician–user involvement in the entire process, from selection to implementation to modification and growth in levels of decision support. Continuous improvement and refinement are needed for research systems as well, but again, the nature of the system seems to make a difference. A small system to be used by a few people in a laboratory may require refinement and evolution that can be handled by a single skilled individual. A larger, more expensive system to be used by multiple laboratories would require more formal management coordination. Planning that optimizes development, implementation, and availability benefits the researchers using the system.

Issues Specific to Translational Research Systems

Additional studies by thoughtful leaders and informatics researchers must address discovery and evolution of best practices for implementing and using clinical and biomedical research systems. This includes organizational issues related to CTSAs, existing roadblocks to sharing data, and communication issues.

Organizational Issues Related to CTSAs

The caBIG effort has faced major organizational issues, and likely the CTSAs will too.34 The issues related to data sharing are discussed later. The CTSA collaborators may come from different professional cultures and use different jargon-filled vocabularies. The CTSA projects will force changes in the way work is done. Like hospitals implementing clinical systems, universities with CTSA grants may require that users adopt certain systems and share in ways that, although of benefit to the community, may not actually be to the advantage of individual users. Care must be taken to plan and assess how roles will change. It is too early to have learned organizational lessons from the CTSAs because there are as yet no publications reporting results of CTSA-related studies. One must assume that the organizations will need to remain flexible while monitoring the pulse of organizational issues. The focus of the CTSA process on the creation of national committees to foster and disseminate best practices will, it is hoped, diminish the slope of an otherwise steep learning curve as organizations seek to create structures to foster collaboration for translational research.35

Roadblocks to Sharing Data

For the entire spectrum of translational research and related research results, the sharing of data presents new and serious problems.35 Researchers, used to being autonomous and independent, are protective of their data. Nevertheless, health services and public health researchers must access clinical data belonging to others. Organizations are fearful that privacy-related Health Insurance Portability and Accountability Act violations might occur. The potential for large-scale repositories of genomic information coupled with de-identified phenotypic information from the electronic medical record raises potential concerns over the ability to uniquely identify individuals. Lin et al. have shown that “specifying DNA sequence at only 30 to 80 statistically independent SNP positions will uniquely define a single person.”36, p 183 Too much fear can unfortunately impede transfer of information about discoveries back to patient care activities as well.37
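The Lin et al. bound is easy to appreciate with a back-of-the-envelope calculation (ours, for illustration only, not from the cited paper): if SNP genotypes are statistically independent and each observed genotype has an illustrative population frequency of 0.5, then a profile of k positions matches a random person with probability 0.5^k, so roughly 33 positions already make a match essentially unique among the world's population. The function name and frequencies below are hypothetical.

```python
def match_probability(genotype_freqs):
    """Probability that a random individual matches a given genotype
    profile at every position, assuming independent SNPs."""
    p = 1.0
    for f in genotype_freqs:
        p *= f
    return p

# Illustrative assumption: each observed genotype has frequency 0.5,
# i.e., roughly one bit of identifying information per SNP.
p = match_probability([0.5] * 33)
expected_matches = p * 7e9  # approximate world population
print(p, expected_matches)  # profile frequency ~1.2e-10; fewer than 1 expected match
```

With more realistic, skewed genotype frequencies each SNP carries less than one bit, which is why the empirical range extends up to about 80 positions.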

A major roadblock to sharing data, the development of standards, is also a large organizational issue. For example, using the same vocabulary for clinical work at the bedside and for clinical research is difficult because the two activities require different levels of detail. To select data standards, agreements need to be reached between researchers and clinicians at an organizational level, and the clinicians who gather the original data also need to agree to take on the extra workload. To “facilitate the transfer of research findings into routine care, clinical and translational research must employ applicable standards (e.g., identifiers, vocabularies, transactions, security measures).”38, p 6 Standards are greatly needed “to maximize interoperability between internal systems and systems in outside organizations.”38, p 16 Agreements and policies on the secondary use of clinical data will be needed as well if data are to be shared.39

There are few motivators for sharing, however. Institutional review boards have been put in place to protect patient data, and individuals on those boards take their responsibility seriously. Researchers often do not understand the many policies concerning protection of data or the typically lengthy, labor-intensive processes required to approve and provide access to such data. Researchers need the data to complete their projects within the duration of their grants, and may not be allowed to use these data after funding expires. This will be a particularly complex issue for the CTSAs, where initiatives may share data across institutional and state borders. Researchers also encounter roadblocks or resistance at the individual level. Those who generate or “own” data do not want to change the way they have previously dealt with data. These roadblocks can only be lowered through judicious use of conflict management and negotiation skills and (ideally) informed policy. There must be a careful weighing of the needs of researchers and science against the need for regulation. As research cores and services expand, they can become more effective at assisting with collaborations and facilitating, through informatics tools and services, removal of roadblocks. These services could decrease the stress on researchers, provide educational support, facilitate collaboration, and therefore increase motivation for sharing hard-to-obtain data. “A thoughtful implementation of informatics—one that factors in social and organizational nuances—will undoubtedly lead to a more efficient and effective clinical research enterprise.”40, p 327

Communication Issues

Communication about professional and organizational cultures, motivations, and goals across the translational spectrum is always needed. With clinical systems, there may be no such thing as overcommunication.41 Studies are greatly needed to identify the cultural aspects that foster collaboration among laboratories, best practices for clinical research organizations, and optimal ways to disseminate the knowledge gained through research.

The organizational cultures of laboratories differ from those of health care institutions. The main motivators for principal investigators are not patient care oriented; they are motivated by advancing the science.42 Silos, such as individual laboratories that communicate little with others, are common, although these also occur in health care organizations. The definition of success is different in that the aim is to acquire and maintain highly competitive project support (e.g., grant funding or private philanthropy). Laboratory staff members turn over quickly, both because many are trainees and because junior researchers seek independence and their own laboratories. Although the situation is changing, there are few domain-wide motivating forces such as those identified in the Institute of Medicine reports.43–45 The primary forces driving communication change come from granting agencies requiring collaboration, as well as from biological science journals increasingly requiring that researchers submit large structured datasets as a condition of publication of research articles.

The roles of individuals within laboratories are changing as well. Collaborations are starting to be encouraged, necessitating greater interdisciplinary skills to bridge technical and social organizations and organizational cultures. The need for IT for data analysis has precipitated the organizing of informatics-related service cores that can assist in sharing expensive instrumentation as well as analysis. One potential benefit of the increasing development of these resources is that laboratory personnel may begin to receive continuous updates about informatics tools that are relevant to experiments that they are conducting in these facilities, as well as benefit from education associated with these tools and services. Traditional silos are being consolidated, if not broken down.

Conclusion

There is a common and increasing interest in understanding the organizational implications of implementing clinical and research systems. Motivation for such understanding comes from the growing volume of research per se, the advent of new initiatives such as CTSA, and increasing numbers of companies interested in profiting from development and maintenance of research systems. If translational efforts are to succeed in speeding “scientific discovery and its efficient translation to patient care,”5, p 171 then organizational and cultural issues must be addressed directly and openly. Identification of best practices is imperative for collaboration, data sharing, development of standards, preparation of individuals for new roles, and fostering of effective communication across disciplines. The translational research agenda must include investigations to clearly and adequately characterize these issues and to identify strategies for addressing them. Informatics, as a discipline and as a service that can promote clinical, clinical research, and biomedical research productivity, can contribute tremendously to the ultimate success of translational research efforts.

Acknowledgments

The authors thank the members of the American College of Medical Informatics who were present during the discussion.


  • Supported by National Library of Medicine grant LM06942 and training grants T15LM07442 and ASMMI0031.

