Objective To expand an emerging classification for problems with health information technology (HIT) using reports submitted to the US Food and Drug Administration Manufacturer and User Facility Device Experience (MAUDE) database.
Design HIT events submitted to MAUDE were retrieved using a standardized search strategy. Using an emerging classification with 32 categories of HIT problems, a subset of relevant events was iteratively analyzed to identify new categories. Two coders then independently classified the remaining events into one or more categories. Free-text descriptions were analyzed to identify the consequences of events.
Measurements Descriptive statistics by number of reported problems per category and by consequence; inter-rater reliability analysis using the κ statistic for the major categories and consequences.
Results A search of 899 768 reports from January 2008 to July 2010 yielded 1100 reports about HIT. After removing duplicate and unrelated reports, 678 reports describing 436 events remained. The authors identified four new categories describing problems with software functionality, system configuration, interface with devices, and network configuration, expanding their original 32-category classification of HIT problems. Examination of the 436 events revealed 712 problems, of which 96% were machine-related and 4% were problems at the human–computer interface. Almost half (46%) of the events related to hazardous circumstances. Of the 46 events (11%) associated with patient harm, four deaths were linked to HIT problems (0.9% of 436 events).
Conclusions Only 0.1% of the MAUDE reports searched were related to HIT. Nevertheless, Food and Drug Administration reports did prove to be a useful new source of information about the nature of software problems and their safety implications with potential to inform strategies for safe design and implementation.
Keywords: equipment failure analysis; medical errors/statistics and numerical data
Health information technology (HIT) has the potential to deliver great benefits but, if poorly designed or implemented, poses a risk to patient safety.1–5 HIT is broadly defined to include ‘hardware or software that is used to electronically create, maintain, analyze, store, receive (information), or otherwise aid in the diagnosis, cure, mitigation, treatment, or prevention of disease, and that is not an integral part of (1) an implantable device or (2) medical equipment’.6 The complexity involved in safely designing, implementing, and using such systems is increasingly being recognized as they proliferate across the health system.7,8 In 2011, HIT was listed among the top 10 technology-related hazards identified by the Emergency Care Research Institute.9
Strategies to minimize the risks of HIT need to be based upon a proper understanding of the nature of problems encountered, their contributing factors, and their safety implications.10 As in other patient safety domains (eg, falls, medication errors) there is no single source of information about HIT problems. A range of information sources, including record reviews, root cause analyses, and observational studies are required (see appendix A, supplementary material at www.jamia.org).11,12 Reports on patient safety incidents are a valuable source because they facilitate rapid communication about emerging problems13,14 and have been proposed as one of seven steps to improve safety.13,14 A definition of a patient safety incident is ‘an event or circumstance which could have resulted, or did result, in unnecessary harm to a patient’.15 In this study we focus on patient safety incidents involving problems with HIT or ‘HIT events.’
Although incident reports cannot be used to examine the frequency of HIT problems, and are subject to bias from a number of sources,16 they provide information about the profile of problems, contributing factors, and consequences so that the most safety-critical problems can be identified.17 The identification of problems proceeds by examining different sources until saturation for new problem types occurs. We previously identified 32 categories of HIT problems using reports submitted by health professionals to a state-wide incident-reporting system in Australia.18 While that analysis provided some insight into the nature of HIT problems, the dataset was limited to only one of the many possible sources of information about emerging problems.
Another potential source of information about HIT problems comprises reports about equipment failures and hazards submitted by users and vendors.19 One such source is the US Food and Drug Administration (FDA) Manufacturer and User Facility Device Experience (MAUDE) database, which contains reports of events involving medical devices.20 As part of FDA regulatory requirements, manufacturers in the USA are required to report medical device malfunctions and problems leading to serious injury and death. MAUDE has been in use since the early 1990s and received around 600 000 reports in 2010. At present, there is considerable debate in the USA about the FDA's role in regulating HIT.10 Under the Federal Food, Drug, and Cosmetic Act, HIT is a medical device.1 However, the FDA does not currently enforce its regulatory requirements with respect to HIT. Nevertheless, some manufacturers have voluntarily listed their systems, and the FDA has received reports of events involving HIT. In February 2010, the FDA reported receiving 260 events in the previous 2 years involving a range of HIT and related devices; these were linked to 44 injuries and six deaths.1 While the MAUDE database has been used previously to examine the safety of medical devices such as infusion pumps21,22 and pacemakers,23 there has been limited exploration of its utility for understanding HIT problems.24 In this study, we set out to systematically search and analyze the events submitted to MAUDE, aiming to better understand the nature of these problems and to expand our HIT safety classification.
We searched 899 768 reports of events submitted to MAUDE from January 2008 to July 2010. For each event, the MAUDE database contains a master record that is linked to free-text descriptions provided by reporters (appendix B, supplementary material at www.jamia.org). Multiple reports belonging to the same event are linked by an internally generated key. Reports about HIT were retrieved by searching the free-text descriptions, as well as the device brand name, generic name, and manufacturer. We generated keywords to describe computer hardware, software, or displays based upon our previous analysis18 and knowledge of the systems in use (appendix C, supplementary material at www.jamia.org). The internally generated key was then used to link reports belonging to the same event.
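The retrieval and linking steps described above can be sketched as follows. This is a minimal illustration over simplified report records: the field names (`event_text`, `event_key`, and so on) and the keyword list are illustrative stand-ins, not MAUDE's actual schema or the study's full keyword set (given in appendix C).

```python
# Minimal sketch of keyword retrieval and event linking over simplified
# report records. Field names and keywords are illustrative, not MAUDE's
# actual schema or the study's full search strategy.

HIT_KEYWORDS = {"software", "computer", "interface", "display", "network", "server"}

def is_hit_report(report: dict) -> bool:
    """Flag a report if any keyword appears in its free text, brand name,
    generic name, or manufacturer fields (all matched case-insensitively)."""
    searchable = " ".join(
        report.get(field, "")
        for field in ("event_text", "brand_name", "generic_name", "manufacturer")
    ).lower()
    return any(kw in searchable for kw in HIT_KEYWORDS)

def group_by_event(reports: list[dict]) -> dict[str, list[dict]]:
    """Link reports belonging to the same event via the shared internal key."""
    events: dict[str, list[dict]] = {}
    for report in reports:
        events.setdefault(report["event_key"], []).append(report)
    return events
```

Grouping by the internal key is the step that reduces the retained reports to distinct events (here, 678 reports to 436 events).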
The reports were categorized using our earlier classification of 32 types of HIT problems (figure 1).18 Events were first classified as human- or machine-related, and then subdivided based upon problems at the point of information entry (input), transfer (transfer), or retrieval (output) (figure 1). A ‘general technical’ category accounted for hardware and software issues that did not fit into these categories. A category of ‘contributing factors' accounted for socio-technical contextual variables contributing to HIT events, such as multitasking while using HIT. We also examined the free-text descriptions of 26% of the events to identify any new ‘natural categories’.19 Based upon this analysis, the category for computer software in the previous classification (4.4 in figure 1) was expanded.
Revised classification for health information technology problems (new categories for software problems are underlined).
Two of the investigators (FM, MO) classified the remaining 74% of events using the expanded classification. To allow comparison with our previous analysis of reports from the Advanced Incident Management System (AIMS), we used the same categories to examine consequences (harm to a patient (adverse event); arrested or interrupted sequence (near miss); event with noticeable consequence but no harm; event with no noticeable consequence; hazardous event or circumstance; complaint; loss). Free-text descriptions were examined to identify problems and assess the direct consequences of an event. More than one category could be chosen for an event if multiple problems were identified. An inter-rater reliability analysis using the κ statistic was performed to determine consistency among coders.25 If an event was assigned to more than one category, the primary classification or problem type (the one most directly related to any actual or potential consequences) was used for the κ score calculations. When coders disagreed on a classification, the event was re-examined and a consensus category assigned. The inter-rater reliability for the primary classification was κ=0.84 (95% CI 0.80 to 0.88; p<0.001).25 The inter-rater reliability for classification of the direct consequence was κ=0.90 (95% CI 0.87 to 0.94; p<0.001). Descriptive analyses were undertaken for all events to examine the distribution of events by category and consequences.
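The chance-corrected agreement reported above can be computed directly from the two coders' primary classifications. Below is a minimal sketch of Cohen's κ, the statistic used here, without the confidence-interval and p-value machinery:

```python
from collections import Counter

def cohens_kappa(coder_a: list, coder_b: list) -> float:
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance from each coder's marginal category frequencies."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # proportion of events on which the two coders agree
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # chance agreement from the product of marginal frequencies
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```

Values in the 0.8–0.9 range, as obtained in this study, are conventionally interpreted as near-perfect agreement.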
Our search retrieved 1100 reports. We removed duplicates, those with inadequate descriptions of the events, and those that did not relate to HIT (422), leaving 678 reports describing 436 events (appendix D, supplementary material at www.jamia.org).
Classification of HIT problems and consequences
Examination of the 436 events revealed 712 problems. Of these, 682 (96%) were machine-related, and 30 (4%) were problems at the human–computer interface (ie, involving human–computer interaction26). The percentages for each category are listed in table 1, together with a comparison of those from our previous study.18 We identified four new categories to expand the previous classification (4.4 in figure 1) to include specific problems with software: functionality (4.4.1)27; system configuration (4.4.2); interface with devices (4.4.3); and network configuration (4.4.4). The consequences of a problem were available for 99% of the events (n=432). The majority of events related to hazardous circumstances (46%) or had no noticeable consequence (32%). While 11% of events were associated with an adverse event, 10% had a noticeable consequence but were not associated with harm. In the following sections, we provide a description of problems by primary classification.
↵* New categories for software problems are underlined.
Information input problems
Data capture down or unavailable
The 41 machine-related information input problems (6%; 1.1 in table 1) predominantly related to delays and failures in capturing data from imaging devices to picture archiving and communication systems (PACS). In addition to failures in reading images into the PACS, images were reported to be: lost, distorted, or flipped (left and right markers reversed); transmitted in incorrect order; stored under the wrong patient's folder; and exchanged with another patient's images en route. Failures of hardware components, such as touch screens, keyboards, connectors, and hard disks, were also reported; software issues included interface, virus, networking, and server problems.
While radiologists were reported to work around such problems by generating reports directly from the imaging device (eg, ultrasound; n=35), in most cases the imaging procedure was repeated resulting in some patients being re-exposed to radiation (n=7). Problems with barcode readers resulted in administration of a wrong dose of a medication on one occasion and in corruption of patient records on another. In a third case, incorrect identifiers for the patient and sample were reported to be the outcome of an attempted workaround to manually enter data into laboratory information systems when a barcode reader failed.
Use errors15 in entering information and manipulating records accounted for 23 problems (3%; 1.2.1 in table 1). These involved selecting or entering information into computerized physician order entry (CPOE) and laboratory information systems. Errors in identifying patient records and selecting orders (eg, right patient, right medication, right dose, right route) were largely attributed to poor user interface design, including screen layouts and formats (n=17). Poor design of the CPOE user interface contributed to selection of the wrong test for the right patient and vice versa, resulting in incorrect tests, misadministration of radiation, medication errors, and the wrong patient being sent to surgery. For example, one CPOE system required users to scroll through 225 options on a drop-down menu; the options were arranged counter-intuitively in alphabetical order, resulting in a patient receiving four times the intended dose of digoxin. In another case, a test using radioactive tracers was erroneously ordered for the wrong patient; because of poor usability of the user interface, the patient received a radioactive injection. Small font and poor visibility of the medication strength were associated with a 10-fold overdose of epinephrine, resulting in myocardial infarction. A cancer patient being looked after by multiple doctors was overdosed on a cocktail of anticoagulants: a combination of enoxaparin, unfractionated heparin by continuous infusion, warfarin, and aspirin.
Use errors were also attributed to mismatches between the system and clinical workflow. A CPOE user interface that did not provide medication doses in milligrams was associated with administration of three times the maximum dose of Tylenol-oxycodone in 24 h, resulting in acute renal failure and death. In another case, duplicate medication orders were associated with multiple order sets stored in ‘20 electronic silos’; a patient was infused with total parenteral nutrition and concentrated dextrose, causing pulmonary edema.
Errors15 in entering information were associated with serious consequences. In one case, a technician mistakenly entered the date of birth of a baby instead of the study date, making a chest x-ray appear older than it was. A radiologist subsequently viewed the image for peripherally inserted central catheter (PICC line) placement. Seeing that the comparison image did not show the line, the radiologist concluded that it had been removed. Unfortunately, the line had been placed too far into the infant, and the premature baby died. In another case, entry of a portable x-ray image into a PACS under the wrong name resulted in a wrong diagnosis and subsequent intubation, which may have contributed to the patient's death.
Use errors in manipulating patient data also had significant consequences (1.2 in table 1). The incorrect merging of artificial test record data with a live patient database was associated with one near miss: a patient who was wrongly scheduled for surgery was detected by a technician. Another case involved surgery on the wrong patient after a user merged incorrect data and mistakenly rejected the original images after the merge. A third example was the mistaken deletion of original x-ray images stored in a PACS; because an automatic archive facility had failed intermittently, the imaging procedure had to be repeated, exposing the patient to the additional risks of repeat imaging.
Information transfer problems
Only a small proportion of problems related to the transfer of information (2%, n=13). Hospital-wide network problems and unplanned breakdowns of CPOE systems were reported to impact clinical tasks. For example, when updates to existing prescriptions were not communicated to the pharmacy, patients received incorrect medications or correct medications at the wrong frequencies. System integration issues also meant that results were not appropriately inserted into laboratory information systems. Events were occasionally associated with patient harm. In one case, a hospital-wide breakdown of the CPOE system delayed postsurgery treatment, resulting in a permanent musculoskeletal disability. In another, a patient died when a network problem in the PACS delayed transmission of images to a remote site for diagnosis.
Ongoing system maintenance and updates to records were also problematic. For instance, manually entered allergy information was overwritten during an automatic update of the hospital system because of improper database configuration. A patient was then given the wrong medication, resulting in an allergic reaction. Problems with identification and merging of patient records were also reported (n=9). For example, two patient files with the same first and last names were incorrectly merged due to inconsistencies in the user interface.
Information-output problems (29%, n=208) largely related to inaccuracies in the display of results from PACS (eg, CT, plain x-ray; 3.3 in table 1). In addition to manipulation difficulties (eg, scrolling through multiple images) and failures in displaying results (eg, images not displayed), hazards involving PACS related to display errors, for example, the wrong patient and/or the wrong record being retrieved. For instance, a CT of the chest, abdomen, and pelvis was ordered, but the results of an MRI of the spine were posted. Software problems with PACS also prevented doctors from accessing prior examinations in long-term archives needed to evaluate the course of diseases such as cancer.
Output/display errors with PACS
Imaging study date and time
PACS were reported to incorrectly present the date and time of data entry as the study date and time, with potential for misdiagnosis because of inaccuracies varying from minutes to years. In certain configurations, the time of acquisition was not displayed with the image, causing users to select the wrong image, or the PACS defaulted to displaying the oldest study after activation of the patient record. In one near miss, inaccurate dates presented by a PACS led a neuro-radiologist to conclude that the disease had diminished when in fact it had progressed; the error was subsequently detected by another clinician. Another patient was reported to re-present with a widely spread cancer after a radiologist was wrongly shown an image that was 2 years old.
Markers and image orientation
The display of image markers and orientation of images were also reported to be problematic. Image markers and reference flags were not displayed, or were unclear or incorrectly displayed, by PACS (n=13). For instance, a surgeon was reported to have operated on the wrong side of a patient's head when the left and right orientation markers were swapped.
Displaying images on third-party applications
Problems with data accuracy were associated with displaying images on other applications. CT and MR images tended to be incorrectly oriented or ‘flipped’ when displayed on a separate diagnostic image review and analysis workstation. For instance, mammography images were not correctly displayed on a third-party PACS, and images were obscured by text that was incorrectly displayed and did not rotate when an attempt was made to rotate the image. Another web-based PACS was reported to randomly change the orientation of CT images.
Output to electronic media and printing were also problematic. For example, critical information (ie, hemodynamics and conclusions from nuclear imaging data) which was displayed and stored on a PACS was not included in a report generated by a system. In another event, mammograms printed from a PACS were not the actual size, although stated as such on the film. These events were associated with significant consequences. In one case, CT sinus images that reverted to their original orientation when sent from a PACS to a CD burner led a surgeon to operate on the wrong side.
Integration of PACS with clinical workflow
Other output problems with PACS related to mismatches with the clinical workflow (n=18). Examples include PACS displaying the wrong comparison exam; not displaying the current exam after a comparison exam was viewed; and displaying the notes about a previous exam or for the wrong patient. When consulting other records for reference while dictating notes, PACS prevented users from returning to the dictation window or displayed wrong patient information in the screen header. Inconsistency in image manipulation and unexplained changes in the orientation of images halfway through a series were also reported to be hazardous. One PACS was reported to flip only half the images selected by a technician to be flipped.
Data consistency within and across clinical applications
Consistency between markers and the orientation of images was also reported to be problematic. For example, image review and analysis workstations displayed mammograms with wrong view positions, although markers were correctly displayed. In another case, lumbar spine images displayed by a PACS were flipped, though left and right markers were not. The use of unconventional display formats was also reported to be a hazard. For example, an axial image displayed by a PACS was flipped horizontally and not in conventional display format.
Synchronization and consistency of patient information reported across multiple windows within and between software systems were also reported to be problematic. For example, a PACS concurrently displayed information belonging to two patients or displayed images of the previous patient when more than one viewing window was open. Another PACS was reported to be out of synch with the radiology information system, displaying records about a different patient. Consistency in patient information within single windows was also problematic, such as when headers did not match the rest of the information displayed.
Switching from one system mode to another also proved to be a hazard because of the lack of consistency in the patient information displayed—for example, a PACS showed a different patient's image when switching from the display mode to the edit mode. Browser configuration issues also contributed to the display of incorrect patient information (eg, a PACS displayed cached images from a previous patient). Problems with viewing and transferring information meant that some procedures (eg, neurosurgical) were performed without 3D imaging data.
Output/display errors with CPOE, EMR, and laboratory information systems
As with PACS, information output problems associated with CPOE, EMR, and laboratory information systems generally related to the display of information, sometimes leading to serious consequences (3.3 in table 1). For example, a patient experienced an allergic reaction when a CPOE did not display allergy information. An unacceptably long time lag in the response of an EMR caused a user to click multiple times, inadvertently signing documents and viewing messages without being aware of having done so.
Problems also related to the display of incomplete patient information, or information displayed in incorrect locations. Incomplete work lists were associated with missed orders, for example, display of a list of medications not for the current day but for the following 24 h period. In other cases, CPOE results were incomplete or not displayed in a recipient's in-box, causing delays in follow-up. Prescription orders that did not appear in the work folder led to a 3-day delay in the administration of medications; a patient with an ulcer subsequently required emergency gastrectomy. Random failure of a CPOE to display a postoperative order to discontinue fluid in a nurse's task list resulted in a patient being overloaded with fluid. Another failure to display a free-text update to an existing order to hold insulin at night resulted in a patient becoming hypoglycemic with severe symptoms.
Unclear display formats were also reported to be problematic, for example, a system displaying new medication and resuscitation orders alongside old orders that were not easily distinguishable from each other. In other cases, CPOE medication lists did not distinguish the form of medications; if a patient was taking multiple forms of the same medication, it was not clear which was which. The display and rounding of pediatric doses were also reported to be hazardous (eg, desmopressin 0.025 ml=0.1 μg is rounded to 0.03 ml=0.12 μg, a 20% increase in dose). One near miss involving a 10-fold overdose of insulin was associated with a system that did not display volumes <0.01 ml, requiring nurses to anticipate the problem and calculate volumes by hand.
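The desmopressin figures above are reproducible arithmetically. The sketch below assumes a concentration of 4 μg/ml (implied by 0.025 ml = 0.1 μg) and conventional half-up rounding of the volume to the 0.01 ml display precision; it illustrates the hazard and is not any vendor's actual dosing algorithm.

```python
from decimal import Decimal, ROUND_HALF_UP

def delivered_dose_ug(ordered_dose_ug: str, concentration_ug_per_ml: str,
                      display_precision_ml: str = "0.01") -> Decimal:
    """Dose actually delivered when the computed volume is rounded
    half-up to the display precision before administration.
    Illustrative only; concentration and rounding rule are assumptions."""
    exact_volume_ml = Decimal(ordered_dose_ug) / Decimal(concentration_ug_per_ml)
    rounded_volume_ml = exact_volume_ml.quantize(
        Decimal(display_precision_ml), rounding=ROUND_HALF_UP)
    return rounded_volume_ml * Decimal(concentration_ug_per_ml)
```

For the ordered 0.1 μg dose, the 0.025 ml volume rounds to 0.03 ml, so 0.12 μg is delivered, the 20% overdose noted in the report.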
Data consistency within and across clinical applications
As with PACS, unexplained changes, inaccuracies, and inconsistencies between printed reports and CPOE, EMR, or laboratory information system display were related to system integration and upgrade issues. For example, laboratory information systems displayed a total protein result as an albumin result; failed to send critical (panic level) results to a phone list; printed results from two different patients under the information of one patient; and allowed printing of unsigned reports while they were being revised by the pathologist, resulting in two reports with different diagnoses being sent to the referring doctor.
Display of alerts
Software issues also led to problems with displaying alerts. For example: (1) blood-bank software did not alert users about mismatches in blood groups; (2) an EMR incorrectly flagged a heart rate as abnormal; and (3) a laboratory information system did not display low reference flags in HL7 messages. Errors in system configuration, faulty updates, or a lack of system rules following upgrades also led to inaccuracies and alert failures. For instance, a laboratory information system was reported to display the incorrect name and test order because a software driver was out of date; blood-bank software was reported to display incorrect messages when data did not match rules defined in the system database. The lack of error checking had serious consequences. For example, an ambiguous stress test order allowed by a CPOE led to a life-threatening acute asthma attack, and entry of invalid dates was associated with the prescription of a medication that had caused an adverse event on a previous admission and should have been avoided.
Sixteen percent (n=117) of problems related to hardware issues, including the poor performance or failure of handheld computers. A further 41% (n=294) of problems related to software: (1) functionality (n=229), (2) configuration (n=24), (3) interface with devices (n=40), and (4) network configuration (n=1). We expanded our classification to include these new categories and summarize these types of software problems with PACS, CPOE, and laboratory information systems in the following sections.
PACS functionality and configuration
These problems related to inadequate software functionality to carry out clinical tasks (n=85; 4.4.1 in table 1). PACS did not reliably support the review and interpretation of imaging studies. Software was reported to (1) incorrectly mark images that were viewed for comparison as dictated; (2) attach dictation notes to the wrong exam; (3) reject the wrong images (ie, images were no longer available for interpretation and review); (4) record an incorrect time of acquisition for radiographs, affecting medical care; and (5) corrupt the database when changing modalities and saving an exam. Updates to imaging studies were also problematic. PACS did not support correction of reports and were noted to (1) overwrite the original report when an addendum was saved; (2) overwrite notes with those of another patient; (3) incorrectly merge new studies with existing studies; and (4) produce incomplete reports when edited manually (eg, manually added information resulting from ‘worksheet interactions' was reported to be missing).
PACS display problems were associated with local configuration issues and linked to software upgrades (n=9; 4.4.2 in table 1). For example, a PACS was reported to move radiologists' annotations to another part of the image if the window/level was adjusted, and angiography studies skipped a portion of the view when rotated on a diagnostic image review and analysis workstation. Configuration issues also led to display problems. Image dimensions were incorrect because of a lack of on-site configuration or failure to follow the vendor's instructions; for example, the minus sign was not displayed for images with a negative angle. Another PACS displayed the wrong image because images were stored in an incorrect location due to a configuration error. A third PACS displayed old results as new and vice versa because its settings had reverted to the default configuration; this was reported to result in unnecessary surgery.
Functionality of blood-bank software and laboratory information systems
Software problems also impacted blood bank and laboratory information systems (n=39; 4.4.1 in table 1). Blood-bank software: (1) did not allow review of transfusion history, blood type, and antibodies after a service pack was loaded; (2) wrongly deleted all audit history for processed units when information was cleared from the work space used to process units; (3) allowed a new search without clearing the contents of a previous search, resulting in records being incorrectly updated; and (4) wrongly allowed users to issue a blood product without completing an antibody identification test. Software problems caused laboratory information systems to assign results to the incorrect record of the correct patient and automatically accept flagged results into the system.
CPOE functionality and configuration
The problems with software functionality of CPOE primarily related to the updating of orders (n=52; 4.4.1, 4.4.2 in table 1). CPOE systems (1) did not present procedures and tests as a standardized list; (2) did not adequately distinguish the range of preparations available for a medication in the list presented to users; (3) were unable to handle variable-dose medications (eg, when venlafaxine was ordered as 150 mg daily at 09:00 and 75 mg daily at 14:00, the nurse task list showed 225 mg to be administered at 09:00); (4) produced orders with an incorrect dose when the available formulation did not match the required dose; (5) did not consistently present information (eg, morning and evening doses appeared correctly on the doctor's screen, but both appeared to be due at 09:00 on the nurse's screen); (6) did not support discontinuation and modification of orders (eg, a doctor unable to discontinue an order left a paper note that was not seen by the nurses, and three patients erroneously continued to receive antibiotics); (7) did not support reconciliation of completed orders; (8) were reported to convert inpatient medications to outpatient prescriptions; (9) spuriously canceled orders; (10) did not transfer orders; and (11) were reported to maintain two separate identifiers for the same patient, causing results to be assigned to the wrong records, which were not visible in the doctor's list, disrupting and delaying treatment.
Inadequate functionality of CPOE software (poor usability of the user interface and poor configuration of CPOE with clinical workflow) impacted clinical tasks resulting in the duplication of tests, medication orders, and treatments. For instance, CPOE systems that did not present a clear list of current medications and treatments, and systems that did not separate pre- and postoperative orders and results were reported to be error prone. A clean postoperative abdomen was wrongly irrigated based on a preoperative order. In such systems, postoperative orders were entered by deleting active orders that were no longer needed. Users reported having to read ‘up to 300 lines of active orders,’ which was not always done. In one case, a patient's medication list was reported to list duplicates and triplicates of five medications.
Poor functionality of CPOE software had serious consequences, including: (1) a ‘missed opportunity to diagnose and treat life-threatening disease, contributing to death’; (2) delay in the diagnosis of chest organ cancer in more than six patients because specimens were not analyzed as ordered and cytology results were not available; (3) delay in the management of a neurologic infection; (4) failure to execute an order for a patient with a transcutaneous pacemaker, with life-threatening consequences; and (5) transfer of an at-risk patient without a heart monitor.
Multiple users not supported
Some software did not adequately support concurrent use by multiple users. In one case, administration of critical medications was delayed when they could not be ordered on a CPOE system because patients' files were being viewed in the pharmacy. Requests were attached to the wrong patients when a laboratory information system had multiple users. In other cases, blood-bank software allowed multiple users to concurrently edit the same shipment order; software problems caused studies in a PACS to be locked (even though they were not being reviewed), delaying interpretations.
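One defensive design against these record-locking problems is to make viewing non-blocking and serialize only the writes themselves. A minimal sketch, assuming a simple in-memory record (a hypothetical design, not any vendor's implementation):

```python
import threading

class PatientRecord:
    """Illustrative record wrapper: viewing never blocks ordering,
    and only concurrent edits are serialized."""

    def __init__(self, data):
        self._data = dict(data)
        self._edit_lock = threading.Lock()

    def view(self):
        # Read-only snapshot: a pharmacist viewing the record
        # must not prevent a doctor from placing an order.
        return dict(self._data)

    def add_order(self, order):
        # Exclusive lock held only for the brief write itself.
        with self._edit_lock:
            self._data.setdefault("orders", []).append(order)

record = PatientRecord({"name": "example patient"})
snapshot = record.view()          # pharmacy views the record...
record.add_order("vancomycin")    # ...ordering still succeeds
```

A production system would need persistent storage and conflict resolution for concurrent edits of the same field, but the principle is the same: a view should never hold a lock that blocks a critical medication order.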
Learning from event reports
We have previously published a classification for HIT problems comprising 32 categories17 and foreshadowed the need to expand this classification using other sources of information (such as device registries, closed claims, and complaints databases).18 To this end, we downloaded and analyzed nearly 900 000 MAUDE reports, and identified 712 HIT problems in 436 events reported over a 30-month period. Although only 0.1% of nearly 900 000 MAUDE reports were related to HIT, these did prove to be a useful source of information about the nature of software problems and their safety implications. Of the 46 events (11%) associated with patient harm, four deaths were associated with HIT problems (0.9% of 436 events). The reports to MAUDE involving patient harm or death provide a timely reminder that HIT problems need to be taken seriously.
The expanded classification for HIT problems provides a clinically useful, comprehensive means of eliciting information about, and collating and classifying, HIT events. With additional categories for software problems, mechanisms can be established to improve reporting by better eliciting information from reporters; we will then be better able to characterize software problems and to provide a basis for designing corrective and preventive strategies. Identifying the natural categories of safety events is the method that was used to create the AIMS classification, which is the starting point for the International Classification for Patient Safety framework.12,19 Our approach is supported by Reason's Swiss Cheese model.28 While the sequence of HIT problems leading to patient harm may be unique to a specific event, the common types of problems are probably finite and, when characterized and systematically addressed, can lead to improvements in patient safety. We propose that a further health incident type for the International Classification for Patient Safety be developed for HIT problems, incorporating the new categories identified here (figure 1). HIT problems identified in other sources, such as medicolegal and complaints databases, also need to be examined to obtain further insights, and users and vendors should be further encouraged to report problems. The US Common Formats have recently created a category to facilitate the reporting of HIT events.29
A previous study of HIT events reported to the FDA used the FDA's online search facility to examine reports about clinical information systems by manufacturer, but yielded only 120 reports from 1984 onwards.24 In contrast, for this study we elected first to download all reports without filtering, and then to search across the report set using our own methods rather than the FDA interface. We linked reports belonging to the same event and searched the free-text narratives for events involving a broad range of stand-alone software, including PACS and blood bank systems. This led to a far larger yield than the previous study, with 436 events identified. As we have shown previously, voluntary incident reports from health professionals contain a large proportion of events involving human–computer interface problems, and many of these constitute hazardous circumstances with no immediate consequences for patients. In our previous analysis of HIT events, delays in clinical tasks were a major consequence of machine-related problems (70%), whereas re-work was a major consequence of human–computer interface problems (78%). About one-third of all events in both our previous HIT study and this one had no noticeable consequence, but a far greater percentage of events were associated with harm in the present study. A detailed analysis of the events which caused harm or death has been presented elsewhere.30
We have previously speculated on the low percentage of HIT events in voluntary reporting systems: possible reasons include that these systems are not specifically designed to capture HIT problems, and that health professionals may have low expectations of the reliability of computers and IT systems and regard some problems as ‘business-as-usual’ and not worth reporting.18 Similar factors may have limited reporting to MAUDE. Currently, there is no requirement to report HIT events to the FDA, which solicits reports about problems with the machines themselves rather than about human–computer interface problems. This is reflected in the fact that 96% of the problems reported in this study were machine-related, and only 4% were problems at the human–computer interface. MAUDE reports often provided good descriptions of technical issues and rich information about the types of software problems encountered. While the incidents reported by health professionals provided some useful information of relevance to training and workflow, the MAUDE reports provide insights into how software and hardware systems are failing, with the potential to build in safeguards and to set standards for the future design of systems.
New categories for software problems
We identified four new categories to describe software problems and expanded a classification of HIT problems we had developed previously. The new categories account for problems with: (1) software functionality: match of the software user interface and functions to tasks; (2) software configuration: site specific clinical implementation (eg, local rules for decision support) and technical maintenance (eg, updates and application of patches); (3) software interface with devices; and (4) network configuration: implementation of software on local networks.
Software issues were the most prevalent problem, accounting for more than 40% of the events identified. This is hardly surprising, given the increased use of software to manage patient data and clinical workflow. The most common problem was patient misidentification caused by deficiencies in the software, typically in the display of patient records. Clinicians could often be viewing multiple patient records, and information was sometimes entered into, or read from, the wrong record. Consequently, a wrong diagnosis was assigned or a wrong procedure undertaken, potentially resulting in patient harm. Another common software fault was incorrect orientation of the images displayed, resulting in a procedure being performed on the wrong side of the patient.
Unlike hardware where failure modes are generally well defined, software problems are much harder to find and eliminate. Our analysis reveals four categories of software problems, and some recommendations for mitigating them through safe design and implementation are provided in box 1. As software evolves, and new technology is introduced, different challenges will arise. Continuous monitoring and assessment of these systems are critical in ensuring safety is not compromised. Our findings further emphasize the urgency of re-examining regulatory requirements for HIT.10
Some recommendations for the safe design and implementation of software
1. Ensure system model matches clinical workflow requirements and use model (eg, PACS allows user to safely compare images from previous exams; measures are in place to communicate critical results if doctor or team is expecting to be notified).
2. Patient information is accurate:
a. Right patient (adequate measures are in place to ensure accurate identification of patient and records, eg, identification should not rely solely on first and last names, and date of birth).
b. Right record (eg, results should be assigned to correct patient AND correct record).
c. Creation of new records should be controlled (eg, a system should not maintain multiple files for the same patient).
3. Patient information is clearly identified and consistent:
a. Within a window.
b. Across windows (especially if records belonging to more than one patient can be viewed simultaneously).
c. Across systems (eg, information about patients across two clinical applications should be consistent).
d. Printed reports are accurate and complete (eg, critical information is not omitted in reports produced by a system).
4. Display of images is accurate and consistent:
c. Through a series (eg, images should not be flipped half-way through a series).
5. User interface supports safe human–computer interactions (eg, a drop-down box should not contain 24 options).
6. Local configuration of software and rules are maintained, especially following upgrades to software and/or an operating system.
7. Rules for decision support are documented and reviewed in a timely manner.
Software interface with devices
8. Ensure accuracy and consistency of information at hardware/software interfaces.
9. Ensure a network is reliable and available (eg, no black spots in a wireless network).
10. Scheduled and unscheduled interruptions to service must trigger a switch to a reliable, up-to-date backup system.
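The recommendation that patient identification should not rely solely on names and date of birth can be illustrated with a small matching function; the field names and matching rule here are assumptions for illustration only, not a validated algorithm:

```python
def same_patient(rec_a, rec_b):
    """Match on an institution-assigned identifier (eg, MRN) with
    name/DOB as a cross-check -- never on name and DOB alone.

    Hypothetical illustration: a matching MRN with conflicting
    demographics returns False here; in practice such a conflict
    should be flagged for human review rather than silently merged.
    """
    if rec_a["mrn"] != rec_b["mrn"]:
        return False
    # Cross-check demographics against the shared identifier.
    return (
        rec_a["name"].lower() == rec_b["name"].lower()
        and rec_a["dob"] == rec_b["dob"]
    )

a = {"mrn": "100234", "name": "Jane Doe", "dob": "1970-01-02"}
b = {"mrn": "100234", "name": "JANE DOE", "dob": "1970-01-02"}
c = {"mrn": "100999", "name": "Jane Doe", "dob": "1970-01-02"}
```

Matching on two same-named patients with the same date of birth is exactly the failure mode the recommendation warns against; requiring agreement on an assigned identifier plus a demographic cross-check guards against both wrong-patient and wrong-record assignment.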
While a range of functionality and configuration options are available for most commercial software to accommodate local clinical conditions and workflows, the availability of multiple use models within and across clinical applications is a risk to patient safety. For example, the option within a CPOE to send a result to a doctor's inbox or make it available as part of the patient record increases the likelihood of results being missed. Similarly, inconsistencies in the use model for different types of imaging orders (eg, x-ray, CT) within the same organization may also contribute to use errors.
Our findings underline the importance of safe interactions at key interfaces to ensure patient information is accurate and consistent: (1) human–computer; (2) software–hardware (ie, data capture and output); (3) software–software (ie, between clinical applications). Along with designing safer user interfaces focusing on features for the safe entry and retrieval of clinical information, software design must also include error checking and redundancy to ensure reliable and accurate data transfer across software and hardware interfaces.
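Error checking at a software–hardware or software–software interface can be as simple as attaching a checksum to each transferred message so that corruption is detected and rejected rather than silently propagated into the record. A minimal sketch using CRC-32 (illustrative only; real interfaces such as HL7 over TCP layer further safeguards on top):

```python
import zlib

def encode_message(payload: bytes) -> bytes:
    """Append a CRC-32 so the receiving system can detect
    corruption in transit across an interface."""
    crc = zlib.crc32(payload)
    return payload + crc.to_bytes(4, "big")

def decode_message(message: bytes) -> bytes:
    payload, crc_bytes = message[:-4], message[-4:]
    if zlib.crc32(payload) != int.from_bytes(crc_bytes, "big"):
        # Reject rather than pass corrupted clinical data onward.
        raise ValueError("checksum mismatch: data corrupted in transfer")
    return payload

msg = encode_message(b"K+ 4.1 mmol/L")
```

The design choice worth noting is that a failed check raises an error at the interface, forcing retransmission or review, instead of delivering a plausible-looking but wrong value to the clinician.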
Comparison with events reported by health professionals
Unlike events reported by health professionals in our previous study, which related to the direct impact of HIT problems in delaying clinical tasks and causing rework, events from the MAUDE database predominantly described technical issues and provided richer descriptions of the types of software problems encountered. Contributing factors were not available, and there were fewer problems at the human–computer interface (MAUDE=4% vs AIMS=45%). Although fewer events related to HIT (MAUDE=0.1% vs AIMS=0.2%), there was greater reporting of events in which patients were harmed (MAUDE=11% vs AIMS=3%). Despite the arrival of newer technologies (eg, handheld devices), a comparable proportion of events related to problems with desktop computers (MAUDE=16% vs AIMS=11%). As reports are not a true representation of the frequency of HIT events, we do not know whether differences in reporting represent a genuine difference in the types of events or, more likely, a selection bias among reporters who identify the type of event they think is worth reporting to the FDA.
Limitations of MAUDE data
The HIT events we studied involved systems voluntarily listed by their vendors with the FDA. Current regulatory requirements, which mandate the reporting of medical device malfunction, serious injury, and death, are not enforced with respect to HIT. The events we examined are therefore unlikely to be representative of all systems. Another limitation is that the reports, most likely submitted by software vendors or IT staff, reflect the expertise of the reporter, with all the inherent limitations of such a system, such as a bias toward reporting events which appear interesting or unusual.16 However, the events analyzed were reported over a significant period, providing data about the nature of software problems and for expanding our HIT classification.
Only 0.1% of reports in the FDA's MAUDE database were related to HIT. Reports about equipment failure and hazards submitted by users and vendors are a useful source of information about the nature and safety implications of problems associated with software functionality, system configuration, interface with devices, and network configuration. Strategies for the safe design and implementation of software must focus on matching the user interface and functions to clinical tasks, as well as configuration to local clinical and technical conditions.
This research is supported in part by grants from the Australian Research Council (LP0775532, DP0772487) and the National Health and Medical Research Council (Program Grant 568612, Project Grant 630583).
Provenance and peer review
Not commissioned; externally peer reviewed.
The authors wish to thank S Anthony for his assistance with extracting the MAUDE database.
AMIA Board of Directors. Challenges in ethics, safety, best practices, and oversight regarding HIT vendors, their customers, and patients: a report of an AMIA special task force. J Am Med Inform Assoc 2011;18:77–81.
An integrated framework for safety, quality and risk management: an information and incident management system based on a universal patient safety classification. Qual Saf Health Care 2006;15(Suppl 1):i82–90.
Making information technology a team player in safety: the case of infusion devices findings. In: Henriksen K, Battles JB, Marks ES, et al., eds. Advances in Patient Safety: From Research to Implementation (Volume 1: Research Findings). Rockville, MD: Agency for Healthcare Research and Quality (US), 2005.
Evaluating and predicting patient safety for medical devices with integral information technology and methodology. In: Henriksen K, Battles JB, Marks ES, et al., eds. Advances in Patient Safety: From Research to Implementation (Volume 2: Concepts and Methodology). Rockville, MD: Agency for Healthcare Research and Quality (US), 2005.