

CHAPTER 3
Reporting and Analyzing Patient Safety Incidents



The Universe is made of stories, not of atoms.


Muriel Rukeyser, The Speed of Darkness (1968)



Incident analysis, properly understood, is not a retrospective search for root causes but an attempt to look to the future. In a sense, the particular causes of the incident in question do not matter as they are now in the past. However, the weaknesses of the system revealed are still present and could lead to the next incident.


C.A. Vincent (2004)


To act you have to know

In medicine, both human and veterinary, there is a culture of infallibility, one in which mistakes are unacceptable and made only by bad clinicians. After all, it is difficult to accept an error when a life has been lost or a patient harmed. An effect of this cultural mindset is that error is under-reported and remains an under-recognized and hidden problem. In fact, discussion of error is actively avoided, generally considered taboo and unthinkable, despite the fact that errors occur regularly and will continue to occur.


It has been estimated that as many as 400,000 patients die prematurely in the United States each year as a result of hospital-associated preventable harm (James 2013), and that preventable errors occur in up to 7.2% of hospitalized patients (Baker et al. 2004; Hogan et al. 2012; Kennerly et al. 2014). It seems naively improbable, verging on arrogance, to think that a lower error rate exists in veterinary medicine. The problem is that we just don’t know. In human medicine we are aware of the tip of the iceberg in terms of the impact of errors on patients, while in veterinary medicine we’re sailing along seemingly ignoring the fact that icebergs even exist.


So it is safe to say that we are far behind human medicine and anesthesia when it comes to recognizing and managing error. We have even further to go before we can label veterinary anesthesia as safe, before we can state with confidence that the risk of anesthesia causing preventable and unnecessary harm to our patients is negligible. Our first step is to recognize and accept that errors occur in veterinary medicine and that all of our practices can be made safer. The next task is to establish the extent and nature of the problem by discovering what errors occur, how often, and their true causality. This means we must make an effort to start reporting, analyzing, sharing, and discussing the errors we encounter. At first glance we may consider errors to be mundane, small events without consequence to our patients. But when error-prone conditions or events become aligned, the errors that occur can have a significant adverse impact on patient safety. For this reason we must view each error as a learning opportunity in our efforts to promote patient safety. Reporting and analyzing even basic errors can produce “Eureka!” moments that accelerate learning, understanding, and self-awareness, and give invaluable insight into the systems and processes with which we are involved on a daily basis (Tripp 1993). These insights can be significant catalysts in the process of change (Cope & Watts 2000).


The limitations of only counting errors


Highlighting only the occurrence and frequency of errors, such as with a simple error log, can be useful in some circumstances and may present opportunities for obvious, simple interventions. But there can be shortcomings. For example, at a large teaching hospital, operating room staff members voluntarily recorded errors on a simple log as they occurred (Hofmeister et al. 2014). Over a period of 11½ months the log recorded 20 instances of the pop-off valve being accidentally left closed when setting up the operating room, 16 instances of temporarily unrecognized esophageal intubation, five instances of accidental intra-arterial drug administration, and 20 other medication errors. This was the first time such data had been collected and reported in the veterinary anesthesia literature; it is likely that this frequency of error events is mirrored in veterinary teaching hospitals throughout the world.


As a result of the initial findings, specific checks (“Technician checked OR” and “Technician Confirmed Intubation”) were incorporated into the anesthetic process. In addition, a different color for bandages covering arterial catheters was instituted, and a standard operating procedure (SOP) was created that required patient name, drug name, and route of administration be read aloud prior to administering any drug to an anesthetized patient. Gratifyingly, these interventions led to a 75% reduction in the incidence of pop-off valves being left closed, a 75% reduction in unrecognized esophageal intubation, a 60% decrease in accidental intra-arterial injection, and a 50% decrease in medication error. Case closed! Or is it? Could more be learned about these errors? Surely a reduction to zero should be what we strive for?
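
To make the arithmetic behind such before-and-after comparisons explicit, here is a minimal sketch (in Python) of how a simple error-log tally can be turned into monthly rates and a percentage reduction. The function and variable names are ours, and the counts shown are hypothetical, loosely echoing the pop-off valve figures above rather than reproducing the study’s raw data.

```python
from dataclasses import dataclass


@dataclass
class ErrorTally:
    """One error category counted over an observation period (a simple error-log entry)."""
    category: str
    count: int
    months_observed: float

    @property
    def monthly_rate(self) -> float:
        """Average number of reported events per month."""
        return self.count / self.months_observed


def percent_reduction(before: ErrorTally, after: ErrorTally) -> float:
    """Percentage fall in the monthly reporting rate between two periods."""
    return 100.0 * (1.0 - after.monthly_rate / before.monthly_rate)


# Hypothetical counts for illustration only (assumes equal-length observation periods):
before = ErrorTally("pop-off valve left closed", count=20, months_observed=11.5)
after = ErrorTally("pop-off valve left closed", count=5, months_observed=11.5)
print(f"{percent_reduction(before, after):.0f}% reduction")  # prints: 75% reduction
```

Even this simple calculation shows what a tally alone can and cannot tell us: it captures how often each error surfaced, but says nothing about why it occurred.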


This was obviously a relatively successful outcome based on simple and efficient solutions, but perhaps this approach oversimplified the errors. Superficial analysis of incidents often uncovers only a single source of human error, which in turn often leads to blaming only the fallible individual while failing to recognize that we are all fallible; this approach also ignores the role of the system in the error. This leaves a lot of potentially vital information regarding error causality hidden and not analyzed. For example, assuming that pop-off valves were left shut merely due to human failing (be it lack of concentration, forgetfulness, distractions, etc.) fails to recognize something that has already been established: errors are often rooted in latent conditions within the system.


So, what if we ask: why did this human failing occur? What were the conditions that allowed these errors to occur? Could this approach identify other potential contributing factors? The answer is most definitely yes.


You could ask, does this matter in this case? No harm came to any patients and seemingly effective barriers against the errors are now in place. Perhaps it does matter, but we won’t know unless we fully analyze the errors and their underlying causes. Perhaps the technicians responsible for setting up the breathing systems in the operating room felt rushed because the service was understaffed or because they had been assigned too many tasks and responsibilities. Was there a failure in training? Was there a larger problem in that the entire anesthetic machine in the operating room was not being fully checked (not just the pop-off valves)? The superficial analysis may have worked well to prevent the specific errors that were identified, but the underlying latent factors that caused the errors in the first place still persist and, under different circumstances, will cause errors of a different nature. For example, if the anesthetic machines are not thoroughly checked, then one day an empty auxiliary oxygen cylinder might go unnoticed and leave a patient without oxygen if the oxygen pipeline supply fails. Alternatively, further analysis might have identified why veterinary students had difficulty correctly intubating patients, a finding that could have led to a solution, such as simulator training, that more fully addressed the problem of failed intubations.


How can we learn the most from our errors?


An error report requires thorough analysis in order to uncover the factors that detract from effective task performance, to find latent factors—underlying root causes—that created the environment in which the error could occur, factors that might have been responsible for impairing the performance of the individuals involved. Appropriate analysis helps to discover not only what occurred but also why it occurred. Merely tallying the number of specific errors, for example by using an error log, and then responding to them is insufficient; instead we need to analyze errors and the circumstances surrounding them. To do this we need to stop thinking of an error as a single event and start thinking of it as an “incident.” Viewing an error as an incident moves away from the idea that it is a single, spontaneously occurring event and toward the view that it is the manifestation of a series of events and latent conditions that have evolved over time under a set of circumstances in a specific environment. Viewing an error as an incident—a chain of events—means that we have to create a far more complex account of errors; the most natural of these accounts is the “error narrative.”


The importance of narrative


A narrative is an account of events (an incident or story) over time; it relates people to the places, objects, and actions involved in an incident, but also recounts their reasoning, feelings, beliefs, and theories at the time of the incident, albeit often retrospectively. A good narrative report should provide the context of the incident described (basically who, what, where, when, and how), thus allowing a reader or listener to hypothesize about the reasons why the incident happened. As such, a narrative is more than a factual list of physical events; it outlines both cause(s) and effect(s) and also provides a psychological overview of those who were involved. Developing a narrative is a natural form of human communication, one from which we learn well, perhaps more so than from other modes of learning, such as logical-scientific communication or deductive reasoning (Betsch et al. 2011; Dahlstrom 2014; Winterbottom et al. 2008). But why? Surely we can learn all we need to know from a listing of the facts of the incident that occurred? Well, no! As already discussed, it is not just about the events, but also about the human factors involved in an incident, those factors that affected the cognitive and physical performance of those involved, the entirety of the context within which the incident occurred. This is much more complex and requires more thought and processing. So why does a narrative help?


Narrative has been demonstrated to be an effective tool for understanding and learning because it allows more complex cognitive processing. This depth of cognitive processing has been attributed to two properties (Gerrig 1993): (1) transportation of the reader or listener to another time and place in a manner so compelling it appears real; and (2) performance of the narrative in the reader’s or listener’s mind, living the experience by drawing inferences and experiencing it through empathy. This has been shown experimentally using functional magnetic resonance imaging (fMRI) to map the brain activity of storytellers and listeners (Stephens et al. 2010). During narrative storytelling the listener’s brain activity becomes coupled, spatially and temporally, with that of the speaker, albeit with a small delay. This phenomenon—“speaker-listener neural coupling”—may be a fundamental method the brain uses to convey information, make meaning of it, and bring understanding of the world (Wells 1986). In the field of patient safety a rich narrative report is considered the only method capable of providing a full enough account of an incident to allow the complex conditions and processes that contributed to the event to be properly communicated and analyzed (Cook et al. 1998).


There are several methods by which a narrative report can be made. The most common are open discussions in the form of focus groups (such as morbidity and mortality rounds—M&Ms), interview techniques (including the critical incident technique—CIT), and voluntary reporting.


Focus groups: morbidity and mortality rounds (M&Ms)


Although not an incident reporting and analysis method, morbidity and mortality rounds can be a useful starting point for identifying and combating error within a hospital or practice. These rounds are focus groups brought together following an incident of patient morbidity or mortality, and are generally used as part of a practice’s clinical audit and governance process. As such they are a means for promoting transparency concerning an organization’s safety climate and for raising everyone’s awareness of patient safety through open discussions on patient management and safety issues. They may be convened after specific incidents, or may recur on a scheduled basis.


The goal is to promote dialogue as to what went well and what did not, and what could be done differently on a specific case or set of cases involving errors or adverse outcomes. Unfortunately, morbidity and mortality rounds are usually only performed when a patient suffers serious harm or when there is an internal or external complaint made regarding management of a patient. In these situations case analysis can become a superficial process and a forum for criticism, a finger-pointing exercise with simplistic answers that often focus only on the person at the “sharp end.” Unfortunately, once blame is apportioned and simple remedial action is taken, the analysis stops and the system goes on as usual.


However, when performed well, discussions often highlight failings within systems as well as ways in which overall case management could be improved. Cases are best presented following the timeline of the patient’s progression through the hospital. Every member of staff involved with the case should be given the opportunity to describe their involvement and their interpretation of the events surrounding the case. An error or adverse incident should be considered an emotional event for the people involved and handled accordingly. Intense feelings such as anger, regret, frustration, helplessness, embarrassment, and guilt can be triggered in those involved. These emotions can be externalized in a variety of ways, leading to an emotionally charged discussion (Cope & Watts 2000). For these reasons all enquiries and dialogue should be conducted respectfully and empathetically, mindful of the sensibilities of those involved.


The case description should be followed by a reflective discussion, open to input from the floor. This allows a multifaceted interpretation of the events surrounding the case. Some method for allowing input from all who wish to be heard is required, as too often it is the more senior and vocal members of the team whose opinions are heard, with the more junior members left as non-participatory observers. For these sessions to be successful, proper leadership and a neutral, non-confrontational, non-judgmental approach are required. The leadership role ideally should be filled by someone respected by all parties involved in the case, someone generally considered to be fair, calm, and unbiased during conflict. The discussion moderator should be willing to step in and redirect discussions when they digress or become accusatory or aggressive.


When managed well, morbidity and mortality rounds are recognized as being an important platform to explore, disseminate, and address in a timely manner system issues that contribute to errors and adverse incidents. However, many participants may be unwilling to share their thoughts in such an open forum. In such situations private interviews may be more appropriate.


Interview techniques


Private interview techniques are an alternative approach to morbidity and mortality rounds. In general, a senior staff member informally discusses the incident with each individual member of the team involved with the case. This approach avoids individuals feeling the pressure of having an audience of peers listening to their successes and failings, and is one that encourages a more honest and less defensive appraisal of the incident. However, private interviews reduce the learning experience for the rest of the team. Sometimes individuals feel more threatened and intimidated when separated from the team and as a result feel less empowered to speak freely as they no longer have the support of their peers.


Another problem is that interviews may be biased by the interviewer’s point of view, a bias that may direct the interview along a specific path. To work successfully, this type of analysis is better performed as part of a more structured interview, such as the critical incident technique.


Critical incident technique (CIT)


The critical incident technique is a qualitative research method with its origins in job analysis as performed by industrial and organizational psychologists. It sets out to solve practical problems using broad psychological principles. The technique is based on firsthand reports of the incident, including the manner and environment in which the task was executed. Information is traditionally gathered in face-to-face interviews. During an interview, respondents are simply asked to recall specific events from their own perspective, using their own terms and language. Questions such as: “What happened during the event, including what led up to it and what followed it?”, “What did they do?”, and “Tell me what you were thinking at the time” are typically used to start the interview. As such the critical incident technique is not constrained by direct questioning or preconceptions of what factors in the incident were important to the respondent. As a result the interviewee is free to give a full range of responses without bias being introduced by the interviewer.


Although formally introduced by John C. Flanagan in 1954 (Flanagan 1954), the critical incident technique had its roots in aviation during World War II, when procedures for selecting pilots were investigated, specifically to determine why pilot candidates failed to learn to fly. Findings revealed that all too often analyses of pilot candidates were based on clichés and stereotypes such as “poor judgment,” “lack of inherent ability,” and “unsuitable temperament” (Flanagan 1954), but other specific behaviors were consistently reported and became the basis for ongoing research into pilot candidate selection. This research led to better methods for collecting data and became “the first large scale systematic attempt to gather specific incidents of effective or ineffective behavior with respect to a designated activity” (Flanagan 1954).


After the war, some of the psychologists involved in that program established the American Institute for Research (AIR) with the aim of systematically studying human behavior (Flanagan 1954). It was through the Institute that Flanagan formally developed the CIT. It was used initially in aviation to determine critical requirements for the work of United States Air Force officers and commercial airline pilots. Subsequently, the critical incident technique was expanded to establish critical requirements for naval research personnel, air traffic controllers, workers at General Motors Corporation, and even dentistry (Flanagan 1954). The latter, although not generally recognized as such at the time, was probably the first application of this technique in a medical discipline.


When Flanagan introduced this technique he stated that it “was…very effective in obtaining information from individuals concerning their own errors, from subordinates concerning errors of their superiors, from supervisors with respect to their subordinates, and also from participants with respect to co-participants” (Flanagan 1954).


Critical incident technique in anesthesia


The first documented suggestion to apply the critical incident technique to the practice of anesthesia was made in 1971 by Blum in a letter to the journal Anesthesiology (Blum 1971). Blum suggested the need to apply human factors and ergonomic principles when designing anesthetic equipment because human perception and reaction can influence the effectiveness of the “man-machine system.”


In 1978, Cooper reported the results of a modified critical incident technique, which he called critical incident analysis, used to perform a retrospective analysis of human error and equipment failure in anesthesia (Cooper et al. 1978). Information was obtained by interviewing anesthesiologists and asking them to describe preventable incidents they had observed or participated in that involved either human error or equipment malfunction. An event was classified as a critical incident when it fulfilled the following four criteria:



  1. It involved an error by a team member or a malfunctioning piece of equipment.
  2. The patient was under the care of an anesthetist.
  3. It could be described in detail by someone who was involved with or observed the incident.
  4. It was clearly preventable.

The interviewers were allowed to elicit details of the event through the use of generalized, prompting questions where needed, but they were not allowed to suggest any particular occurrence. Information was captured and organized into 23 categories (Table 3.1) (Cooper et al. 1978).


Table 3.1 Twenty-three major categories of information derived through interviews with anesthesiologists who had observed or participated in preventable incidents involving either human error or equipment malfunction.









Major categories of information


  1. Error or failure
  2. Location of incident
  3. Date of incident
  4. Time of day
  5. Hospital location
  6. Patient condition before the incident
  7. OR scheduling
  8. Length of OR procedure
  9. OR procedure
  10. Anesthetic technique
  11. Associated factors
  12. Immediate consequence to patient
  13. Secondary consequence to patient
  14. Who discovered incident
  15. Who discovered incident cause
  16. Discovery delay
  17. Correction delay
  18. Discovery of cause of delay
  19. Individual responsible for incident
  20. Involvement of interviewee
  21. Interviewee experience at time of interview
  22. Related incidents
  23. Important side comments

From Cooper, J.B., et al. (1978) Preventable anesthesia mishaps: a study of human factors. Anesthesiology 49: 399–406. With permission of the publisher.


The results gave a fascinating insight into an area of anesthesia that until then had remained unexplored. Cooper found that human error was involved in 82% of the preventable incidents while equipment failure was involved in only 14% of the incidents. Forty-four different predisposing factors were identified (the most common are listed in Table 3.2), including haste, fatigue and distraction, poor labeling of drugs, inadequate supervision, and poor communication.


Table 3.2 The most common predisposing factors for errors in anesthesia in order of reported frequency (count; % frequency rounded to whole number).









Predisposing factors


  1. Inadequate total experience (77; 16%)
  2. Inadequate familiarity with equipment/device (45; 9%)
  3. Poor communication with team, lab, etc. (27; 6%)
  4. Haste (26; 5%)
  5. Inattention/carelessness (26; 5%)
  6. Fatigue (24; 5%)
  7. Excessive dependency on other personnel (24; 5%)
  8. Failure to perform a normal check (22; 5%)
  9. Training or experience including other factors (22; 5%)
  10. Supervisor not present enough (18; 4%)
  11. Environment or colleagues—other factors (18; 4%)
  12. Visual field restricted (17; 4%)
  13. Mental or physical including other factors (16; 3%)
  14. Inadequate familiarity with surgical procedure (14; 3%)
  15. Distraction (13; 3%)
  16. Poor labeling of controls, drugs, etc. (12; 2%)
  17. Supervision—other factors (12; 2%)
  18. Situation precluded normal precautions (10; 2%)
  19. Inadequate familiarity with anesthetic technique (10; 2%)
  20. Teaching activity under way (9; 2%)
  21. Apprehension (8; 2%)
  22. Emergency case (6; 1%)
  23. Demanding or difficult case (6; 1%)
  24. Boredom (5; 1%)
  25. Nature of activity—other factors (5; 1%)
  26. Insufficient preparation (3; 1%)
  27. Slow procedure (3; 1%)
  28. Other (3; 1%)

From Cooper, J.B., et al. (1978) Preventable anesthesia mishaps: a study of human factors. Anesthesiology 49: 399–406. With permission of the publisher.


This study is recognized as being innovative in medicine and pivotal in driving forward the patient safety movement in anesthesia (Cullen et al. 2000), and did so long before the publication of the Institute of Medicine’s “To Err is Human” report in 2000. In fact the methods and results reported are still relevant and have become the basis of incident reporting systems in anesthesia today.


Voluntary reporting systems


Voluntary reporting systems are the most commonly used method in human medicine for error and patient safety incident analysis. When analyzed and managed properly voluntary reports are considered an effective method for inducing behavioral change in healthcare teams (Garrouste-Orgeas et al. 2012).


A number of vital components make up a good voluntary report (see Table 3.3). Probably the most important factor is a free text section in which the reporter outlines a narrative chain of events. An effective error reporting system encourages the reporter to provide a comprehensive and structured narrative that facilitates later analysis and investigations. This narrative should form a detailed description of what occurred and how it deviated significantly, either positively or negatively, from what is normal or expected (Edvardsson 1992).


Table 3.3 Characteristics of an effective web-based voluntary reporting system that help ensure incidents are reported appropriately.








  • Easy to find and widely accessible

    • One button access from local systems
    • Common website address for national systems
    • Links from all hospital computers
    • Accessible from home

  • Easy to enter case information

    • Simplicity
    • Pre-populated patient data
    • Intuitive flow of data entry
    • Menu driven
    • Checkbox data entry
    • Reactive logic, to hide irrelevant fields
    • Single narrative text box
    • No mandatory elements

  • Data elements and definitions created by consensus process


  • Assured confidentiality

    • Legal disclaimer at front
    • Transparency about who will see report

  • Anonymous data entry

    • Collection into appropriately structured database
    • Transparent schema allowing sorting under multiple classification systems
    • Search capability for finding and reviewing free text items

  • Visible use of data to improve patient safety

    • Publication of de-identified case reports and narratives
    • Publication of aggregated reports and trends
    • Sharing of aggregate data with outside stakeholders

From: Dutton, R.P. (2014) Improving safety through incident reporting. Current Anesthesiology Reports 4: 84–89. With permission of the publisher.


The primary aim of the narrative is to define the incident fully so that it can be properly analyzed. Reporters should be encouraged to reflect critically upon the incident, questioning the actions and involvement of all the individuals involved, alongside the local practices, processes, and procedures (Tripp 1993). Reporters should be asked to identify critical requirements for success that were not carried out and the reasons behind these omissions. These reasons should include the attitudes, behaviors, knowledge, or skills of the individuals involved; the work environment; any problems with teamwork or communication; and any actions and inactions that occurred. As a consequence, the perceptions and awareness of the reporter are an important aspect of this section, and the structure of the report should not influence, lead, or bias the reporter. The report should seek to gather information in the same manner as the critical incident technique. A report should also gather other background information about the incident that lends itself to the analytical framework used to analyze the incident. The types of background data commonly collected alongside the narrative report, illustrated in the sketch that follows the list below, include:



  • Location where the incident occurred.
  • Timing of the incident (date and time).
  • Information about the person reporting (e.g., their profession and role in the healthcare system).
  • Any actions taken as a result of the incident.
  • Patient outcome.
  • Patient details.
  • Mitigating circumstances.
  • More specific enquiries about the root causes.
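
As a rough illustration of how these elements might fit together, the sketch below models a single voluntary report as a simple data structure: a free-text narrative at its core, supported by optional background fields. This is only a sketch under the assumptions noted in the comments; the class, field, and method names are hypothetical and are not taken from any published reporting system.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import List, Optional


@dataclass
class IncidentReport:
    """A hypothetical voluntary incident report: a single free-text narrative
    supported by optional background data. No field other than the narrative
    is mandatory, echoing the characteristics listed in Table 3.3."""
    narrative: str                                    # who, what, where, when, how, and thoughts at the time
    location: Optional[str] = None                    # where the incident occurred
    occurred_at: Optional[datetime] = None            # date and time of the incident
    reporter_role: Optional[str] = None               # e.g., anesthetist, technician, student
    actions_taken: Optional[str] = None               # actions taken as a result of the incident
    patient_outcome: Optional[str] = None             # e.g., no harm, harm, death
    patient_details: Optional[str] = None             # species, signalment, etc.
    mitigating_circumstances: Optional[str] = None
    suspected_root_causes: List[str] = field(default_factory=list)

    def deidentified(self) -> dict:
        """Fields suitable for aggregate publication; patient details are omitted,
        and the narrative itself would still need manual de-identification."""
        return {
            "narrative": self.narrative,
            "location": self.location,
            "reporter_role": self.reporter_role,
            "patient_outcome": self.patient_outcome,
            "suspected_root_causes": self.suspected_root_causes,
        }
```

In a web-based system of the kind described in Table 3.3, the background fields would typically be offered as menus and checkboxes, the narrative would remain a single free-text box, and reports would be collected into a searchable database for later analysis and aggregate publication.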
