15
Accident and Error Management


Daniel Pang


Department of Veterinary Clinical and Diagnostic Sciences, Faculty of Veterinary Medicine, University of Calgary, 3280 Hospital Drive NW, Calgary, Alberta, T2N 4Z6, Canada


Introduction


Errors, and their potential to lead to an adverse event, are common (see Table 15.1 for terminology). They have been described as part of the human condition, reflecting a normal aspect of cognitive function (Allnutt 2002; Reason 2008). If we accept that errors can occur in any system in which humans participate, we can consider how to create safer systems and organizations.


Awareness of error in human medicine received widespread attention with the publication of the Institute of Medicine’s report “To Err is Human: Building a Safer Health System” (Kohn et al. 2000). This report marked a shift away from a culture of blame, moving from a focus on individual contributions to errors toward improving the systems in which humans work. To put it simply, systems should be designed “that make it hard for people to do the wrong thing and easy for people to do the right thing” (Kohn et al. 2000). Anesthesiologists were at the forefront of this shift in attitude in medicine, embracing a systems approach to preventing errors (Cooper et al. 1984, 2002). Such systems approaches were originally developed to minimize errors in complex organizations with the potential for catastrophic failures, such as the nuclear power and aviation industries. Similarities have been drawn between the roles of anesthetists and airline pilots: both groups face an array of complex equipment providing large quantities of information in real time, upon which decisions must be made that are often time sensitive and taken under pressure (Allnutt 2002; Gaba et al. 2003; Ludders and McMillan 2017; Reason 2008). Some authors go further, proposing that managing anesthesia is more complex because of the additional dynamics of a patient (variable clinical presentations and inherent biological variability) and the frequent requirement to perform simultaneous teaching and organizational tasks (Helmreich 2000; Reason 2008).


This chapter begins with an overview of error theory. It will compare traditional and current approaches to understanding the circumstances that lead to adverse events. This foundation forms the basis of a structured approach to adverse event analysis and prevention, with the goal of helping readers improve the systems in which they work. Finally, case examples from equine anesthesia are used to illustrate how accidents happen, alongside strategies for dealing with their consequences. While adverse events can have a considerable emotional and physical impact on patients and personnel, it is important to appreciate that the large majority of errors that result in an adverse event are preventable (Arbous et al. 2001; Cooper et al. 1984; Gibbs et al. 2017; Kohn et al. 2000; Macintosh 1949). This gives hope that the current situation can be improved through discussion, analysis, and learning from our errors.


Table 15.1 Definition of terms used in this chapter.


References: Ludders and McMillan (2017), Reason (2000, 2008), Runciman et al. (2009), and Wiegmann and Shappell (2005).


Active failures: Unsafe acts, including errors and violations (deliberate deviations from standard practice).

Adverse event (adverse incident, harmful incident, accident): An action or event that caused harm to a patient (or personnel). Harm is variably defined but often recognized as changing the course of patient management, prolonged hospital stay, long‐term disability, or death.

Error: Performance or activity that fails to achieve the intended outcome. Errors may be further classified as skill‐based, decision‐based, or perceptual‐based.

Error wisdom: The ability of people on the frontline to recognize situations in which an error is likely.

High‐reliability organization (HRO): An organization defined by the characteristics of successfully managing complex technologies under time pressure and to a high standard, achieved with a very low incidence of failure. Such organizations are alert to the possibility and prevention of failure. Prototypical examples are air traffic control, nuclear power plants, and aircraft carriers.

Human Factors Analysis and Classification System (HFACS): An accident or adverse event investigation tool originally developed for the aviation industry. It is based on Reason’s Swiss cheese accident model.

Patient safety incident: An event that could have resulted, or did result, in unnecessary harm.

Latent conditions: Factors within a system that may predispose to an adverse event (e.g. work culture supporting dangerous practices, inadequate supervision of trainees, poorly functioning equipment).

Near miss incident: An incident (deviation from standard care) that did not result in harm to a patient (or personnel) as a result of timely intervention or chance.

Anesthetic Mortality and Error


A recent systematic review and meta‐analysis showed significant and important decreases in peri‐operative human anesthesia‐related mortality internationally over the last five decades, despite an increase in baseline American Society of Anesthesiologists (ASA) status (Bainbridge et al. 2012). Mortality solely attributed to anesthesia has decreased 10‐fold to around 3.4 per 100 000. Nevertheless, it has been estimated that over 70% of anesthesia‐related deaths are preventable (Arbous et al. 2001; Cooper et al. 1984). In contrast to the relatively low incidence of anesthesia‐related death, the incidence of adverse events in human medicine is as high as 4% in hospitalized patients, with the majority associated with human error (Cooper et al. 1984; Gawande et al. 1999; Kohn et al. 2000). The veterinary literature regarding adverse events and near misses is extremely limited, with one study documenting a rate of 3.6% (n = 74/2028 anesthetized patients) in a university hospital anesthesia service (Hofmeister et al. 2014). Promisingly, this was reduced to 1.4% following changes in practice. A recent study of errors and adverse events (not limited to anesthesia) from two university hospitals (large and small animal) and a private referral and emergency hospital found a 15% incidence of adverse events (n = 560 incident reports analyzed) (Wallis et al. 2019).
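The rates quoted above are simple proportions of adverse events among anesthetized or hospitalized patients. As a minimal sketch (not taken from any of the cited studies), the snippet below shows how such a proportion and an approximate 95% confidence interval can be computed, using the 74/2028 figure from Hofmeister et al. (2014) as the worked input.

```python
import math

def incidence_with_wilson_ci(events: int, total: int, z: float = 1.96):
    """Return the incidence proportion and an approximate 95% Wilson confidence interval."""
    p = events / total
    denom = 1 + z**2 / total
    centre = (p + z**2 / (2 * total)) / denom
    half_width = (z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))) / denom
    return p, (centre - half_width, centre + half_width)

# Worked example using the adverse event/near miss count reported by Hofmeister et al. (2014)
rate, (lower, upper) = incidence_with_wilson_ci(74, 2028)
print(f"Incidence: {rate:.1%} (95% CI {lower:.1%}-{upper:.1%})")  # approximately 3.6% (2.9%-4.6%)
```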


In addition to errors themselves, the complexity of anesthesia provision can magnify the consequences of errors. Administering anesthesia requires numerous interactions between anesthetist, patient, personnel, and equipment (Gaba et al. 1987; Reason 2005; Sutcliffe 2011). When these interactions are “loosely” coupled, change in one component has slow or minimal effects on its connecting component (e.g. a misunderstanding between services in case scheduling). Conversely, “tight” coupling allows little margin for error so that change in one component has a rapid or major impact on another (e.g. the rapid and potentially disastrous consequences of an unobserved breathing system disconnection) (Gaba et al. 1987). Furthermore, the complexity of interactions in complex systems can make it difficult to predict effects and outcome of changes between different components of a system. Extending the example of case scheduling, the time to clarify and reorganize case order for one service may lead to the late start of a higher risk case for another service so that case ends after hours, when key resources (experienced anesthesia and post‐operative care personnel) may be limited.


Peri‐operative Morbidity and Mortality in Horses


The occurrence of adverse events has been acknowledged in equine anesthesia and identified as an important area for development (Hartnack et al. 2013; Johnston et al. 2002). The Confidential Enquiry into Perioperative Equine Fatalities (CEPEF), a prospective observational epidemiological multi‐center study, found a 7‐day post‐anesthesia mortality rate of 0.9% (out of 35 435 non‐colic surgeries, 95% confidence interval 0.8–1.0%) (Johnston et al. 2002). Although the incidence of adverse events in equine anesthesia (other than mortality) is unknown, Johnston et al. (2002) identified several factors in which error could play a role. These include time of day (odds ratio for midnight to 6 a.m. of 7.6, 95% confidence interval (CI) 2.2–26.7; odds ratio for 6 p.m. to midnight of 2.2, 95% CI 1.3–3.5) and day of the week (odds ratio for weekends of 1.5, 95% CI 1.0–2.3). The study suggested that fatigue and staffing levels could contribute to the risk associated with out‐of‐hours procedures (see the discussion on unsafe acts in the following text).
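For readers unfamiliar with how odds ratios such as these are derived, the generic calculation from a 2 × 2 table (exposure vs. no exposure, death vs. survival) is shown below. This is the standard epidemiological formula with its usual approximate confidence interval, not the specific multivariable model fitted by Johnston et al. (2002).

```latex
% Odds ratio from a 2x2 table with cell counts:
%   a = deaths with the exposure (e.g. anesthesia started between midnight and 6 a.m.)
%   b = survivors with the exposure
%   c = deaths in the reference group, d = survivors in the reference group
\mathrm{OR} = \frac{a/b}{c/d} = \frac{ad}{bc},
\qquad
95\%\ \mathrm{CI} = \exp\!\left(\ln(\mathrm{OR}) \pm 1.96\sqrt{\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}}\right)
```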


Error Theory


The likelihood of errors, and the possibility that they result in adverse events, is predicated on two fundamental principles: (i) humans are error‐prone, and (ii) conditions within the systems in which humans work often increase the risk of errors. These principles underlie the current anesthesia (and aviation) safety literature. By understanding these two principles, we are better able to recognize how and why errors occur and to develop strategies for their prevention (Allnutt 2002; Reason 2000, 2005, 2008).


Error Analysis


There are three broad approaches to understanding, analyzing, and preventing errors: (i) a person‐based approach, (ii) a systems‐based approach, and (iii) a combined approach (Reason 2000, 2008).


Person‐Based Approach


This is the traditional and current model in the majority of veterinary clinics and hospitals, in which the search for the cause of an adverse event begins and ends with the person closest to the event. This is rational in the sense that within complex systems, such as anesthesia, human error accounts for the majority (>70%) of adverse events or near misses (Arbous et al. 2001; Cooper et al. 1984, 2002). The usual outcomes of this approach are to “blame and train” (to blame the individual and enforce additional training) or to “blame and shame” (Pang et al. 2018; Wiegmann and Shappell 2005). This approach conveniently ignores that errors are often made by highly trained, well‐intentioned individuals, and it fails to account for the system and organizational influences that predispose to the final act of human error (Allnutt 2002; Reason 2000, 2005, 2008; Wiegmann and Shappell 2005). As described by Ludders and McMillan, the human is “the final common pathway for an error, thrust there by a flawed system” (Ludders and McMillan 2017).


Human error is unavoidable; it is therefore better to accept the fallibility of humans and act to minimize the factors that contribute to errors occurring, or the severity of their consequences (Allnutt 2002; Kohn et al. 2000; Reason 2000). Focusing on human error and blame has been likened to swatting mosquitoes: they can be swatted individually (and doing so may be satisfying), but this is an ineffective long‐term strategy. To eliminate mosquitoes, their source, the swamp, must be drained. In this analogy, the swamp represents the conditions within a system (“latent conditions,” see below) that predispose it to error, and draining it represents the act of identifying and removing these conditions (Reason 2005). Furthermore, allocating resources to control individuals (additional training, stricter rules and guidelines, etc.) is only successful when the individuals concerned are performing below the expected standard (those particularly error‐prone, inexperienced, unmotivated, or badly trained) (Reason 2005). Recognizing this, it is logical to allocate resources to improving the system, an approach that yields long‐term improvements.


Systems‐Based Approach


Developed and successfully implemented by the nuclear power and aviation industries, systems‐based approaches encompass humans (and their errors) alongside the systems in which they work (Allnutt 2002; Cooper et al. 1984, 2002; Kohn et al. 2000; Reason 2000). This holistic approach recognizes and accounts for human error but, more importantly, places these errors in the context of underlying systems factors, called latent conditions, that contribute to the frequency, likelihood, and consequences of error. Latent conditions comprise organizational influences, unsafe supervision, and preconditions for unsafe acts.


This is not to say that all responsibility for error lies with the “system,” absolving humans of all responsibility (Reason 2008; Wiegmann and Shappell 2005). Reviewing the processes by which errors occur reveals that personal responsibility and error wisdom have a role to play. A more nuanced position is to recognize that while humans are central to the commission of errors, they are also critical in the prevention of errors or recovery from errors (Reason 2008).


The ability of humans to react and adapt is a valued characteristic and a defining feature of high‐reliability organizations (HROs). This is exemplified by situations in which humans compensate successfully during complex technical procedures with a high risk of errors. Notably, study of such procedures, such as pediatric cardiac surgery, has also revealed that the ability to compensate for errors, even minor ones, is limited, i.e. resilience to error is finite (Carthey et al. 2003; De Leval et al. 2000; Reason 2000, 2008). Monitoring of human efforts is important, not only to identify unsafe acts and their underlying causes, but also to identify whether humans are compensating unnecessarily for a flawed system.


Combining Person‐ and Systems‐Based Approaches


Current frameworks for the study of errors and accidents strive for a balance between systems and person models. In anesthesia, these frameworks are based on the understanding that unsafe acts come from people in contact with a patient, but that local and organizational factors create the conditions for errors.


James Reason, a psychologist whose work is synonymous with the study of organizational safety, developed the Swiss cheese model of accidents (Reason 2008). Through its various iterations, it has become the most influential accident model in use (Figure 15.1) (Reason 2000, 2005, 2008). Reason’s Swiss cheese model describes barriers, defenses, and safeguards through which an accident trajectory may penetrate to cause an adverse event. Each defensive layer represents a different factor within the system: organizational influences, unsafe supervision, preconditions for unsafe acts, and unsafe acts. Ideally, each layer would be solid, devoid of weaknesses; however, this does not reflect the reality of complex systems such as anesthesia. When enough weaknesses (holes) align, a series of events, each of which may be minor or innocuous when considered alone, culminates in an adverse event. These holes are usually dynamic, emerging, disappearing, and shifting as individual factors change, and are classified as latent conditions or active failures. Latent conditions are resident within a system, in place because of design or organizational decisions. They are usually present for some time and can sometimes, but not always, be identified before they contribute to an adverse event. In contrast, active failures are unsafe acts, committed by a person and usually representing the tipping point that leads directly to an adverse event. Most adverse events result from a combination of latent conditions and active failures.


Figure 15.1 The Swiss cheese model describing barriers, defenses, and safeguards through which an accident trajectory must pass to cause an adverse event. Note: not all layers (of cheese) need to be involved for an adverse event to occur.


Human Factors Analysis and Classification System


Building on Reason’s Swiss cheese model, the Human Factors Analysis and Classification System (HFACS) was developed to define the holes within the model layers, with the goal of aiding accident investigation (Wiegmann and Shappell 2005). As such, the HFACS is a framework for analysis and investigation, comprising four broad categories representing latent conditions and active failures (Figure 15.2) (Diller et al. 2014; Wiegmann and Shappell 2005). The HFACS should be viewed as an aid to facilitate a complete exploration of factors contributing to adverse events, rather than an exhaustive list. The HFACS is presented here in the order in which it would be applied during an adverse event investigation, i.e. beginning with the action most closely associated with the event, usually an unsafe act, and working through each level in turn, up to organizational influences.


Unsafe Acts


Unsafe acts have been variably categorized as slips, lapses, mistakes, errors (skill‐based, decision, and perceptual), and violations (Diller et al. 2014; Elbardissi et al. 2007; Reason 2005, 2008; Wiegmann and Shappell 2005). This confusing nomenclature reflects different systems for classifying overlapping behaviors. Using the terminology of the HFACS, there are three basic error types: skill‐based, decision, and perceptual errors (Figure 15.2).


Figure 15.2 The Human Factors Analysis and Classification System (HFACS) for aiding accident investigation.


A skill‐based error results from a failure of attention or memory when performing a routine, highly automated task, often in familiar circumstances. Examples of skill‐based errors are common, e.g. not flushing an IV extension line before connection to an IV catheter, not giving antimicrobials (where indicated) before surgery begins, or placing oxygen tubing intranasally during recovery but failing to turn on the oxygen flowmeter. Skill‐based errors were identified in over 50% of adverse events in a hospital setting (Diller et al. 2014). Medication errors can often be categorized as skill‐based errors as they represent a routine task that is susceptible to attention or memory failure (Cooper et al. 1984; Mahajan 2010). They are often described as “wrong drug,” “wrong dose,” “wrong route,” “wrong time” (Kohn et al. 2000). Medication errors were identified as the most common error type in a veterinary study, contributing to 54% of 560 reported incidents, and “wrong dose” was the most common example (58%) of medication error (Wallis et al. 2019).


A decision error occurs when an action goes as planned but the plan itself is inadequate. A normally good plan may be misapplied because of time pressure (procedural error), a bad plan may be applied through an error of judgment or training (poor choice error), or a faulty plan may be applied in the face of a novel (or poorly understood) situation (problem‐solving error). Decision errors are extremely common, present in over 90% of adverse events (Diller et al. 2014). Not surprisingly, stage of training, experience, and supervision can play an important role in preventing or limiting these errors. Developing new plans in the face of novelty is prone to error because of limited mental resources, an incomplete or incorrect understanding of the situation, and a tendency to fixate on a hypothesis while ignoring contradictory information. Unlike skill‐based errors, decision errors are associated with tasks involving conscious effort and are susceptible to cognitive bias.


A perceptual error occurs when sensory input is limited (e.g. an alarm is silenced) or incorrect (e.g. a visual illusion) and an incorrect response leads to an error. For example, trying to insert a hypodermic needle into an injection port by feel, so as to avoid displacing surgical drapes and potentially disrupting a procedure, rather than using a light or lifting the drapes, may result in a needle‐stick injury. Perceptual errors appear to be less commonly encountered in medicine than in aviation, being identified in 15% of adverse events (Diller et al. 2014).


In contrast to errors, violations are deliberate deviations from safe or standard practice. Routine violations (“bending the rules”) are habitual and often tolerated (e.g. driving a few miles or kilometers per hour in excess of the speed limit) (Wiegmann and Shappell 2005). They may be committed with the goal of improved efficiency, particularly where standard procedures are perceived to, or actually do, inhibit efficient workflow (Diller et al. 2014). Examples could include using a smaller gauge needle to perform jugular IV injection, thereby increasing the risk of intra‐arterial injection, or tolerating senior staff walking in and out of the OR in outdoor footwear while everyone else wears shoe covers. When routine violations are identified, this should trigger a search for a cause, which may include tolerance by a supervisor (supervisory violation), an absent or inadequate policy (organizational process), and/or a workplace culture of bending the rules (organizational climate). Routine violations occur frequently, recorded in 80% of adverse events (Diller et al. 2014). In contrast, exceptional violations, which are dramatic departures from standard practice, occur much less frequently. In anesthesia, they are typically associated with intentional violations of well‐established standards of care (Diller et al. 2014). In general, those most likely to violate are young men, those with a high opinion of their skills (relative to others), those who are relatively experienced and not especially error‐prone, those with a history of errors and adverse events, and those less affected by how they are viewed by others (Reason 2008).


Preconditions for Unsafe Acts

While unsafe acts are present in the majority of adverse events (80% in aviation and >90% in medicine), stopping the investigation at this level returns to a person‐based system of blame (Diller et al. 2014; Wiegmann and Shappell 2005). Behind unsafe acts are predisposing preconditions as they apply to the operator (caregiver or anesthetist), personnel, and environment (Figure 15.2).


Suboptimal operator performance can be created by an adverse mental or physiological state or by physical/mental limitations. Mental fatigue (from sleep loss or other stressors) or distraction can easily predispose an individual to performing an unsafe act (Campbell et al. 2012). Distractions are commonplace, with one survey of 31 hours of anesthetic practice identifying a distracting event on average once every 4.5 minutes, with 22% of such events having a negative consequence for patient care (Campbell et al. 2012). Adverse physiological states represent illness, injury, or any other alteration (e.g. intoxication) that limits job performance. Adverse mental states are commonly reported (approximately 50% incidence) during adverse event investigations, whereas adverse physiological states are relatively rare (1%) (Diller et al. 2014). Physical/mental limitations refer to situations where the required task exceeds the ability of the individual. This may be as simple as having a hand that is too large to perform manual orotracheal intubation of a cow (leading to delayed intubation and potential aspiration of regurgitated fluid), or it may be a fundamental limitation. Occasionally, individuals who do not have an aptitude for anesthesia are encountered. While there may be organizational/institutional pressure to keep them in the service, this can put strain on other team members.


Personnel factors include team resource management and personal readiness (fitness for duty). Team resource management applies within and between teams. For example, a group of anesthesia providers may work well together, providing smooth, error‐free anesthesia, but this is of limited value if there is poor coordination with other teams so that cases frequently run early or late, cases are presented for anesthesia unexpectedly, or critical information (e.g. a suspected drug reaction) is not shared. Good communication and coordination are key to good team resource management. Ineffective communication between team members constituted 70% of personnel factors (n = 448), followed by lack of teamwork (7%) and failure of leadership (4%) (Diller et al. 2014). In a review of 444 surgical malpractice claims, of which 258 led to patient harm, 23% (n = 60) involved a breakdown of communication, almost all of which were verbal (Greenberg et al. 2007). Important contributing factors related to hierarchy (e.g. resident communication with a more senior surgeon) and ambiguity regarding responsibility or leadership. In the majority of cases, information was either never transmitted (49%) or transmitted but inaccurately received (44%).


The frequency of communication failures in a study of six complex surgeries was high, with approximately one failure every eight minutes (13–48 failures per case) (Hu et al. 2012). Communication between disciplines failed twice as frequently as within a discipline and most failures were associated with absence of key individuals and failure to resolve an issue. Eighty‐one percent of failures resulted in a loss of efficiency. Interestingly, this study was performed in a hospital where the Surgical Safety Checklist (SSC) was performed for all cases. As the SSC has been shown to reduce communication failures, delays, and morbidity/mortality, these findings suggest that failures may have been even more frequent without its use (Haynes et al. 2009; Lingard et al. 2008; Nundy et al. 2008). In veterinary medicine, communication breakdown was the second most common error type (approx. 30%), after medication error (approx. 60%), in contributing to reported incidents (Wallis et al. 2019).


Fitness for duty (which overlaps with adverse physiological states) is the expectation that individuals are able to perform at the expected level. Failures could result from fatigue or a failure to adequately prepare for a scheduled case. Fitness for duty as a contributing factor to adverse events in medicine was uncommon (3%) (Diller et al. 2014). This contrasts with the high incidence of contributions from team resource management. Recognition of physical and mental health limitations must be coupled with the freedom to self‐report without stigma or shame (Allnutt 2002).


Environmental factors comprise the physical and technological environment. In medicine and veterinary anesthesia, these apply to both personnel and patients. For example, noise during recovery from anesthesia could serve as a distraction to personnel and potentially affect the quality of recovery. Performing tasks in a darkened environment (e.g. during arthroscopy) can limit patient visibility, potentially leading to a perceptual error. Limited physical access to patients (e.g. surgery of the head/neck) can increase reliance on physiologic monitors and limit assessment of physical indicators of depth of anesthesia. Working in poorly designed environments (e.g. slippery flooring) or with poorly designed equipment (e.g. monitors that provide erroneous readings when heart rates are below 30 bpm) undoubtedly increases the risk of adverse events. Environmental factors were present in approximately 50% of human adverse events, with most instances related to a problem with equipment or environment design, though this proportion may be higher in veterinary anesthesia (Diller et al. 2014).


Unsafe Supervision


Unsafe supervision has the potential to negatively influence quality of care and lead to an adverse event. When supervision is inadequate, there are deficiencies in guidance, training, leadership, and/or oversight. Unfortunately, most supervisors learn supervisory skills through experience alone, without the benefit of formal instruction in teaching. As a result, it is common for supervisors, at least in the early stages of their careers, to supervise using the same style they experienced as trainees. This can lead to adverse events through inadequate oversight, excessive workload, and unrealistic expectations. Placing staff in situations without adequate oversight is more likely to lead to violations, as they are forced to solve problems against a background of limited knowledge or training (Wiegmann and Shappell 2005). Planned inappropriate operations refers to intentionally requesting a work rate, or type of work, beyond the safe limits of an individual or team. This commonly occurs when emergency procedures are added to a surgery list with the expectation that a full schedule of planned electives is also completed. Equally, staff shortages may lead to an excessive workload or to individuals performing cases for which they are not adequately trained or supervised. A failure to correct a known problem occurs when known deficiencies are ignored, often in the hope that they will disappear, or to avoid confrontation. This has widespread consequences, including supporting routine violations and continued use of unsafe equipment, which can lead to adverse events. Supervisory violations create an ideal situation for failure. Willfully disregarding established rules and procedures may directly lead to an adverse event or put others in the position of triggering an adverse event. For example, ignoring a clinic policy to have a minimum of three people present during the induction of general anesthesia will increase individual workload, risk, and the likelihood of errors. Unsafe supervision contributed to approximately 50% of medical adverse events (Diller et al. 2014).


Organizational Influences


The final level of the HFACS is organizational influences. These undoubtedly contribute to accidents, though a clear chain of events may be difficult to establish and their contribution may be under‐represented (Diller et al. 2014). Nonetheless, a large (n = 869 483 anesthetics), prospective, multi‐center study of anesthetic mortality found that organizational factors contributed to mortality in 11–40% of cases (Arbous et al. 2001). Resource management relates to the use and allocation of human, financial, equipment, and facility resources. Decisions are often based on a conflict between safety and quality (individual care, equipment maintenance and replacement, staffing) versus quantity (caseload, workday length, and intensity). Proposed solutions, such as cross‐training staff to perform multiple roles, may not account for effects on team performance and minimum training requirements. The organizational climate is the workplace culture or atmosphere. Where this contradicts official policies or procedures, violations occur, indicating a problem with the policies or procedures themselves, or something affecting personnel behavior (culture, training, supervision, operator condition). Finally, the organizational process is the governance of daily activities, such as the establishment and application of policies, standard operating procedures, and checklists. Where deviation occurs, it indicates a problem in governance (e.g. an incomplete checklist) or a failure to adhere to standard practice.
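As a minimal sketch (an illustration only, not part of the published HFACS tool), the four levels and the subcategories described above could be encoded as a simple taxonomy for tagging reports in a local incident‐reporting system:

```python
# Hypothetical encoding of the HFACS levels and subcategories described in this
# chapter, for tagging incident reports in a local reporting system.
HFACS_TAXONOMY = {
    "unsafe_acts": [
        "skill_based_error", "decision_error", "perceptual_error",
        "routine_violation", "exceptional_violation",
    ],
    "preconditions_for_unsafe_acts": [
        "adverse_mental_state", "adverse_physiological_state",
        "physical_mental_limitations", "team_resource_management",
        "personal_readiness", "physical_environment", "technological_environment",
    ],
    "unsafe_supervision": [
        "inadequate_supervision", "planned_inappropriate_operations",
        "failure_to_correct_known_problem", "supervisory_violation",
    ],
    "organizational_influences": [
        "resource_management", "organizational_climate", "organizational_process",
    ],
}

def validate_tags(tags: dict[str, list[str]]) -> list[str]:
    """Return any tags that do not belong to the taxonomy (simple sanity check)."""
    problems = []
    for level, categories in tags.items():
        allowed = HFACS_TAXONOMY.get(level)
        if allowed is None:
            problems.append(f"unknown level: {level}")
            continue
        problems.extend(c for c in categories if c not in allowed)
    return problems

# Example: tagging a medication error made by a fatigued anesthetist.
report_tags = {
    "unsafe_acts": ["skill_based_error"],
    "preconditions_for_unsafe_acts": ["adverse_mental_state"],
}
assert validate_tags(report_tags) == []
```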


Error Investigation


Investigations of adverse events have three goals: explanation, prediction, and remedies. Identifying and correcting problems at the preconditions and organizational levels provides the best value in terms of preventing future adverse events. However, it is important to keep in mind that the majority of organizations will have flaws, leading to the temptation to blame the system without establishing a plausible link to the adverse event. For example, sporadic clusters of cases of purulent nasal discharge following general anesthesia can easily trigger repeated sampling of anesthetic equipment for bacterial contamination, missing relationships between hospital occupancy rates, population demographics, and shared airspace. The HFACS described above provides a comprehensive framework to understand and investigate how errors occur and to identify targets for improvement. Simpler methods have also been described in veterinary medicine (Ludders and McMillan 2017; Pang et al. 2018). Adverse event investigations often take place in the form of morbidity and mortality conferences (M&MCs). When performed properly, these can be a valuable learning experience and drive improvement in care (Pang et al. 2018).


Error Prevention


Developing a Safety Climate


A safety climate may be defined as “the shared perceptions of practices, policies, procedures, and routines about safety in an organization” (Singer et al. 2010). In HROs, this climate permeates an organization at all levels, with a tangible effect on policy and practice. A general awareness of safety in human medicine was triggered by the 1999 Institute of Medicine report, “To Err Is Human.” This was preceded by early recognition in anesthesia practice of the importance of individual components that fall under the umbrella of safety climate (Allnutt 2002; Cooper et al. 1984, 2002; Cooper 1984; Gaba et al. 1987; Kohn et al. 2000; Reason 2005). While considerable progress has been made, most notably with the widespread adoption and implementation of the World Health Organization’s (WHO) SSC, the differences in safety climate between anesthesia and aviation (a prototypical example of an HRO) remain striking (Bergs et al. 2014; Haynes et al. 2009; Singer et al. 2010). A survey of naval aviators (all personnel, including commanding officers) and healthcare workers (senior managers, physicians, and other workers) in the United States (n = 34 206 respondents) found that the safety climate was perceived as significantly safer by the naval aviators (Singer et al. 2010). The authors suggested that these differences resulted from multiple factors, including the use of standard operating procedures, the existence of mechanisms for reporting safety hazards and adverse events, managerial and administrative support for safety, adherence to guidelines and standards of care for fitness for duty, and continuous training and assessment.


The recognition of safety climate within veterinary anesthesia is in its infancy, and there are few studies of the application or outcome of promoting safe practice (Armitage‐Chan 2014; Hartnack et al. 2013; Hofmeister et al. 2014; McMillan 2014; Menoud et al. 2018). Hofmeister et al. (2014) reported promising results from a pre‐/post‐intervention observational study to identify adverse events and near misses over an 11.5‐month period, followed by the introduction of targeted interventions to reduce the recurrence of these incidents. Use of an anonymous reporting system identified 74 adverse events or near misses (3.6% of 2028 patients).


Checklists


The WHO SSC is the best known and most widely adopted peri‐anesthetic checklist in use. In a landmark observational study involving eight hospitals in eight countries across the world, prospective data collected before and after implementation of the SSC showed reductions in mortality and surgical infection from 1.5% to 0.8% and from 6.2% to 3.4%, respectively (Haynes et al. 2009). These findings have since been replicated, and a recent meta‐analysis supports the benefits of using the WHO SSC (Bergs et al. 2014). In addition to reductions in morbidity and mortality, the WHO SSC has resulted in cost savings, improved communication, and an improved safety climate (Haynes et al. 2009, 2011; Kearns et al. 2011; Semel et al. 2010).
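Expressed as relative reductions (simple arithmetic on the figures reported by Haynes et al. (2009), shown here only to make the magnitude of the effect explicit):

```latex
\text{Mortality: } \frac{1.5\% - 0.8\%}{1.5\%} \approx 47\%\ \text{relative reduction (0.7 percentage points absolute)} \\
\text{Surgical infection: } \frac{6.2\% - 3.4\%}{6.2\%} \approx 45\%\ \text{relative reduction (2.8 percentage points absolute)}
```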


The WHO SSC comprises three sections, each corresponding to a different phase in the surgical pathway (before induction of anesthesia, before skin incision, before the patient leaves the operating room). Each section has items that are service‐specific (e.g. “Is the anesthetic machine and medication check complete?”; “Is the [surgical] site marked?”) as well as items that ensure the sharing of information and promote discussion (e.g. “What is the anticipated blood loss?”; “Are there any patient‐specific concerns?”). Common to these sections, and key to the development of checklists in veterinary anesthesia, are the specification of the personnel who should be present, the role of a single person responsible for managing the checklist, the requirement for verbal acknowledgment and confirmation of checklist items, and the introduction of all personnel present (name and role, including trainees). These practices are believed to play a critical role in the successful or failed implementation of the WHO SSC in an individual institution and in its overall impact on safety climate (Haynes et al. 2009).
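The structure just described (three phases, a designated coordinator, and verbal confirmation of each item) lends itself to a simple representation. The sketch below is a hypothetical illustration of how a locally adapted checklist might be organized and run; the item wording is illustrative and this is not the WHO SSC itself.

```python
# Hypothetical, locally adaptable peri-anesthetic checklist modeled on the three
# WHO SSC phases; item wording is illustrative only.
CHECKLIST = {
    "before_induction_of_anesthesia": [
        "Is the anesthetic machine and medication check complete?",
        "Is the (surgical) site marked?",
    ],
    "before_skin_incision": [
        "Have all team members introduced themselves by name and role?",
        "What is the anticipated blood loss?",
        "Are there any patient-specific concerns?",
    ],
    "before_patient_leaves_operating_room": [
        "Have instrument, swab, and needle counts been completed?",
        "What are the key concerns for recovery?",
    ],
}

def run_phase(phase: str, ask=input) -> bool:
    """Read each item aloud and record a verbal response from the team.

    The checklist coordinator runs this and may halt progress to the next
    phase of the procedure if any item goes unanswered.
    """
    for item in CHECKLIST[phase]:
        response = ask(f"{item} ").strip()
        if not response:
            print(f"Checklist halted at: {item}")
            return False
    return True

# Example: the coordinator works through the first phase before induction.
# run_phase("before_induction_of_anesthesia")
```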


There are several considerations key to the successful implementation of a checklist system that are relevant to its development and adoption in veterinary anesthesia and surgery (World Health Organization 2009; Alidina et al. 2018; Armitage‐Chan 2014). (i) There must be public support from the administration and department/service chiefs for prioritizing safety and use of an SSC. (ii) A team should be formed from a core, multidisciplinary group of interested personnel to establish and promote adoption of the checklist. (iii) The checklist should be modifiable. This promotes adaptation to local conditions and encourages team member involvement and support. When modifying the checklist, the principles of checklist development should be applied: the checklist should be concise, focused, brief (less than one minute per section), contain actionable items, be performed verbally, be developed collaboratively, be tested in a small/limited setting, and be integrated into existing processes. (iv) Start small. Implement the checklist and track its use and performance on a limited scale (e.g. a single operating room [OR] or a single surgical team) before expanding its use once any necessary modifications have been made. (v) Track process and outcome measures to monitor performance and identify problems early. The tools used in clinical audit are ideal for this (Rose and Pang 2021). (vi) The checklist coordinator should be carefully selected. They must be able to work with key team members, as they have the responsibility for ensuring completion of and adherence to the checklist, including the power to stop progress to the next phase of the procedure.


As with the veterinary anesthesia safety climate, there are few studies on the application or outcome of promoting safe practice, such as checklists (Hofmeister et al. 2014; McMillan 2014; Menoud et al. 2018). Menoud et al. (2018) used a consensus discussion (Delphi method) among veterinary anesthetists to develop a peri‐anesthetic checklist based on the WHO SSC. This work highlighted common obstacles encountered when attempting to introduce change associated with a checklist, including resistance to change, concerns regarding usefulness and relevance, and the time required to complete a checklist. Taken together, these findings underline the importance of garnering support, raising awareness, and educating potential users. Such support is essential to achieving successful implementation of checklists and reaping the potential benefits in patient safety (Pickering et al. 2013).


Pre‐anesthesia Checkout Procedure


The pre‐anesthesia checkout (PAC) procedure comprises a protocol for checking components of anesthetic equipment paired with a checklist. In most instances, a physical checklist is used to prompt completion of all checks. Anesthetic equipment faults are relatively common, so a PAC is a simple, effective way to prevent a fault from resulting in an adverse event (Barthram and McClymont 1992; Kendell and Barthram 1998). Anesthetic machine checks identify 30–60% of faults, and approximately one‐fifth of these may be serious, posing a direct risk to the patient (Barthram and McClymont 1992; Kendell and Barthram 1998). Additionally, completion of a PAC is associated with a decreased risk of anesthetic morbidity and mortality (odds ratio 0.64, 95% CI 0.43–0.95) (Arbous et al. 2005). Using the imagery of the Swiss cheese model, performing a PAC may be visualized as closing a hole in a defensive layer (Figure 15.1). As anesthesia equipment becomes increasingly complex and equipment varies considerably between clinics, it is impossible to provide a comprehensive, universal PAC guide (Association of Anaesthetists of Great Britain and Ireland et al. 2012; Hartle 2013). Manufacturer guidelines should be followed for individual pieces of equipment. Table 15.2 presents an outline of key checks that should be completed, based on the most recent American Society of Anesthesiologists and Association of Anaesthetists of Great Britain and Ireland guidelines (American Society of Anesthesiologists 2008; Association of Anaesthetists of Great Britain and Ireland et al. 2012).
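Table 15.2 itself is not reproduced here. Purely as a hedged illustration of how a PAC can act as a gate before induction (the items below are generic placeholders, not the contents of Table 15.2 or of the ASA/AAGBI guidelines, and are no substitute for manufacturer instructions):

```python
# Hypothetical pre-anesthesia checkout (PAC) gate; item wording is an
# illustrative placeholder only and does not reproduce Table 15.2.
PAC_ITEMS = [
    "Oxygen supply (pipeline/cylinder) checked and backup available",
    "Breathing system leak test passed",
    "Vaporizer filled and seated correctly",
    "Scavenging connected and functioning",
    "Monitors and alarms on with appropriate limits",
    "Airway equipment and suction available",
]

def pre_anesthesia_checkout(results: dict[str, bool]) -> bool:
    """Return True only if every PAC item has been recorded as checked and passed."""
    missing = [item for item in PAC_ITEMS if not results.get(item, False)]
    for item in missing:
        print(f"PAC incomplete: {item}")
    return not missing

# Example: anesthesia should not proceed until pre_anesthesia_checkout() returns True.
```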
