

CHAPTER 8
Error Prevention in Veterinary Anesthesia



To give safety a future, we should not see people as a problem to control, but as a solution we can harness. We need to move from counting negatives to understanding what makes an organization normally successful. And we need the courage to question common wisdom and industry standards—confronting fiction with facts, and faith with enlightenment.


Sidney Dekker, Safety Differently: Human Factors for a New Era, 2013



Tell me and I forget, teach me and I may remember, involve me and I learn.


Benjamin Franklin



Insanity is doing the same thing over and over again and expecting different results.


Albert Einstein


There are two ways in which safety can be viewed as a process with a goal to be achieved. The classical view approaches it reactively: when an error occurs, actions are taken to prevent further errors by stopping bad stuff from happening, that is, by eliminating the negative. This approach can be very effective, as demonstrated by the aviation industry with its standards, guidelines, and record of safety. It addresses safety incidents by introducing control measures, such as guidelines and standard operating procedures (SOPs), in the hope of altering the system so as to reduce the chances of similar errors occurring in the future. Indeed, when implemented appropriately, standardization plays a key role in developing safe practices because collective expertise and experience are recorded and formally passed on to those involved with the process. But poorly conceived and poorly implemented standard operating procedures can give the impression of furthering safety in one area while actually increasing the risk of unsafe acts occurring in another. Box 8.1 gives an example from a university veterinary teaching hospital that makes this point.


As the example in Box 8.1 suggests, the reactive approach, at least in medicine, is not always the best one, in part because of the uncertainty and ambiguity prevalent throughout clinical medicine, surgery, and anesthesia. Situations arise in which the patient does not fit the circumstances that a control measure or standard operating procedure was designed to address. Guidelines and standardized processes cannot account for all conditions and circumstances a veterinarian may encounter when anesthetizing a patient because, at the very least, anesthesia always perturbs a patient’s normal physiology, and every patient and every anesthetic is different.


An alternate view of safety as a goal is proactive and strives to maximize the chances of success, accentuating the positive to ensure that “good stuff happens.” Safety in this light is the ability to succeed under varying conditions, regardless of ambiguity or uncertainty, so that the number of intended and acceptable outcomes is as high as possible (Hollnagel 2014). Success in anesthesia is not merely having an awake and alive patient at the end of the anesthetic; it means that all processes in the anesthetic procedure were managed with attention to patient safety. Given this definition, unsafe practices, even when their outcomes are “successful,” are still unsafe and unacceptable; success following unsafe practices may be due to nothing more than modern anesthetic drugs and equipment or, worse yet, mere chance. The near misses that occurred as a result of unsafe practices may, on another day and under different circumstances, become harmful hits. Success in safety terms means that an outcome was achieved by actively striving for patient safety throughout the procedure. How do we achieve this?


A first step is to acknowledge that errors do occur in anesthesia and then to focus on patient safety as a goal of the organization, of individuals within the organization, and with regard to technical factors. We also need to recognize that most of us tend to be overconfident in our cognitive abilities while often denigrating cognitive aids, such as checklists, calculators, standard operating procedures, and guidelines; too often we hear that such aids are only “for poor clinicians.” Once these realities are recognized and accepted it becomes easier to take actions that focus on achieving patient safety in the daily practice of medicine, surgery, or anesthesia. The next sections present some general and some specific strategies for achieving this goal. The general strategies include suggestions for bringing about changes in behaviors and habits that foster patient safety, and a description of the attributes of effective anesthetists. The specific strategies include: identifying the elements of a “safety culture”; minimizing distractions; cognitive forcing strategies; breaking our reliance on memory by using cognitive aids such as checklists and mnemonics; improving communication and teamwork; and evaluating the processes of anesthesia and redesigning them with safety in mind.


General strategies for error prevention


Changing habits: getting away from “We’ve always done it that way”


Although the idea of improving safety (minimizing error) by changing our practices would appear to be a no-brainer, it is easier said than done. Enforcing change through a top-down approach rarely works; it puts our collective backs up and breeds resentment. In the face of change too often we hear: “What’s wrong with how we’re doing things now?” “What’s the point of this? It’s a waste of time,” “We didn’t have a problem before, what’s the issue now?”, “If it ain’t broke, don’t fix it,” or even “Who are they to tell me how to do my job!” All are common retorts whenever anything new gets introduced to a well-entrenched system. Experience and studies have shown that changes are more likely to be integrated into practice if the people performing the tasks are involved in the decision-making and implementation processes (Roberts et al. 2005; Vogus & Hilligoss 2015). Most people want and need to know why something has to be changed before they will accept that it should be changed. So the process starts by informing staff as to why change is necessary, by describing the problems that are being encountered. Openness and reporting of real data—facts on the ground—are key components of this process.


Once frontline staff recognize there are problems within the system in which they work, the next step is to involve them in the change process. This can be achieved by encouraging staff to take ownership of their work and making them responsible for improvements in their areas of expertise. There are processes by which this can be accomplished, one of which is outlined in the Theoretical Domains Framework (Michie et al. 2005). The framework can be used to assess a target group’s knowledge, skills, beliefs about their capabilities, motivation, goals, and behavior; it presents the questions a group should ask itself when considering a change in some process X, which could be a procedure or a protocol (Table 8.1). Of crucial importance is that this framework is not used solely by management as it considers making a change, but by the target group itself as it assesses its attributes and abilities to make the change. It is not a top-down process; it is an inclusive one. The answers to the questions in the framework help guide the group in effecting the necessary changes.


Table 8.1 A guide for implementing an evidence-based practice. Asking and answering the questions in each domain helps ensure that factors both favoring and opposing implementation are identified so that the change in practice can be implemented successfully.










Knowledge
  • Do they know about the guideline?
  • What do they think the evidence is?

Skills
  • Do they know how to do X?
  • How easy or difficult do they find performing X?

Social/professional role and identity
  • What is the purpose of X?
  • Do they think guidelines should determine their behavior?
  • Is doing X compatible or in conflict with professional standards/identity? (prompts: moral/ethical issues, limits to autonomy)

Beliefs about capabilities
  • How capable are they of maintaining X?

Beliefs about consequences (anticipated outcomes/attitude)
  • What do they think will happen if they do X? (prompt re themselves, patients, colleagues, and organization; positive and negative, short- and long-term consequences)
  • What are the costs of X and what are the costs of the consequences of X?

Motivation and goals
  • How much do they want to do X?
  • Are there other things they want to do or achieve that might interfere with X?
  • Does the guideline conflict with others?
  • Are there incentives to do X?

Memory, attention, and decision processes
  • Is X something they usually do?
  • Will they think to do X?
  • How much attention will they have to pay to do X?
  • Will they remember to do X?

Environmental context and resources (environmental constraints)
  • To what extent do physical or resource factors facilitate or hinder X?
  • Are there competing tasks and time constraints?
  • Are the necessary resources available to those expected to undertake X?

Social influences (norms)
  • To what extent do social influences facilitate or hinder X? (prompts: peers, managers, other professional groups, patients, relatives)

Emotion
  • Does doing X evoke an emotional response? If so, what?
  • To what extent do emotional factors facilitate or hinder X?

Behavioral regulation
  • What preparatory steps are needed to do X? (prompt re individual and organizational)
  • Are there procedures or ways of working that encourage X?

Nature of the behavior
  • What is the proposed behavior (X)?
  • Who needs to do what differently when, where, how, how often, and with whom?
  • Is this a new behavior or an existing behavior that needs to become a habit?
  • Can the context be used to prompt the new behavior? (prompts: layout, reminders, equipment)
  • How long are changes going to take?
  • Are there systems for maintaining long-term change?

Adapted from: Michie, S., Johnston, M., Abraham, C., et al. (2005) Making psychological theory useful for implementing evidence based practice: a consensus approach. Quality & Safety in Health Care 14(1): 26–33. With permission of the publisher.


Attributes of an effective anesthetist


An effective anesthetist understands that “to err is human,” realizes that errors and accidents will occur in anesthesia, and recognizes that anesthetists work in complex settings and situations. But what are the characteristics of an anesthetist who functions effectively in such circumstances? To answer this question we draw on the work of Klemola and Norros, and of Reason. Klemola and Norros have identified essential characteristics of effective anesthetists. They believe that an anesthetist’s perception of a given situation within a clinical context is inseparable from the anesthetist’s history and behavioral profile (Klemola 2000; Norros & Klemola 1999). They contend that because anesthesia is filled with inherent uncertainty, it is necessary to consider the situated character of human activity, and that anesthetists’ habits of action should be explored within the particular circumstances in which they use their resources, that is, within the operating room. Because of this inherent uncertainty, Klemola and Norros do not believe anesthesia is an activity that can be governed by general rules or rigid guidelines; to do so, they argue, ignores the dynamic nature of anesthesia and of a patient’s responses during anesthesia (Klemola 2000; Klemola & Norros 1997, 2001; Norros & Klemola 1999).


Klemola and Norros believe that to cope with the uncertainties of anesthesia, the anesthetist must use judgment based on efficient interpretation and use of situational information (Klemola & Norros 1997; Norros & Klemola 1999). Klemola further argues that training techniques such as those used in the aviation industry may be inappropriate in anesthesia because they are based on the assumption that anesthetists and pilots use similar “mental models,” an assumption Klemola believes is unfounded (Klemola 2000). Furthermore, the belief that general rules can guide the anesthetist in the practice of anesthesia is tenable only if the human mind is viewed as an information-processing mechanism (a computer) that follows computational rules (Cook & Woods 1994), a view with which Klemola disagrees (Klemola 2000). Likening the brain to a computer ignores the brain’s complexity and the reality that our brains, unlike computers, are affected by many factors, such as emotions, fatigue, and distractions of all sorts, factors that can degrade our short-term memory and affect our perceptions of and interactions with the real world. Furthermore, unlike transforming computer code into an application, it is difficult to transform knowledge into practice because neither general rules nor specific clinical recommendations include instructions on how to apply them in the fuzzy and unruly situations so often encountered in everyday anesthesia. The nature of knowledge is also a problem: a valid statistical fact says little about a particular patient, especially when there is so much inherent uncertainty that cannot be governed by general rules (Norros & Klemola 1999). A single statistic certainly does not describe the total context within which an anesthetist works.


The effective anesthetist must detect and respond to an incident, and it is the dynamic complexity of anesthesia that sets specific requirements for the anesthetist’s activities including the manner in which he or she views the patient (Klemola & Norros 1997). In their studies of the clinical behavior of expert anesthetists, Klemola and Norros identified two distinct behavioral profiles (Klemola & Norros 1997; Norros & Klemola 1999):



  1. The interpretive profile, in which the anesthetist clearly and efficiently uses situationally relevant information based on insights into the patient’s physiological responses to anesthesia, especially during the induction phase. The anesthetist’s actions are guided by an understanding of the uniqueness and uncertainty of actual situations, and anesthetic drugs and monitor-derived information are used effectively and skillfully.
  2. The objectivistic profile, in which the anesthetist views the patient as a natural object and uncertainty is not recognized; the anesthetist demonstrates a reactive habit of action based on a preoperative plan, one that is implemented deterministically and in which relevant factual knowledge of drugs is not fully exploited. Furthermore, available patient information provides only a minor basis for regulation of the patient, as if the patient and the information concerning him or her were unrelated. Some might describe this profile as “cookbook anesthesia” or “anesthesia by numbers.”

Based on studies of anesthetists working in their clinical surroundings, Klemola suggests that attempts to improve education and practice should be based on evidence from the real world of anesthetic practice. Learning how to deal with crises through simulator drills is of practical use, but the educational focus, Klemola (2000) states, should be on developing the intellectual skills of anticipation and of making sense of events, both of which are best learned during clinical work. We suggest that anesthesia training programs must foster an interpretive mindset, one that views both the patient and the patient’s response to anesthesia as unique.


As already discussed (see “Individual responsibility within an organization” in Chapter 2), there are other mindsets or mental attitudes that anesthetists should possess if they are to successfully prevent or manage complications during anesthesia, including preparation for the unexpected, early recognition of complications, and an attitude and approach that favor problem-solving, that is, analytical thinking (Klein 1990). Preparation includes a thorough history and physical examination of the patient so as to detect any conditions that may affect anesthesia or that anesthesia may affect. More important, preparation reflects a mental state, one of preparedness or anticipation, that plays a major part in achieving excellence in many activities including anesthesia (Reason 2004). The anesthetist who practices preparedness demonstrates several important characteristics (Reason 2004):



  • Accepts that errors can and will occur.
  • Assesses the local factors that can cause errors—Reason’s “bad stuff” (Reason 2004)—before embarking upon a course of action.
  • Has contingency plans ready to deal with anticipated problems.
  • Is prepared to seek more qualified assistance.
  • Does not let professional courtesy get in the way of checking colleagues’ knowledge and experience, particularly when they are strangers (e.g., see Case 6.1).
  • Appreciates that the path to adverse incidents is paved with false assumptions.

Reason provides some general guidelines that are applicable to the training of veterinary anesthetists, especially training in error prevention (Reason 1990):



  • Training should teach and support an active exploratory approach in which trainees are encouraged to develop their own mental models of the system that they work in, and to use “risky” strategies to investigate and experiment with untaught aspects of the system. This approach recognizes that effective error management is not possible when training is structured according to a set of programmed learning principles, ones that the trainee must follow without question.
  • The trainee should have the opportunity to make errors and recover from them. Errors must be viewed as opportunities for learning and discovery so that the trainee overcomes the tendency to view errors as signs of stupidity, lack of intellect, or incompetence. The strategies for dealing with errors have to be both taught and discovered.
  • Error training must be introduced at an appropriate phase of training. Introducing it at the beginning of a training program, when a trainee is struggling to consciously learn every aspect of a system, may overwhelm the trainee and be counterproductive. Error training may be better introduced in the middle phase of training.

The use of simulators


In-clinic training and experience are crucial, but simulators can be part of the training process, especially for teaching technical skills, such as intubation, intravenous catheterization, epidural or spinal techniques, and cardiopulmonary resuscitation (CPR), and for teaching problem-solving strategies. Simulators have also been developed for teaching and improving anesthetists’ non-technical skills, such as reacting in a stressful setting, learning, attitudes, behavior, teamwork, and communication. High-fidelity simulators, those that simulate the real patient, have been developed for use in veterinary medicine, especially emergency medicine (Fletcher et al. 2012). Students exposed to this type of training commented that the simulations allowed them to practice communication and teamwork skills better than paper-based, problem-oriented learning opportunities and lectures did (Fletcher et al. 2012). This is all to the good and complements the essential hands-on clinical training.


Morbidity and mortality rounds (M&Ms)


Processes used to identify errors and near misses, such as morbidity and mortality rounds, should be used as positive, non-threatening educational opportunities to further the organization’s patient safety effort (see “Focus groups: morbidity and mortality rounds (M&Ms)” in Chapter 3). They should be used to evaluate the anesthetist’s attitude toward errors and his or her problem-solving skills. In writing about debiasing strategies, Croskerry states that morbidity and mortality rounds “may be a good opportunity for…learning, provided they are carefully and thoughtfully moderated. These rounds tend to inevitably remove the present case from its context and to make it unduly salient in attendees’ minds, which may hinder rather than improve future judgment” (Croskerry et al. 2013). It is not only the context that may be removed from the discussion, but also the state of the caregiver at the time of the incident. These are important shortcomings that can be overcome by an effective moderator, one who is knowledgeable about anesthetic processes, who can lead group discussions so that all participants are heard, and who does so in a non-judgmental manner. The moderator must also be sensitive to emotional issues that may come to the fore during a case discussion, and be able to recognize and work through individual and group cognitive processes that may make it difficult to get to the root causes of the case under discussion.


Specific strategies for error prevention


Developing a safety culture


Although “safety culture” can be a somewhat nebulous concept, it can be defined as the ideals and beliefs held by an organization toward risk and accidents (safety) and how they influence the thinking and actions of people within the organization. The essence of a safety culture is multifaceted, but revolves around three key concepts:



  1. The people performing frontline tasks (those where error most commonly manifests and has impact, such as the veterinarians, nurses, and technicians on the hospital floor) feel comfortable reporting safety issues to those in charge, specifically to their bosses and upper management.
  2. A system is in place to appropriately analyze these reports and management is willing to examine every aspect of the organization and its systems in order to find latent factors or causes of errors.
  3. There is a desire and determination to change the organization in order to improve safety.

To achieve a safety culture a number of subcultures need to be developed (Reason 2000); a safety culture needs to be open, just, informed, and flexible, and needs to encourage reporting, learning, and resilience.


An open subculture


Openness means that staff feel comfortable discussing safety incidents and issues during normal working situations rather than only after an incident has occurred or only during a formal investigation. To be successful, openness must extend from the upper echelons of management down to the frontline workers. Senior staff members play vital roles in developing an open work environment because the behavior of those in positions of authority influences the behavior of others. More specifically, if team members are to be open about safety issues and “their errors and mistakes,” then team leaders must be open about theirs. Including errors and safety issues in routine clinical discussions brings the subject out into the open—makes it transparent—and demonstrates that “fallibility” is not something to hide. In this way error and safety become subjects for broad discussion, not just for discussions behind closed doors, a management approach that excludes those on the frontline where the errors and accidents occur. Openness keeps safety at the forefront of the organization. It also includes transparency and feedback: staff should know what will happen if and when they report an error, and they should be kept informed of where their report is in the analysis process.


Openness does not develop overnight; it is an ongoing process that requires establishing trust and trusted lines of communication between all members of the frontline team, senior staff, and members of management. Although often easier said than done, it is a goal worth striving for. To ensure continued development, openness itself needs to be assessed. Face-to-face discussions, surveys, formal interviews, and focus groups can be used to assess the current openness “climate” as viewed by frontline workers, and their current attitudes and concerns about raising safety issues.


A just subculture


When an error occurs, what is the organization’s reaction? Is it to discover who was responsible and punish or discipline that person or persons? Or is the organization more lenient, ensuring that the people who made the error are given additional training? In either case the focus is on the individual as the root cause of the error, an approach that is often unfair, inappropriate, and counterproductive to achieving a just culture.


When an incident occurs, a “just culture” focuses on the many factors that are responsible, not on who is responsible. It is an organizational culture that does not look for “the culprit,” but uses processes that strive to ensure the same error does not occur in the future. A just culture’s central tenet is to treat staff fairly and to understand that any member of staff at any level of the organization can be involved in a safety incident. The response of a just organization is to support the individual(s) involved in an incident: to help them deal with the consequences of major incidents, to listen to their concerns, and to provide an empathetic response while working with them to try to prevent similar problems in the future (see “Analysis of the person(s) at the sharp end: accountability” in Chapter 3, and Figure 3.5).


Superficially this approach may not appear to achieve justice: if someone has done something wrong, that is, made an error, then surely they should be punished; otherwise, where is the accountability? As pointed out previously, this approach tends to treat errors as moral issues and is based on the assumption that bad things happen to bad people—the just world hypothesis (Reason 2000). But in Chapters 4 through 7 we have seen how technically competent, knowledgeable, and caring people, good veterinarians and technicians, made errors; disciplining those individuals at the time would not have prevented errors from being made by them or others in the future. Rather, sanctions and punishments breed fear and reduce the likelihood of an individual disclosing and reporting an error, thus driving errors underground.


Accountability should mean encouraging people to be accountable for reporting incidents, instilling in them the importance of sharing their experiences, views, and personal expertise. It means encouraging all members of a team to engage actively in thinking about safety, about what can be done about problems that arise, and about who should be accountable for implementing changes and assessing their effectiveness. This can be considered forward-looking accountability (Dekker 2012).


It is important to recognize that this is not a no-blame culture. An organization should attempt to identify and separate safety incidents involving error (where events evolved adversely despite the best of intentions) from incidents where staff were deliberately negligent or willfully reckless, or where behavior fell below the required standard. In the latter cases, not taking action can be seen as unjust, and it certainly is a failure of management.


A reporting subculture


In the absence of frequent bad outcomes, knowledge of where the edge lies with regard to safety can come only from persuading those at the human-system interface to report errors (Reason 2000). As discussed in Chapter 3, reporting safety incidents is a powerful tool for gaining information that allows safety improvement strategies to target specific causes of error. An open and just culture is fundamental to developing a culture that favors reporting incidents.


However, in and of itself this is insufficient for developing a high rate of reporting in an organization. First and foremost, staff must be aware that they are able to report, that there is a reporting system, and that they should use it. Then they must be made aware of what should be reported, how data will be recorded, and how these data will be used. Ensuring that all staff have ready and easy access to the system is also important. Staff need to have confidence that reports will be read and analyzed appropriately, and that they will receive constructive feedback.


A learning subculture


A learning culture means that an organization is able to learn from its errors and makes changes to reduce the chances of similar errors happening again. This requires the organization as a whole to commit to learning from the incidents that are reported and to remembering them over time, keeping them in institutional memory.


An informed subculture


In order to be informed, an organization needs to collect and analyze relevant data and actively distribute the resulting safety information to the entire staff. This requires a formal system for distributing safety information. An informed organization also recognizes the importance of prospectively assessing risk, examining clinical processes to identify risks before they materialize as incidents.


Flexibility and resilience subcultures


Safety cultures do not come into being passively; they require commitment and effort. They evolve reactively in response to incidents but, more importantly, proactively in response to risk assessment and outside influences. They develop resilience, the intrinsic ability of a system to adjust its functioning in response to changing circumstances so that it can continue to function successfully even after an adverse incident or in the presence of continuous stress; that is, the organization constantly re-engineers and remolds itself in the face of new demands. To do this the organization and the people within it must be flexible, possessing the ability and willingness to continually redesign processes where risk is identified, while ensuring that adequate control measures and barriers are in place.


Minimizing distractions


Distractions are interruptions frequently encountered in most healthcare settings, and anesthesia is no exception. Distractions are common causes of broken concentration; at the very least they can lead to stress (see “Distractions and stress” in Chapter 2), but at worst they can readily lead to error and patient safety incidents. Most often distractions are ordinary events that occur at an inappropriate time. In a busy practice or operating room environment, machines beeping and alarming, phones and pagers ringing and pinging, case discussions, and conversations about the weekend are all commonplace. This is especially pertinent in a teaching hospital, where the presence of students and the teaching requirements of staff can often lead to impromptu seminars and in-depth explanations. Managing distractions is a key professional skill that is part of the tacit knowledge of anesthesia (Campbell et al. 2012).


Most often this hubbub of noise and activity causes little problem and can be tuned out. But some points in medical processes and procedures require more concentration than others, particularly when multiple tasks are being performed simultaneously or in rapid succession, and during these periods distractions can have serious consequences.


In a recent study of distractions during 30 anesthetics spanning 30 hours of observation time, 424 distracting events (about one every 4–5 minutes) were observed; distractions were most common in the recovery period, occurring about once every 2 minutes (Campbell et al. 2012). Most of the distractions came from team members and colleagues, while smaller proportions were associated with equipment, workspace, and noise. More specifically, distractions included unrelated conversations, paperwork, being asked questions unrelated to the case, inappropriately timed procedures (including the World Health Organization’s Surgical Safety Checklist), overcrowding and space limitations in the workspace, forgotten equipment and drugs, inappropriately set alarms, broken or unchecked equipment, and mobile phones and pagers (Campbell et al. 2012). Although the majority of the distractions were of little or no consequence for patients, 92 were judged to have a direct negative effect on anesthetic management. Interestingly, 14 events had positive effects in that they facilitated the procedure or patient safety (Campbell et al. 2012). Negative effects included deterioration in a patient’s physiological variables, having to repeat procedures, delays in procedures, and periods when the patient was left unattended. This study clearly shows that distractions are common in anesthetic practice and pose a real and significant threat to patient safety. Some distractions, however, are less obvious and more difficult to observe: feeling uncomfortable, pain, hunger, being too cold, too hot, or unwell, and various emotional states all can act as distractions and affect our cognitive abilities.
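As a quick arithmetic check of the overall rate (our calculation, not one reported in the study), 30 hours of observation and 424 events imply

\[
\frac{30\ \text{h} \times 60\ \text{min/h}}{424\ \text{events}} \approx 4.2\ \text{min between distractions},
\]

which is consistent with the quoted figure of roughly one distraction every 4–5 minutes.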


One simple way to help manage distractions is to develop “quiet times,” a strategy that has its analogy in aviation, specifically the sterile cockpit rule that prohibits non-essential activities during critical phases of flight, especially takeoff and landing, phases analogous to induction of, and emergence from, anesthesia (Broom et al. 2011). These are timeouts or pauses at key points of a process, or when multiple tasks are being performed simultaneously. Key points in the process of anesthesia include not only induction of anesthesia, the start of the procedure whatever it may be, and recovery, but also moving/transporting patients, crises, and patient hand-offs. Distraction during any one of these phases will likely lead to safety-critical steps being missed or to vital information concerning a patient not being passed on appropriately.


Cognitive forcing: general and specific techniques


Just as some pieces of equipment have design features that prevent their incorrect use (forcing functions), so too are there cognitive forcing strategies. These are specific debiasing techniques or strategies that attempt to minimize influences of irrational decision preferences by introducing self-monitoring into decision-making processes (Croskerry 2003; Stiegler & Ruskin 2012; Stiegler & Tung 2014). Croskerry proposes teaching both generic and specific cognitive forcing strategies in clinical decision-making (Croskerry 2003). An example of a generic approach is to teach that one should conduct a secondary search or survey once a positive finding has been made. In other words, once the most spectacular injury has been identified and attended to, a search for a less obvious injury or condition should be made (see Case 6.1). As has been stated in emergency medicine, “the most commonly missed injury in the emergency room is the second” (Stiegler & Tung 2014).


Croskerry has also identified steps to help trainees develop these strategies (Croskerry 2003). First, metacognition should be taught as a tool, not a theory, with the trainee learning the process of thinking about thinking. In practice this requires that the trainee learn to step back from the immediate situation and reflect upon his or her thought processes in the given setting and circumstances, whatever they may be (Croskerry 2003). Are there biases at play in the decision-making process? If so, what are they? The second step is to consider the cognitive errors likely to be made within the given situation, such as an anchoring bias, an error of omission, or premature closure (see “Pattern matching and biases” in Chapter 2, and Table 2.3). The third step requires that the trainee imagine the scenario in which a given cognitive error is likely to occur. For example, if an anesthetist is managing emergency anesthesia of a small dog that has been attacked by a larger dog, what biases might be influencing the anesthetist’s decision-making in managing this patient? Might one or several biases be obscuring his or her diagnostic and management strategy? If so, what would the cognitive error look like? What cognitive forcing strategy should the clinician select?


Anesthesia as a process can be stressful for the anesthetist, and stress can degrade cognitive processes, thus fostering error (see “Distractions and stress” in Chapter 2). An important aspect of training is to teach coping skills that help a trainee overcome stress-induced error-generating tendencies, such as coning of attention and reversion under stress, and begin exercising executive-level problem-solving and decision-making skills, that is, the analytical mode of cognition. This can be achieved in part by teaching and reinforcing the fundamentals of anesthesia, such as techniques that loosen coupling among critical physiological components and systems. Some are very simple techniques and safeguards, including: preoxygenating patients, especially critical patients, prior to induction; ensuring that each patient has a patent airway and is breathing spontaneously if not being mechanically ventilated; rehydrating dehydrated patients prior to anesthesia and maintaining adequate hydration during anesthesia so as to support perfusion of vital organs; keeping patients warm during and after anesthesia; and providing adequate analgesia intra- and postoperatively so as to reduce pain-induced patient stress and thus facilitate healing.


Cognitive forcing strategies and the Rule of Three


Stiegler presents four decision-making tools, three to help guide diagnostic and therapeutic intervention, and one to facilitate risk assessment (Stiegler & Ruskin 2012). The Rule of Three is one of the tools suggested to help guide clinical reasoning and decision-making (Stiegler & Ruskin 2012). When an anesthetist encounters a problem and the initial and subsequent interventions are unsuccessful, the anesthetist must generate at least three diagnostic possibilities that may explain the cause of the problem before a third intervention is attempted. For example, if a patient is hypotensive and the anesthetist’s initial intervention is to lighten the plane of anesthesia, and a few minutes later the second intervention also involves lightening the plane of anesthesia and administering a bolus of fluids, all without correcting the problem, then three other diagnostic possibilities must be considered before a third attempt is made to correct the hypotension (Stiegler & Ruskin 2012). Stiegler points out that the Rule of Three not only forces consideration of alternatives but also prevents specific biases, including premature closure, anchoring, sunk costs, framing, and confirmation bias (see Table 2.3) (Stiegler & Ruskin 2012).


Checklists as error-reducing tools


As already discussed, anesthesia is an inherently complex process. When anesthesia is performed appropriately, a large number of tasks must be undertaken before a patient can be anesthetized. Many are performed automatically, at the skill-based level, but in a busy practice environment it is inevitable that a task or item will be missed (an omission error). The effect of these lapses may seem insignificant to those involved, perhaps leading only to a delay in the progress of the case or a temporary distraction. But as mentioned previously, in an emergent situation such lapses may delay care of the patient, and some steps are so fundamental to anesthetic management that failure to perform them could have major consequences for patient safety. Checklists are a means of minimizing errors of omission and are now commonplace in most complex workplaces and professions (for a more complete history of checklists see Appendix E).


The role of a checklist is to ensure that the person(s) performing a task or involved in a process need not rely on memory. In essence it helps ensure that tasks are performed, and performed by the appropriate time in the process. It is important to recognize that a checklist is not a step-by-step guide or algorithm for performing a task. Although such guides can be useful for novices and inexperienced staff, they tend to be used less, and considered less helpful, by more experienced staff, who tend to skip steps or perform multiple tasks at the same time. The problem, of course, is that missing one step can lead to subsequent steps being missed, any one of which might be a safety-critical step, one that if omitted will lead to a near miss or, worse, a harmful incident.


The essence of a checklist is to include the tasks or actions critical to the smooth running and performance of a process; as such it forms the basis of procedural standardization. Tasks should be chosen according to their relative importance: whether failing to perform a task or action (at all or appropriately) will compromise safety, and the potential for that task being overlooked (i.e., not being checked by some other mechanism). The order of the checklist will typically be that in which the tasks or actions are normally performed. Completing a checklist signifies the end of one phase of a process and indicates that all the vital and relevant tasks have been completed so the team can move safely to the next phase.


A checklist can be used in two ways: (1) the call-do-response (or do-list) method, and (2) the challenge-response method (Degani & Wiener 1993). In the call-do-response method the checklist items are called out prospectively, each acting as a prompt to perform the specific task; each task or action is performed and then confirmed before moving on to the next item. In the challenge-response method the tasks are performed from memory and the checklist is used retrospectively to ensure that each task or action has been performed; for example, the anesthetic machine is set up first, a colleague then reads the challenge “Oxygen supply?”, and the anesthetist responds “Checked.” The challenge-response method is generally considered more suitable for most situations as it allows more flexibility in the process and acknowledges that tasks may not be performed in the order designated by the checklist.


Checklist design requires consideration of content, format, and timing; as such checklists should (Degani & Wiener 1993):



  • Provide a standard foundation for verifying that a process is being, or has been, carried out thoroughly and appropriately, helping to defeat any impairment of a team’s psychological and physical condition.
  • Provide a sequential framework for tasks.
  • Allow mutual supervision (cross-checking) among team members.
  • Identify and assign the duties of each team member in order to facilitate optimum team coordination as well as logical distribution of workload.
  • Enhance a team approach through effective communication ensuring that each team member at each phase is kept in the loop.

Checklists should be tested, and those testing them should have the ability to provide feedback and make suggestions as to alterations and adaptations. Ideally, checklists will then be evaluated and tested in a more formal and scientific fashion. When designing a checklist there are a number of key components that must be considered (Nagano 1975):



  • Checklists should have a clear objective.
  • Checklists should be practicable.
  • Every item on the checklist should be a safety-critical step that is at risk of being missed and that inclusion on the checklist can help rectify.
  • Checklist items should be based on sound evidence or be indisputable in terms of their importance to the process.
  • Checklists should be designed to fit in at natural breaks in workflow “pause points” so as not to disrupt the normal process.
  • Checklists should be clear and precise, containing simple, brief items.
  • Checklists should be easy to perform, using simple, exact language and a sentence structure designed to be read aloud.
  • Checklists should have a logical and linear progression.
  • Checklists should have fewer than 8–10 items per pause point.
  • Checklists should encourage communication of critical information to team members and facilitate teamwork. (As Leape stated, “[checklists are] a tool for ensuring that team communication happens” (Leape 2014)).
  • Checklists must be well grounded in the present-day operational environment so that the team has a sound appreciation of their importance and does not regard them as a nuisance or an antiquated task.

Checklists in medicine


Checklists have been around in medicine for some time in one form or another, although some formats are barely recognizable as checklists. Most anesthetists are familiar with an anesthetic machine checklist and, in a way, filling in an anesthetic chart is a continually cycling checklist of a patient’s vital signs. However, the checklist was not really heralded as a safety tool in medicine until 2004, when a critical care team led by Peter Pronovost developed a set of clinical guidelines and an accompanying checklist for reducing central line infections (Berenholtz et al. 2004), guidelines that were validated in 2006 (Pronovost et al. 2006).


It was a simple, evidence-based, pragmatic, commonsense guideline consisting of six major steps (Berenholtz et al. 2004): (1) washing hands; (2) sterilizing the insertion site; (3) draping the entire patient; (4) using sterile gloves, a mask, hat, and gown; (5) maintaining a sterile field; and (6) applying a sterile dressing to the insertion site. Before the introduction of the checklist, doctors followed the evidence-based guidelines in only 62% of central catheter insertions, and catheter-related infections occurred at a rate of 11.3 per 1000 catheter-days. Astonishingly, after the checklist was introduced the rate decreased to 0 infections per 1000 catheter-days. It was estimated that 43 catheter-related infections had been avoided and that eight lives had been saved, with the added bonus of potentially saving almost US$2,000,000 in healthcare costs over a year (Berenholtz et al. 2004).
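Reading these figures together (our back-of-the-envelope arithmetic, not a calculation published in the study): if the estimate of 43 avoided infections was obtained by applying the baseline rate to the post-intervention exposure, the implied exposure and the per-infection saving are approximately

\[
\frac{43\ \text{infections}}{11.3\ \text{infections}/1000\ \text{catheter-days}} \approx 3800\ \text{catheter-days}
\qquad\text{and}\qquad
\frac{\text{US}\$2{,}000{,}000}{43} \approx \text{US}\$46{,}500\ \text{per infection avoided}.
\]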


This checklist was not created in isolation, as no checklist in and of itself will guarantee safety. Four other separate and concurrent interventions were implemented along with the central line checklist:



  1. ICU staff were educated about the importance of catheter site infections and evidence-based guidelines.
  2. A “catheter insertion cart” was created that contained all the equipment needed to perform catheterization according to the guidelines.
  3. As part of daily ICU rounds, clinicians were asked whether catheters could be removed, thus removing a source of infection once a catheter was no longer vital to patient care.
  4. Nurses were empowered to challenge doctors and stop a catheter insertion if a violation of the checklist was observed.

The most heralded and well-publicized of all healthcare checklists is the World Health Organization’s Surgical Safety Checklist (Haynes et al. 2009; Safe Surgery Saves Lives Programme Team 2009). A multidisciplinary team of experts led by Dr Atul Gawande was tasked with developing interventions that could improve safety for surgical patients. (The full story behind this checklist is told by Gawande in his book The Checklist Manifesto, Profile Books, 2010.) Based upon available evidence and expert opinion, 10 universal factors regarding surgical safety were recognized (Safe Surgery Saves Lives Programme Team 2009) (Box 8.2).
