CHAPTER 1
Errors: Terminology and Background

In effect, all animals are under stringent selection pressure to be as stupid as they can get away with.
P.J. Richerson and R. Boyd in Not by genes alone: How culture transformed human evolution. University of Chicago Press, 2005.

The rule that human beings seem to follow is to engage the brain only when all else fails—and usually not even then.
D.L. Hull in Science and selection: Essays on biological evolution and the philosophy of science. Cambridge University Press, 2001.

Error: terminology

Why read about taxonomy and terminology? They seem so boring and too “ivory tower.” When starting to write this section, I (J.W.L.) recalled a warm September afternoon many years ago when I was a first-year veterinary student at Washington State University. My lab partner and I were in the anatomy lab reading Miller’s Guide to the Dissection of the Dog and thinking how we would rather be outside enjoying the lovely fall weather. At one point, my lab partner, now Dr Ron Wohrle, looked up and said, “I think I’m a fairly intelligent person, but I’ve just read this one sentence and I only understand three words: ‘and,’ ‘the,’ and ‘of’.” Learning anatomy was not only about the anatomy of the dog, cat, cow, and horse; it was also about learning the language of veterinary medicine.

Each profession or specialty has its own language—terminology—and the study of errors is no exception. Indeed, words and terms convey important concepts that, when organized into an agreed taxonomy, make it possible for those involved in all aspects of patient safety to communicate effectively across the broad spectrum of medicine. However, despite publication in 2000 of the Institute of Medicine’s report “To Err is Human” (Kohn et al. 2000) and the subsequent publication of many articles and books concerning errors and patient safety, a single agreed taxonomy with its attendant terminology does not currently exist. This is understandable: there are many different ways to look at the origins of errors because they occur in many different settings, and different error classifications serve different needs (Reason 2005). Nevertheless, this shortcoming has made it difficult to standardize terminology and foster communication among patient safety advocates (Chang et al. 2005; Runciman et al. 2009). For example, the terms “near miss,” “close call,” and “preventable adverse event” have all been used to describe the same concept or type of error (Runciman et al. 2009). Runciman reported finding 17 definitions of “error” and 14 of “adverse event,” while another review found 24 definitions of “error” and a range of opinions as to what constitutes an error (Runciman et al. 2009).

Throughout this book we use terms that have been broadly accepted in human medicine and made known globally through the World Health Organization (WHO 2009) and many publications, a few of which are cited here (Runciman et al. 2009; Sherman et al. 2009; Thomson et al. 2009). However, we have modified the terms used in physician-based medicine for use in veterinary medicine and have endeavored to reduce redundancy and confusion concerning the meaning and use of selected terms. For example, “adverse incident,” “harmful incident,” “harmful hit,” and “accident” have all been used to describe the same basic concept: a situation in which patient harm has occurred as a result of some action or event. Throughout this book we use a single term—“harmful incident”—to capture this specific concept.
Box 1.1 contains selected terms used frequently throughout this text, but we strongly encourage the reader to review the full list of terms and their definitions in Appendix B.

Error: background

Terminology in and of itself, however, does not explain how errors occur. For that we need to look at models and concepts that explain the generation of errors in anesthesia. The model often used to describe the performance of an anesthetist is that of an airplane pilot; both are highly trained and skilled individuals who work in complex environments (Allnutt 1987). This model has both advocates (Allnutt 1987; Gaba et al. 2003; Helmreich 2000; Howard et al. 1992) and detractors (Auerbach et al. 2001; Klemola 2000; Norros & Klemola 1999). At issue is the environment of the operating room, which, by virtue of the patient, is more complex than an airplane’s cockpit (Helmreich 2000). Furthermore, in aviation, pilot checklists cover all flight and control systems and are viewed as a fundamental underpinning of aircraft safety. In contrast, anesthesia safety checklists, although very important, are incomplete: they are oriented primarily toward the anesthesia machine and ventilator, not cardiovascular monitors, airway equipment, catheters and intravenous lines, infusion pumps, medications, or warming devices (Auerbach et al. 2001). Another factor limiting the applicability of the aviation model to anesthesia is that, as a general rule, teaching does not occur in the cockpit, whereas it is prevalent in the operating room (Thomas et al. 2004).

Regardless of the pros and cons of the aviation model, the important point is that the operating room is a complex work environment, made more so by the presence of the patient. Thus, by definition, a veterinary practice, be it small or large, is a complex system. But what other features are the hallmarks of complex systems, and how do errors occur in them?