3
Introduction to Patient Safety


Matt McMillan1 and Daniel S.J. Pang2


1 The Ralph Veterinary Referral Centre, Marlow, Buckinghamshire, UK


2 Faculty of Veterinary Medicine, University of Calgary, Calgary, Alberta, Canada and


Faculty of Veterinary Medicine, Université de Montréal, Saint-Hyacinthe, Québec, Canada


Introduction


Not all harm to patients comes from illness and injury; some comes from healthcare itself. The World Health Organization (WHO) defines patient safety as “the reduction of risk of unnecessary harm associated with healthcare to an acceptable minimum” [1]. In this definition, “harm” encompasses a range of undesirable outcomes for the patient, including disease, injury, pain, suffering, disability, and death [2]. “Unnecessary” implies that this harm was preventable, suggesting that it was iatrogenic in origin, caused by error, accident, or neglect rather than being “necessary” to treat the disease (e.g., trauma caused by surgery). Patient safety focuses on the prevention of error and accidental injury in healthcare.


Anesthesiologists have always been leaders in patient safety, perhaps because anesthesia itself has little to no therapeutic value and because the consequences of an error in anesthetic care are immediate [3]. These factors tend to make anesthetists risk averse and safety conscious. This chapter introduces patient safety science with special emphasis on safety during the perianesthetic period.


Nomenclature and terminology


The terminology around patient safety can be imprecise, compromising understanding of the subject and the literature [1]. Although there is no universally recognized nomenclature for safety in veterinary medicine, the WHO’s World Alliance for Patient Safety developed a set of key terms which we have adapted and expanded for veterinary purposes and will use throughout this chapter for clarity and consistency (Box 3.1).


The risk of harm from healthcare


The most cited estimate of deaths from human healthcare‐associated harm comes from the Institute of Medicine’s 2000 report “To Err is Human: Building a Safer Health System,” which describes an incidence of 44,000–98,000 deaths annually in the United States (US) [4]. This is now considered a gross underestimation, with more recent studies estimating that between 200,000 and 400,000 patients a year die from preventable healthcare‐associated harm in the US alone [5,6]. This makes healthcare the third leading cause of death in the US, behind heart disease and cancer [5]. The risk of serious non‐fatal healthcare‐associated harm has been estimated to be between 10‐ and 20‐fold greater [6].


In veterinary medicine, information on the incidence of healthcare‐associated harm is limited. There are some clues in studies investigating perianesthetic fatality. In 1990, Clarke and Hall suggested that 70% of anesthetic deaths had an element of error and that many animals died while not under “close supervision” [7]. In a large prospective study of small animal anesthetics in 2008, Brodbelt et al. reported that a significant proportion of veterinary patients did not receive basic monitoring, and the risk of death was four to five times greater when an animal’s pulses were not monitored [8]. Many of the reported deaths were in healthy animals, suggesting that they could have been preventable through the implementation of simple monitoring techniques.


Looking beyond fatality, safety incidents in veterinary anesthesia occur regularly; studies report rates of 3.7% and 5.1% in series of 4140 and 3379 anesthetics, respectively [9,10]. Common safety incidents reported by Hofmeister et al. included medication error (wrong drug, wrong dose, wrong route, and inaccurate labeling), adjustable pressure‐limiting (APL) (i.e., pop‐off) valves left closed, airway complications (including esophageal intubation), and intravenous catheter failures [9]. One case of a wrong‐site locoregional anesthetic technique and another of wrong‐site arterial catheter placement were also reported. McMillan et al. cited medication error, equipment failure, oxygen pipeline failure, and APL valve mismanagement as the most commonly reported safety incidents [10]. A minority of these incidents (6.3%) were near misses, 20.7% were no‐harm incidents, and the remainder (73.0%) were harmful incidents.


In a wider view of veterinary healthcare, Wallis et al. analyzed safety incidents reported over a 3‐year period in three veterinary hospitals [11]. Safety incidents occurred in 0.4–0.8% of hospital visits, with approximately 54% being medication errors. Medication errors involved “wrong dose” (57.8%), “wrong drug” (18%), “wrong time” (12.2%), “wrong route” (5.8%), and “wrong patient” (3.8%), although the exact proportions varied between hospitals. Communication errors made up 30% of incidents and involved a mixture of source failures (missing or incomplete information), transmission failures (illegible handwriting, an inappropriate manner of transmitting information), and receiver failures (information forgotten or incorrectly interpreted). Other insights can be identified from examining professional litigation cases and closed claims analysis. Oxtoby et al. retrospectively reviewed records of claims made to a veterinary indemnity insurer in the United Kingdom [12]. Identifiable errors were present in 45% of records, with the leading causes being cognitive limitation (55%), lack of technical skill or knowledge (15%), and communication (5%). Each of these studies likely reflects only a small fraction of the actual incidence of veterinary patient safety incidents and healthcare‐associated harm [13].


Why things go wrong: Human error and the system


Discussion of patient safety must include the thorny issues of human error and human performance. Human error can be identified in incident investigations in all walks of life, including healthcare. It would be simple to assume that people are the problem that needs to be fixed. However, human error cannot be eliminated [4].


Modern views of safety consider that human error is a consequence of the conditions in which an accident occurred, not a cause of the incident. It is inevitable that accident investigations will find fault with human performance since humans are at the sharp end of processes in most industries, that is, people perform or control the task(s), make the decisions, and form the last line of defense. However, human performance is influenced by working conditions and is, therefore, systematically linked to the features of the tasks, tools, and environment of their work. Describing human error helps us to understand how and why it occurs. The psychologist James Reason developed a taxonomy that remains the foundation for describing types of human error today (Fig. 3.1) [14].

Figure content: unsafe acts are divided into unintended acts, comprising slips (attentional failures), lapses (memory failures), and mistakes (rule‐based or knowledge‐based), and intended acts, namely violations (routine, exceptional, or sabotage).

Figure 3.1 Reason’s taxonomy of human error (1991).


Source: Adapted from Reason [14]. See Box 3.1 for current definitions.
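For readers who find a structured representation helpful, the sketch below encodes the taxonomy of Fig. 3.1 as a nested mapping; the dictionary layout and the classify helper are illustrative choices for this chapter and are not part of Reason’s original work.

```python
# A minimal sketch of Reason's taxonomy of human error (Fig. 3.1),
# represented as a nested dictionary for reference purposes.
REASON_TAXONOMY = {
    "unsafe_acts": {
        "unintended": {
            "slip": "attentional failure",
            "lapse": "memory failure",
            "mistake": ["rule-based", "knowledge-based"],
        },
        "intended": {
            "violation": ["routine", "exceptional", "sabotage"],
        },
    },
}

def classify(error_type: str) -> str:
    """Return whether an error type is an unintended or intended unsafe act."""
    for intent, basic_types in REASON_TAXONOMY["unsafe_acts"].items():
        if error_type in basic_types:
            return intent
    raise ValueError(f"Unknown error type: {error_type}")

print(classify("lapse"))      # unintended
print(classify("violation"))  # intended
```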


The “process” of anesthesia


Understanding human error requires us to learn about system vulnerabilities, resource constraints, pressures, and contradictions, and to understand how processes fail or succeed. Several elements make healthcare, and anesthesia in particular, susceptible to error and the people working within these areas prone to committing it. Anesthesia requires the administration of rapidly acting drugs with significant physiological effects, the monitoring of patients in a vulnerable physiological state using advanced electronics, and the performance of highly technical procedures in a dynamic, unpredictable environment characterized by intense time pressure and high stakes. This situation may be further compounded by a concurrent teaching requirement. Gaba [15] and Reason [16] have discussed these factors at length, and their findings are summarized in Box 3.2.


Models for the development of safety incidents in high‐risk, complex industrial activities such as nuclear power have been adapted to model anesthetic processes. “Normal Accident Theory,” as described by Perrow in 1984, outlines two system attributes that commonly contribute to accidents: (1) the complexity of interactions between system components and (2) the tightness of coupling between these components [17].


A task or process is said to have complex interactions if there are many alternative and interrelated subtasks at any point in its completion. The more complex a system’s interactions, the more likely it is that something will go wrong as the consequence of interactions cannot always be predicted. Gaba et al. subsequently expanded on Perrow’s theory to outline how three different types of complexity (intrinsic, uncertainty, and proliferation) contribute to incidents in anesthesia [18].


Intrinsic complexity occurs due to the interconnected action of a multitude of individual agents within a system or process. The body can be considered an intrinsically complex system; huge numbers of cells and organs interact in a multitude of intricate ways, using a vast array of signals. In such a system, altering one component can have a knock‐on effect on other components, with dynamic and unpredictable consequences.


Uncertainty complexity occurs when cause and effect relationships are not always clear and are difficult to predict. Anesthesia is easily achieved by drug administration; however, the pharmacological, physiological, and pathophysiological interactions that occur are often poorly understood. In addition, devices used for measurement and monitoring the effect of these interactions can be imprecise and inaccurate, often have low signal‐to‐noise ratios, and are prone to error. Prediction of drug effects is based upon our knowledge of what a drug does in the general population, but the effect on a given individual cannot be predicted with certainty. Nor are we able to measure all affected parameters or predict all potential consequences; we only ever have a partial picture of what is happening.


Finally, if a process involves many components and tasks, this can cause proliferation complexity. Although a process and individual steps within it may be simple, the components and tasks may be interconnected in a complex fashion. For example, they may have to be performed in a precise order or at specific times for the process to succeed. Such proliferation is what can lead to lapses such as failing to open an APL valve in a breathing system following a machine leak check. The task is simple, but, because it is one task of many which need to be performed before anesthesia, it is easily forgotten. As more and more components are added to a system, the chance of one of them being missed, or failing, increases. Indeed, anesthetists typically add proliferation complexity to a case to help combat uncertainty by using multiple electronic monitoring devices, each of which carries a risk of error, imprecision, and failure [18].


Coupling refers to the way components in a system are linked. In tightly coupled systems, the change in state or function of one component directly affects other components within a short period of time. Tightly coupled systems result in more accidents because even minor errors can become problematic before they can be corrected. In loosely coupled systems, there is redundancy or temporal lag built into the system, allowing components to function normally despite changes in other components. The interaction of numerous homeostatic mechanisms means that physiology is inherently loosely coupled. Body systems have intrinsic buffers, which allow normal functioning in a variety of conditions; for example, renal blood flow and glomerular filtration rate are kept stable through a range of perfusion pressures. However, the redundancy and buffer mechanisms responsible for this loose coupling are hindered by the anesthetic process, making components more tightly coupled. This has the effect of transferring responsibility for monitoring and maintaining an adequate state of health to the anesthetist, aided by technology and pharmaceuticals.


Reason’s Swiss cheese model


The psychologist James Reason considers complex systems to contain fundamental elements that must work together harmoniously if efficient and safe operations are to occur [14]. This requires certain preconditions to be met, such as well‐maintained and functional equipment, and individuals with appropriate training, skills, and experience. Hazards are prevented from causing harm by a series of barriers, safeguards, and defenses. Some rely on people, while others are engineered into the system (e.g., alarms). In an ideal world, each layer would be intact; however, in reality, each has unintended weaknesses. These failings degrade the integrity of the system leading to vulnerabilities within different layers like holes in a slice of Swiss cheese. Unlike in Swiss cheese, these holes are continually opening, shutting, and shifting their location as conditions in the system change. The presence of holes in any single “slice” does not normally cause an incident or bad outcome. However, when the holes within these layers momentarily align, they allow an error to proceed through the layers of a system to reach the patient (Fig. 3.2).


Human factors


Human Factors (HF), sometimes referred to as “ergonomics,” is a scientific discipline concerned with understanding interactions between people and the other elements of a system. Applying HF approaches in healthcare can enhance the way it is delivered and received. HF views people as a vital component of any system and holds that their abilities and limitations must be considered when attempting to improve system performance. The basic principle of HF is to analyze tasks, systems, or processes to understand the limitations put upon the humans within them. In essence, this can be achieved through asking: Can this team/person reliably and safely perform these tasks, with this training and information, using this equipment, in this environment, with these constraints and pressures, to the required standard? Through understanding human limitations and how humans process information, workplaces, tasks, and equipment can be designed and engineered to allow for variability in human performance, making it easier for frontline workers to do their job safely and effectively.

Figure content: successive defensive layers (organizational influences, unsafe supervision, preconditions for unsafe acts, and unsafe acts) are depicted as slices of Swiss cheese whose aligned holes allow an incident to occur.

Figure 3.2 Reason’s Swiss cheese model of incident causation. Additional terms in this figure are described in the Human Factors analysis section (Fig. 3.4 and Table 3.6).


Source: Adapted from Pang [19].


Non‐technical skills


One area that the HF approach has highlighted is the importance of non‐technical skills. These are skills individuals use to perform their work that lie outside of the traditional medical competencies of knowledge and technical skills. Non‐technical skills are cognitive, social, and personal management skills necessary for an individual to perform a role, task, or job safely and effectively [20]. They include situation awareness, decision‐making, task management, teamwork, leadership, communication, and the management of stress and fatigue.


Until recently, little attention has been given to these competencies in medical professions. One of the early proponents of non‐technical skills in anesthesiology was Howard et al., who developed an “Anesthesia Crisis Resource Management (ACRM)” training course with the aim of introducing anesthesiologists to principles of dynamic decision‐making, human performance, and the development of countermeasures aimed at combating and reducing error [21]. More recently, the “Anaesthetists’ Non‐Technical Skills (ANTS)” behavioral marker system was developed following collaboration between anesthesiologists and psychologists [20]. Task analysis was used to define the critical non‐technical skills required for a safely functioning anesthetist, and an assessment scale was developed for ANTS training and observations in operating rooms or simulation settings (Table 3.1). Such training and observation rarely occur in the veterinary sector, but it is likely that non‐technical skills are similarly important for veterinary anesthesia.


Organizational culture


Organizational culture is the collective beliefs, perceptions, and values shared between individuals in a workplace and how these are manifested in their work. Organizational culture sets the tone of the workplace by affecting the conditions, pressures, constraints, and expectations put upon individuals and the tasks they perform. Informally, culture represents “how things are done around here.” Safety culture comprises those aspects of organizational culture that act to increase or decrease risk. Safety culture can be thought of as a “leading indicator” in the assessment of patient safety, as opposed to a “lagging indicator” (e.g., morbidity and mortality). Several organizational culture types are widely discussed in the literature and warrant examination.


Table 3.1 Anesthetist non‐technical skills.


Source: Adapted from Flin et al. [20].

Task management: Planning and preparing; Prioritizing; Providing and maintaining standards; Identifying and utilizing resources
Team working: Coordinating tasks with team members; Exchanging information; Using authority and assertiveness; Assessing capabilities; Supporting others
Situation awareness: Gathering information; Recognizing and understanding; Anticipating
Decision‐making: Identifying options; Balancing risks and selecting options; Re‐evaluating

Blame and no‐blame cultures


A blame culture is one where incidents are blamed on individuals making errors despite those individuals having little or no control over the conditions in which the error occurred [22]. It leads to norms and attitudes characterized by an unwillingness to take risks or accept responsibility for mistakes because of a fear of criticism or repercussions. Blame culture cultivates distrust and fear, with people tending to blame each other to avoid being blamed themselves. This can result in few new ideas and a lack of personal initiative because people do not want to risk being wrong. Safety incidents tend to stay unreported, and investigations tend to be brief, finding culpability in frontline workers. Such cultures tend to evolve in hierarchical, rule‐ and compliance‐based systems in which the work is nonetheless highly variable, such as healthcare.


A no‐blame culture is in essence the opposite: a culture in which individuals are not held accountable for their actions. This should encourage people to report errors and provide an environment where innovation is encouraged [23]. However, a no‐blame culture is neither feasible nor desirable. Most people desire some level of accountability when a safety incident occurs, and individuals should take ownership of their decisions and actions and be accountable for being part of any solution [23].


Just culture


A just culture is one in which individuals are not punished for human errors if their actions, decisions, or omissions were appropriate to their experience and training, but where negligence, willful violations, and destructive acts are not tolerated [24]. Unlike a no‐blame culture, there is accountability; individuals are accountable for reporting incidents and organizations are accountable for implementing appropriate corrective actions to improve the system and reduce the risk of recurrence of incidents [23].


If staff members perceive that their reports are treated fairly and lead to positive change, and that people who willfully violate safety rules and take unnecessary risks are held accountable, the willingness to report will increase. This encourages reporting and investigation of incidents and improves safety.


Learning culture


Learning culture is concerned with the sustainability of learning from failure through the reporting and analysis of errors and incidents and the implementation of systems level interventions. It requires an atmosphere of psychological safety, that is, a supportive work environment in which team members believe that they can question existing practices, express their concerns, or dissent, and admit mistakes without suffering ridicule or punishment [25]. It requires open communication between all levels of staff within an organization, a flattening of hierarchy, and transparency in incident management.


Senge [26] has described five disciplines which together make a learning culture: (1) self‐mastery, (2) shared mental models, (3) shared vision, (4) team learning, and (5) systems thinking [27]. Self‐mastery involves realistic thinking about one’s abilities. Shared mental models require team members to have a common understanding of the task being performed and of the involved teamwork, whereas a shared vision means all team members are working toward the same goal. Team learning represents a situation where all team members are continuously learning from each other and their successes and failures. Systems thinking, on the other hand, is an attempt to understand the way the system works and how this influences the behavior and performance of individuals working within it.


Safety I versus Safety II


Modern views of safety consider that people bring the innovation, creativity, flexibility, and resilience required for processes to succeed in systems with huge variation [28]. People can adjust what they do to match the conditions in which they work [28]. It can be argued that everyday performance variability provides the adaptations that are needed to respond to varying conditions and is the reason why systems are generally successful. Humans can consequently be seen as a resource necessary for system flexibility and resilience rather than a cause of the problem. Safety I can be viewed as the approach to safety that concentrates on avoiding things going wrong. Safety II focuses on how things go right under varying conditions, exploring all possible outcomes to understand how flexibility in a system allows success. This concept is still in its infancy within medicine; however, it may give a better overall view of performance within the healthcare system [29].


Assessing organizational safety culture


Assessing safety culture can reveal potential issues in communication, teamwork, resources, and management strategies, delineating areas for targeted improvement efforts. Several validated measurement scales have been developed to assess safety culture in human healthcare organizations. One of these, the “Safety Attitudes Questionnaire” [30], has been modified for use in veterinary medicine. The resultant “Nottingham Veterinary Safety Culture Survey” was developed for use in veterinary practices within the United Kingdom and was subsequently further adapted for use in the US [31,32].


Data gathering techniques


Improving safety relies on the accurate identification of risks to safety. Data can be gathered using numerous techniques with the most common being incident reporting systems, interviews, and morbidity and mortality conferences (M&MCs). To generate the best possible account of the incident, it is common to use more than one approach.


Incident reporting systems


One of the main sources of information about patient safety has come from the systematic collection and analysis of incident reports. Moreover, incident reporting is central to the development of a learning culture, wherein errors are discussed, and systems analysis is applied to improve outcomes. There is also considerable value in reporting near misses and no harm incidents since they are harbingers of potential patient harm. To capture a range of different perspectives and build a more complete picture of an incident, information should be collected from all involved. Often, it is more junior team members (e.g., interns), animal health technicians, and support staff who are most exposed to weaknesses and flaws within a system.


Early reporting is important. As time passes after an incident, memories fade, bias creeps in, and an incident may never be reported as caregivers move on to other tasks. In the view of the authors, reporting should be completed within 12–48 h of an incident occurring, with harmful incidents reported toward the shorter end of this range (by the end of the workday is a good rule of thumb).


Information collected on incidents should focus on creating an account of events, describing the “5 Ws”: “Who” (was involved), “What” (happened), “When” (the incident happened), “Where” (it happened), and “Why” (and how it happened).
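As a minimal sketch of how the “5 Ws” might be captured as structured fields in an electronic reporting form, the example below uses a hypothetical IncidentReport structure; the field names, the free‐text narrative field, and the harm flag are illustrative assumptions rather than a prescribed schema.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import List

@dataclass
class IncidentReport:
    """Illustrative structure for a voluntary incident report based on the '5 Ws'."""
    who: List[str]               # roles involved (e.g., "anesthetist", "technician")
    what: str                    # brief description of what happened
    when: datetime               # date and time of the incident
    where: str                   # location (e.g., "induction area")
    why: str                     # reporter's view of why and how it happened
    narrative: str = ""          # free-text account in the reporter's own words
    harm_occurred: bool = False  # distinguishes harmful incidents from near misses

# Hypothetical example: an APL valve found closed after the machine check.
report = IncidentReport(
    who=["anesthetist"],
    what="APL valve found closed after breathing system connection",
    when=datetime(2024, 5, 1, 9, 30),
    where="operating room 2",
    why="valve not reopened after the leak check; several concurrent tasks",
    narrative="Noticed rising airway pressure shortly after connecting the patient...",
)
print(report.what)
```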


Existing reporting systems in human medicine are effectively voluntary, with perhaps the exception of harmful incidents, for which there is a stronger expectation of reporting. While voluntary reporting systems have their drawbacks, they can be highly effective when properly designed and administered. Importantly, the success of any system is predicated upon its acceptance and use by caregivers. Several concepts in voluntary reporting system design facilitate adoption and usefulness for incident analysis: confidentiality, ease of use, accessibility, independent analysis, and release of findings [33,34].


Confidentiality means individuals should be able to report incidents without the risk of personal or professional consequences [35]. Confidentiality should not be confused with anonymity. While anonymity confers further protection to those reporting, it removes the ability to follow up a report with the individual involved. To fully understand a reported incident, particularly when attempting to collate multiple reports of the same incident or simply to clarify circumstances, some degree of follow‐up is often necessary. Also, under the protection of anonymity, there may be the tendency to use reporting systems as a means of personal attack or to vent frustrations with aspects of the system (e.g., supervisors and administration) drawing focus away from a more complete report of the incident.


Reporting systems must be easy to use, with clear language, checkboxes for some data entry, automatic field population, minimal mandatory fields, and a section for a narrative description of events. The latter is important in gathering an account of the incident [36]. The use of open questions encourages descriptions from the reporter’s perspective.


Reporting systems must be easily accessible to all healthcare workers. Online reporting systems should be easy to find and preferably accessed through a “single click,” from computers in and out of the hospital network. There remains an argument for offering a paper‐based system in combination with a secure drop‐off box for staff members that do not have private access to computers.


To reduce bias, reports should be analyzed by independent investigators not involved in the incident. Releasing the findings of investigations, as well as a general report of types and outcomes of incidents reported, “closes the loop,” building faith in the system. This demonstrates that reports are taken seriously and acted upon and allows caregivers to be involved in any changes introduced by providing a period for comment. When an incident reveals significant system weaknesses that are likely to cause further incidents, it is critical that this information can be shared quickly and widely using multiple means (e.g., team meetings, email alerts, website notifications, and notice boards).


Voluntary reporting systems are limited by the quality of material collected and the extent to which they are representative of all incidents. Material collected is largely determined by the design of the reporting system. It is widely recognized that voluntary reports underestimate the true incidence of safety incidents [13]. However, uptake is likely to improve if users understand the role and importance of submitting reports and the types of incidents to report, if the reporting system is easy to access and use, and if reports can be made without fear of repercussion. Education can be provided (e.g., as part of orientation for new employees) and reinforced within the system (e.g., regularly releasing findings emphasizes the value of reporting). There are a huge number of incident reporting systems in medicine, one of the first being the “Australian Incident Monitoring System (AIMS)” [36]. There are far fewer incident reporting systems in veterinary medicine, most being developed using digital forms and cloud‐based software [11,37].


Interviews


A senior member of the clinical team who was not involved with the incident may interview individuals involved in an incident. The primary advantage to one‐on‐one interviews is the opportunity to elicit responses without the potential for pressure from discussing an incident in a group environment. However, this advantage is countered by several important disadvantages including the inherent power dynamic of the interview process and the potential to create significant bias through the format and scope of the questioning. The use of structured or semi‐structured interview techniques, such as critical incident technique, and the use of open, non‐accusatory language can help reduce this bias.


Morbidity and mortality conferences


Morbidity and mortality conferences (M&MCs) can promote learning from incidents and drive improvements in patient safety [38]. However, despite being in existence for well over 50 years, it is unclear to what extent they take place in veterinary medicine [39]. Conceptually, M&MCs can serve as a forum for collaborative review and investigation of incidents without fear of negative personal or professional consequences. When done well, M&MCs represent an opportunity to improve patient safety and maximize learning from an incident through open reflection and discussion [40–44].


In human medicine, M&MCs have successfully driven improved outcomes in patient care and management, including improving safety culture and care quality and reducing mortality and malpractice claims [45]. For M&MCs to fulfill their potential, key considerations are case selection, duration and frequency, roles of moderator and presenter, presentation format, incident analysis technique, and outcome and follow‐up.


All cases of mortality resulting from error or potential error should be reviewed through an M&MC. In larger clinics/hospitals, where the number of cases presenting exceeds the capacity to review cases in a timely manner, it may be necessary to screen and select cases for M&MCs. This can be based on the number of systemic factors implicated, the number of future patients that may benefit, and educational value. Screening can be performed as part of the voluntary reporting system process.


Duration of M&MCs varies widely, from 20 to over 60 min, probably reflecting time available in many instances, as well as case complexity [40–42]. Similarly, the frequency of M&MCs is variable though once monthly is commonly reported in the literature [38]. There is a case to be made for having a regular schedule to emphasize that M&MCs are a normal and accepted part of clinical governance, rather than special events reflecting personal failings. Cases selected for M&MCs should be presented as soon as possible as a timely acknowledgment of the incident and to maximize the likelihood of collecting all pertinent information.


Moderators set the tone of M&MCs and must be familiar with the M&MC format, have a good understanding of analysis techniques, have sufficient experience and expertise to guide the presenter and audience, and be respected by the attendees [40,46,47]. Cases should be presented by someone directly involved as they are best placed to present the events and answer questions. The presenter can be a senior or junior team member. It is helpful to have senior members intermittently present to show their investment in the process and to demonstrate to junior/new team members the expected format and standard. Involvement of junior members as presenters is invaluable as an educational tool for critical evaluation of case management and in maintaining a just and learning culture. Audiences for M&MCs should reflect the personnel of the clinic (i.e., all members of staff should be invited). Attendance by senior team members shows support for the process and helps create a positive, productive discussion [39]. A multidisciplinary audience can enrich the discussion by bringing different perspectives and disseminate lessons learned more widely [39].


Standardized presentation formats help ensure that information is presented in a systematic, organized way, minimizing the risk of bias [39,41,42,48]. One such model is the “Situation, Background, Assessment, Recommendation (SBAR)” format (Table 3.2), a structured method for efficiently transferring information [41,42]. This method has been effectively used when information is being passed between personnel who occupy different positions in a hierarchy (e.g., senior clinician and intern) [49,50].


Table 3.2 Situation, Background, Assessment, Recommendations (SBAR) format for morbidity and mortality conferences (M&MCs).


Source: Adapted from Pang et al. [38].

Situation (brief statement of problem): diagnosis at admission, statement of procedure, and the patient safety incident
Background (clinical information pertinent to the adverse event): history, indication for procedure, diagnostic studies, procedural details, timeline of care, and description of the incident (recognition, management, outcome)
Assessment and analysis (evaluation of the adverse event: what and why): what happened (sequence of events) and why it happened (analysis using the preferred methodology*)
Review of the literature (evidence‐based practice): relevant literature
Recommendations (prevention of recurrence): identify how the event could have been prevented or better managed; identify learning outcomes and recommendations

* See text for different analysis methods.
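As a minimal sketch, the SBAR structure of Table 3.2 could be captured as a simple prompt template for presenters; the Python layout and the sbar_outline helper are illustrative assumptions, not an established M&MC tool.

```python
# Illustrative SBAR template for an M&MC case presentation (see Table 3.2).
SBAR_SECTIONS = {
    "Situation": "brief statement of the problem: diagnosis at admission, procedure, "
                 "and the patient safety incident",
    "Background": "history, indication for the procedure, diagnostics, procedural details, "
                  "timeline of care, and description of the incident",
    "Assessment and analysis": "what happened (sequence of events) and why "
                               "(analysis using the preferred methodology)",
    "Review of the literature": "relevant evidence-based literature",
    "Recommendations": "how the event could have been prevented or better managed; "
                       "learning outcomes and recommendations",
}

def sbar_outline() -> str:
    """Return a blank SBAR outline that a presenter can fill in."""
    return "\n".join(f"{section}:\n  ({prompt})\n"
                     for section, prompt in SBAR_SECTIONS.items())

print(sbar_outline())
```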


Analysis techniques


Analysis of patient safety incidents has three goals: to explain what happened, identify the roles and contributions of associated factors, and identify means to prevent similar incidents. Historically, the focus of investigating a safety incident began and ended with what happened, with limited consideration of why it happened. This rarely brings effective change into the system and invariably results in the person(s) closest to the event shouldering the responsibility and blame. Modern investigation analysis techniques have moved away from a person‐based (“blame and shame”) approach to a structured, system‐based (Human Factors) approach. While there are numerous analytical frameworks available, those used in human healthcare are largely based on the work of the psychologist, James Reason [14].


Root cause analysis


Root cause analysis (RCA) is a generic term used to describe a range of techniques which aim to identify problems and then work toward establishing the problem’s root cause(s). A common misconception of RCA springs from the word “cause” appearing as singular, leading to the belief that a problem results from a single cause, whereas multiple causes are almost always involved, particularly in healthcare [51]. The simplest RCA method is the “five whys” technique, which involves asking “why?” five times, with each question following on from the previous answer to identify root causes of a problem. However, this technique promotes linear thinking, in that other contributing factors may be overlooked. Consequently, such techniques have now been replaced with more structured techniques, which ensure that the entire system is considered within the analysis.
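To make the “five whys” technique concrete, the sketch below walks the closed APL valve example used elsewhere in this chapter through a linear chain of questions; the specific answers are hypothetical, and, as noted above, a real analysis would usually need to consider multiple branches rather than a single chain.

```python
# Illustrative "five whys" chain for a closed APL valve incident.
# Each answer becomes the subject of the next "why?" question.
five_whys = [
    ("Why did the patient develop high airway pressure?",
     "The APL valve was closed when the breathing system was connected."),
    ("Why was the APL valve closed?",
     "It was not reopened after the machine leak check."),
    ("Why was it not reopened?",
     "The anesthetist was interrupted between the leak check and induction."),
    ("Why did the interruption lead to the omission?",
     "There was no checklist step prompting confirmation that the valve was open."),
    ("Why was there no checklist step?",
     "The pre-induction checklist had not been updated after a previous similar incident."),
]

for question, answer in five_whys:
    print(f"{question}\n  -> {answer}")
```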


Human Factors analysis techniques


Modern incident analysis techniques involve investigating the role of the entire system in the evolution of an incident. The “London Protocol” is a model of healthcare incident evolution based upon Reason’s accident model and Human Factors approaches (Fig. 3.3) [52,53]. Reason’s “unsafe acts” are termed “care delivery problems” and are affected by contributory factors, which can be classified into patient, task, individual, team, work environment, and organizational/cultural levels. The framework provides a systematic and conceptually driven approach to accident investigation and to risk assessment in healthcare.


Another system commonly used for incident analysis in healthcare is the “Human Factors Analysis and Classification System (HFACS)” (Fig. 3.4). This system, based on Reason’s model and developed for the investigation of aviation accidents [54], has been adapted to the healthcare setting [51,55]. Used correctly, this approach is believed to have the “potential to identify actionable systemic causes of error, focus specific performance improvement efforts, and ultimately improve patient safety” [51].


A fishbone diagram can be used as a visual representation of an analysis and an accessible entry into a system‐based approach for incident investigation (Fig. 3.5) [34,38,56].
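A minimal sketch of how the branches of such a fishbone diagram might be captured for the closed APL valve example in Fig. 3.5 is shown below; the categories follow the figure, but the individual factors listed are hypothetical illustrations rather than findings from a real investigation.

```python
# Illustrative fishbone-style grouping for a closed APL valve incident,
# using the branch categories shown in Fig. 3.5. The example factors are
# hypothetical and would normally come from the incident investigation.
fishbone = {
    "Personnel": ["interrupted between machine check and induction"],
    "Procedure": ["leak check procedure does not prompt valve reopening"],
    "Equipment": ["valve position difficult to see at a glance"],
    "Environment": ["frequent distractions in the induction area"],
    "Organization": ["no standardized pre-induction checklist in use"],
    "Patient": ["urgent case, limited time for preparation"],
}

print("Incident: APL valve found closed after breathing system connection")
for category, factors in fishbone.items():
    for factor in factors:
        print(f"  {category}: {factor}")
```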


Patient safety evidence in anesthesia


Cooper et al.’s study of anesthetic “mishaps” was the first to really examine the causes of anesthetic safety incidents beyond the patient and inherent risk of anesthetic drugs [57]. Through structured interviews, it was established that human error was involved in 82% of the 359 investigated incidents, but many other associated factors were also identified (Table 3.3). Interestingly, many of the incidents were considered “representative of the kinds of error that residents are considered prone to commit in the training process” and “that most of the errors and associated outcomes could be averted by a more structured approach to preparing residents for the environments into which they are often suddenly immersed.”

Figure content: latent errors at the level of organizational culture and management create error‐ and violation‐producing conditions (work environment, team, individual, task/technology, and patient factors), which lead to care delivery problems (unsafe acts): active failures comprising errors (slips, lapses, and mistakes) and violations, culminating in an accident.

Figure 3.3 Vincent’s model of accident causation in healthcare.


Source: Adapted from Vincent et al. [52].

Figure content: four tiers of factors. Organizational influences (resource management, organizational climate, organizational process); unsafe supervision (inadequate supervision, planned inappropriate operations, failure to correct a known problem, supervisory violations); preconditions for unsafe acts (environmental factors: physical and technological environment; conditions of operators: adverse mental states, adverse physiological states, physical/mental limitations; personnel factors: crew resource management, personal readiness); and unsafe acts (errors: skill‐based, decision, and perceptual; violations: routine and exceptional).

Figure 3.4 Human Factors Analysis and Classification System components.


Source: Adapted from Pang [19].

Figure content: a fishbone diagram with the incident at the head and contributing factor categories along the branches: personnel, procedure, and equipment above; environment, organization, and patient below.

Figure 3.5 Example of a fishbone diagram made during an investigation of a closed APL valve.


Source: Pang et al. [56], with permission of John Wiley & Sons.


Table 3.3 Factors involved in 359 anesthetic “mishaps” as identified through the critical incident technique.


Source: Adapted from Cooper et al. [57].

Contributing factor Relative frequency
Inadequate total experience 21.4%
Inadequate familiarity with equipment/device 12.5%
Poor communication with team, lab, etc. 7.5%
Haste 7.2%
Inattention/carelessness 7.2%
Fatigue 6.7%
Excessive dependency on other personnel 6.7%
Failure to perform a normal check 6.1%
Training or experience – other factors 6.1%
Supervisor not present enough 5.0%
Environment or colleagues – other factors 5.0%
Visual field restricted 4.7%
Mental or physical – other factors 4.5%
Inadequate familiarity with surgical procedure 3.9%
Distraction 3.6%
Poor labeling of controls, drugs, etc. 3.3%
Supervision – other factors 3.3%
Situation precluded normal precautions 2.8%
Inadequate familiarity with anesthetic technique 2.8%
Teaching activity under way 2.5%
Apprehension 2.2%
Emergency case 1.7%
Demanding or difficult case 1.7%
Boredom 1.4%
Nature of activity/other factors 1.4%
Insufficient preparation 0.8%
Slow procedure 0.8%
Other 0.8%

The first “Confidential Enquiry into Perioperative Deaths” reviewed just over half a million anesthetics and reported 4034 perioperative deaths within 30 days of surgery, 410 of which were considered associated with anesthetic management [58]. The factors identified by the investigators in these cases are shown in Table 3.4. It is noteworthy that lack of knowledge (15.1%) was identified less often than failure to apply knowledge (75.1%), and that failure of organization (24.9%), such as inadequate staffing levels, was the third most common factor identified.


Table 3.4 Factors identified as being associated with anesthetic deaths in the Confidential Enquiry into Perioperative Deaths.


Source: Adapted from Buck et al. [58].

Contributing factor Relative frequency
Failure to apply knowledge 75.1%
Lack of care 30.0%
Failure of organization 24.9%
Lack of experience 23.7%
Lack of knowledge 15.1%
Drug effect 9.5%
Failure of equipment 1.7%
Other 2.7%

Runciman et al. published analysis of the first 2000 incidents to be reported through the “Australian Incident Monitoring System (AIMS)” (Table 3.5) [59]. Factors associated with the wider system were identified as contributing to 26% of incidents, and this increased to 81% if human behavior was included in the system. Interestingly, the system was considered a mitigating factor in 56% of incidents, and systems‐based corrective strategies were suggested as solutions in 65%.


Neuhaus et al. applied the HFACS to 50 anesthetic incidents reported to a single center’s incident reporting system. Investigations revealed 81 unsafe acts, 113 preconditions for unsafe acts, 39 instances of unsafe leadership, and 22 organizational influences [54]. Errors were identified 64 times and most commonly these were decision‐based errors, followed by skills‐based errors and, less commonly, perceptual‐based errors. There were also 17 violations identified, most of which were considered as exceptional. A mean of 5.1 factors from the HFACS were identified as contributing to each incident (Table 3.6).
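The reported mean follows directly from the counts above:

\[
\frac{81 + 113 + 39 + 22}{50} = \frac{255}{50} = 5.1 \ \text{contributing factors per incident.}
\]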


Data from veterinary anesthesia literature are limited. From 163 voluntary reported safety incidents analyzed using the London Protocol, individual factors were identified in 123 (70.7%), team factors in 108 (62.1%), organizational and management factors in 94 (54.0%), task and technology factors in 80 (46.0%), work environmental factors in 53 (30.5%), and animal and owner factors in 36 (20.7%) incidents [10]. Importantly, factors from team, work environmental, and organizational and management categories were identified concurrently in 89.4% of incidents where individual factors were involved. The 14 most commonly identified contributing factors are reported in Table 3.7.


Table 3.5 Contributing factors identified in the first 2000 incidents reported to the Australian Incident Monitoring System (AIMS).


Source: Adapted from Runciman et al. [59].

Contributing factor Relative frequency
Error of judgment 16%
Failure to check equipment 13%
Fault of technique 13%
Other factors 13%
Other equipment problem 13%
Inattention 12%
Haste 12%
Inexperience 11%
Communication problem 9%
Inadequate pre‐operative assessment 7%
Monitor problem 6%
Inadequate pre‐operative preparation 4%
Unfamiliar environment or equipment 4%
Inadequate assistance 3%
Fatigue 3%
Drug label 3%
Other stress 2%
Lack of facility 2%
Staff change 1%
Illness 1%

Table 3.6 Contributing factors identified in 50 anesthetic incidents through the application of the Human Factors Analysis Classification System (HFACS).


Source: Adapted from Neuhaus et al. [54].

HFACS category HFACS contributing factor Relative frequency (%)
Organizational influences Resource management 6%
Organizational climate 20%
Organizational process 18%
Supervision Inadequate leadership 8%
Inappropriate planned operation 44%
Failure to correct problem 14%
Leadership violation 12%
Preconditions for unsafe acts Physical environment 22%
Technical environment 52%
Operator mental state 18%
Operator physiological state 4%
Operator chronic performance limitation 8%
Communication, coordination, planning 64%
Fitness for duty 0%
Unsafe acts Error, skills‐based 36%
Error, decision‐based 82%
Error, perceptual 10%
Violation, routine 12%
Violation, exceptional 22%

Patient safety interventions


Patient safety interventions should be aimed at specific system weaknesses identified during analysis of incidents and processes. There are, however, several general patient safety interventions, which have become universally accepted, such as cognitive aids (including checklists and algorithms), communication tools, simulation‐based training, and engineering solutions.


Table 3.7 The 14 most common contributing factors identified following the systems analysis of 163 veterinary anesthesia patient safety incidents.


Source: Adapted from McMillan and Lehnus [10].

Systems category Contributing factor Relative frequency (%)
Patient and owner Animal condition 30%
Task and technology Failure to follow SOP 51%
Equipment check 28%
Individual Decision‐making 65%
Experience 44%
Health, stress, fatigue 28%
Task management 24%
Team Supervision 48%
Written communication 38%
Verbal communication 33%
Work environmental Distraction 42%
Organizational Staffing level 53%
Poor scheduling 32%
Culture and priorities 28%

Checklists


Checklists are an organized list of action items or criteria that a user can record as present/absent as each item is considered or completed [60]. Fundamentally, the purpose of a checklist is to reduce error and improve performance [60]. A checklist achieves this by reinforcing accepted safety practices and fostering communication and teamwork, reducing the risk of error, and improving patient outcomes [6164].


Much of the literature on checklist use comes from aviation, reflecting its early acknowledgment of the relationships between humans and complex systems. In aviation, checklists are a standard part of flight protocol; not completing checklists, or completing a checklist from memory, are considered violations [65]. By comparison, the widespread use of checklists is relatively recent in medicine but has been shown to be highly effective. Two well‐known examples illustrate this: the “Keystone Intensive Care Unit (ICU) Project” and the “Surgical Safety Checklist (SSC)” [61,62]. In the “Keystone ICU Project,” catheter‐related bloodstream infections were reduced from a mean baseline rate of 7.7 infections per 1000 catheter days to 1.4 infections per 1000 catheter days at 16–18 months after the introduction of a simple five‐point checklist. These improvements were sustained over 5 years of follow‐up, with estimated savings of $2–3 billion annually, and prevention of 30,000–60,000 deaths [62,63]. The introduction of the “SSC” in a diverse patient population from eight hospitals in eight countries, representing both high‐ and low‐income settings, reduced mortality within 30 days of non‐cardiac surgery from 1.5% to 0.8% and the overall complication rate (including surgical site infections, sepsis, and pneumonia, among others) from 11.0% to 7.0% [61]. The positive outcomes from this study led to its adoption by the WHO, its use in over 120 countries, representing 90% of the world’s population, and over 230 million surgeries per year. The checklist comprises three sections: “sign in” (before induction of anesthesia), “time out” (before skin incision), and “sign out” (before patient leaves operating room), with five to seven checklist items per section [66]. The mechanisms underlying the improvements achieved were “likely multifactorial,” reflecting both systems and behavioral changes [61]. These can be described in terms of technical and adaptive (cultural) aspects. Technical aspects encompass education and evaluation, and adaptive aspects encompass engagement and execution [62,63,66,67]. All must be considered for a checklist to be successful in design and implementation [63].
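For context, the relative reduction in catheter‐related bloodstream infections in the Keystone ICU Project can be calculated directly from the rates reported above:

\[
\frac{7.7 - 1.4}{7.7} \approx 0.82,
\]

that is, a relative reduction of approximately 82% in the infection rate.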


Table 3.8 Considerations when modifying the World Health Organization’s Surgical Safety Checklist (SSC).


Source: Adapted from WHO [66].

Focused: Keep the checklist concise, focused on critical items and those not checked by other means. Limit of five to nine items per checklist section.
Brief: No more than 1 min to complete each section. Longer checklists risk impeding the flow of care.
Actionable: Each listed item must link to a specific, unambiguous action to avoid confusion as to what should be done.
Verbal: Promoting verbal interactions among team members is a core component. The checklist is likely to be less successful if limited to a written instrument.
Collaborative: Considered modifications should be collaborative, involving representatives from groups involved in its use. This contributes to promoting “ownership” of the checklist.
Tested: Any modified checklist should be tested in a limited setting before wider adoption. Testing may include a simulation, use over a single day, or use by a single team.
Integrated: The checklist is not intended to be comprehensive, and modification to integrate it into existing safety processes is encouraged provided brevity and simplicity are not sacrificed. Integration could reflect specific procedure requirements.

The WHO encourages SSC modification by local teams, provided that safety steps are not removed, and checklist modification is supported by the finding that meaningful improvements can be achieved without completing all of the original 19 checklist items [61]. Table 3.8 outlines modification considerations.


It is important to recognize that the SSC has not been universally successful. A major reason for failures when attempting to adopt checklists is underestimating the importance of adaptive components while focusing on technical components [67–70]. Of these, technical barriers are generally easier to overcome and may include providing key equipment to facilitate checklist item completion and recognizing that checklists can be modified to reflect local conditions (e.g., redundancy with other safety checks). Cultural barriers are common in human medicine, with evidence that they exist in veterinary medicine [60,67,69,70]. There can be a tendency to feel that checklists infringe on clinical judgment and autonomy, reflect weakness by not relying on memory to perform tasks, and reflect a lack of knowledge or skill [60,67]. There can also be reluctance to accept direction to adopt checklists without discussion from administrators or those less actively involved in clinical practice [67,70]. Recognizing and addressing these potential barriers was an important part of the success of the “Keystone ICU Project” and the “SSC.” Unfortunately, steps taken to overcome these barriers were not emphasized in the SSC report, which probably explains some of the challenges encountered by others in attempting its adoption [60,69]. Specific instructions on how to introduce and implement the WHO SSC have been published [66].


While it appears that versions of the SSC are in use in veterinary medicine, published reports are limited and results mixed [69,71]. Contrasting the reports of Bergström et al. [71] and Menoud et al. [69] is instructive in the context of technical and adaptive (cultural) barriers. In both cases, the WHO SSC was adapted for local use; however, the methods underlying adaptation were not described by Bergström et al., while a Delphi method was used by the anesthesia team in Menoud et al. The length of the resulting checklist was comparable in each case (23 and 24 items). In the Bergström et al. study, checklist use was restricted to surgical procedures whereas in the Menoud et al. study, the checklist could be used for any anesthetic procedure. Bergström et al. provided oral instruction and practical training in checklist use over a 2‐week period before implementation with a specified individual responsible for checklist completion. By contrast, Menoud et al. reported that an investigator was available to assist users but did not describe a formal instruction or training process. Bergström et al. focused on outcome measures as a marker of checklist impact, finding a greater number of complications in the pre‐checklist group (52 complications identified in 300 dogs and cats) compared with the post‐checklist group (15 complications identified in 220 dogs and cats). No process audit to track checklist use or compliance was reported. Menoud et al. performed two process audits by direct observation of procedures. The first (n = 69 anesthetized cases) found that the checklist was used in 32% of cases, not printed for use in 41% of cases, and printed but not used in 27% of cases. The second audit (n = 64 anesthetized cases) found that the checklist was printed for all cases and used in 45%, with no significant difference in use between audits. Menoud et al. concluded that difficulties faced in checklist use reflected a failure to designate someone responsible for managing the checklist, a lack of printed copies of the checklist, and attempting to apply the checklist to cases that underwent anesthesia but not surgery. Overall, they described the situation as one in which “users did not feel involved” and identified that introduction of the checklist could have been better managed [69].


An “Anaesthetic Safety Checklist” was produced by the Association of Veterinary Anaesthetists in 2014 and is freely available through the association website [72]. Fig. 3.6 shows the checklist used during the spay‐neuter teaching laboratory at the University of Calgary. Hofmeister et al. quantified the incidence of APL valves unintentionally closed and esophageal intubations over approximately 1 year [9]. A focused checklist (added as checkboxes on the anesthesia record) was designed to address these two items, which resulted in a decrease in incidence for both over the subsequent year: APL valve closures from 20 to 5 occurrences; and esophageal intubations from 16 to 4 occurrences [9].
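As a minimal sketch, the items shown in the pre‐induction checklist of Fig. 3.6 could be represented and confirmed as follows; the exact item wording and the run_checklist helper are assumptions for illustration and do not reproduce the published checklist.

```python
# Illustrative pre-induction checklist based on the items shown in Fig. 3.6.
PRE_INDUCTION_ITEMS = [
    "Patient and procedure confirmed verbally",
    "IV access patent",
    "Airway equipment available and checked",
    "APL (pop-off) valve open",
    "Oxygen supply connected and flowing",
]

def run_checklist(confirmed: dict) -> list:
    """Return any checklist items that have not been confirmed."""
    return [item for item in PRE_INDUCTION_ITEMS if not confirmed.get(item, False)]

# Example use: the team confirms each item aloud before induction.
responses = {item: True for item in PRE_INDUCTION_ITEMS}
responses["APL (pop-off) valve open"] = False  # deliberately left unconfirmed
outstanding = run_checklist(responses)
if outstanding:
    print("Do not induce; outstanding items:", outstanding)
```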


Other cognitive aids


There are many other cognitive aids that can be used to improve human performance and safety. Evidence for their use in settings outside of medicine is compelling and has led to much interest especially in acute medicine, surgery, and anesthesia [73]. Their use in emergency situations has perhaps received the most attention as it appears that failure to adhere to best clinical practice often occurs when time for decision‐making is limited [74]. Many cognitive aids have been developed for emergency situations ranging from single‐treatment algorithms and drug charts to large sets of checklists and cognitive aids grouped together in a “crisis manual.”


Some of the earliest cognitive aids developed and put to widespread use were cardiopulmonary resuscitation (CPR) algorithms. In CPR, adherence to Advanced Cardiovascular Life Support protocols is associated with increased patient survival, whereas non‐compliance and omissions of indicated steps are associated with decreased survival [75]. Error rates during CPR may be halved by using cognitive aids, at least in a high‐fidelity simulator setting, suggesting that the use of cognitive aids may have significant effects on outcomes [76].


The Reassessment Campaign on Veterinary Resuscitation (RECOVER) developed a set of consensus guidelines for veterinary CPR alongside cognitive aids encompassing a CPR algorithm, a post‐cardiac arrest care algorithm, and a quick reference chart of CPR drugs and doses [77]. Little evidence has been published as to the success of these guidelines to date; however, it appears the guidelines have changed CPR practices [78], although they still vary considerably [79]. One study of 141 dogs suffering cardiac arrest demonstrated that dogs resuscitated using the RECOVER guidelines and cognitive aids had better outcomes than dogs resuscitated using traditional CPR procedures, with improved return to spontaneous circulation (43% versus 17%) and higher survival rates (5% versus 0%) [80].

Figure content: a pre‐induction checklist with items covering verbal confirmation, IV access, airway equipment, the APL valve, and oxygen.

Figure 3.6 Pre‐induction checklist in use at the Faculty of Veterinary Medicine, University of Calgary.


Source: Dr. Daniel Pang, with permission.


Several crisis manuals have now been developed for use in human anesthesia and surgery. Based upon information from the “Australian Incident Monitoring System (AIMS),” Runciman et al. developed a set of 24 crisis algorithms for managing anesthetic crises [81]. It is claimed that 60% of 4000 incidents reported to AIMS would be addressed in 40–60 s using the manual [81]. In a pilot trial of a surgical crisis manual developed by collaboration between medical and aviation experts, two surgical teams were exposed to eight crisis simulations, four with and four without the checklist [82]. Manual use led to a sixfold reduction in failure to perform critical steps and tasks [82]. Later, 17 operating room teams were exposed to 106 surgical crisis simulations. All teams performed better when using the manual and there was almost a 75% reduction in omissions of critical steps during crisis management [83].


Communication tools: briefings, debriefings, and patient hand‐offs


Communication is a significant component of perioperative checklists such as the WHO SSC, and in many organizations, the surgical “time‐out” has developed into a miniature pre‐procedural briefing. A recent study including over 8000 surgical procedures investigated outcomes before and after the introduction of a structured intraoperative briefing and found that mortality, unplanned re‐operations, and prolonged hospital stays were all reduced [84].


A point of weakness for transfer of critical patient‐related information is during hand‐offs (handovers) [85,86]. Hand‐offs occur whenever a transfer and acceptance of patient care responsibility between two caregivers or teams is required, for example, when a patient is moved from the operating theater to the recovery suite or ward. Effective hand‐offs require passing safety‐critical and patient‐specific information from one caregiver to another and are key in facilitating the continuity and safety of the patient’s care. Unstable recovering patients, competing demands upon caregivers, multitasking, and time limitations make postanesthetic hand‐offs particularly vulnerable to error [86]. Such errors lead to fragmented postoperative care, delays in treatment and diagnosis, and cause significant patient harm [86]. A systematic review of the literature identified broad strategies to reduce risk around postanesthetic hand‐offs which are summarized in Box 3.3 [85].
