Chapter 24
Clinical Reasoning Skills
Jill Maddison
Royal Veterinary College, UK
Introduction
Veterinarians must make rapid decisions every day about diagnostic and treatment options for their patients. Clinical reasoning skills, together with a sound knowledge base appropriate to the case, form the cornerstone of those decisions. That knowledge base must also include an understanding of important pathophysiological principles relevant to the patient’s clinical problem. However, knowing the facts is not the same as knowing what to do. Knowledge is only useful if it can be accessed, formulated, and applied to the problem at hand. For example, a good understanding of the mechanics of an automobile engine is highly useful if the problem to be solved is a malfunctioning car, but far less useful when learning to drive.
Clinical Reasoning Education
It would seem self-evident that a competent clinician needs clinical reasoning and interpersonal skills, as well as knowledge and understanding of diseases and their prevention or management. However, the emphasis of medical and veterinary education is usually weighted more heavily toward knowledge-building than toward the development of professional and communication skills. Over the past couple of decades, the attitudes of veterinary educators to the social science aspects of veterinary practice have shifted substantially. Teaching related to professional, interpersonal, and business skills has become an integral part of veterinary curricula in many countries (Heath, 2006). Despite this, although clinical problem-solving is regarded as a core competency expected of a new graduate (AVMA, 2015), explicit teaching in clinical reasoning is not as well established. The reasons for this are multiple, but perhaps the most important is the belief that such skills will develop naturally as students acquire greater depth and breadth of knowledge, supplemented by exposure to clinical cases (May, 2013). Yet veterinary graduates report that they feel ill equipped in clinical reasoning when they graduate (May, 2013).
The relative lack of formal training in reasoning may also be because the process by which clinicians reason clinically remains the subject of much debate. Many studies and reviews discuss clinical reasoning strategies, and the literature has been elegantly reviewed and critiqued by May (2013). However, there is a paucity of practical guidance about how to help students develop robust clinical reasoning skills. The general assumption seems to be that how experts “do it” is the gold standard, although how to reach that exalted state is rarely explained. If all clinicians eventually developed superb clinical reasoning skills with experience, then it would perhaps be reasonable not to worry too much about specifically educating undergraduate students in this area – the skills would come with time. However, diagnostic error is an important healthcare issue (Del Mar, Doust, and Glasziou, 2008), and stress and anxiety arising from difficult medical cases are reported by even experienced veterinarians. While the reasons for this are multifactorial and include issues such as financial constraints and owner compliance, concerns about the veterinarian’s own clinical reasoning abilities are also cited (Mant, 2014). Every clinical specialist is aware that poor clinical decision-making occurs in a proportion of the cases referred to them by general practitioners. And, of course, clinical specialists remain at risk of making diagnostic errors themselves.
Comparing Medical and Veterinary Graduates
Studies on clinical reasoning development in medical students and novices must be reviewed with care before their results are applied to veterinary student education. Graduates from medical school spend considerable time after graduation in supervised clinical training programs before they are able to practice. Veterinary students, in contrast, are deemed fit to practice as soon as they graduate. Although they may seek further supervised clinical training in the form of internships (if they practice in countries where these are available), internships are not a requirement to practice, and only a small proportion of graduates who plan to enter general practice will undertake one. Thus, a novice medical graduate will still be in a clinical training program, whereas a novice veterinary graduate usually will not be. The level of support and mentorship that a veterinary graduate experiences in their first year of practice varies enormously and may influence their whole career.
Another difference between medical and veterinary clinical education and practice is that a greater proportion of a graduating veterinary class will enter general practice than of a graduating medical class. Most veterinary graduates will never become clinical specialists, although they may well become expert general practitioners (May, 2013). The breadth of skills required of a successful general veterinary practitioner is huge. The veterinary general practitioner must not only be an astute diagnostician (usually for more than one species), but also a surgeon, anesthetist, radiologist, pharmacist, and more. Access to specialist care is influenced by client finances as well as geographical location. Referral of difficult cases is an option in some countries but not others, in some geographical areas within a country, state, or city but not others, and, in almost every practice where referral services are accessible, for some clients but not others. Veterinarians in general practice are therefore constantly faced with patients whose problems require complex knowledge and decision-making skills, and whose owners have high expectations of a successful outcome for their animal. Even where referral is accessible, the general practitioner needs sufficient clinical reasoning skill to recognize when referral is indicated and to which specialist.
Diagnostic Errors
The literature relating to problem-solving in human medicine is broadly driven by the desire to enhance clinical teaching and/or to understand the decision-making process in order to reduce diagnostic errors (Graber, Gordon, and Franklin, 2002; Graber, Franklin and Gordon, 2005; Berner and Graber, 2008; Graber, 2009; Norman and Eva, 2010; May, 2013; Mamede et al., 2014). Diagnostic errors have been monitored and reported for a range of different specialties and clinical environments in humans. The estimate is that the rate of diagnostic error in clinical medicine is around 15%, thus affecting almost one in seven patients (Berner and Graber, 2008). The level of diagnostic error in veterinary medicine has not, to the author’s knowledge, been estimated, but we would be foolish as a profession to believe that our diagnostic accuracy was any better than that of our medical colleagues.
Of most relevance to this chapter is that cognitive skill errors (processing biases) are reported to be a far more common reason for diagnostic error than errors caused by knowledge gaps (Graber, Franklin, and Gordon, 2005; Berner and Graber, 2008; Norman and Eva, 2010). Although experts may reach the correct diagnosis more often and more quickly than novices, no level of expertise confers zero risk of diagnostic errors.
Clinical Reasoning Models
As in many other situations where the science involved is more closely related to the social sciences than the physical sciences, there are no definitive or unequivocal results from many of the studies on clinical reasoning. Clinical reasoning is a complex process that varies enormously depending on the clinician’s preferred thinking and learning styles (of which they are often unaware), their past experiences and expertise, the clinical problem itself, and the context in which that problem is encountered. It is not at all surprising that “measuring” reasoning strategies is difficult, and that study methods and results may be vigorously debated.
The current understanding (which has evolved considerably since the 1980s) is that the clinical reasoning strategies used by physicians can be broadly classified as Type 1 (nonanalytic) and Type 2 (analytic). A blended approach or triangulation of both types to cross-check clinical reasoning and diagnostic conclusions is advocated for successful diagnostic decision-making (Eva, 2004; Bowen, 2006; Graber, 2009; Coderre et al., 2003; Vandeweerd et al., 2012; May, 2013). Although some authors believe that the risk of bias and diagnostic error is higher with nonanalytic reasoning than analytic reasoning, improving and supplementing nonanalytic reasoning, rather than replacing it, is believed to reduce diagnostic error (Norman, Young, and Brooks, 2007).
Nonanalytic Clinical Reasoning
Nonanalytic reasoning occurs quickly and subconsciously, and primarily relies on the clinician accessing knowledge and patterns from past experiences that can be applied to the present case. It is often referred to as “pattern recognition,” and relies on the clinician having developed a number of illness scripts for a particular presentation. Because of limited previous case exposure, pattern recognition is inherently weaker in novice medical students compared to more experienced clinicians. However, there is disagreement about whether this means that students should disregard nonanalytic methods in their clinical reasoning, or whether some use of intuition and pattern recognition improves diagnostic accuracy in novices (Norman and Brooks, 1997; Coderre et al., 2003; Eva, 2004; Cockcroft, 2007; Norman, Young, and Brooks, 2007; Smith, 2008). Surprisingly, evidence from both nonclinical problem-solving studies (Pretz, 2008) and clinical studies (Norman and Eva, 2010) demonstrates that explicit, analytic thought may be most appropriate for experienced individuals, whereas holistic, intuitive problem-solving may be more effective for novices. Other educators have exactly the opposite view: “Given the amount of expertise required, this type of diagnostic reasoning strategy [pattern recognition] is generally unavailable to novice medical students” (Coderre et al., 2003 p. 695).
Use of pattern recognition as the primary mode of clinical reasoning has positives and negatives. It works well for many common disorders and has the advantage of being quick and cost effective, provided that the diagnosis is correct. Use of pattern recognition as the major form of clinical reasoning is also effective if a disorder has a unique pattern of clinical signs; if there are only a few diagnostic possibilities that are simply remembered or can easily be ruled in or out by routine tests; and if the clinician has extensive experience (and thus a rich bank of illness scripts to recall), is well read and up to date, reviews all of the diagnoses made regularly and critically, and has an excellent memory.
However, nonanalytic reasoning based on pattern recognition can be flawed and unsatisfactory when the clinician is inexperienced (and therefore has access to few illness scripts) and/or only considers or recognizes a small number of salient factors in the case (incomplete problem presentation). Even if the clinician is experienced, use of pattern recognition as the primary clinical reasoning process can be problematic for uncommon diseases, for common diseases presenting atypically, when the patient is exhibiting multiple clinical signs that are not immediately recognizable as a specific disease and may or may not be related to one diagnosis, or if the pattern of clinical signs is suggestive of certain disorders but not specific for them. In addition, for the experienced clinician, the success of pattern recognition relies on a correct diagnosis having been reached for the pattern observed previously. This may not always be the case, especially in general practice, where the clinician must often form a provisional diagnosis and make treatment decisions in the absence of complete knowledge or data, without ever having the diagnosis confirmed (May, 2013). The presumption that the diagnosis was correct because the patient improved clinically with treatment only reinforces such unverified illness scripts.
Nonanalytic reasoning to solve diagnostic puzzles involves a wide variety of heuristics (subconscious rules of thumb or mental shortcuts that reduce the cognitive load and speed the resolution of problems), which can be powerful clinical tools but also predispose to diagnostic bias. Heuristics tend to be viewed more favorably in some disciplines, for example emergency care, than in others, such as internal medicine. Even experienced clinicians are vulnerable to bias in nonanalytic reasoning. Such bias is generally subconscious (although some authors suggest that an awareness of bias can help one avoid such errors). These biases have been clearly described (Croskerry, 2003; Berner and Graber, 2008; Norman and Eva, 2010; McCammon, 2015) and are outlined in Table 24.1. Diagnostic error often involves a combination of biases. Physician overconfidence is believed to be a major factor contributing to diagnostic error and bias, even (or perhaps especially) among specialists (Berner and Graber, 2008).
Table 24.1 Diagnostic biases in clinical medicine
Bias | Description
Availability bias | A tendency to favor a diagnosis because of a case the clinician has seen recently.
Anchoring bias | Where a prior diagnosis is favored but is misleading. The clinician persists with the initial diagnosis and is unwilling to change their mind.
Framing bias | Features that do not fit with the favored diagnosis are ignored.
Confirmation bias | When information is selectively chosen to confirm, not refute, a hypothesis. The clinician only seeks or takes note of information that will confirm their diagnosis, and does not seek or ignores information that will challenge it.
Premature closure | Narrowing the choice of diagnostic hypotheses too early.
Enhancing Nonanalytic Clinical Reasoning Skills in Veterinary Students
By nature of its automaticity, pattern recognition is not something that can be taught (or suppressed). A new case will inevitably trigger memories and suggest possible diagnoses if similar patients have been seen previously. Proponents of using (rather than ignoring) pattern recognition in clinical reasoning recognize that for this to be useful, a large bank of illness scripts is required. There may be different views in the literature about whether or not students can effectively solve clinical problems using pattern recognition, but nevertheless there are practical strategies recommended to assist students in strengthening their nonanalytic reasoning skills.
Using pattern recognition responsibly has two requirements: the patterns need to be in place (as many correct illness scripts as possible), and there needs to be a learned process for acknowledging, then double-checking, the favored illness script. Strengthening students’ pattern-recognition skills and development of illness scripts requires recognition of the typical presentation for a problem, as well as its variations and atypical presentations (Bowen, 2006). The patterns that underpin illness scripts can only be constructed by each learner, based on the patients they have seen (Fleming et al., 2012) and the knowledge they have accumulated.
A recommended educational strategy for developing illness scripts is to ensure that students are exposed to patients with common problems, ideally with prototypical presentations, followed by similar problems that give them an appreciation of atypical or subtle findings (Bowen, 2006). In other words, students must have adequate exposure to pedagogically useful cases. Teachers need to recognize that complex and elaborate cases may be suboptimal as teaching tools (Eva, 2004), unless efforts are made to “convert” them into useful teaching cases. Rare diseases that provide exciting problem-solving opportunities for postgraduate training scholars and clinical specialists, especially if they lead to publication, may not be particularly useful teaching cases unless there are relevant “teaching points” that can be utilized; these usually relate to logical, analytic decision-making rather than pattern recognition. In the author’s experience, relatively complex (but not bizarre) medical cases can be powerful learning tools, provided that the clinical teacher clearly identifies the key learning features of the case, those learning points are transferable and core to good medical practice and clinical reasoning, and the teacher does not get bogged down in clinical minutiae of questionable relevance.
If a student is almost exclusively exposed during their clinical training to secondary- and tertiary-level referral cases, there is limited opportunity for them to build a bank of illness scripts relevant to general practice. It can be difficult for students to transfer clinical reasoning experienced in one context, for example a high-level specialist hospital, to another, such as first-opinion practice (Del Mar, Doust, and Glasziou, 2008). One unfortunate consequence is that the new graduate, exposed in their first year of general practice to a barrage of clinical situations that they have rarely encountered, may dismiss as irrelevant the “academic” learning in their clinical student years, and place a much higher value on their practical, extramural work-based placements in general practice. As a result, the rich learning resource provided by the intellectual academic rigor and expertise in clinical teaching hospitals may be wasted.
Analytic Clinical Reasoning
In cases where nonanalytic reasoning is not helpful, analytic reasoning is required. An analytic approach to clinical reasoning is also needed to double-check presumptive diagnoses based on pattern recognition – the clinical reasoning safety net.
In contrast to nonanalytic reasoning, analytic reasoning is reflective and systematic, permitting hypothesis formation and abstract reasoning (May, 2013). Analytic reasoning is less prone to bias than nonanalytic reasoning (May, 2013), but is limited by working memory capacity unless strategies are developed to give the clinician a logical, methodical, and memorable process through which to problem-solve any case presentation. The most pertinent question, though, is how to do it. There is an almost complete lack of guidance in the literature on how to teach analytic reasoning. Audétat et al. (2011) do, however, provide a very practical overview of the clinical reasoning difficulties that students may experience, why they may occur, and remediation strategies to help overcome them.
Analytic reasoning can be deductive, inductive, or abductive. Deductive reasoning (also referred to as hypothetico-deductive) is guided by generated hypotheses that the clinician then tests. This is the basis of the “rule out” approach – that is, “I will use diagnostic tests to rule out differentials until I am left with the most correct one.” The intellectual process used in deductive reasoning when there are many potential diagnoses has been described (Cockcroft, 2007) and makes interesting, if exhausting, reading. It appears to be a very useful basis for the development of computer programs that generate differential diagnoses by taking into account the clinical data provided, all possible diagnoses, and the ranked probabilities of disease (probabilistic reasoning based on Bayes’ theorem). However, as an accessible method of clinical reasoning it can be “time-consuming and laborious” (Cockcroft, 2007). It is certainly not feasible in the short time available during a first-opinion consultation, especially for a novice clinician who cannot yet formulate a relatively short and accessible list of alternative hypotheses.
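As a purely illustrative aside (and not a description of Cockcroft’s method or of any existing program), the probabilistic core of such a system can be sketched as a naive application of Bayes’ theorem: each differential’s prior probability (its prevalence in the relevant population) is updated by the likelihood of the observed findings under that diagnosis, and the differentials are then ranked by posterior probability. Every diagnosis, finding, and figure in the sketch below is hypothetical, and the findings are assumed, unrealistically, to be conditionally independent.

```python
# Minimal sketch of probabilistic (Bayesian) ranking of differentials.
# All diagnoses, findings, and probabilities are invented for illustration only.

def rank_differentials(priors, likelihoods, findings):
    """Rank differentials by posterior probability given the observed findings.

    priors      -- dict: diagnosis -> prior probability (e.g., population prevalence)
    likelihoods -- dict: diagnosis -> {finding: P(finding | diagnosis)}
    findings    -- list of observed findings (assumed conditionally independent)
    """
    unnormalized = {}
    for dx, prior in priors.items():
        p = prior
        for finding in findings:
            p *= likelihoods[dx].get(finding, 0.01)  # small default for unlisted findings
        unnormalized[dx] = p
    total = sum(unnormalized.values()) or 1.0
    return sorted(((dx, p / total) for dx, p in unnormalized.items()),
                  key=lambda pair: pair[1], reverse=True)


# Hypothetical example: three differentials for a vomiting dog
priors = {"dietary indiscretion": 0.5, "gastric foreign body": 0.2, "renal disease": 0.3}
likelihoods = {
    "dietary indiscretion": {"vomiting": 0.9, "abdominal pain": 0.3},
    "gastric foreign body": {"vomiting": 0.8, "abdominal pain": 0.7},
    "renal disease":        {"vomiting": 0.6, "abdominal pain": 0.2},
}
print(rank_differentials(priors, likelihoods, ["vomiting", "abdominal pain"]))
```

The point of the sketch is simply that the ranking depends as much on the priors (prevalence) as on how well the findings “fit” each diagnosis, which is why such programs need population data as well as the clinical data from the case in hand.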
Inductive reasoning makes broad generalizations from specific observations. From these observations, patterns and regularities are detected, tentative hypotheses are then formulated, and general conclusions or theories follow. By taking a more exploratory, “diagnosis open” approach, inductive reasoning avoids eliminating the appropriate hypothesis too early, but it is overwhelming without specific advice on how to do it. Coderre et al. (2003) describe scheme-inductive diagnostic reasoning, in which there is an organized structure for the learning and use of decision trees: the clinician seeks information from the patient that will distinguish between categories of conditions. It is very similar to the “small worlds” hypothesis proposed by Kushniruk, Patel, and Marley (1998), where expert clinicians select relatively small sets of plausible diagnostic hypotheses (small worlds) and focus on the most relevant medical findings that distinguish them. Neither group of authors, however, provides any real guidance about how to develop this decision-making approach or create these “small worlds.” It is on this aspect that the problem-based clinical reasoning approach covered later in the chapter will focus.
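For illustration only, such a scheme can be pictured as a small decision tree in which each internal node poses a discriminating question and each leaf is a “small world” of conditions to pursue, rather than a final diagnosis. The categories and questions below form a deliberately simplified, hypothetical scheme for a vomiting patient, not a complete clinical algorithm.

```python
# A deliberately simplified, hypothetical scheme-inductive decision tree.
# Each internal node asks a discriminating question; each leaf names a "small world"
# (a category of conditions to pursue), not a final diagnosis.
from __future__ import annotations
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    question: Optional[str] = None   # asked at internal nodes
    yes: Optional[Node] = None
    no: Optional[Node] = None
    category: Optional[str] = None   # set only at leaf nodes

# Illustrative scheme for the presenting problem "vomiting"
vomiting_scheme = Node(
    question="Is the vomiting likely due to primary gastrointestinal disease?",
    yes=Node(
        question="Is there evidence of obstruction?",
        yes=Node(category="obstructive GI disease (e.g., foreign body, intussusception)"),
        no=Node(category="non-obstructive GI disease (e.g., dietary, inflammatory)"),
    ),
    no=Node(category="secondary (non-GI, systemic) causes (e.g., renal, hepatic, endocrine)"),
)

def walk(node: Node, answers: dict) -> str:
    """Follow the scheme using yes/no answers keyed by question text."""
    while node.category is None:
        node = node.yes if answers.get(node.question, False) else node.no
    return node.category

# Example traversal with hypothetical answers
print(walk(vomiting_scheme, {
    "Is the vomiting likely due to primary gastrointestinal disease?": True,
    "Is there evidence of obstruction?": False,
}))
```

The value of such a structure is that it forces the clinician to answer the discriminating question first, rather than jumping straight to a named disease.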
The third form of reasoning described is abductive reasoning. This usually starts with an incomplete set of observations and proceeds to the most likely possible explanation for the group of observations. It is based on making and testing hypotheses using the best information available. It often entails taking an educated guess after observing a phenomenon for which there is no clear explanation. We recognize, and should make students aware, that this reasoning process occurs, and that it will be used more often in general practice than specialist practice. Abductive reasoning is descriptive, but by its nature cannot be explicitly taught.
Enhancing Analytic Clinical Reasoning Skills in Veterinary Students
As discussed, the literature on clinical reasoning provides an insight into how clinicians think, even if not all studies agree, but very little about how to specifically teach and develop students’ clinical reasoning skills. Vandeweerd et al. (2012) concluded from their study of clinical reasoning strategies used by Belgian veterinarians that students should be made aware of the reality of the decision-making strategies that clinicians use, but this does not mean that they should not be taught a rational decision-making model: “Teaching must not only train students to behave as current practitioners do, but to behave more as veterinarians ideally should.” (Vandeweerd et al. 2012 p. 147) Of particular importance in clinical education is the recognition that no two clinical students can ever have exactly the same clinical experiences – they will see different cases, reflect on different aspects of their experience, and as a result gain different insights. It is for this reason that students need to be guided to multiple strategies to enable them to work through clinical problems (Eva, 2004).
Teaching a clinical reasoning structure explicitly can be challenging. Experienced clinical teachers may use both nonanalytic and analytic reasoning to solve clinical cases, and they often do so very quickly. Their clinical reasoning process may be opaque to students, as it is based on the clinician’s accumulated experience and wisdom, which are vastly different from those of the students, as well as on subconscious analytic checking of their clinical instincts. This is one reason why some experts may not necessarily be good teachers (May, 2013). Of course, the teacher may provide a rich learning experience by articulating, for example, that they discarded several differentials because the clinical signs did not fit expectations, outlining the key features of the case that led them to the diagnosis, and discussing any atypical features of relevance. They almost certainly have developed clinical reasoning strategies that include what were originally analytic processes but have now become part of their nonanalytic reasoning. Yet the failure to remember what it was like “not to know” can be an impediment to explicitly teaching and enhancing students’ clinical reasoning skills. Practitioners not accustomed to working in a collegial or scholarly environment may not be skilled in articulating their clinical reasoning process, even when it is well constructed and analytic.
The aim of the clinical reasoning approach described in the remainder of this chapter is to help veterinary students and practicing clinicians, particularly those involved in clinical teaching, develop and articulate a structured and pathophysiologically sound approach to the diagnosis of common clinical presentations. It has been most extensively described for clinical signs in dogs and cats (Maddison, Volk, and Church, 2015), but the principles are applicable to other clinical disciplines. The method aims to avoid the student having to remember long lists of differentials (as is often the result when other methods of analytic reasoning have been taught), and allows them to place their knowledge (which is often scaffolded by body system in the typical veterinary curriculum) into an appropriate problem-solving context.
The method is similar in some aspects to clinical algorithms, which can be found in some textbooks (e.g., Ettinger and Feldman, 2010) and have been suggested by Safi, Hemmati, and Shirazi-Beheshtiha (2012), who used the approach to the vomiting dog as an example. These algorithms can be very helpful in guiding decision-making for specific clinical problems. Their potential drawback is that the key decision points are usually clinical sign specific, and thus not transferable to atypical or complex clinical scenarios or other clinical signs. The principles of the problem-based inductive reasoning approach described in this chapter can be applied to virtually any clinical problem or combination of problems in a clinical scenario. It explicitly articulates key clinical reasoning steps that an expert diagnostician almost always will use, even if subconsciously, so that the student can “see” the clinical reasoning brain of their mentors and teachers at work. An additional benefit is that the key questions reinforce fundamental pathophysiological principles and understanding.
The Minimum Database
Before specifically discussing strategies to develop a problem-based inductive clinical reasoning approach, it is appropriate to say a few words about the other fallback that clinicians may use when nonanalytic reasoning is unavailable, unhelpful, or in need of supplementation: the minimum database – or, colloquially, “I’ll do bloods.”
Routine diagnostic tests such as hematology, serum biochemistry, and urinalysis can be enormously useful, and are often essential, in advancing understanding of a patient’s clinical condition. However, relying on a minimum database to provide more information about the patient before the clinical reasoning brain is engaged may be reasonable for disorders of some body systems, but is totally unhelpful for others. Serious, even life-threatening disorders of the gastrointestinal tract, neuromuscular system, pancreas (especially in cats), and heart rarely cause diagnostically significant changes in the routine hematological and biochemical parameters measured in general practice. In addition, diagnostic tests are rarely 100% sensitive and 100% specific. Using blood testing to “screen” for diagnoses can therefore be misleading, as the positive and negative predictive values of any test are strongly influenced by the prevalence of the disorder in the population.
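As a rough numerical illustration of this point (the test characteristics and prevalences below are invented, not drawn from any real assay), the same test performs very differently as a “screen” depending on how common the disorder is in the population being tested:

```python
# Illustration of how predictive values depend on prevalence.
# Sensitivity, specificity, and prevalences are invented for illustration only.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (positive predictive value, negative predictive value)."""
    tp = sensitivity * prevalence              # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# The same hypothetical test (90% sensitive, 90% specific) at two prevalences
for prevalence in (0.30, 0.01):
    ppv, npv = predictive_values(0.90, 0.90, prevalence)
    print(f"prevalence {prevalence:.0%}: PPV {ppv:.0%}, NPV {npv:.0%}")

# At 30% prevalence the PPV is about 79%, but at 1% prevalence it falls to about 8%,
# i.e., most "positive" results in the low-prevalence population are false positives.
```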
Abnormal or even normal results in an unwell patient can create confusion rather than clarity if they are not critically reviewed as an integral part of the clinical assessment of all data relevant to the patient and related to the presenting problem(s). There is a tendency for physicians (and clients, which further biases the clinician) to overestimate the information that is gained from laboratory and imaging results (Del Mar, Doust, and Glasziou, 2008). This risk is exacerbated if the fundamentals – a comprehensive history and a thorough clinical examination – are bypassed in favor of tests. Overreliance on empirical testing to steer the clinician in the right clinical direction can also be problematic when the results do not clearly confirm a diagnosis. The clinician can waste much time and the client’s money searching without much direction for clues to the patient’s problem – formally referred to as “information gathering” (Del Mar, Doust, and Glasziou, 2008), less formally as a “fishing expedition.” And, of course, the financial implications of nondiscriminatory testing can be considerable. Many clients are unable or unwilling to pay for comprehensive testing. And in many parts of the world where small animal practice is emerging, access to clinical pathology is currently very limited.
Many veterinary students spend considerable time during their undergraduate course in referral specialist hospital settings. Here they are exposed to the model of clinical practice where comprehensive and sophisticated diagnostic testing is integral to the management of patients, because the focus of the specialist is almost exclusively on reaching an accurate diagnosis and instituting rational therapy as soon as possible (May, 2015). This is not to say that the learning experience for students from such cases cannot be deep and rich. However, it is easy for the student to form the belief that such a level of testing is essential for all cases. They then may enter general practice and find to their horror that they are faced with many cases where the problems are ill defined, access to comprehensive diagnostics is much more limited, and owners’ expectations may be more focused on resolution than a specific diagnosis (May, 2015). If students’ clinical reasoning skills are weak, then their ability to influence clients to follow recommended diagnostic and treatment paths is impaired, there is further disillusionment with the value of their clinical education, and they face higher stress levels when they encounter any complex or chronically unresolved clinical case.
Problem-Based Inductive Clinical Reasoning
In problem-based inductive clinical reasoning, each significant clinicopathological problem is assessed before being related to the patient’s other problems. Using this approach, the pathophysiological basis of, and key questions for, the most specific clinical signs the patient is exhibiting are considered before a pattern is sought. This keeps the clinician’s mind open to diagnostic possibilities other than the one that initially seems most obvious, and thus helps prevent pattern-based diagnostic bias.
The Problem List
An aspect of clinical reasoning that can be overlooked is the importance of clinical presentation and problem formulation. Problem formulation means structuring the elements of the clinical presentation into a recognizable form. This involves realizing what in the patient’s data is important, giving it a recognizable shape, and formulating a structure of concepts linked by relations (Auclair, 2007). In problem-based inductive clinical reasoning, the formation of a prioritized problem list and ensuring that the problem or problems have been appropriately defined are the key steps.
The initial step in problem-based inductive reasoning is to clarify and articulate the clinical problems with which the patient has presented. This is best achieved by constructing a problem list. Constructing a problem list (mentally, orally, or in written form) makes the clinical signs explicit at the clinician’s current level of understanding, transforms vague presenting information into specific problems, and helps the clinician determine which are the key clinical problems (“hard findings”) and which are the “background noise” (“soft findings”). Most importantly, it helps prevent the clinician from overlooking less obvious but nevertheless crucial clinical signs and from becoming overwhelmed with information. Problems do need to be prioritized, and those that are most specific and diagnostically useful act as “diagnostic hooks.” This concept is described in more detail elsewhere (Maddison, Volk, and Church, 2015).
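Although problem lists are normally constructed in the clinician’s head or on paper rather than in software, the underlying structure can be sketched as a small prioritized data set in which each problem is flagged as a “hard” (specific, diagnostically useful) or “soft” (background) finding. The example problems, their classification, and the rankings below are hypothetical and purely illustrative.

```python
# A hypothetical, minimal representation of a prioritized problem list.
# The problems, their classification, and the rankings are invented for illustration.
from dataclasses import dataclass

@dataclass
class Problem:
    name: str
    hard_finding: bool       # specific, diagnostically useful ("hard") vs background noise ("soft")
    diagnostic_utility: int  # crude 1-5 ranking of how good a "diagnostic hook" the problem is

problems = [
    Problem("inappetence",          hard_finding=False, diagnostic_utility=1),
    Problem("regurgitation",        hard_finding=True,  diagnostic_utility=4),
    Problem("weight loss",          hard_finding=False, diagnostic_utility=2),
    Problem("generalized weakness", hard_finding=True,  diagnostic_utility=3),
]

# Hard findings with the highest diagnostic utility float to the top as "diagnostic hooks".
for p in sorted(problems, key=lambda p: (p.hard_finding, p.diagnostic_utility), reverse=True):
    print(f"{p.name:22s} {'hard' if p.hard_finding else 'soft'}  utility {p.diagnostic_utility}")
```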