
Chapter 20
Assessing the Assessment Process


Kent Hecker1 and Jared Danielson2


1Faculty of Veterinary Medicine and Cumming School of Medicine, University of Calgary, Canada


2College of Veterinary Medicine, Iowa State University, USA


Introduction


Whether at work or in school, we are assessed continuously regarding our performance. In a school environment, assessment is primarily thought of in the context of student performance and learning, typically referred to as “assessment of student learning” and “assessment for student learning.” However, “assessment” can also be thought of in terms of course-, curriculum-, program-, or institutional-level evaluation. This chapter builds on the concept of student assessment and casts a reflective lens on evaluating the assessment process that has been put in place, whether within a course, across the clinical year(s), or across the curriculum. We also provide information about why and how a formal and explicit assessment structure should be created and ultimately evaluated. Note that while efforts are occasionally made to differentiate between “evaluation” and “assessment,” we follow the convention of treating these as equivalent and will use them interchangeably throughout the chapter (Scriven, 2003).


Definitions of Assessment


To provide context for the chapter, we begin with two definitions of assessment. Assessment of student learning has been defined as follows:



Assessment is the process of gathering and discussing information from multiple and diverse sources in order to develop a deep understanding of what students know, understand, and can do with their knowledge as a result of their educational experiences; the process culminates when assessment results are used to improve subsequent learning. (Huba and Freed, 2000, p. 8)


Assessment of a course, curriculum, program, or institution in higher education has been defined as follows:



Assessment is the process of providing credible evidence of:



  • Resources
  • Implementation actions, and
  • Outcomes


Undertaken for the purpose of improving the effectiveness of



  • Instruction
  • Programs, and
  • Services

in higher education. (Banta and Palomba, 2015, p. 2)


Accountability in Assessment Practices


This chapter is also meant to provide ideas about how best to capture information and evidence to answer questions about your assessment practices. Accountability for how and why we assess our students is not a bad thing; by documenting our processes we provide transparency about our decisions and gain the ability to reflect on and improve our assessment practices for the betterment of our students and schools. Questions that can be asked when evaluating assessment methods/programs include:



  • Do the assessment methods capture data relevant to the course/program objectives/competencies of interest?
  • Are the assessment methods organized/utilized in an effective manner to capture student performance?
  • Are summative and formative assessments utilized?
  • Do the assessment methods allow for the provision of feedback to the learner? In other words, in the assessment cycle, can performance information be provided to students to foster learning?
  • Are assessment questions linked to specific course/program outcomes?
  • Are the assessment methods feasible? Can all stakeholders understand and use the methods as they were intended given their respective constraints?
  • Are scores from the assessment methods reliable and valid? Stated differently, is there evidence to support decisions (e.g., pass/fail, promotion, remediation, removal) made from the assessment scores? (A minimal reliability calculation is sketched after this list.)
  • What are the perceptions of the participants involved in the assessment process? How do students, raters, and instructors perceive what is assessed, how assessment occurs, and how feedback is provided?
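
To make the reliability question concrete, the following is a minimal sketch, in Python, of one common reliability estimate (Cronbach's alpha) computed from item-level exam scores. The data and function name are invented for illustration and are not drawn from any particular assessment system.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Estimate internal-consistency reliability (Cronbach's alpha).

    scores: 2-D array, rows = students, columns = assessment items.
    """
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical example: 5 students x 4 exam items
exam = np.array([
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
])
print(f"Cronbach's alpha: {cronbach_alpha(exam):.2f}")
```

A value closer to 1 indicates that the items are measuring the construct consistently; reliability evidence of this kind is one part, but only one part, of the validity argument supporting decisions made from scores.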

Evaluation of Assessment Practices


To begin the evaluation/assessment process, one of the authors (KH) starts by creating a document like the one provided in Table 20.1. This captures the evaluation questions of interest (which can later be ranked or revised) and identifies the information required, where the information comes from, what method will capture the data, how sampling will occur (if required), the schedule, and the proposed analyses. Further columns can be included that reflect the validity evidence that we think is being collected, or the evaluation framework categories that are being targeted; both topics are covered later in this chapter.


Table 20.1 Template for evaluation questions, sources of data, and proposed analyses

Assessment questions

1. Evaluation question: Are OSCE scores predictive of clinical rotation performance?
  • Information required: OSCE performance in Years 1, 2, and 3, and clinical rotation performance scores
  • Source of information: Raters, students, and clinical preceptors
  • Method: Exam data from the respective years and clinical rotation information
  • Sampling: All students in the DVM program
  • Schedule: Exam data for each year and clinical rotation data from the final year
  • Analysis: Descriptive, inferential, and multivariate

2. Evaluation question: What clinical rotation-based assessments will be acceptable to faculty, students, and clinical partners?
  • Information required: Scores from the mini-clinical evaluation exercise (mini-CEX), direct observation of procedural skills (DOPS), and in-training evaluation reports (ITERs)
  • Source of information: Clinical rotation instructors, students, raters
  • Method: Data from the clinical rotation assessment tools, surveys, and interviews
  • Sampling: Clinical rotation instructors
  • Schedule: Preliminary survey work
  • Analysis: Descriptive, inferential, and qualitative

Impact of assessment on student learning

3. Evaluation question: What is the impact of simulation and virtual training on learning clinical and professional skills?
  • Information required: Performance on simulations and final performance data across courses and clinical rotations
  • Source of information: Student performance data
  • Method: Comparison study of performance using different high-fidelity and low-fidelity simulations
  • Sampling: All DVM students
  • Schedule: To be determined
  • Analysis: Descriptive and inferential; multivariate once a sufficient number of cohorts is available to analyze

4. Evaluation question: How satisfied are the UCVM students with the assessment program, and what are its strengths and weaknesses?
  • Information required: Student perceptions of their undergraduate experience
  • Source of information: Students
  • Method: Interview
  • Sampling: Focus groups
  • Schedule: As required by the Office of the Associate Dean, Curriculum
  • Analysis: Qualitative analyses to determine common themes

5. Evaluation question: How satisfied are the UCVM faculty with the assessment program, and what are its strengths and weaknesses?
  • Information required: Faculty perceptions of the assessment of student performance
  • Source of information: Faculty
  • Method: Survey or interview
  • Sampling: Focus groups
  • Schedule: As required by the Office of the Associate Dean, Curriculum
  • Analysis: Qualitative analyses to determine common themes
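
A planning document like Table 20.1 is ultimately just a structured record, so it can also be kept in a form that is easy to revise, rank, and share. The sketch below is one hypothetical way to capture a single row electronically in Python; the field names simply mirror the table's columns, and the file name is an assumption made for illustration.

```python
from dataclasses import dataclass, asdict
import csv

@dataclass
class EvaluationPlanRow:
    """One row of the evaluation-planning template (cf. Table 20.1)."""
    evaluation_question: str
    information_required: str
    source_of_information: str
    method: str
    sampling: str
    schedule: str
    analysis: str

plan = [
    EvaluationPlanRow(
        evaluation_question="Are OSCE scores predictive of clinical rotation performance?",
        information_required="OSCE scores (Years 1-3) and clinical rotation scores",
        source_of_information="Raters, students, and clinical preceptors",
        method="Exam data from each year; clinical rotation records",
        sampling="All students in the DVM program",
        schedule="Exam data each year; rotation data in the final year",
        analysis="Descriptive, inferential, and multivariate",
    ),
]

# Write the plan to a CSV file so rows can be added, ranked, or revised over time
with open("evaluation_plan.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(asdict(plan[0])))
    writer.writeheader()
    writer.writerows(asdict(row) for row in plan)
```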

How to Use Assessment Data


Data from the assessment of student performance are used differently depending on the interests of each group (for instance, students, instructors, or administrators). For students, assessment is about how they performed, whether against their peers and/or against a set program standard. Ultimately, the output of student assessments (marks) is meant to demonstrate students’ achievement and learning, and to certify that the necessary knowledge, skills, and attitudes have been attained so that graduates can practice independently or, more specifically, be licensed to practice veterinary medicine.
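
As a concrete illustration of those two reference points, the sketch below compares a single student's mark against a fixed program standard (criterion-referenced) and against the rest of the cohort (norm-referenced). The pass mark, scores, and function name are hypothetical and used only to show the distinction.

```python
import numpy as np

def interpret_score(student_score: float, cohort_scores: np.ndarray, pass_mark: float = 60.0):
    """Report a mark against a set standard and against peers."""
    meets_standard = student_score >= pass_mark                  # criterion-referenced
    percentile = (cohort_scores < student_score).mean() * 100    # norm-referenced
    return meets_standard, percentile

cohort = np.array([55, 62, 70, 74, 81, 68, 59, 88, 66, 73], dtype=float)
meets_standard, pct = interpret_score(71.0, cohort)
print(f"Meets the 60% standard: {meets_standard}; outperforms {pct:.0f}% of peers")
```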
