to cecal intubation, compared with conventional endoscopy [4]. With regard to training, research indicates that use of an imager may enhance learners’ understanding of loop formation and loop reduction maneuvers [5].

      Simulation‐based training provides a learner‐centered environment in which trainees can master basic techniques and even make mistakes, without risking harm to patients [6,7]. Mastery of basic skills in a low‐risk, controlled environment, prior to performance on real patients, enables trainees to focus on more complex clinical skills [6]. Additionally, within the simulated setting, learners can rehearse key aspects of procedures at their own pace, training can be structured to maximize learning, and errors can be allowed to occur unhindered, with the goal of allowing trainees to learn from their mistakes [8].

      However, it is important to recognize that simply providing trainees with access to simulators does not guarantee that the simulators will be used effectively. Rather, there are a number of best practices in simulation‐based education – including feedback, repetitive practice, distributed practice, mastery learning, interactivity, and range of difficulty – that educators must employ to optimize learning [9–13]. In particular, feedback must be carefully timed, ideally delivered at the end of the simulation, to promote procedural mastery [10,12]. Indeed, terminal feedback, defined as feedback given by a trainer to a trainee after task completion, is more effective than either feedback given during task performance (which can lead to overreliance on feedback by the learner) or withholding feedback altogether, which has been shown to handicap learning [14,15]. In short, the simulated setting allows educators to employ strategies, such as terminal feedback and allowing errors to occur unhindered, that would be detrimental to patient safety if used when teaching in the clinical setting.

      Training the pediatric endoscopy trainer

      Assessment of endoscopic procedural performance is ideally an ongoing process that occurs throughout the learning cycle, from training to accreditation to independent practice. This requires thoughtful integration of both formative and summative assessments to optimize the learning and certification functions of assessment simultaneously. Formative assessment is process focused. It aims to provide trainees with feedback and benchmarks, enables learners to self‐reflect on performance, and guides progress from novice to competent (and beyond) [21,22]. In contrast, summative assessment is outcome focused. It provides an overall judgment of competence, readiness for independent practice, and/or qualification for advancement [22]. Summative assessment supports professional self‐regulation and accountability; however, it may not provide adequate feedback to direct learning [22,23].

      Over the past two decades, there has been a profound shift in postgraduate medical education from a time‐ and process‐based framework that delineates the time required to “learn” specified content (e.g., a two‐year gastroenterology fellowship) to a competency‐based model that defines desired training outcomes (e.g., perform upper and lower endoscopic evaluation of the luminal GI tract for screening, diagnosis, and intervention [24]) [25–27]. Assessment is an integral component of competency‐based education as it is required to monitor progression throughout training, document trainees’ competence prior to entering unsupervised practice, and ensure maintenance of competence.

      Nevertheless, procedural assessment in pediatric gastroenterology continues to focus predominantly on the number of procedures performed by a learner, together with a supervising physician’s “gestalt” view of the learner’s competence [28]. This type of informal global assessment is fraught with the biases inherent in subjective assessment and is not designed to aid in the early identification of trainees requiring remediation. A major limitation of using procedural numbers to determine competency is the demonstrated wide variation in the rate at which trainees acquire skills [29,30]. Furthermore, a host of factors have been shown to affect the rate at which trainees develop skills, including training intensity [29], the presence of disruptions in training [31], the use of training aids (e.g., magnetic endoscopic imagers [3]), the quality of teaching and feedback received, and a trainee’s innate ability [32]. Reflective of these concerns, current pediatric credentialing guidelines outline “competence thresholds” as opposed to absolute procedural number requirements. A “competence threshold” is the minimum recommended number of supervised procedures a trainee must perform before competence can be assessed [33].

      Assessment based on quality metrics

      Pediatric endoscopy training programs increasingly require learners to monitor quality measures, such as independent terminal ileal intubation rate and patient comfort, for use as part of a global or summative assessment of trainees. Additionally, quality metrics are being used by practicing endoscopists as formative assessment tools to help promote improvement in care delivery [42]. However, the application of quality metrics to pediatric endoscopy requires pediatric‐specific measures, which have yet to be formally developed. Currently, there are limited data on the applicability of adult‐derived quality metrics to pediatric practice and on their impact on clinically relevant outcomes. For example, the reported cecal intubation rate among pediatric endoscopists varies from 48% to 96% [43–48]. Perhaps of even greater pertinence to pediatric procedures, the reported terminal ileum intubation rate varies from 11% to 92.4% [43,45–49]. Additional research is required to further delineate and define pediatric‐specific quality indicators that can be used for assessment and quality assurance purposes, and to validate them in a longitudinal, prospective fashion [50].

      Direct observational assessment tools

      In recent years, accreditation bodies and endoscopy training and credentialing guidelines have all placed greater emphasis on the continuous assessment of trainees as they progress towards competence. To this end, direct observational assessment tools have emerged to support a competency‐based education model that defines desired training outcomes. It is critical to ensure that assessment tools are psychometrically sound and have strong validity evidence.