
Assessment of Medical Student Achievement of Competency‐based Objectives through Clinical Case Presentations

Introduction
The Medical College of Georgia expects medical students to demonstrate competence in six domains (Medical Knowledge, Patient Care, Practice‐based Learning, Communication, Professionalism, and Systems‐based Practice). This study aims to assess medical student performance in three of these domains (Medical Knowledge, Practice‐based Learning, and Communication) by evaluating student achievement during clinical case presentations, a self‐directed, small‐group activity that requires students to research a medical topic and present their findings to the class. We hypothesized that peer evaluators would rate their colleagues' achievement of these competencies higher than faculty evaluators would, and that student groups would perform satisfactorily on the Medical Knowledge competency but might struggle with competencies in which they have less experience, such as Practice‐based Learning and Communication.

Methods
Student (n=8–10) and faculty (n=3) evaluations of first‐year student clinical case presentation groups (n=39) from academic year 2017–2018 were analyzed. Student performance was assessed using a grading rubric whose components were categorized by competency. The data were analyzed using two‐tailed, two‐sample t‐tests.

Results
Students performed satisfactorily in all competencies assessed by the clinical case presentations. Overall average scores from faculty and peer evaluations were 84.6% ± 5.1% and 93.7% ± 2.6%, respectively, on rubric elements addressing Medical Knowledge; 84.3% ± 7.8% and 95.3% ± 2.1% on elements addressing Practice‐based Learning; and 84.7% ± 9.0% and 92.2% ± 5.0% on elements addressing Communication. Students gave significantly higher scores than faculty for all categories under each competency (p<0.05).

Discussion
Medical students performed satisfactorily and similarly across all competencies assessed by the clinical case presentations, as rated by both faculty and peers. However, peer evaluators consistently gave significantly higher scores than faculty, suggesting score inflation. Our previous research shows that the themes of narrative feedback are similar between student and faculty evaluators, even though faculty provide more constructive narrative feedback and students provide more positive feedback. Peers may therefore recognize different levels of achievement but may be unable to use the grading rubric effectively or may not understand the differences between levels of achievement within each competency. Clinical case presentations give medical students a needed opportunity to learn the expected level of achievement for each competency, and they provide opportunities for self‐monitoring and self‐reflection. Future plans include additional training for students on how to use the grading rubric, which may counteract inflation of scores arising from misunderstanding of the rubric.

This abstract is from the Experimental Biology 2019 Meeting. There is no full text article associated with this abstract published in The FASEB Journal.
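For readers who want to see what the analysis described in the Methods looks like in practice, the sketch below runs a two‐tailed, two‐sample t‐test comparing faculty and peer rubric scores using Python's scipy.stats. The score arrays are hypothetical placeholders, not data from the study, and the use of Welch's (unequal‐variance) test is an assumption; the authors' actual analysis may have used a different variant.

```python
# Minimal sketch of a two-tailed, two-sample t-test like the one described
# in the Methods. The scores below are hypothetical placeholders, NOT data
# from the study.
from scipy import stats

# Hypothetical overall rubric scores (percent) for one competency,
# one value per presentation group, rated by faculty and by peers.
faculty_scores = [82.0, 85.5, 79.0, 88.0, 84.0, 86.5]
peer_scores = [93.0, 95.5, 91.0, 94.0, 92.5, 96.0]

# Two-tailed, independent-samples t-test. equal_var=False applies Welch's
# correction; whether the study assumed equal variances is not stated.
t_stat, p_value = stats.ttest_ind(faculty_scores, peer_scores, equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# As in the abstract, p < 0.05 would indicate that peer scores differ
# significantly from faculty scores for this competency.
```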

Related Results

Hydatid Disease of The Brain Parenchyma: A Systematic Review
Abstract Introduction Isolated brain hydatid disease (BHD) is an extremely rare form of echinococcosis. A prompt and timely diagnosis is a crucial step in disease management. This ...
Breast Carcinoma within Fibroadenoma: A Systematic Review
Abstract Introduction Fibroadenoma is the most common benign breast lesion; however, it carries a potential risk of malignant transformation. This systematic review provides an ove...
Chest Wall Hydatid Cysts: A Systematic Review
Abstract Introduction Given the rarity of chest wall hydatid disease, information on this condition is primarily drawn from case reports. Hence, this study systematically reviews t...
Student Management in Improving Student Achievement at MAN 5 Jombang
Student management is needed to improve student achievement. Students achieve the best results when management is good. Therefore, it is necessary to have effective student ma...
Assessment of core teaching competency of health professional educators in Ethiopia: an institution-based cross-sectional study
ObjectivesUnderstanding the competency of educators is key to informing faculty development, recruitment and performance monitoring. This study aimed to assess the core teaching co...
Clinical Skills at the Undergraduate Level: What are we trying to assess?
The attainment of clinical skills is essential for the development and achievement of competence as a clinician. Transparency about what is being assessed and how and what should b...
Students’ numerical ability on minimum competency assessment in junior high school
The aim of this study was to describe the numeracy abilities of junior high school students in Bengkulu City in solving math problems based on minimum competency assessment questio...
Exploring Large Language Models Integration in the Histopathologic Diagnosis of Skin Diseases: A Comparative Study
Abstract Introduction The exact manner in which large language models (LLMs) will be integrated into pathology is not yet fully comprehended. This study examines the accuracy, bene...
