ARTICLE IN PRESS
TABLE 8. Results — Feedback Quality

                          Possible Range    SIMPL                       In-Person
                          of Scores         Mean    SD     Range        Mean    SD     Range
Resident survey           10-50             47.74   3.00   37-50        45.33   4.77   32-50
Third-party assessment    6-30              23.40   3.75   16.5-29      22.25   5.94   9-29.5
inherently different from feedback given in person. With SIMPL, the feedback is one-way and removed in time and place from the case and resident. It is thus logical that feedback via SIMPL would be more directive and presumptuous, as there is no opportunity for a two-way conversation that includes the residents' thoughts, perceptions, and questions. In contrast, feedback in person allows for a back-and-forth exchange that is more conducive to questions and reflections. It is also possible that other social factors could explain the differences we saw surrounding the exchange of formative operative feedback; without having to look into the eyes of, and speak directly to, the person receiving the feedback, one may find it easier to give more critical, pointed, and direct feedback, as we saw with the SIMPL feedback. Conversely, when providing feedback in person, one might try to more gently guide the resident toward an end recommendation or assessment. Similarly, without the opportunity for the resident and attending to have a back-and-forth conversation in real time, there will naturally be a tendency for fewer words to be used, which is consistent with the lower number of total utterances observed in this study with SIMPL feedback compared to in-person feedback. Importantly, even though the way surgeons spoke and delivered the feedback changed, both the resident survey and the third-party assessment showed no difference in the quality of the feedback between in-person feedback and feedback given via the SIMPL application. Though there was a statistically significant difference in the scores of descriptor 4 of the resident survey, "focused on personality" vs. "focused on behaviors," this difference is not clinically significant, as both delivery methods still strongly trended toward the recommended feedback technique of "focused on behaviors." There are several limitations to this study.
Firstly, there is a potential for bias in our results due to the Hawthorne Effect, or the awareness of being observed. The 4 included surgeons were aware that their SIMPL transcripts were being studied, and a research team member audio-recorded in-person feedback encounters in the operating room with their knowledge that a recording device was being used. Thus, the feedback transcripts and recordings may be of higher quality and may not be representative of the true feedback encounters that occur between surgeons and residents, some of which do not occur in this setting. The true impact of the Hawthorne Effect on our results is unknown; however, it likely affects the in-person feedback more strongly, as the researchers were physically present for those feedback encounters. Importantly, this imbalance in bias makes our final conclusion, that SIMPL is a good alternative to in-person feedback, stronger. Secondly, there could also be a ceiling effect at play. The surgeons who agreed to be part of this study were generally known in the department to be good educators. It is more difficult to differentiate the quality of feedback when scores consistently fall at the top of the scales being used. In our study, the ceiling effect is present when analyzing the resident surveys. Our data show that the mean resident survey scores for the SIMPL and in-person feedback encounters were 47.74 and 45.33, respectively, out of a possible high score of 50. By these data, an argument could be made that a true difference in the quality of feedback exists, but that the scale we chose was not sensitive enough to detect it. Of note, this possibility was anticipated during the creation of our study design, and one of its strengths was the use of multiple validated tools to measure the quality of the feedback from different perspectives. The Third-Party Feedback Assessment Form was also utilized, which, with means of 23.40 and 22.25 out of 30 for SIMPL and in-person, respectively, does not show evidence of a ceiling effect. Thus, we are confident in our conclusion that the quality of the feedback does not differ between SIMPL and in-person delivery. Our study also contained a wide variety of surgical procedures of varying complexity. This was not explicitly accounted for in our study design; however, both of our feedback quality scoring systems have difficulty of case inherently built in.
For example, in the resident survey, "Gave right amount of feedback" is listed as a recommended feedback technique, and in the Third-Party Assessment, feedback that is "concise yet comprehensive" earns the encounter 5 points. The items on these scoring systems have purposeful subjectivity that allows the third-party assessor or resident to take into consideration the numerous variables of each individual encounter, one of which is case complexity. Another limitation of the present study is its small sample size, which precludes more robust analysis of other variables of potential interest. Each surgeon in our
Journal of Surgical Education, Volume 00, Number 00, 2018