WTF: Where’s the Feedback?
With the conclusion of every quarter comes a flurry of emails and reminders from professors and mailing lists to complete end-of-quarter evaluations. Unfortunately, most of these emails go unread, which leads to low response rates. This is not an isolated phenomenon: polls and studies that rely on voluntary participation typically see low turnout. Even in presidential and midterm election polling, a large demographic of potential respondents is missing from the equation.
Of course, even with that context in mind, it remains to be seen whether feedback forms and end-of-quarter evaluations are effective at all. It’s common to hear disgruntled students complain about classes and professors as early as the end of their first lecture. Although we trust students to give honest criticism of their classes, there is still the danger of them relying too heavily on emotion in their evaluations. There are many ways to improve communication between students and professors, but they require both sides to take initiative for anything to change. So, even though course evaluations aren’t a perfect measure of a professor’s teaching ability, they’re a good starting point for communication with students. Increasing participation rates would also offset the skew that comes with a small sample size.
It helps to incentivize filling out course evaluations. Making them worth a couple of points toward the final grade, or even better, extra credit, can go a long way, since completing an evaluation takes only a few minutes.
Another way to ensure participation would be to simply mandate it, or to enroll students automatically and let them opt out. Then we could use a randomized multiple-interval sampling method to get a better read on what students think about a particular class. This would be done online through a few questions that don’t take long to answer. The poll would appear at a random login and impose a “system lockout” of sorts: the student would have to answer the questions before regaining the functionality of the page. For example, the first evaluation sample could be taken anytime a student logs into Canvas or EEE from weeks two to four. The next sample could be taken at a random login during weeks six to eight. Finally, an end-of-quarter evaluation could be given before or after the final examinations in week 10.
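To make the idea concrete, here is a minimal sketch, in Python, of how such a randomized multiple-interval sampler might behave. The window boundaries, the per-login trigger probability and every function name here are assumptions made for illustration, not part of any real campus system.

```python
import random

# Hypothetical sketch: each student is surveyed at most once per sampling
# window, triggered at a random login within that window. Window boundaries
# and the trigger probability are illustrative assumptions.

SAMPLE_WINDOWS = [(2, 4), (6, 8), (10, 10)]  # (start week, end week)

def window_for(week):
    """Return the index of the sampling window containing this week, or None."""
    for i, (start, end) in enumerate(SAMPLE_WINDOWS):
        if start <= week <= end:
            return i
    return None

def should_prompt(week, completed, rng, chance=0.25):
    """Decide at login whether to "lock out" the student with a survey.

    `completed` is the set of window indices already answered; `chance` is
    the per-login probability of triggering inside an open window.
    """
    w = window_for(week)
    if w is None or w in completed:
        return None
    return w if rng.random() < chance else None

def simulate_quarter(logins_per_week=3, seed=0):
    """Simulate one student's logins over a 10-week quarter."""
    rng = random.Random(seed)
    completed = set()
    for week in range(1, 11):
        for _ in range(logins_per_week):
            w = should_prompt(week, completed, rng)
            if w is not None:
                completed.add(w)  # survey answered; page unlocked
    return completed
```

The design point is that no single login is guaranteed to trigger the survey, so responses are spread across each window rather than clustered at one moment of frustration or relief.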
By implementing a system like this, where students are asked about a course at random points during the quarter, it should be easier to trace the average impression a student has of it. Aggregating this data could also help dampen outliers that random selection might produce. This mode of collecting data is not entirely new; a variation of it has been used in studies of adolescents, who were asked to report their feelings and emotional states at random points as they went about their daily lives. This enabled researchers to identify trends that were common among the participants. The feedback can be beneficial for professors too, as the multiple samples could indicate which points during the course the class, on average, felt could be improved, and professors could extrapolate from there.
All in all, the end goal is for professors to get valuable feedback from students to improve their courses, and for students to feel comfortable approaching professors with their criticisms or praise of the courses they’re taking. Those who complain but don’t voice those complaints where it matters should consider the disconnect between their words and their actions. Yes, online evaluations can feel tedious, but they are, at the moment, critical to improving the education future classes will receive.
Eashan Kotha is a second-year biological sciences major. He can be reached at email@example.com.