Teaching Evaluations

It is widely documented that student surveys, or student evaluations of teaching (SET), are one of the simplest and most widely used tools to evaluate teaching performance in higher education [1]. They are the most utilized teaching metric in North America (used by 94% of four-year liberal arts colleges [2]) and Australia, and they are also widely used in Europe and Asia, having attracted considerable attention in East Asia in particular [3]. Among the reasons for their wide use are their low cost, the ease of obtaining feedback from stakeholders, and the assumption that students, as the recipients of instruction, are able to measure teaching effectiveness [1]. Authors have also noted that teaching evaluations have formative aspects and can be leveraged by instructors to adjust and improve their teaching [4]. However, these surveys often capture student opinions of teaching capability [5] rather than providing a valid measure of faculty instructional effectiveness and/or student learning [6]. Moreover, the numerical results of these surveys are influenced by a number of factors unrelated to the instructor’s teaching effectiveness [7], including the type of course, the students’ own interpretation of the questions [9, 10], the instructor’s attractiveness [11], personality [12, 13], gender identity [14-16], race [17], accent [11], and attire [18], as well as the predominant gender of the instructor’s department [19] and the incentives in place, from chocolates to grade inflation [20, 21]. These factors also permeate the “comments” section of the SETs, which can not only be biased but can also include malicious and abusive remarks [14, 21-24], further compromising the overall value of the surveys for decisions about hiring, firing, merit pay, and promotion [1, 5, 21, 25]. An additional challenge is the low response rate of these surveys [8, 26, 27], which limits the value of the data collected. While several strategies to improve response rates are discussed in the literature [28, 29] (including mandatory participation [30]), these approaches should be implemented carefully, as they can further decrease the validity of the results.

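As a rough illustration of why low response rates limit what can be inferred from SET scores, the sketch below estimates the statistical uncertainty of a class-mean rating as a function of how many enrolled students respond. This is our own illustrative example, not drawn from the cited studies: the class size, rating spread, and response counts are hypothetical, and the calculation assumes responders are a random sample, which nonresponse bias typically violates.

```python
import math

def ci_half_width(sd: float, n: int, N: int, z: float = 1.96) -> float:
    """Approximate 95% CI half-width for a class-mean rating when n of N
    enrolled students respond, using the finite population correction.
    Assumes random nonresponse (hypothetical; real nonresponse is often biased)."""
    fpc = math.sqrt((N - n) / (N - 1))
    return z * (sd / math.sqrt(n)) * fpc

# Hypothetical class: 100 enrolled students, ratings with SD = 1.0 on a 5-point scale
N, sd = 100, 1.0
for n in (10, 30, 70):
    print(f"{n:3d}/{N} responses -> class mean uncertain by about ±{ci_half_width(sd, n, N):.2f}")
```

Under these hypothetical numbers, ten responses leave the class mean uncertain by roughly ±0.6 points on a 5-point scale, enough to blur most between-instructor comparisons, and that is before accounting for any systematic nonresponse bias.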

These findings present a particular problem for institutions of higher education, which must balance the need to reliably assess one of their most important tasks (teaching and learning) against many other constraints (cost, time, simplicity, etc.). As documented by multiple organizations, Clemson is not immune to this problem, and the Faculty Senate has specifically noted that evidence of bias extends beyond the student evaluators to include those responsible for assessing effective teaching (i.e., TPR committees, department chairs, and administrators) [31]. Aiming to address these issues, and recognizing that the evaluation of teaching effectiveness is an important process requiring a multifaceted approach, Clemson has recently adopted a model in which the evaluation of teaching effectiveness must include feedback from instruction and course evaluation forms completed by students, in which no single quantifier from these forms may substitute for a wide-ranging review of the responses (Faculty Manual, Chapter V, Section E.2.e.), and which requires the inclusion of at least two of the following metrics (Faculty Manual, Chapter VI, F.2.k.i):

  1. Evidence-based measurements of student learning (such as pre- and post-testing or student work samples) that meet defined student learning outcomes
  2. Evaluation (by peers and/or administrators) of course materials, learning objectives, and examinations
  3. In-class visitation by peers and/or administrators
  4. A statement by the faculty member describing the faculty member’s methods and/or a teaching philosophy
  5. Exit interviews/surveys with current graduates/alumni
  6. Additional criteria as appropriate for the discipline and degree level of the students
  7. A statement by the faculty member of methods or philosophy that also describes and documents how feedback from the student ratings of course experiences or the evaluation instruments above was used to improve teaching.


Additional metrics to support the evaluation of teaching activities may also be acceptable, and faculty are strongly encouraged to develop the corresponding assessment plans in consultation with their department chairs. The combination of sources should strike a balance between the needs of faculty members and the need for an objective evaluation. Examples of such metrics include, but are not limited to:

  1. Receiving peer-evaluation training and participating in the peer evaluation of other faculty members across the college and university.
  2. Mentoring other faculty members in pedagogy, course organization, student engagement, and other relevant teaching approaches.
  3. Teaching scholarship (teaching publications, participation in teaching workshops, teaching awards, development of teaching materials, etc.)
  4. External expert evaluations (teaching portfolios, teaching methodologies, teaching materials)
  5. Learning outcome measures, formative assessments [32], or evidence of inclusion of research-supported teaching and active learning methods.
  6. Additional evidence-based teaching activities (e.g., inclusion of previous feedback, development of new courses, flipped courses, or inclusion of course modules that support the goals of Clemson Elevate; examples of the latter include integrating activities aimed at highlighting inclusive excellence [33], global engagement, service and/or experiential learning, community engagement, communities of practice, etc.)


Besides providing much broader options for faculty to measure the teaching and learning impact of their courses, the new strategy also aims to minimize the potential biases introduced when a single assessment method is used, as it is unlikely that multiple metrics will be biased in the same way [34, 35]. It is also worth noting that Clemson’s College of Agriculture, Forestry and Life Sciences (CAFLS) has already implemented a similar approach to measuring teaching effectiveness (https://www.clemson.edu/cafls/teaching/index.html). Furthermore, notwithstanding the extensive literature supporting the use of multiple methods to assess teaching performance, faculty engagement [25, 36] will be critical during the transition period.

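As a back-of-the-envelope illustration of the first point above, the sketch below shows how quickly the chance that every metric is distorted by the same bias drops as metrics are added. The numbers and the independence assumption are ours, for illustration only; correlated biases (e.g., shared rater pools) would weaken the effect.

```python
# Hypothetical: each of k independent metrics has probability p of being
# skewed by the same bias; the chance that all k are skewed is p**k.
p = 0.20  # illustrative per-metric probability of a given bias
for k in (1, 2, 3):
    print(f"{k} metric(s): P(all affected by the same bias) = {p ** k:.3f}")
# Output: 0.200, 0.040, 0.008 -- combining metrics dilutes any single bias.
```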

In this context, the Faculty ADVANCEment Office aims to facilitate access to research focused on teaching and learning, helping both faculty and administrators understand how other universities have implemented similar multifaceted teaching assessment programs and how to navigate the implementation-evaluation-feedback loop to address instructional needs. The Office can facilitate discussions related to SETs and their assessment [37, 38] as they pertain to faculty development and advancement, and it supports the University’s efforts to improve teaching portfolios and develop teaching assessment plans. The Office is also actively seeking collaborative efforts to support the scholarship of faculty interested in improving teaching and learning experiences for Clemson students.


References

  1. Uttl, B., C.A. White, and D.W. Gonzalez, Meta-analysis of faculty's teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation, 2017. 54: p. 22-42 https://doi.org/10.1016/j.stueduc.2016.08.007
  2. Berk, R.A., Start Spreading the News: Use Multiple Sources of Evidence to Evaluate Teaching. Journal of Faculty Development, 2018. 32(1): p. 73-81 https://www.schreyerinstitute.psu.edu/pdf/UseMultipleSourcesSRs_Berk_JFacDev1-11-2018.pdf
  3. Chen, Y. and L.B. Hoshower, Student Evaluation of Teaching Effectiveness: An assessment of student perception and motivation. Assessment & Evaluation in Higher Education, 2003. 28(1): p. 71-88 https://doi.org/10.1080/02602930301683
  4. Hobson, S.M. and D.M. Talbot, Understanding Student Evaluations: What All Faculty Should Know. College Teaching, 2001. 49(1): p. 26-31 https://doi.org/10.1080/87567550109595842
  5. Hornstein, H.A., Student evaluations of teaching are an inadequate assessment tool for evaluating faculty performance. Cogent Education, 2017. 4(1): p. 1304016 https://doi.org/10.1080/2331186X.2017.1304016
  6. Spooren, P., B. Brockx, and D. Mortelmans, On the Validity of Student Evaluation of Teaching: The State of the Art. Review of Educational Research, 2013. 83(4): p. 598-642 https://doi.org/10.3102/0034654313496870
  7. Constantinou, C. and M. Wijnen-Meijer, Student evaluations of teaching and the development of a comprehensive measure of teaching effectiveness for medical schools. BMC Medical Education, 2022. 22(1): p. 113 https://doi.org/10.1186/s12909-022-03148-6
  8. Luo, M.N., Student Response Rate and Its Impact on Quantitative Evaluation of Faculty Teaching. The Advocate, 25(2) https://doi.org/10.4148/2637-4552.1137
  9. Clayson, D.E. and D.A. Haley, Are Students Telling Us the Truth? A Critical Look at the Student Evaluation of Teaching. Marketing Education Review, 2011. 21(2): p. 101-112 https://doi.org/10.2753/MER1052-8008210201
  10. Clayson, D.E., Student evaluation of teaching and matters of reliability. Assessment & Evaluation in Higher Education, 2018. 43(4): p. 666-681 https://doi.org/10.1080/02602938.2017.1393495
  11. Murray, D., et al., Exploring the personal and professional factors associated with student evaluations of tenure-track faculty. PLoS One, 2020. 15(6): p. e0233515 https://doi.org/10.1371/journal.pone.0233515
  12. Shevlin, M., et al., The Validity of Student Evaluation of Teaching in Higher Education: Love me, love my lectures? Assessment & Evaluation in Higher Education, 2000. 25(4): p. 397-405 https://doi.org/10.1080/713611436
  13. Clayson, D.E. and M.J. Sheffet, Personality and the Student Evaluation of Teaching. Journal of Marketing Education, 2006. 28(2): p. 149-160 https://doi.org/10.1177/0273475306288402
  14. Okoye, K., et al., Impact of students evaluation of teaching: a text analysis of the teachers qualities by gender. International Journal of Educational Technology in Higher Education, 2020. 17(1): p. 49 https://doi.org/10.1186/s41239-020-00224-z
  15. Mengel, F., J. Sauermann, and U. Zölitz, Gender Bias in Teaching Evaluations. Journal of the European Economic Association, 2019. 17(2): p. 535-566 https://doi.org/10.1093/jeea/jvx057
  16. Keng, S.-H., Gender bias and statistical discrimination against female instructors in student evaluations of teaching. Labour Economics, 2020. 66: p. 101889 https://doi.org/10.1016/j.labeco.2020.101889
  17. Storage, D., et al., The Frequency of “Brilliant” and “Genius” in Teaching Evaluations Predicts the Representation of Women and African Americans across Fields. PLOS ONE, 2016. 11(3): p. e0150194 https://doi.org/10.1371/journal.pone.0150194
  18. Oliver, S., et al., Fitted: the impact of academics’ attire on students’ evaluations and intentions. Assessment & Evaluation in Higher Education, 2022. 47(3): p. 390-410 https://doi.org/10.1080/02602938.2021.1921105
  19. Aragón, O.R., E.S. Pietri, and B.A. Powell, Gender bias in teaching evaluations: the causal role of department gender composition. Proceedings of the National Academy of Sciences, 2023. 120(4): p. e2118466120 https://doi.org/10.1073/pnas.2118466120
  20. Stroebe, W., Student Evaluations of Teaching Encourages Poor Teaching and Contributes to Grade Inflation: A Theoretical and Empirical Analysis. Basic and Applied Social Psychology, 2020. 42(4): p. 276-294 https://doi.org/10.1080/01973533.2020.1756817
  21. Lakeman, R., et al., Playing the SET game: how teachers view the impact of student evaluation on the experience of teaching and learning. Assessment & Evaluation in Higher Education, 2022: p. 1-11 https://doi.org/10.1080/02602938.2022.2126430
  22. Lakeman, R., et al., Appearance, insults, allegations, blame and threats: an analysis of anonymous non-constructive student evaluation of teaching in Australia. Assessment & Evaluation in Higher Education, 2022. 47(8): p. 1245-1258 https://doi.org/10.1080/02602938.2021.2012643
  23. Cunningham, S., et al., First, do no harm: automated detection of abusive comments in student evaluation of teaching surveys. Assessment & Evaluation in Higher Education, 2023. 48(3): p. 377-389 https://doi.org/10.1080/02602938.2022.2081668
  24. Kreitzer, R.J. and J. Sweet-Cushman, Evaluating Student Evaluations of Teaching: a Review of Measurement and Equity Bias in SETs and Recommendations for Ethical Reform. Journal of Academic Ethics, 2022. 20(1): p. 73-84 https://doi.org/10.1007/s10805-021-09400-w
  25. Marshik, T., et al., New frontiers in student evaluations of teaching: university efforts to design and test a new instrument for student feedback. Assessment & Evaluation in Higher Education, 2023: p. 1-14 https://doi.org/10.1080/02602938.2023.2190060
  26. Paolo, A.M., et al., Response Rate Comparisons of E-Mail- and Mail-Distributed Student Evaluations. Teaching and Learning in Medicine, 2000. 12(2): p. 81-84 https://doi.org/10.1207/S15328015TLM1202_4
  27. Zumrawi, A.A., S.P. Bates, and M. Schroeder, What response rates are needed to make reliable inferences from student evaluations of teaching? Educational Research and Evaluation, 2014. 20(7-8): p. 557-563 https://doi.org/10.1080/13803611.2014.997915
  28. Cone, C., et al., Motivators, barriers, and strategies to improve response rate to student evaluation of teaching. Currents in Pharmacy Teaching and Learning, 2018. 10(12): p. 1543-1549 https://doi.org/10.1016/j.cptl.2018.08.020
  29. Ahmad, T., Teaching evaluation and student response rate. PSU Research Review, 2018. 2(3): p. 206-211 https://doi.org/10.1108/PRR-03-2018-0008
  30. Aoun Bahous, S., et al., Voluntary vs. compulsory student evaluation of clerkships: effect on validity and potential bias. BMC Medical Education, 2018. 18(1): p. 9 https://doi.org/10.1186/s12909-017-1116-8
  31. Clemson University Faculty Senate, Meeting Report, January 11, 2022.
  32. Trumbull, E. and A. Lash, Understanding Formative Assessment: Insights from Learning Theory and Measurement Theory. WestEd, 2013 https://www.bhamcityschools.org/cms/lib5/AL01001646/Centricity/Domain/131/Understanding%20formative%20assessments%202013.pdf
  33. Hernandez, R., Discipline-Based Diversity Research in Chemistry. Accounts of Chemical Research, 2023. 56(7): p. 787-797 https://doi.org/10.1021/acs.accounts.2c00797
  34. Esarey, J. and N. Valdes, Unbiased, reliable, and valid student evaluations can still be unfair. Assessment & Evaluation in Higher Education, 2020. 45(8): p. 1106-1120 https://doi.org/10.1080/02602938.2020.1724875
  35. Kreitzer, R.J. and J. Sweet-Cushman, Evaluating Student Evaluations of Teaching: a Review of Measurement and Equity Bias in SETs and Recommendations for Ethical Reform. Journal of Academic Ethics, 2022. 20(1): p. 73-84 https://doi.org/10.1007/s10805-021-09400-w
  36. Williamson, A.L. and I.G. Wang, Redesigning a Course Evaluation Instrument: Experience, Practical Guidance, and Lessons Learned. Journal of Management Education, 2023. 47(4): p. 388-416 https://doi.org/10.1177/10525629231167296
  37. Linse, A.R., Interpreting and using student ratings data: Guidance for faculty serving as administrators and on evaluation committees. Studies in Educational Evaluation, 2017. 54: p. 94-106 https://doi.org/10.1016/j.stueduc.2016.12.004
  38. Cornes, S., et al., When students’ words hurt: 12 tips for helping faculty receive and respond constructively to student evaluations of teaching. Medical Education Online, 2023. 28(1): p. 2154768 https://doi.org/10.1080/10872981.2022.2154768


Additional Resources: