Abstract
Much has been written about myths and facts concerning teaching surveys completed by students at academic institutions. The myths usually emerge among faculty members based on their subjective point of view, personal experience, or intuitive interpretation of events. The following case study is the first of its kind: it follows how academic faculty members evaluate performance measurement of academic teaching, and teaching surveys in particular, while examining the change that has occurred over the last decade. The case analysis, examining instructors’ evaluation of teaching surveys completed by students, is based on a group of senior faculty members in Israeli academic institutions. The current study is unique in examining how academic faculty members perceive alternative options for measuring their overall performance in academic teaching as manifested in teaching surveys. One hundred eighty-two questionnaires, comprising open-ended questions concerning suggestions for alternative teaching evaluation surveys and their structure, were collected from senior faculty members at academic institutions. The research findings show that the instructors propose “professional” alternatives and perceive teaching surveys as an unprofessional and populist tool. Assuming that students’ voices and their opinion of teaching are important, professional alternatives for evaluating and improving teaching should find expression; instructors relate significantly to professional elements at academic institutions as potentially helpful factors.
Introduction
The purpose of the study is to explore the dominant tool for examining students’ satisfaction with courses, from a 10-year perspective, at a time when the academic world is changing completely with regard to the role of teaching and teaching methods in faculty work, as well as in appointment and promotion processes. In this changing world, students are becoming clients and teaching is becoming a product; consequently, there is also considerable competition for students’ attention. The purpose was to explore, through a case study in a university as a research setting, the faculty’s perceptions of teaching surveys completed by students. Teaching and learning portray a trend regarding the future of higher education. Is the instructor responsible for placating the students?
This study has at its foundation an article by Hativa (2008) on the topic of myths and facts concerning teaching surveys completed by students of higher education in Israel. The myths usually emerge among faculty members as a result of their subjective point of view, personal experience, or intuitive interpretations of events.
According to Hativa, these myths are very harmful because they create negative feelings among faculty members toward teaching surveys and lead to resistance to the use of these surveys. Most harmful is that these feelings serve faculty members as justification for objecting to the surveys and for rejecting their results. In this study we concentrated on three dominant myths.
Studies conducted by various researchers, in different places and using different methods, generated similar results: they refuted these myths. Below we present the myths (Hativa, 2008) and their contradictions, as evident from the literature, in order to answer the following research questions.
Q1: Do instructors believe that concessions on course requirements or high course grades affect the ratings that they receive from their students? If yes, how?
Q2: Do instructors believe that low ratings of instructors are influenced by statements of disappointed students, and if yes, to what degree?
Q3: Do instructors believe that low ratings of instructors are influenced by feedback from low-achieving students?
However, Sela et al. (2006) found a positive correlation between the percentage of positive verbal comments and the instructor’s rating, and a negative correlation between the percentage of negative comments and the instructor’s rating. Most of the verbal comments written about highly rated instructors were positive, while the contrary was evident among low rated instructors. Hence, for low rated instructors most of the open-ended comments are indeed negative, but this is not true of the other instructors.
Ten years later, we conducted the current study, which examines how academic faculty members perceive alternative options for measuring their overall performance in academic teaching, as manifested in teaching surveys.
Theoretical Framework
We employ the Expectancy Theory (ET) (Vroom, 1964), which is the most comprehensive motivational model that seeks to predict or explain task-related effort (Lewis et al., 1995). Based on this theory, individual behavior or performance highly depends on the strength of individual expectancies (Rifai & Hasan, 2016). In this study, these are the students’ expectations of the teacher’s performance (Wong & Chiu, 2019), and their ratings in the teaching survey as a response to the teacher meeting their expectations (Lama et al., 2015).
Through well-established principles of the psychology of learning and the psychology of cognition, ET within a social learning framework may provide a more articulated representation of behavior motivations than has been provided elsewhere (Jones et al., 2001). Furthermore, since expectancy attitudes relate to measures of effort and performance (Lawler & Suttle, 1973), which are the purpose of the teaching survey, ET is the optimal theoretical base for this study.
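To make the motivational logic concrete, Vroom’s (1964) model is commonly summarized by a multiplicative relation; the following is the standard textbook formulation, not a formula estimated from the present study’s data:

```latex
% Vroom's (1964) expectancy model: motivational force (MF) as the
% product of expectancy (E), instrumentality (I), and valence (V)
MF = E \times I \times V
```

Read through this study’s lens, one possible mapping is that $E$ is the student’s belief that effort in the course leads to performance, $I$ is the belief that performance (the instructor meeting expectations) will be followed by an outcome, and $V$ is the value the student attaches to that outcome, here a favorable rating; because the terms are multiplied, a near-zero value on any one of them collapses the motivational force.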
Methodology
Measurement Tool
In order to receive as many valuable solutions as possible, we used open-ended questions, which do not inhibit responses (Roberts et al., 2014). The responses to these questions were analyzed using qualitative content analysis, which often provides valuable insights (Ben-Hador & Eckhaus, 2018) and may offer indications for areas that need to be attended to (Eckhaus & Sheaffer, 2018). A questionnaire was used for data collection, with two open-ended questions: “Do you think the teaching evaluation survey format should change, and how?” and “In your opinion, are there alternatives to the teaching evaluation survey for assessing teaching? What are they?”, along with demographic questions.
Initial Sample
Questionnaires were distributed online using Google Docs to the senior faculty of seven academic institutions, and responses were anonymous. One hundred eighty-two completed questionnaires were collected (Eckhaus & Davidovitch, 2019). The respondents were from Ariel University (91), Ben-Gurion University (21), and the Jezreel Valley Academic College (20); seven respondents were divided among four other institutions, and the rest did not identify their institution. The response rate was 74%. This relatively high response rate provides adequate representativeness for the population. Of all respondents, 47.9% were female and 52.1% male. Respondents’ ages ranged from 22 to 39 (17.1%), 40 to 49 (41.4%), and 50+ (41.4%). Students’ evaluations in these institutions are based on a survey distributed to the students 3 weeks before the end of the semester. These surveys consist of one closed-ended question, ranked on a scale of 1 to 5, on satisfaction with the course instructor, and two open questions that inquire about points for improvement and points for preservation.
Data and Analysis Framework
We follow Gale et al.’s (2013) Framework Method, a well-known procedure employed by hundreds of researchers (Bonello & Meehan, 2019).
Data collection
First, we used an online anonymous survey, which may eliminate social desirability bias (Kwak et al., 2021). Therefore the first transcription stage in the Framework Method (Gale et al., 2013) was not necessary.
Coding
As Gale et al. (2013) suggested, we carefully read all the texts and labeled anything that might be relevant. According to Gale et al. (2013), it is vital to look for the unexpected, and gain a holistic impression of what was said. Accordingly, we also identified important insights that did not directly touch upon the hypotheses, in order to offer a more holistic perspective that includes interesting solutions.
Developing a working analytical framework
The researchers agreed on a set of codes to apply to all transcripts. However, “the analytical framework is never ‘final’ until the last transcript has been coded” (Gale et al., 2013, p. 5), and therefore coding was completed only after all the texts were manually read and tagged.
Applying the analytical framework
The working analytical framework was then applied by indexing texts using the existing codes. Each code was assigned a number for easy identification.
Charting data into the framework matrix
A spreadsheet was used to generate a matrix, and the data were “charted” into the matrix.
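The charting step can be illustrated with a minimal sketch. The codes, respondent identifiers, and excerpt assignments below are hypothetical placeholders, not the study’s actual analytical framework; the sketch only shows the mechanics of turning indexed texts into a respondents-by-codes matrix, as in the Framework Method (Gale et al., 2013):

```python
# Hypothetical working analytical framework: each code is assigned
# a number for easy identification (step: applying the framework).
codes = {1: "reduce survey weight", 2: "peer evaluation", 3: "restrict open answers"}

# Indexed data: respondent id -> code numbers applied to that respondent's text.
indexed = {
    "R27": [3],
    "R57": [2],
    "R71": [1],
    "R143": [1, 2],
}

def chart_matrix(indexed, codes):
    """Chart indexed texts into a respondents-by-codes matrix
    (1 = code applied to this respondent's text, 0 = not applied)."""
    return {
        rid: {num: int(num in applied) for num in codes}
        for rid, applied in indexed.items()
    }

matrix = chart_matrix(indexed, codes)

# Column totals show how many times each theme was substantiated,
# as reported in the appendix tables.
totals = {num: sum(row[num] for row in matrix.values()) for num in codes}
print(totals)  # -> {1: 2, 2: 2, 3: 1}
```

In practice this matrix lives in a spreadsheet rather than code, but the structure is the same: one row per case, one column per code, with theme counts read off the columns.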
Interpreting the data
Gale et al. (2013) suggested that “If the data are rich enough, the findings generated through this process can go beyond description of particular cases to explanation” (p. 5). The rich data allowed us to discover applicable suggestions for survey alternatives that might improve instructors’ performance measurement. These suggestions are also presented in this study.
Analysis and Results
The following are instructors’ suggested solutions to existing problems with teaching surveys. For simplicity, lecturer-ranked faculty members are simply referred to as “instructors.” The list of themes and the number of times each theme was substantiated are presented in Appendix Tables A1 and A2, for the first and second open-ended questions respectively.
However, although this myth was disproved by Hativa (2008), 10 years later, in the current study, 19 faculty members still think that the claim is true. This can easily be understood through ET, which underlines the direct relationship between expectations and rewards. That is, when students’ expectations of reward are met, they are motivated to provide positive feedback as a reward to the instructor.
A detailed example appears in the description provided by Respondent #62, an instructor from the discipline of Engineering with 4.5 years of teaching experience: I believe that teaching surveys should not be taken seriously. They most often express students’ feelings about their chances of passing the course and help challenged students hold the instructor responsible for this, while students with no such claims do not hurry to complete teaching surveys. The direct result is one: instructors tend to adapt the level of the contents studied to students’ basic abilities and the price of this is paid mainly by the top students.
Another respondent (#162), a senior lecturer with 20 years of teaching experience, added with regard to instructors’ attempts to improve their ratings: “A worthy question is whether students feel that they received good value or whether the instructor ‘wasted time’ and ‘played up to’ the students.”
Although Hativa refuted this myth, 10 years later, in the current study, 47 faculty members still think that open answers in the teaching surveys are often not to the point and not related to the teaching, and are sometimes insulting and hurtful. In an interesting study applying social-learning theory to the training of supervisors, Latham and Saari (1979) suggest that individuals should be coached to phrase evaluative comments in a positive manner rather than as negative comments. This approach may improve the feedback, or in terms of ET, the reward. In addition, teachers have to consider their own attitude toward students as an important factor. That is, teachers’ negative comments and talking down to students (Agnew, 1989) may result in a similar response from the students when they are able to express themselves. In that regard, instructional coaching for instructors may be beneficial (John, 2013).
The solution proposed by the instructors is to focus open-ended questions on more matter-of-fact answers, rather than allowing free and unrestricted expression. For example, Respondent #27, an instructor in the Department of Social Sciences with 10 years of teaching experience: In my opinion, the open-ended part often becomes a type of superficial and aggressive talkback. I think that students should be directed to be more matter-of-fact, for instance: to require them to note one main strength of the course/instructor and one weakness.
Respondent #45, an associate professor from the natural sciences with 25 years of teaching experience: “The verbal questions should be specific”; “To restrict the survey to keywords and more matter-of-fact things.”
Respondent #53, an instructor from the Department of Engineering, with 6 years of teaching experience: “To require students’ teaching surveys to relate to the course of the lesson and the material that they would have liked to study but did not receive in the course”; Respondent #51, a senior lecturer from the Department of Computer Science, with 15 years of teaching experience: Maybe they should be asked how many times they came to the instructor’s office hours, did they approach the instructor with questions about the material, did they do their homework independently, and about their background (did they fail a pre-course?), would they prefer to take an easy course with a high average grade but little contribution to the profession (with regard to knowledge) or a difficult course with a low average grade but high contribution to the profession. This would mean that there is some indication of the student’s ’sincerity’ and the objectivity of the survey.
Respondent #129, an instructor from the Department of Engineering, with 2 years of teaching experience: “The student provides an answer on how the instructor vivified the course, whether he posted new exercises, whether the material changed and is adapted to innovations in the industry”; Respondent #21, a senior lecturer from the Department of Political Science, with 10 years of teaching experience: “To what degree it sets high standards and requires effort, to what degree it challenges students and requires them to do their best.”
In addition to perceiving the surveys as not to the point, another claim concerns the sincerity of the anonymous responses.
Alternately, some instructors suggest revoking anonymity as a tool for enhancing the sincerity of the survey: Respondent #141, with 25 years of teaching experience: “An open conversation with the students, not anonymous.” And another respondent #147, an instructor with 4 years of teaching experience from Behavioral Sciences, explained: I think that the survey should include identification by name rather than being anonymous. The names should not necessarily be given to the instructor, but there should be an external examination of the association between the student’s grade on the course and his answers to the teaching survey.
In this case as well, Hativa (2008) refuted the myth whereby faculty members state that weak students blame their instructors for their failures at school and on exams and are therefore more motivated than others to utilize this platform to express their negative view of the instructor (Davidovitch & Notzer, 2004). This claim is supported by the approach whereby deficits and declines were found to be also functions of non-cognitive factors such as level of education, training, health, and speed of response (Merriam & Caffarella, 1999).
In the current study as well, 10 years later, and in line with studies that underline the importance of students’ motivation for learning (Kyndt et al., 2011) and participation in the classroom (Ahlfeldt et al., 2005), 30 faculty members note what they perceive as one of the most significant problems with teaching surveys: the presence or absence of students who complete satisfaction surveys. Instructors claim that students who did not attend class complete the surveys (to receive incentives).
Another respondent (#134), from the Department of Computer Science, with 4 years of teaching experience, referred to the incentives students receive for completing the survey: “They should not be given the benefit of early registration for the next semester. It only generates ‘neutral surveys’. Some people give a rating of 3 and note all kinds of reasons in their comments.”
Some instructors suggest forgoing technological progress in order to solve the problem: Respondent #140, an instructor with 12 years of teaching experience from the Faculty of Social Sciences: “It is possible to enter the classes and hold the surveys in person. That way only those who are present can complete them”; Respondent #64, an instructor with 10 years of teaching experience from the Faculty of Social Sciences: “Surveys were once administered at the end of the course, in class, and then those who were present answered, rather than those who harm good instructors and maybe even their chances of promotion at the press of a key.”
One of the respondents (#9), a senior lecturer with 13 years of teaching experience from the Faculty of Natural Sciences, suggested that: “The grades of the evaluating student should be taken into account. Weaker students give lower ratings to instructors who challenge them academically.”
Respondent #51, a senior lecturer with 15 years of teaching experience from the Department of Computer Science, provided further details: Maybe they should be asked how many times they came during office hours, did they approach the instructor with questions about the material, did they do home exercises independently, and about their background (did they fail a pre-course?), would they prefer to take an easy course with a high average grade but little contribution to the profession (with regard to knowledge) or a difficult course with a low average grade but high contribution to the profession. This would give some indication of the student’s “sincerity” and the objectivity of the survey.
Respondent #62, an instructor with 4.5 years of teaching experience from the Department of Engineering, further portrayed the frustration stemming from this situation: I believe that teaching surveys should not be taken seriously. They most often express students’ feelings about their chances of passing the course and help challenged students hold the instructor responsible for this, while students with no such claims do not hurry to complete teaching surveys.
Respondent #137 raised another idea: Only “regular” students whose grade average is above a reasonable level (such as 70) will receive permission to complete surveys. Alternately, similar sized groups of students from different levels (average of 90+, average of 75–90, average of less than 75) should be required to complete the survey.
Respondent #84, a senior lecturer with 13 years of teaching experience from the Biology department, stated: “The surveys or their analysis must take into account the student’s place with regard to academic level relative to the class.”
Respondent #138, an instructor with 10 years of teaching experience from the Psychology department, added in the same vein: “To divide the courses into groups by students’ perceived difficulty. And then compare the surveys within groups of each (perceived) difficulty level.”
And respondent #162, a senior lecturer with 20 years of teaching experience, suggested: “It would also be good to add a question on the student’s evaluation of himself as an outstanding, good, medium, or weak student.”
According to Constructivist Learning theory (Hein, 1991), in thinking about learning, teachers have to focus on the learner rather than on the subject or lesson to be taught. Accordingly, teachers argue that the learner’s experience is a crucial factor in the ratings, regardless of the knowledge and the lesson. Based on ET, educators who form expectations that account for external factors, such as the course level, lower their chance of being disappointed by the unpleasant reward of low ratings. Hence the following suggestions.
The solutions proposed by the instructors: Respondent #174, a senior lecturer with 8 years of teaching experience from the Department of Social Sciences: “It is necessary to relate to the course’s level of difficulty”; Respondent #10, a senior lecturer from the Department of Engineering: “It may be necessary to distinguish between the different faculties”; Respondent #16, an associate professor with 19 years of teaching experience from the Department of Engineering: “Just as impact factors of journals from different fields are not compared, also surveys from hard and boring engineering courses should not be compared to interesting and fun humanities courses”; Respondent #148, a senior lecturer with 5 years of teaching experience from the Faculty of Social Sciences: “Maybe the calculation method in some departments should be relative to the department.”
Another respondent, respondent #24, an instructor with 8 years of teaching experience from the Faculty of Health Sciences, further proposed: “To what degree students are quick to register to the courses of a certain instructor. To receive from the department office feedback on students’ attitude to the courses during registration.”
Following the same principle, another problem is the claim that large classrooms receive lower ratings.
It is important to note that, according to transformative learning theory (Mezirow, 1997), when people are faced with a disorienting dilemma, they are forced to reconsider their beliefs in a way that will fit this new experience. While, based on ET, instructors expect the reward to be consistent and related to the lesson taught, students’ point of view may change over time and may derive from other environmental factors that do not necessarily have a direct relationship with the teacher’s instruction.
Education leaders recognize the importance of being customer focused by collecting information for performance evaluation and continuous improvement (Beard, 2009). However, often performance indicators in higher education are questionable and debated (Morley, 2001). Performance indicators reduce the complexity of subjective judgments to a single objective measure (Laurillard, 1980, p. 187), and are often little more than socially-constructed floating signifiers (Morley & Rassool, 1999).
The high value that leaders of education organizations place on the teaching survey tool, with all its weaknesses, may generate frustration among the faculty. In higher education, most studies focus on students as the customers, while neglecting teachers’ work satisfaction (Chen et al., 2006). According to ET, instructors seek a reward for their teaching efforts. This can be manifested in the students’ comments in the teaching survey, or in the lack of appraisal from management, as evidenced by 45 responses arguing about the high weight given to the teaching survey:
Respondent #143, an instructor with 4 years of teaching experience from the Faculty of Health Sciences: “Their significance should be reduced.” Respondent #71, an instructor with 20 years of teaching experience from the Department of Industrial Engineering and Management: “It is not the structure of the survey that should be changed, but rather what is done with it”; Respondent #152, an instructor with 7 years of teaching experience from the Faculty of Social Sciences: “Most important—that they should not be the only means for evaluating the quality of teaching.”
Respondent #171, an instructor with 7 years of teaching experience from the Faculty of Social Sciences, further explained: “It is not the structure of the survey that is the problem but rather the undue weight given to the grade without understanding them in full (everything that happened in the course, the instructor’s demands, discipline, etc.).”
Other claims suggest eliminating teaching surveys completed by students and propose alternatives.
Suggestions for Alternatives to the Surveys Include
Conducting Interviews With Select Students Throughout the Semester
For instance:
Respondent #4, a senior lecturer with 23 years of teaching experience from the Faculty of Health Sciences: “Maybe interviewing several students in a focus group,” Respondent #10, an instructor with 10 years of teaching experience from the Faculty of Health Sciences: “Conducting interviews with select students throughout the semester”; Respondent #29, an instructor with 10 years of teaching experience from the Faculty of Social Sciences: “A focus group of student representatives that will take place during the semester so that it will be possible to make corrections while teaching rather than afterwards”; Respondent #51, an instructor with 10 years of teaching experience: “Consecutive and direct contact with the students themselves. I mostly find out what students really feel about my manner of teaching and the course itself when talking to them in the corridor during recess or breaks.” Respondent #61, a senior lecturer with 18 years of teaching experience from the Department of Engineering: “Maybe it is possible to think about receiving information from the students during the semester?”; Respondent #86, a senior lecturer with 18 years of teaching experience from the Faculty of Social Sciences: “It is possible to reinforce the surveys with in-depth interviews with select students,”“Open conversations between the students and the instructor.”
Respondent #112, an instructor with 3 years of teaching experience from the Psychology department, added: “Maybe sample several students, compensate them, and hear from them in a more detailed way in the form of an interview,” and Respondent #123, an instructor with 6 years of teaching experience from the Department of Health Sciences, explained: There are alternative options. To hold a structured feedback conversation with the instructor. In the last session of the course. The purpose is that the instructor will understand what should be changed in the lectures rather than being in a defensive position versus anonymous students who project comments.
Respondent #124, an instructor with 4 years of teaching experience from the Faculty of Social Sciences: “It is possible to hold an intermediate evaluation that will only be given to the instructor so that he can try and improve.”
This feedback technique of direct conversation is evidently very effective but it cannot serve the management, as testified by Respondent #177, a senior lecturer with 35 years of teaching experience from the Faculty of Social Sciences: I always tell them that I have to replan the course and I would like them to help me improve it. I ask them to indicate things that were helpful and that should be preserved and things that should be changed. This feedback is very enriching. Regretfully, it is not a tool that can be utilized by the management.
Surveys of Graduates, or Completed a While After the Course
For instance, Respondent #178, a senior lecturer with 12 years of teaching experience from the Physics department: “The opinion of the students themselves, but at advanced stages of the degree, once they can . . .”
Respondent #90, a senior lecturer with 5 years of teaching experience from the Department of Engineering: “Surveys should be given to course graduates (rather than to students) in order to estimate to what degree the instructor and the course contributed to success at work”; Respondent #178, a full professor with 27 years of teaching experience from the Mathematics department: “It is possible to organize a survey of university graduates or students in their last year of studies in the discipline”; Respondent #51, a senior lecturer with 15 years of teaching experience from the Department of Computer Science: Maybe it would be a good idea to ask graduates which courses they evaluate higher after they have begun work in the field. In my experience, there are many cases where graduates who “suffered” in the course came to me with letters of gratitude once the knowledge they accumulated in the course helped them become admitted/advance in their job or Master’s degree. That is the most objective measure in my opinion.
Professional Observation
For instance, Respondent #36, an associate professor with 18 years of teaching experience from the Faculty of Social Sciences: “Feedback should be given by experts in education and in the academic field taught”; Respondent #169, an instructor with 12 years of teaching experience: “An expert enters several classes, talks to the instructor, checks the exam, the grades, and the syllabus. The additional requirements . . .”; Respondent #147, an instructor with 4 years of teaching experience from the Faculty of Behavioral Sciences: The best evaluation is through observations in the classroom. Not student surveys. Students (not all of them, but most of them) mostly complain. Observations in the classroom, evaluation of teaching in the classroom—is truly important and to the point
Respondent #142, an instructor with 12 years of teaching experience from the Department of Computer Science: “Instructors who have a problem with teaching—their classes should be monitored, each should receive personal guidance, ways and tips for handling their problems”; Respondent #136, an instructor with 9 years of teaching experience from the Psychology department: Videotaping a lesson and then the instructor goes over the lesson with a type of tutor and they note points that should be preserved or improved. Sometime later, another lesson is videotaped and the points to be improved are examined.
Alternately, without videotaping but observing, in the same format. For instance, Respondent #137, an instructor with 10 years of teaching experience from the Physics department: Of course a committee of professionals who attend lectures and report on the level of teaching, which will be part of the rating. Optimally, some of those on the committee should be people from outside the department in order to rule out phenomena of acquaintance and friendship.
Respondent #119, an associate professor with 18 years of teaching experience from the Psychology department: “A team of experts who will come to lectures and provide professional feedback”; Respondent #143, an instructor with 7 years of teaching experience from the Department of Engineering: “There is room to consider an external, objective supervisor who will enter instructors’ classes at least as an observer (30 minutes twice a semester) and give his evaluation on the quality of teaching”; Respondent #22, an instructor with 12 years of teaching experience from the Faculty of Social Sciences: “Maybe it is preferable for someone to come and observe the lesson as utilized in teacher education. An observation by a teaching expert can be much more helpful and professional than student comments.”
Peer Evaluation
For instance Respondent #57, an instructor with 10 years of teaching experience from the Faculty of Social Sciences: “Peer evaluation, evaluation by professional representatives from the Unit for Advancement of Teaching, examining students’ achievements and knowledge acquired in the course”; Respondent #57, an instructor with 10 years of teaching experience from the Faculty of Social Sciences: “Evaluation by peers who attend the lecture”; Respondent #83, a senior lecturer with 12 years of teaching experience from the Faculty of Social Sciences: “Evaluation by the head of department, surveillance of classes by faculty members”; Respondent #93, a senior lecturer with 3 years of teaching experience from the Physics department: “Peer evaluation. Every instructor will perform an evaluation of, say, 4 lectures of peers in each semester, and they in turn will evaluate him.”
Finally, there are also suggestions related to strengthening instructors’ ability and tools, for instance: Respondent #43, a senior lecturer with 7 years of teaching experience from the Department of Engineering: “It is also a good idea to hold workshops for instructors to improve teaching and refresh methodologies”; Respondent #12, a senior lecturer with 10 years of teaching experience from the Department of Health Sciences: “It is most important to provide tools for coping with the results of the surveys!”; Respondent #63, an instructor with 2 years of teaching experience from the Department of Engineering: “Accessibility and sending instructors instructions on how to access them [the surveys].”
Discussion
The faculty quotes demonstrate a gap between academic teachers’ expectations and estimates of their teaching performance and the feedback they receive from students. Several teachers express disappointment at offensive, disrespectful, or irrelevant comments, which may affect motivation. This effect arises from an expectation of different behavior and different norms. Norms affect motivation by specifying shared standards and expectations for appropriate behavior (Ajzen, 1991). Expectancy Theory (ET) is a dominant theory addressing motivation (Ahmed & Saha, 2014); it argues that individuals rationally assess whether a level of effort is worthwhile according to the expected reward (Duzgun & Yamamoto, 2016). A decrease in motivation is therefore inevitable.
Based on these findings, and in order to develop higher education, we recommend the development of a new classroom norm, one shaped by a sense of belongingness. According to Social Identity Theory (Tajfel & Turner, 1986), individuals are motivated to describe themselves in terms of their group belongingness. Before joining a group, individuals evaluate how well they fit the group’s norms (Korte, 2007). Successfully instilling a sense of belongingness in the classroom may therefore have significant effects both on teaching survey scores and on appreciation of the teachers’ efforts. The social reward received from involvement in educational programs (Christopher et al., 2001) may meet at least part of students’ expectations and thereby, based on ET, improve their approach toward the course and the instructor. This implementation may bring about the desired “spirit of mutuality between teachers and students as joint inquirers” (Knowles, 1980, p. 47).
Future research on techniques for implementing this norm in the classroom may advance students’ appreciation of teaching and, in the long run, increase registration rates.
Conclusion
This study attempts to follow up on a study conducted 10 years ago with the purpose of examining what faculty members think of performance measures of academic teaching. It is a case study with a 10-year perspective (2008–2019).
Faculty members, academic instructors in Israel and around the world, have a point of view, albeit a subjective one, stemming from their personal experience or from intuitive interpretation of events; it cannot be disregarded. Hence, this case study is the first of its kind to follow how academic faculty members evaluate performance measures of academic teaching in general, and teaching surveys in particular, in view of the change that has occurred over the last decade.
This study builds on an article by Hativa (2008) on myths and facts concerning teaching surveys completed by students in Israeli higher education. According to Hativa, these myths are very harmful because they create negative feelings among faculty members toward teaching surveys and lead to resistance to conducting them.
The following is a presentation of the current research findings on instructors’ perceptions of teaching surveys, which express students’ satisfaction or dissatisfaction.
Although Hativa refuted this myth, 10 years later, in the current study, faculty members still think that open verbal answers in the teaching surveys are often not to the point and not associated with the actual teaching, and are sometimes insulting and hurtful. Some instructors even suggest canceling anonymity as a tool for enhancing the sincerity of the survey.
This leads to another problem with teaching surveys, as evident from the instructors’ comments: The attitude of the management and the undue weight given to the surveys.
Other claims suggest eliminating teaching surveys completed by students and propose alternatives:
Conducting interviews with select students throughout the semester.
Surveys of graduates, or surveys completed a while after the course.
Professional observation by the body responsible for academic teaching at the institution, which serves as the professional authority.
Evaluation by peers from the department who are proficient in the subject, whether academic supervisors, experts, or fellow instructors. However, this might be problematic: such evaluations are subjective and might be biased in either direction, for reasons of internal politics or due to acquaintance and personal relations between the instructor and the evaluator. The instructor might also prepare in advance.
It is also possible to consider passing the responsibility to the instructors by having them prepare a portfolio, a document that reflects their full performance in the field of teaching and learning, and submitting it to the head of department or the dean. The portfolio may significantly advance a teacher’s professional growth and preserve evidence of exemplary teaching (Wolf, 1996). It is a comprehensive tool for reaching decisions about excellence and performance (a basis for evaluation and judgment) concerning the quality of teaching: it encourages self-reflection and thinking by the faculty member, and by the superiors (Lewis, 2001).
The more recent literature on portfolios recognizes different types of portfolios (Leggett & Bunker, 2006), although use of a multi-purpose document may be adequate (Buckridge, 2008). Future research on the challenges and advantages of the teaching portfolio may enhance the understanding of this possible solution. There is little research that details the emotions of teachers as they engage in the construction of teaching portfolios (FitzPatrick & Spiller, 2010).
This is an opportunity for the faculty member to think about their teaching and to emphasize the association between the goals of the teaching and documented learning by the students.
This study is unique in examining how academic faculty members perceive alternative options for measuring their overall performance in academic teaching as manifested in teaching surveys. The research findings show that the instructors relate to “professional” alternatives and perceive teaching surveys as an unprofessional and populist tool.
Assuming that students’ voices and their opinion of teaching are important, professional alternatives for evaluating and improving teaching should find expression—and the instructors relate significantly to professional elements at academic institutions as potentially helpful factors.
This study indicates the urgent need to examine ways of properly and fairly evaluating the quality of teaching work in academia, in addition to satisfaction surveys completed by students. This evaluation has significance for criteria concerning teaching excellence.
The significance of the study includes several aspects:
The issue of teaching and learning is increasingly evident in academic institutions. COVID-19 has shaken all teaching methods, and institutions are trying to bring their students back to campus. The students’ voice is very important at present, reflecting their wishes.
Both in Israel and elsewhere, faculty are rewarded not only for research but also for teaching, through promotion and appointment. This is therefore a burning topic for instructors.
Due to the significance of this topic, institutions are trying to find ways to measure the quality of teaching. One such way is peer evaluation, in which the instructor is evaluated by colleagues; its disadvantage is subjectivity, as internal politics can bias the evaluations.
There is the portfolio method, where instructors document and bring proof of their teaching. But this too is not necessarily objective.
The optimal formula for full and objective evaluation of instructors has yet to be found. Instructors are required to work at designing the syllabus, to reexamine teaching methods, and therefore this topic of what the evaluations reflect and what faculty think about the evaluations is more important than ever.
There is a tension between teaching and research. Over the years, the traditional role of academia has been to produce knowledge. But at present students want to acquire a profession and that is not part of the essence of academia. The student sees goals that are not necessarily compatible with the teaching in a course, such that the student’s judgment is not necessarily about the manner of teaching.
All these are supported by the policy of the Council for Higher Education. The council requires the institutions to conduct evaluations. The students have power and are represented in the Council for Higher Education.
Israeli student associations are strong, and in order to satisfy students, all institutions stress the significance of their opinions. Assuming that students’ satisfaction is important, the questions are: What do the evaluations indicate, and how should they be treated? The current study presents the considerable anguish experienced by instructors, a topic that has received little research attention.
The uniqueness of the study lies in examining, through direct questions, what the evaluation surveys mean to faculty members, including both their benefits and their harm. There is little research on this topic, as students’ voices are heard more often. The instructors also propose creative solutions, presented in the study.
Research Implications
The faculty argue that there is room for teaching centers that address teaching, with faculty evaluations provided by experts in education. Hence, it is necessary to strengthen the teaching centers and the experts who have a good grasp of academic pedagogy and who come from academia, in contrast to the teaching surveys administered at present. Students’ voice is important, but the evaluation should be a tool for assessment and criticism, and not anonymous. In addition, students should be taught how to evaluate the instructor, for example using a method such as SWOT analysis, and to avoid meaningless sentences in evaluations that carry weight for instructors.
Limitations and Future Research
In this study, our sample of respondents included instructors from multiple institutions, yet we did not compare instructors’ perceptions and beliefs by type of academic institution.
We therefore recommend extending the research to compare different types of institutions, through further research in teaching colleges, private institutions, colleges, and universities. A comparison of these institutions’ perspectives would provide insights into possible differences in the beliefs surrounding teaching surveys and how they change over time.
In addition, further studies may extend this research by comparing academic institutions in different cultures. For example, examining perceptions and suggestions in institutions representing various categories in Hofstede’s cultural dimensions (Hofstede & Bond, 1984) may offer interesting insights.
Appendix
List of Codes for the Second Open-Ended Question: “In your opinion, are there alternatives for the teaching evaluation survey, in order to assess teaching? What are they?”

| Code | Frequency |
| --- | --- |
| I don’t know | 40 |
| Portfolio | 1 |
| The survey method is not good | 1 |
| It’s not right to have students who are absent from class | 7 |
| Conduct interviews with a sample of students throughout the semester | 25 |
| Professional observations | 17 |
| Ask students some time after the course | 6 |
| Peer evaluation | 18 |
| Include only students who attended | 2 |
| Use a qualitative rather than quantitative evaluation | 4 |
| Check students’ achievements in the course (only ask high-achieving students) | 6 |
Declaration of Conflicting Interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
