To reduce the subjectivity of class participation grades, a method was devised that combined forced-distribution peer ratings with professor grades. In seven seminar courses, correlations between professor and peer ratings ranged from .83 to .90. Course/teacher evaluations were high, and the professor/peer technique was generally perceived as a fair way to evaluate participation.
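The combination described above can be sketched in code. This is only an illustration under assumed details (a 50/50 weighting of professor and mean peer ratings, and Pearson correlation as the agreement measure), not the authors' exact procedure; all function names and the weighting parameter are hypothetical.

```python
from statistics import mean

def pearson_r(xs, ys):
    # Pearson product-moment correlation between two equal-length lists.
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    varx = sum((x - mx) ** 2 for x in xs)
    vary = sum((y - my) ** 2 for y in ys)
    return cov / (varx * vary) ** 0.5

def combine_ratings(professor, peer_matrix, weight=0.5):
    """Combine professor grades with mean peer ratings.

    professor:   {student: grade}
    peer_matrix: {rater: {student: score}} -- each rater's forced-
                 distribution scores for the peers they rated.
    Returns (final_grades, professor/peer correlation).
    """
    # Mean peer rating per student (students do not rate themselves).
    peer_mean = {
        s: mean(r[s] for r in peer_matrix.values() if s in r)
        for s in professor
    }
    # Assumed 50/50 weighting of professor and peer components.
    final = {s: weight * professor[s] + (1 - weight) * peer_mean[s]
             for s in professor}
    r = pearson_r([professor[s] for s in professor],
                  [peer_mean[s] for s in professor])
    return final, r
```

For example, with three students graded 90, 80, and 70 by the professor and peer means of 92.5, 82.5, and 72.5, the correlation is 1.0 and the combined grades fall midway between the two sources.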