Objective: The aim of this work was to quantify the interrater reliability of a set of scales that assess repetition, posture, and force as used on site when examining industrial work. Background: The interrater reliability of observational assessment methods can vary depending on how the methods are defined and the situations in which they are used. Method: Across several industries, pairs of analysts assessed 846 jobs, rating the repetition, force, and posture of the upper limbs. Twelve analysts with varying levels of experience participated. Results: Using an intraclass correlation coefficient (ICC), force and repetition had reliability values of .60 and .71 before discussion and .82 and .87 after discussion, respectively. After discussion, peak posture ratings had ICCs of .60 to .83. ICCs for average posture ratings ranged from .31 to .51 in initial ratings and from .55 to .67 in final ratings. Less experienced analysts changed their initial ratings more than did the senior investigators. Conclusion: The high interrater reliability of the repetition and force metrics indicates that a single analyst is appropriate for basic job assessment. Posture ratings benefit greatly from a two-analyst system. Average postures should be assessed across the full range of the scale when evaluating interrater reliability. Analyst pairs should be rotated to avoid the formation of shared biases. Application: For basic assessments of forceful exertions and repetitive motions, a single analyst can be used, reducing the resource requirements for both industry and large epidemiological studies.
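
For readers unfamiliar with the reliability statistic, the following is a minimal illustrative sketch (not the authors' analysis code) of how an ICC for a two-way random-effects, single-rater model, ICC(2,1), can be computed from an n-subjects by k-raters matrix of ratings, such as paired analyst scores for a set of jobs. The function name and the toy data are hypothetical.

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    `ratings` is an (n_subjects, k_raters) array, e.g. one row per job
    and one column per analyst. Computed from the standard two-way
    ANOVA mean squares.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)   # per-subject (per-job) means
    col_means = ratings.mean(axis=0)   # per-rater means

    # Partition the total sum of squares into subjects, raters, and error.
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    msr = ss_rows / (n - 1)            # mean square, subjects
    msc = ss_cols / (k - 1)            # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1)) # mean square, error

    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical example: two analysts rating five jobs on a 1-10 scale.
pairs = np.array([[3, 4], [7, 7], [2, 2], [9, 8], [5, 6]])
print(round(icc2_1(pairs), 2))
```

High agreement between the two columns yields an ICC near 1; values in the ranges reported above (.31 to .87) reflect partial agreement, which is why discussion between paired analysts raised the final ratings' reliability.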