Abstract
Many user studies fail to produce quality data because participants do not comprehend either the designed questions or the desired scale. In the context of in-person climate studies, providing a broad range of climatic conditions along with practice using the evaluation scale is an ideal but rarely used procedure that allows participants to understand the range and granularity of their own perceptions. This study compares three climate scales at different levels of granularity between groups with and without training. Comfort ratings for each scale on each scenario were compared between the training and no-training subsets, as were workload and comprehension ratings for each scale and scenario. Differences between the training and no-training subsets were not significant except for a few scenarios and groups. Survey completion times in this self-paced study also did not differ significantly. The significant differences observed for individual groups, together with lowered frustration ratings, indicate that low-resource training documents with examples may be a useful methodology for improving data quality, especially for human-factors surveys with limited participant availability.