Abstract
Differences among sets of criteria for evaluating microcomputer software are discussed. They are set against the results of three studies in which UK teachers evaluated five programs used in reading or English lessons. The checklist criteria were compared with the case study data using Stake's matrix of evaluation concerns [1]. This comparison suggested a heavy emphasis on antecedents in the checklists and on transactions in the case studies. In general, neither checklists nor case studies devoted much attention to empirically measured outcomes. One interpretation of the results is that while the checklists focused on intrinsic evaluation, the case studies focused on practical classroom issues, notably attention and motivation.