Abstract
Sociologists increasingly face choices among competing algorithms that represent reasonable approaches to the same task, with little guidance in choosing among them. We develop a strategy that uses simulated data to identify the conditions under which different methods perform well and applies what is learned from the simulations to predict which method will perform best on never-before-seen empirical data sets. We apply this strategy to a class of methods that group respondents to attitude surveys according to whether they share construals of a given domain. This allows us to identify the relative strengths and weaknesses of the methods we consider, including relational class analysis, correlational class analysis, and eight other such variants. Results support the "no free lunch" view that researchers should abandon the quest for one best algorithm in favor of matching algorithms to the kinds of data for which each is most appropriate, and they provide direction on how to do so.
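The simulate-then-predict strategy described above can be illustrated with a minimal toy sketch. Everything here is a hypothetical stand-in: the two candidate "methods" are simple location estimators (mean vs. median), not the construal-detection algorithms the article compares, and the nearest-neighbor predictor is one plausible way to map observable data-set features to the method that performed best in simulation.

```python
import random
import statistics

# Toy illustration of the abstract's strategy: (1) simulate data sets with
# known properties, (2) record which candidate method performs better on
# each, (3) predict the better method for a new data set from an observable
# feature of that data set. All names and choices here are illustrative.

random.seed(0)

def simulate(outlier_rate):
    """Draw a sample from a contaminated normal distribution (true center 0)."""
    return [random.gauss(0, 1) if random.random() > outlier_rate
            else random.gauss(0, 10) for _ in range(200)]

def error(estimate):
    return abs(estimate - 0.0)  # distance from the known true center

# Step 1-2: build simulation records of (observable feature, winning method).
records = []
for outlier_rate in [0.0, 0.05, 0.1, 0.2, 0.3, 0.4]:
    for _ in range(50):
        x = simulate(outlier_rate)
        feature = statistics.stdev(x)  # observable proxy for contamination
        winner = ("mean" if error(statistics.fmean(x)) < error(statistics.median(x))
                  else "median")
        records.append((feature, winner))

# Step 3: predict the best method for unseen data by a nearest-neighbor vote
# over the simulation records.
def predict_best_method(new_data, k=15):
    feat = statistics.stdev(new_data)
    nearest = sorted(records, key=lambda r: abs(r[0] - feat))[:k]
    votes = [m for _, m in nearest]
    return max(set(votes), key=votes.count)

print(predict_best_method(simulate(0.0)))  # clean data
print(predict_best_method(simulate(0.4)))  # heavily contaminated data
```

The design choice mirrored here is that the predictor never sees the true data-generating process of the new data set, only features computable from the data itself, which is what makes the strategy applicable to empirical data.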
