Abstract
Users form snap judgments about decision‑support systems (DSS) that guide later reliance. We examined how those judgments depend on DSS source—machine‑learning algorithm (ML) versus wisdom‑of‑the‑crowd (WoC)—and data scale (small, medium, large). Seventy‑eight participants repeatedly chose between paired advisers (e.g., medium‑ML vs. large‑WoC) and then performed a six‑trial visual‑search task with help from their selected adviser. Participants showed no inherent bias for ML or WoC but consistently preferred advisers built on larger datasets. Within six trials, reliability estimates were calibrated to observed accuracy, indicating rapid trust adjustment. Findings suggest that users intuitively weighed sample size over provenance when forecasting DSS performance and that well‑sized crowd systems can compete with contemporary ML advisers.
