Abstract
Generative artificial intelligence (AI) has begun to be considered a cost-efficient alternative for geospatial and urban surveys, but there remains a critical need to evaluate how closely AI-generated outputs align with human responses. This paper compares responses from ChatGPT and residents in defining neighborhood boundaries, a long-standing challenge in urban studies that has no single correct answer and typically relies on input from resident surveys. Our analysis focuses on both the defined boundaries and the areas that are rarely covered by any boundary. Our results show that ChatGPT tends to generate neighborhood boundaries with less variability in extent and geographic coverage than crowdsourced boundaries, potentially favoring more standardized representations. Additionally, we find that AI-generated boundaries are less likely than human-drawn ones to cover areas with lower population density and higher percentages of non-White and Hispanic populations, reflecting potential biases. These findings highlight the need to critically evaluate generative AI’s potential to supplement human respondents in urban and spatial applications while carefully considering its limitations, particularly regarding bias and representation.
