In this work, I introduce the concept of geo-hallucination to capture a critical vulnerability in the application of large language models (LLMs), large multimodal models (LMMs), and emerging geospatial foundation models to urban analytics. While these models promise to democratize spatial knowledge by lowering technical barriers, they can also generate outputs that are spatially incoherent or outright fabricated, yet appear authoritative. Unlike traditional GIScience errors rooted in data collection or processing, geo-hallucination arises from deficiencies in a model’s spatial cognition and reasoning. Through illustrative cases in perceptual, navigational, and mapping tasks, I show how such failures distort urban knowledge production, with implications for planning, governance, and equity. Drawing on philosophical insights into false certainty and urban theory on the production of space, I argue that geo-hallucination is not merely a computational flaw but an epistemic and socio-political risk. Addressing it requires technical innovation, critical literacy, and a renewed commitment to grounding urban analytics in empirical and lived realities.