Abstract
Attention economics concerns the study of how attention, conceptualized as a scarce resource, is allocated. In this essay I relate fundamental insights from attention economics to recent advances in a specific type of artificial intelligence known as Large Language Models (LLMs), such as OpenAI's GPT. I argue that the development leap known as the ‘LLM revolution’ can be expected to have a fundamental impact on planning practice. However, we should be careful not to fixate on the expectation that LLMs will necessarily always deliver superior ‘intelligence’. Rather, it may be more helpful to think of them as providing relatively cheap synthetic competent attention, considering that attention scarcity, rather than information or knowledge scarcity, is the critical bottleneck in many contexts of contemporary planning practice. The essay attempts to tease out the implications of such a perspective, with a particular focus on what this could mean for the future of the planning profession.
In the spring of 2023 Nacka municipality became the first public authority in Sweden to automate decision-making on a planning-related issue. Applicants seeking permission to install a fireplace could submit the necessary documentation digitally. The automated AI assistant “Lovisa” would then respond within minutes with a decision on whether the permit was granted, a procedure that previously took weeks when handled by municipal staff. A human staff member manually reviewed every automated decision to ensure that it had been correctly handled by the AI. After a year of reviews, however, the conclusion was that the AI had made a correct decision in 100% of cases, and the manual review was consequently deemed unnecessary. The municipality has subsequently expanded the range of automated planning permits to also include water and sanitation installations – and is planning to expand the scope much further in the coming years.
The case of Lovisa, the Swedish fireplace-permit-granting AI, may at first come across as an innocuous curiosity. But we shouldn’t be too quick to dismiss it as such. To the contrary, I suspect that Lovisa heralds an entirely new era for planning practice and education, one that will entail a fundamental reshuffling of the conditions of planning work that we need to begin to deal with right now – or, preferably, yesterday. AI, I hear someone yawning – isn’t that old news already? We have been talking about it for years, but is anything really going on in practice?
Here it is first important to clarify what I am referring to when I refer to ‘AI in planning practice’. Scholars such as Andy Karvonen, Matthew Cook, Rob Kitchin, Federico Cugurullo, and others have made important contributions that expand our understanding of how AI is currently being embedded in cities through “AI urbanism”. That is: the integration of AI technologies into the automated management of everyday urban life (see e.g. Cugurullo et al., 2024). But as noted by Othengrafen and colleagues in a recent paper (Othengrafen et al., 2025), this literature has only to a minor degree focused on the role of AI in the urban planning process. Even when narrowing the scope of interest to how AI can impact how planning work is practically performed, Peng et al. (2024) count no fewer than 881 papers published in the WoS Core Collection relating to AI and urban planning, 74% of which were published during just the three and a half years between the start of 2020 and May 2023 (another study by Cong et al., 2024 reached a similar result). Adding further fuel to critical sentiments regarding what can come across as a bit of a ‘hype’, a survey by Sanchez and colleagues (2023) of existing documented attempts at applying AI within planning practice shows that “the types of current applications are relatively sophisticated and not likely suitable for planning practice, particularly for routine planmaking activities”.
However, I would argue that this conclusion may indeed hold true with regard to the elaborate bespoke AI applications generally discussed in the existing research literature. But it does not take full account of the so-called ‘LLM revolution’ of recent years – which has brought into the public domain extremely potent general-purpose AI tools such as DeepSeek and OpenAI’s GPT models. The purpose of this essay is therefore to explain why I suspect that the full adoption of these and similar powerful tools within the context of planning practice is not only imminent but will also prove to be a decisive moment for planning practice. In this regard, I would suggest, the seemingly modest practical contribution of Lovisa in Nacka is most probably just a foreshock, a tremor foreshadowing a more cataclysmic event.
Therefore, I would suggest that planning researchers, practitioners and educators should begin reflecting now on the different ways in which they can collectively grapple with this fast-changing present, and try to figure out which theoretical resources can be drawn upon, or need to be developed, to grasp the potential implications of this emerging development for the future of the planning profession. In the remainder of this essay I will explore one such possible theoretical resource: attention economics. I will argue that we may be putting too much emphasis on the expectation that AI will offer superior artificial intelligence, and that another way of thinking about it is that it supplies abundant and relatively cheap synthetic competent attention, which can serve as at least a partial substitute for comparatively more expensive human competent attention within planning work.
The expectation expressed by public servants and politicians in Nacka is that AI will assist by relieving planners of simple, repetitive, boring work. Work that no one who has put effort into earning a university degree wants to do anyway. As put by a manager in the Nacka municipal administration: “These are simple issues. The robot should be doing robot things and humans should be doing human things. Of course, Lovisa frees up time for the staff to instead work on the more complex issues”. 1 And in the best of worlds, this is what will also happen on a wider scale with the wholesale integration of AI into planning practice.
Nevertheless, there are also other – less appealing – possibilities for how this shift may unfold at a broader scale. One reason for this is that the positive expectations rest on the assumption that the type of creative, advanced problem-solving work that human planners perform cannot be successfully delegated to a computer program. But any such assumptions are currently being at least partially undermined by recent advances in AI technology, which means that the future role of the professional planner in a world of ubiquitous AI implementation may be rather more uncertain than this. There even appears to be a manifest risk that the development will go in a completely different direction: one in which AI to some extent acts as the expert problem-solver, while the human is allocated a role more akin to that of a quality assurance officer or compliance manager.
But before delving into the details of these different possible scenarios, I must first introduce what I understand to be some of the current fundamental challenges for contemporary planning practice. Challenges that also explain why what recent innovations in AI have to offer is potentially so valuable to contemporary planning practice.
The allocation of scarce attention in contemporary urban planning practice
More than three decades ago, John Forester was among the first to suggest that to understand what goes on in planning practice we need to look closely at how attention is generated, organized and channeled in these processes (Forester, 1993). Recent work performed by me, Maria Håkansson and Jenny Lindblad takes its cue from Forester to investigate how attention is allocated in a specific type of planning context: the Swedish detailed development planning process. A conclusion from our analysis is that many Swedish urban planners feel that lack of knowledge or information is seldom a problem in their daily work. On the contrary, there is often a sensed overflow of information that needs to be considered and processed. The bottleneck is rather the capacity to deal with the innumerable issues and related pieces of information that could potentially have a bearing on a particular project.
This situation produces a need for a strict economization of attention, sifting ‘need to do’ issues and items of information from the ‘nice to do’ – that is, those which could be considered important but are not necessarily so, and which therefore tend to fall by the wayside. In our analysis we consequently arrive at the conclusion that contemporary Swedish planners generally labor under a work regime that can, somewhat tongue-in-cheek, be described as an ‘attention deficit order’.
Someone might interject and ask why we are so intent on framing this as a question of scarce attention. Isn’t it just about a structurally conditioned lack of time on the part of planners – that they are too understaffed and therefore lack the necessary time to process all the available relevant information? We would only partially agree with this, since raw time is only one of the missing components here. The question is whose time – that is, the time of people who have the necessary competence to navigate the process and the technical know-how to deal with the issues involved. ‘Time’ is therefore too blunt a concept here. The question rather concerns a scarcity of competent attention, that is, available processing power, which to some extent also involves time – but only as a partial input.
I would therefore like to suggest a tentative definition of competent attention that differentiates it from raw time. It goes something like this: we can understand competent attention as composed of necessary processing time combined with relevant competence or knowledge and necessary effort and engagement, giving us the formula: competent attention = time + competence + engagement. My claim is that all three of these properties are necessary components of competent attention. If any of them is missing, you don’t have competent attention. You may have attention without competence, or competence without attention. But neither of these will contribute to productively resolving any relevant issues in a planning process.
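The necessity claim above can be made concrete in a small toy model (my own illustration, not from any source; the numeric scales are arbitrary): the additive formula only applies once all three components are present, and the result collapses to zero if any component is absent.

```python
# Toy model of competent attention: three components, each of which is
# necessary. If any component is zero, no competent attention results,
# no matter how abundant the other two are. Units are arbitrary.

def competent_attention(time, competence, engagement):
    """Return 0 if any necessary component is missing, else their sum."""
    if min(time, competence, engagement) <= 0:
        return 0
    return time + competence + engagement

print(competent_attention(8, 0, 5))  # attention without competence -> 0
print(competent_attention(2, 3, 4))  # all three components present -> 9
```

The zero-collapse behavior is what distinguishes competent attention from raw time in the argument above: adding more hours cannot compensate for absent competence or engagement.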
The economics of attention
The management of attention as a scarce resource is the focus of the developing scholarly field of attention economics. Pioneering work in attention economics originally received fairly little attention from mainstream economists. For instance, the thinking of the Austrian cultural theorist Georg Franck (1998) had its impact mostly limited to the Germanophone humanities, and Michael Goldhaber’s (1997) and Davenport and Beck’s (2001) work was better received in popular debate than in academia. More recently, however, attention economics has become a recognized issue within agenda-setting international organizations such as the UN (Line Carpentier, 2023), and also a topic pursued by internationally prominent economic theorists (see e.g., Loewenstein and Wojtowicz, 2023). Parallel to this, a very lively strand of critical literature has also developed, including for instance Yves Citton’s (2017) important work in which he discusses the dangers of conceptualizing attention as an economic resource, and instead argues for discussing it in terms of ecologies of attention (see further also e.g. Stiegler, 2010; Terranova, 2012; Crogan and Kinsley, 2012; Pedersen et al., 2021).
An important driver behind this newfound academic and policy interest in attention economics is the insight that the global social media economy is primarily an attention economy: when entertainment is ‘free’, what you pay with is your attention, which transforms attention into a financialized commodity, or even a currency (Franck, 2019; van Krieken, 2019; Heitmayer, 2025). Still, the roots of attention economics are generally traced back to a different academic field, namely organization theory – and specifically to Herbert Simon’s Designing Organizations for an Information Rich World (Simon, 1971). This piece of scholarly work is an interesting read for many reasons. Not only is it a textbook example of elegant academic prose with regard to precision and clarity, but it is also an illustrative example of a mostly forlorn, happy-go-lucky Panglossian modernist faith in progress and radical ontological reductionism. In relation to the present context, however, I will limit myself to pulling a few key themes from the paper that I believe still stand the test of time as important insights.
The basic premise of Simon’s argument is that “in an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence a wealth of information creates a poverty of attention and a need to allocate that attention efficiently among the overabundance of information sources that might consume it” (Simon, 1971:40-41). Further, “[i]n an information-rich world, most of the cost of information is the cost incurred by the recipient. It is not enough to know how much it costs to produce and transmit information; we must also know how much it costs, in terms of scarce attention, to receive it.” (41). Therefore, Simon argues, the crucial question in an information-rich world, such as ours, is: “How can we design organizations, business firms, and government agencies to operate effectively in such a world? How can we arrange to conserve and effectively allocate the scarce attention?” (41). Phrased otherwise: how can we productively handle endemic attention scarcity in an information-rich world?
LLMs as providers of synthetic (competent?) attention in planning
In the second half of his long career, Simon (who in planning contexts is probably best known for his early work on bounded rationality) primarily occupied himself with questions regarding the possibilities and challenges of computerization.
Simon saw computers as a potentially crucial device in the economization of attention. But whether computers can fill this function depends on exactly how they are put to use. Writing on the topic in 1997, Simon enumerates some of the functions a computer can fulfill. To begin with, computers can be used for “number-crunching”, that is, to perform complex mathematical calculations that can go beyond the capacity of the human mind. Second, they can be used as a “large memory”, storing enormous amounts of information. Third, they can be utilized to transmit information, as a network of communication, an “information superhighway”. Fourth, they can be configured as bespoke systems that mimic the role of a domain expert, “capable of matching human professional-level performance in some areas” – and here he mentions aspects of medical diagnosis, engineering design, chess, and legal search. And fifth, they can function as what he refers to as a “giant brain”: “capable of thinking, problem solving, and, yes, making decisions” (Simon, 1997: 22-23).
The one application of computers that Simon sees as truly potentially revolutionary is using the computer not as a memory or conduit for communication, which multiplies information, but rather as a processor that condenses information and thereby conserves precious human attention resources. That is: using computers as a source of what I have above referred to as synthetic competent attention. This opens up questions about the current potential for using computers in general, and AI applications in particular, as sources of synthetic competent attention in planning practice – to which planners or decision-makers can delegate complex work tasks that the computer resolves through some form of ‘thinking’ and problem-solving.
With regard to the potential for AI to provide supplementary synthetic competent attention in planning processes, the existing scholarly literature contains a number of cautionary analyses. Previous research has repeatedly concluded that the practical integration of AI tools into planning has been slow and that their impact has, at best, been limited (Russo et al., 2018; Vonk et al., 2005). Reading these cautionary tales from an attention economics angle, it appears that a major challenge has been that the learning curve for these systems has generally been steep, and that usability problems have left planning practitioners with the sense that even when the applications provide enhanced analytical capacity, the investment of attention required to integrate and utilize them within existing planning practice turns out to be prohibitive.
Another limitation of previous AI systems for planning is, as noted by Othengrafen et al. (2025), that they have generally been highly task-specific, often tailor-made systems with a very narrow scope of application in mind. This makes most of them potentially helpful for very specific tasks within planning processes. But as many planning practitioners would attest, being a planner much of the time demands that you are both a generalist and a specialist, able to quickly synthesize and integrate different types of knowledges and analyses – something which the AI tools commonly described in the planning literature have not been designed for and are unable to do.
However, it appears that the scholarly literature on planning AI has yet to catch up with the rapid and dramatic development of a new type of AI tool that has the potential to be extremely easily accessible, broadly applicable and integrative – and at the same time very deeply technically competent. What I am referring to here are generative AI tools built upon so-called LLMs 2 , Large Language Models, the best known of which is probably OpenAI’s GPT family of models. What distinguishes these new tools from the specialized AI tools discussed by e.g. Othengrafen et al. is not just their capacity in a specific field of application, but rather their extreme ease of use – in that they can basically be fully controlled through natural-language prompts. That is: requests formulated in the form in which you would put them to a fellow human. Efficiently instructing an LLM to perform complex tasks can therefore be achieved through a simple chat interface and does not demand previous knowledge about AI – or even about computers at all.
There isn’t room within the context of this essay to explain in detail how the technology behind LLMs functions (for a quick introduction, see instead e.g., Kumar, 2024; Kamath et al., 2024). What nonetheless must be stressed are the serious – perhaps even monumental – ethical and political challenges associated with how these models are constructed, trained, and can be put to use. That, however, is not the particular topic of this essay – although I will be touching on some of these specific problems further below (see further also Weidinger et al., 2022; Mökander et al., 2024).
The big game-changing difference between LLMs and the older expert AI systems is their flexibility, speed and ease of use. This makes LLMs extremely easy to tinker with, opening up possibilities for creatively experimenting with new areas of application in relation to planning-related work tasks. The question is just how competent they actually are. To give one hint, the already outdated model GPT-4 reportedly passed the United States Uniform Bar Examination with flying colors, and would theoretically have been allowed to practice law in most parts of that country (Katz et al., 2024). 3 In subject areas such as physics, chemistry, biology and mathematics, the current OpenAI o3 model already surpasses the ability of the vast majority of human PhDs in these subjects. 4
But planning isn’t law, and neither is it physics – even if both those disciplines make up parts of the matter that real planning work needs to deal with in the often complex and contextually sensitive judgements that such work tends to entail. And in this regard, coming back to Lovisa in Nacka, a reader may be excused for smirking a bit when the thought crosses her mind that simple permit reviewing for minor building alterations might formally be a planning issue, but in reality it is a somewhat menial work task demanding quite a low level of competence. Real planning work, you may be excused for thinking, can never be performed by some stupid computer! Or can it?
The potential of computers ever to perform the type of qualitative judgements and complex reasoning that humans can was the central bone of contention in one of the most heated academic quarrels of the early days of AI research. On one side stood the whole AI research community, including Herbert Simon, convinced from the 1950s onwards that AI development would be a quick and steady march towards human-level thinking capabilities. On the other side stood the phenomenological philosopher Hubert Dreyfus (see e.g. Dreyfus, 1972), who in his scathing critiques of what he saw as the unfounded self-confidence of the AI crowd forcefully argued that computer technologies would never be able to emulate human thinking to any greater extent (for a review of the controversy see McCorduck, 2004).
For decades, the evidence seemed to speak primarily in Dreyfus’ favor. But with recent advances in the natural language processing and deep neural network technologies that underpin LLMs, the scales now appear to be tipping back, at least partially, in Simon and colleagues’ favor. Numerous tests of the capacity of LLMs repeatedly turn up surprising results that speak to these models’ ability to successfully perform the types of qualitative judgements and complex syntheses that Dreyfus suggested would always be reserved for human intelligence (see e.g. Centrone et al., 2024; Schuering and Schmid, 2024).
In addition to being able to make types of judgements that were previously thought to be reserved for humans, what makes LLMs truly interesting is that they are not only discriminative but also generative. That is: they do not just recognize patterns in data, but can also create new original designs on the basis of the data they have been trained on (Batty, 2025). From an attention economics point of view, however, this may actually be a mixed blessing, since the creative output must be assessed for quality – and that assessment demands attention. To draw a parallel: most of us have experience of working with people who were supposed to help us, but who made so many serious mistakes that their ‘help’ ended up generating more work than if we had done the work ourselves to begin with. A key question is therefore how well these tools can perform competent planning work in a manner that supplements, or even partially substitutes for, human attention within planning processes.
I have myself experimented with the o1 and o3 models from OpenAI by asking them to perform planning-related work tasks, including spatial and functional analyses of built environments and design proposals, asking the models for suggestions for various types of improvements to urban designs within specific budget constraints, and more. My assessment (and the assessment of colleagues) is that the results are generally very impressive and performed at a competence level somewhere between a very talented recent Master’s graduate and a junior professional. With perhaps an hour or two of trimming and quality-assurance work, the output from such models could potentially replace a few days or weeks of work for a competent planning consultant. 5
If this assessment is even remotely correct, such a development will have serious implications for the labor market of planners. This is an issue I will return to further below, but I first need to mention some of the serious stumbling blocks that lie in the way of any form of broad-scale application of LLM-based AI in planning practice, at least as a potential substitute for human planning competence. Some of these obstacles are ethical, others are practical – and some are due to the fundamental technical structure of LLMs and the lack of real understanding of how neural networks such as LLMs actually “think” (Hutson, 2024). But of all these problems I will here focus on one in particular, which has no less than a monumental impact on the potential function of LLMs as synthetic competent attention in planning work.
I can exemplify this with a personal experience. I – admittedly somewhat vainly – asked o1 to provide a presentation of my own academic work. It gladly obliged and presented a substantial summary in a few seconds. It provided quite a good and very well-written overview of my research interests. It even provided a few examples of key scholarly contributions of mine, among which was the following article:
Upon request the model provided a very succinct summary of said paper. 6 I must say that, from the looks of it, this comes across as a really interesting paper which I, on the basis of my research interests and previous work, definitely could have written. And perhaps even should have written. The problem is just that I didn’t. The truth is that I had never heard of it in my life, for the simple reason that it doesn’t exist beyond the mind, or whatever we should call it, of the o1 model (the provided DOI leads to a completely different paper). Even though I repeatedly asked the model whether the paper really exists and whether I had really written it, it insisted that – indeed – it does, and that I – indeed – have.
In the AI community, these types of sometimes highly creative but factually false claims made by LLMs are referred to as “hallucinations”, and they are a predictable error resulting from the basic technical structure of LLMs (Jones, 2025). Hallucinations appear not to be evenly distributed but occur more commonly in relation to certain types of prompts and knowledge areas (Huang et al., 2025) – and in my own experience, academic references appear to be a particular Achilles heel.
Even if hallucinations cannot be completely avoided, recent LLM models have become increasingly good at identifying and correcting their own. But the difficulty of entirely preventing LLM hallucinations also functions as a guarantee of the continued importance of human, non-synthetic competent attention. Because if LLM outputs will always need to be thoroughly fact-checked, a human subject expert may always need to be on hand who can competently identify when the model integrates factually incorrect statements into its outputs. This is perhaps particularly important in many planning-related work processes, given their heavy legal ramifications, where erroneous analyses or statements can become extremely costly.
Consequently, an LLM is most probably a more helpful tool for a properly trained expert in the field, who can competently evaluate the reliability of the results, than for the amateur or dabbler. For the expert, it will probably prove more attention-economical to let an LLM do parts of her work and then fact-check it, rather than doing it all herself. But this is of course an empirical question that will need to be explored in practice in the coming years. All of this bears upon the question of exactly how LLMs can be expected to impact the structure of work within the professional planning community, which is the question I turn to in the next section of this essay.
Implications of the LLM revolution for planning practice
Three decades ago, Herbert Simon noted that over the course of the 20th century the division of labour between humans and computers had been steadily changing – and could be expected to continue doing so for as long as technology keeps becoming more sophisticated (Simon, 1997:240). From such a perspective, the LLM revolution is but the next step, albeit possibly a great one, in a long line of technological advancements impacting the overarching organization of contemporary work life.
Questions concerning the likely impact of AI on the dynamics of the labour market currently top the agenda of international organizations such as the OECD (see e.g. OECD, 2023) and are also a current research topic for the world-leading economist Daron Acemoglu. According to his current predictions, there is a significant risk that the broad-scale introduction of AI will affect the labour market by aggravating inequalities, further tilting the balance of power between labour and capital in favour of capital (see e.g. Acemoglu, 2021). The reason for this is that the current focus of AI development is largely geared towards automation that replaces human labour, rather than towards utilizing AI to enhance the productivity of human labour.
The current state of the art in more detailed predictions regarding LLM job-market impacts is a study published last year in Science by Eloundou and colleagues (Eloundou et al., 2024). The analysis is based on the list of occupations and their associated work tasks in the US O*NET occupational database. The jobs are analyzed according to how easily the related work tasks can be either automated or made significantly more efficient through LLM-based AI. The overall analysis shows that highly qualified and generally well-paying jobs requiring graduate or postgraduate education are the most exposed to AI-driven efficiency gains (Eloundou et al., 2024:1307).
The Eloundou study doesn’t specifically address the planning profession, but it does break the data down into clusters of professions. One of the clusters expected to become most exposed to AI is “Architects and engineers” (and even more so – mind you, planning academics – researchers and educators; Eloundou et al., 2024: sup. mat. fig. S16). When I used OpenAI o3 to apply the Eloundou et al. methodology to the O*NET profile for “Urban and Regional Planners” (19-3051.00), the tentative conclusion was that around one-third of planning-related work tasks can be expected to be “exposed” to automation through LLMs. One way to interpret these predictions is that one-third of all currently professionally active planners may soon find themselves replaced by LLM-based AI – and that future labor-market demand for planners will be structurally and persistently much weaker than it currently is.
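The logic of such an exposure estimate can be sketched in a few lines of code. To be clear, this is my own illustration in the spirit of the Eloundou et al. rubric, not their exact procedure: the task list, labels and weights below are all hypothetical, and the E0/E1/E2 labels stand for no exposure, direct LLM exposure, and exposure given additional LLM-powered software, respectively.

```python
# Illustrative sketch of an Eloundou-style exposure calculation for a
# profession. Labels: "E0" = no exposure, "E1" = direct LLM exposure,
# "E2" = exposure given complementary LLM-powered software.
# The task list and labels are hypothetical, not actual O*NET data.

def exposure_share(task_labels, e2_weight=0.5):
    """Share of tasks exposed: E1 counts fully, E2 partially."""
    if not task_labels:
        return 0.0
    score = sum(1.0 if label == "E1" else e2_weight if label == "E2" else 0.0
                for label in task_labels)
    return score / len(task_labels)

planner_tasks = {
    "Review permit applications for regulatory compliance": "E1",
    "Summarize policy documents and consultation responses": "E1",
    "Facilitate public consultation meetings": "E0",
    "Negotiate with developers and stakeholders": "E0",
    "Conduct site visits and field inspections": "E0",
    "Mediate community disputes over land use": "E0",
}

share = exposure_share(list(planner_tasks.values()))
print(f"Estimated exposure share: {share:.0%}")  # prints "Estimated exposure share: 33%"
```

The point of the sketch is the aggregation step, not the labels: the headline one-third figure is simply a weighted share of an occupation's task list, so it is highly sensitive to how individual tasks are judged.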
This of course sounds ominous, but even so, the automation of certain planning tasks through LLM-based AI is not necessarily only destructive for the planning profession. It may also offer opportunities for professional development and a strengthened engagement with important but currently partially neglected issues in planning work. Because if, as suggested above, much of contemporary planning practice is blighted by an endemic sense of limiting attention scarcity, then the integration of LLM-based applications into routine planning work could perhaps contribute to alleviating some of that scarcity. A thoughtful application of LLM-based technologies could then potentially free up human planning competence currently tied up in routine tasks – and instead deploy this competence towards important aspects of planning work that are currently not receiving the amount of attention they deserve.
What type of issues could this entail? In the research I performed together with Jenny Lindblad and Maria Håkansson the issues that were most recurrently mentioned by Swedish planners as currently not receiving the attention they deserve primarily related to social inclusion and equity issues, as well as issues regarding climate change mitigation. A promising example of how LLM-based AI can be harnessed to increase the capacity to tackle such issues can be found in another municipality in the Stockholm region that has also pioneered the introduction of AI into planning practice, Huddinge.
In contrast to neighbouring Nacka, Huddinge has chosen a different path. According to the municipality’s AI policy, AI should primarily be used to enhance the capabilities of staff and must not be implemented with the purpose of automating decision-making. A task force within Huddinge’s municipal planning department noted that planners lacked a trustworthy and accessible evidence base that could be used to bring social equity issues to the table in a convincing manner within the planning process. In response to this need, “Huddingeanalysen” was developed: a statistical tool that integrates fine-grained, spatially mapped socioeconomic data with a module that also facilitates regression analyses of numerous potentially relevant variables.
Even if this work was laudable, one might object that it wasn’t in any manner revolutionary, being quite similar to other existing planning support tools. Recently, however, the municipality has further developed the system by integrating it with an LLM-based chatbot that is also trained on relevant national, regional and local policy documents. The chat interface allows users to enter extremely simple natural-language prompts instructing the system to perform instant, complex mixed-methods analyses of the local impact of socio-economic factors, as well as interpretations of the results and instantaneous suggestions for potential policy responses grounded in existing policy frameworks. This is work that would previously have taken a trained planning specialist weeks or months to perform, and which is now readily available through just a few simple keystrokes. To my mind, this is exactly the type of application that, in the best of worlds, this type of technology can be used for within the context of planning practice.
It has long been recognized that the craft of planning consists not only of the application of technical skill but also of the capacity to make situated judgements within the context of a specific political environment (see e.g. Alexander, 1981; Forester, 1989). The political dimension of planning is irreducible (Grange, 2014), and cannot be ‘solved’ through computation. The modernist-positivist pipe dream that science will one day save us from politics will never materialize, with or without AI, and it is precisely the political-judgmental aspects of planning processes that can be expected to be particularly unamenable to automation or efficiency gains through AI. A somewhat curious consequence of the automation or semi-automation of simpler planning tasks, together with the development of powerful LLM-based tools for planning analysis, is therefore that the balance between the technical and political aspects of the planning profession will likely tilt further towards the political pole.
Nonetheless, a broad-scale introduction of LLM-based AI could have potentially profound consequences also for how planners relate to the political dimension of their practice. I will here mention just two such possible developments, out of many. To begin with, LLM-based AI can be utilized as a tool to engage in politics: both to conduct analyses of existing political situations, including manifest and potential goal conflicts and power relations, and to devise possible strategies and courses of action. This can be very fruitful in assisting the crafting of creative solutions for negotiation or compromise in gridlocked political situations.
LLM-based AI can also be used as a resource for crafting convincing arguments to be applied in the political arena, which can assist planners in communicating more efficiently. It can further fill an emancipatory function, in that it can potentially help democratize the planning process by synthetically providing the competence necessary to engage in the planning arena to groups who may otherwise lack the resources or competence to do so. With the help of LLM-based AI, such groups may be able to analyze and uncover the wider implications of planning documents and decisions, and to produce arguments on behalf of those who otherwise may have difficulties expressing themselves in ways that are considered legitimate in the bureaucratic-administrative contexts of planning. But this is also a potentially dangerous development, considering that LLM models can be extremely good at producing convincing arguments for just about anything (Costello et al., 2024; OpenAI, 2025) – a potent capability that can also be deployed to intentionally manipulate and mislead.
Another political implication of LLM-based AI is that, if used to automate aspects of planning decision-making, it can very effectively obfuscate the political dimension of planning decisions. Relating back to Lovisa in Nacka, the decisions automated there are still of relatively minor societal impact. However, if the array of such automated decisions is expanded, there is a manifest risk of a creeping decision-making ‘black-boxing’ that increasingly hides from sight issues that could have become politically very controversial had they been more openly addressed. This risk is not unique to AI, but the forceful and opaque technologies behind LLM-based AI add another twist to a longstanding issue within planning practice: where the line is drawn between ‘technical’ and ‘political’ issues, and the obfuscation of the political dimension of sensitive issues by way of ‘technification’ (cf. Metzger et al., 2014). The heightened risk of naturalizing or fetishizing the always contextually determined boundary between the technical and the political realms through the introduction of AI accentuates the need for planners to develop critical capacities to question how the selection and curation of LLM training data affects output, how the models may be biased in various ways, and how certain values or assumptions may be hardwired into decision-making algorithms built on top of LLM models.
Conclusion
Only a few years ago, even the staunchest of AI enthusiasts doubted whether complex professional tasks related to urban planning could ever be successfully performed by AI. Even the grand doyen of AI within the field of urban studies, Mike Batty, wrote in 2018 that “There are some hard choices involved in producing any plan for the long-term development of the city […] and it is difficult to see the kind of design and decision-making involved in such planning being replaced by an AI. The sheer range of factors and the uncertainties involved cannot be automated using any available AI technology” (Batty, 2018:5).
However, in the few years that have passed since then, the unexpected so-called ‘LLM revolution’ has transformed the playing field dramatically. Current LLM models are already quite skilled at analyzing and integrating a wide range of information and knowledge, and can often successfully choose and apply heterogeneous frameworks of evaluation and reasoning even to fuzzily defined problems. This makes them, to some extent, able to ‘compare apples with oranges’, so to speak, with a level of ability higher than many experts in computer science would ever have thought possible.
Following these dramatic technological developments, any claim that LLMs are just ‘electronic parrots’ – only capable of spewing back in a different format what has previously been put into them, without any degree of originality, reasoning, or comprehension – comes across as dramatically spurious. To me, skeptics who make such claims bring to mind the Swedish minister of communication who in 1996 dismissed the Internet as “just a fad” that would soon pass.
I am myself far from an AI expert and only have a modest amount of experience working hands-on with LLM prompting. I am also a self-confessed technoskeptic. But nonetheless, I must say that I find the experience of dabbling with high-powered LLMs such as o1 and o3 in equal parts astonishing and disconcerting. Unless something dramatically changes in the coming months, it can be expected that LLMs sooner rather than later will come to have a truly disruptive impact on how planning is practiced. I am therefore convinced that it is only at our own peril that we underestimate the potentially major structural implications that AI in general and generative LLM-type models in particular will have on the organization of work within planning practice. Therefore, this is the time when planning practitioners and educators need to wake up and smell the proverbial coffee.
With the rapidly developing capacity of LLM-based models, the boundary between the work tasks that only humans can perform and those that can be efficiently outsourced to the synthetic attention of LLMs is shifting rapidly – and will most likely continue to do so in the years to come. With the coming broad-scale introduction of LLMs into planning practice, the current division of labor between human experts and computers can in any case be expected to shift in rather dramatic ways. In the rosiest of futures, each currently employed planner could soon find around one-third of their currently tied-up working capacity freed to invest in issues that at present cannot be awarded the attention they deserve. Or, in a somewhat less attractive trajectory: the equivalent of one-third of the current workforce of planners could soon find themselves without a job, because their work tasks have been taken over by LLM-based AI.
The most probable development likely lies somewhere in between these two extreme poles. And a key question here is not only the extent to which LLM-based AI is introduced into planning practice, but also in which ways this is done. In the examples of Nacka and Huddinge we find two very different ways to approach this shift. On the one hand, the automation of simpler planning tasks – as in the case of Nacka – can help free up scarce human resources of competent attention, enabling these resources to instead be allocated towards more challenging but pressing issues. On the other hand, the enhancement of human analytical capabilities – as in the case of Huddinge – may more directly impact the productivity of human competence, and through this contribute to improving planning work.
Both of these development directions can, in different ways, contribute to increasing the quality and efficiency of planning work. Even so, well-informed commentators such as Daron Acemoglu have warned specifically against allowing the development of AI to tilt too far towards the automation side, since this risks leading to societally destructive effects. Acemoglu therefore urges professional and labor organizations to mobilize to ensure that AI is introduced into their fields of activity in ways that promote the skill development and productivity of human workers, rather than replacing them. A key question at this point is therefore how the planning profession will respond to the emerging developments. According to Acemoglu, this is the time when professionals such as planners need to engage by exploring and debating which of their work tasks can and should be automated, and which should not – not only from the perspective of the capacity of the emerging technologies, but also with a view towards professional and societal ethics.
Nevertheless, there are many aspects of planning work that AI will never be able to accomplish, and these relate particularly to the competence for nuanced and complex situated judgement needed to navigate the political side of planning work. It therefore seems probable that once the dust settles, even if AI experts will have become commonplace in planning offices, many of the remaining planning jobs will be clearly tilted towards the strategic and political dimensions of planning.
In any case, we can expect a near future in which the everyday work situation of practicing planners looks radically different from today. Some planning work tasks (and planning jobs) will become completely automated and delegated to LLMs providing synthetic attention, while others will most probably be awarded more human attention, which may itself be dramatically enhanced with the help of AI capabilities. The planning community therefore needs to prepare its students – the future professionals – to work productively with generative LLM-based AI as a resource providing competent synthetic attention. In this way, they can learn the skills to actively contribute to shaping the premises on which this technology is introduced into planning practice, thus ensuring that this powerful tool serves to strengthen the profession rather than weaken it.
Footnotes
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The research upon which this essay was based was funded by the Formas research council grant #2020-01277.