Abstract
This article considers the extent to which chatbots based on large language models could reduce unmet demand for services offered by Australian community legal centres. It argues that while chatbots developed to streamline intake processes and assist lawyers with research and document preparation are unlikely to significantly reduce Australia’s justice gap, client-facing legal information chatbots have the potential to do so. However, realising this potential involves solving numerous problems, including uncertainty about how Australia’s Legal Profession Uniform Law applies to autonomous software systems and the risk that hallucinations of large language models will mislead clients.
Legal scholars and practitioners have not reacted uniformly to the growing capabilities of large language models (LLMs). 1 What some see as technologies driving much-needed change to the way legal services are delivered, 2 others have characterised as ‘artificially inflated’. 3 Whatever your outlook, it is difficult not to be intrigued by a vision promoted by some legal technologists: a world in which anyone can sit at a computer and ascertain their legal rights and obligations without waiting months for legal aid – assuming they meet the stringent eligibility criteria – or selling their car to finance legal services. 4 It is often said that ‘justice is open to all – like the Ritz Hotel’ 5 but perhaps models like Google’s Gemini and OpenAI’s GPT-4 could ensure that justice really is open to all – like a public library.
Australian community legal centres (CLCs) – non-government organisations that provide legal assistance to individuals who cannot afford to engage a lawyer on commercial terms – are among the organisations that could plausibly benefit from the growing capabilities of LLMs. Given their inability to meet national demand for assistance, 6 any form of technology that could increase their efficiency, reduce their costs, or directly help their clients is worth investigating.
Against the backdrop of technological progress and unmet demand for legal assistance, I will consider whether chatbots based on LLMs – as distinct from traditional chatbots based on pre-written scripts and inference engines, which have not been widely used in the Australian community law sector – could help CLCs increase access to justice by creating cost savings, helping frontline workers get more done, or enabling clients to discover their legal rights and obligations. I will begin with a few words on Australia’s ‘justice gap’ – unmet demand for legal assistance – and then move to what has already been said about how technology can help address this problem. I will then discuss opportunities promised by three kinds of applications – which I describe as ‘client-facing legal information chatbots’, ‘client-facing intake chatbots’, and ‘lawyer-facing chatbots’ – and identify barriers to realising these opportunities.
I argue that LLM-based chatbots could play an important role in reducing Australia’s justice gap, although the opportunities at hand are smaller than many expect while the barriers to harnessing them are substantial. It is hoped that this article will clarify the nature of these opportunities and barriers, and perhaps even help individuals and organisations working to increase access to justice make informed decisions about how to harness the growing capabilities of LLMs.
Australia’s justice gap
While government funding for Australia’s CLCs has increased since the infamous ‘funding cliff’ of 2017, 7 the sector remains under-resourced. Community Legal Centres Australia estimated in 2023 that ‘80 people a week are turned away from centres in each electorate’, 8 and the World Justice Project found in 2019 that 67 per cent of people with a legal problem in Australia were unable to access assistance. 9 A closely related issue is that not everyone who would benefit from legal assistance seeks it. 10 These are not uniquely Australian problems – Australia is one of many countries with a well-documented ‘justice gap’. 11
Government funding for legal aid has not typically been based on any ‘empirically grounded, detailed needs assessment’ but on Australia’s ‘fiscal outlook’. 12 If this remains the case, the supply deficit will likely persist.
The promise of large language models
A great deal has been said about how technology could reduce Australia’s justice gap. Among the best-known contributions is the FLIP report, which pointed to ‘many ways that technology can facilitate access to justice’ 13 and identified projects intended to drive innovation (although it did not expressly address the capabilities of LLMs). 14 Another important contribution is Sam and Pearson’s empirical study cataloguing the kinds of technologies used to increase the efficiency of Queensland CLCs. 15
More recently, scholars have turned to the capabilities of chatbots based on LLMs, which are computational systems that generate text based on structures learned from vast amounts of training data. 16 Unlike traditional chatbots, which respond to user inputs using pre-written scripts, LLM-based chatbots can generate novel and human-like responses. It is often said that these systems ‘understand’ language, 17 but it is more precise to say that they probabilistically determine which ‘token’ – roughly, a word or word fragment – comes next in a sequence of text generated in response to a user prompt.
While some early users of LLM-based chatbots like ChatGPT were principally enthused by the prospect of generating poetry about their pets in the style of Wordsworth, others have focused on their potential use in legal settings. 18 Queudot and colleagues, for example, have developed and evaluated chatbots designed to provide information on immigration and employment law in Canada, pointing to the significant long-term potential of these technologies as both client-facing and lawyer-facing support tools. 19 More recently, Nay and colleagues compared the capabilities of LLMs and United States (US) tax attorneys, finding that models ‘demonstrate emerging legal understanding capabilities, with improved performance in each subsequent OpenAI model release’. 20 Among the most optimistic scholars is Charlotin, who argues that ‘[c]onversational-based LLMs are also likely to improve access to justice: chatbots or virtual assistants can help people navigate the legal system, answer their legal questions, and provide them with legal advice’. 21
However, it remains unclear whether the growing capabilities of LLMs could benefit Australian CLCs, which are among the organisations working to close the justice gap. The remainder of the article will consider how these organisations might harness technological progress in service of community members who cannot afford to engage lawyers on commercial terms.
LLM-based chatbots and Australian community legal centres
Client-facing legal information chatbots
One kind of application that could help CLCs increase access to justice is client-facing legal information chatbots, which I define as LLM-based applications capable of helping users understand their rights and duties by generating information in response to prompts entered into a chatbot interface. An example is OpenAI’s ChatGPT, although this application is not optimised for providing legal information. 22 A more specialised chatbot is Ailira (Artificially Intelligent Legal Information Research Assistant), 23 which elicits data from users and generates relevant legal information on succession law.
While CLCs are not the only organisations that could develop or operate versions of Ailira, they are particularly well-placed to do so. For one thing, they understand their clients – the potential users of chatbots they develop or operate – remarkably well and could optimise the technology accordingly. For another, a chatbot deployed by a community legal centre would form part of a patchwork of services, which means that its inability to solve a problem would not signify the end of the road for the client – from the point at which the system reaches its limits, the user could be referred to a lawyer at that centre. This is not true of legal technology start-ups that do not employ lawyers.
Little imagination is needed to recognise the potential benefits of chatbots, operated by one or more CLCs, that are capable of providing jurisdiction-sensitive legal information directly to clients. 24 They could plausibly address demand for assistance by providing a viable alternative to advisory sessions facilitated by lawyers. Chatbots could also increase the number of individuals who seek legal assistance by decreasing friction associated with obtaining advice from a community lawyer (such as long telephone wait times, clashes between work commitments and appointments, and a reluctance to verbalise legal problems). The best-case scenario is that individuals could, at any time, visit the website of their local community legal centre and obtain useful and relevant information in seconds.
However, several challenges cast a heavy shadow over the opportunity at hand. The first is the feasibility of developing and maintaining LLM-based chatbots in a sector that has very little financial and human capital to spare. A cost-prohibitive option is building an LLM from scratch – a bespoke model with billions of parameters might be accurate, but the training process could cost millions of dollars. 25 It would be more feasible to use an existing model, like OpenAI’s GPT-4, and fine-tune it on relevant legal data. Still, building and testing applications requires a significant amount of expertise. 26 The challenge not only lies in building a workable product, but in ensuring that it is governed adequately (for example, privacy and data protection risks must be managed with great care, particularly if a given system can receive sensitive materials such as evidentiary photographs). Even the rollout of a relatively simple database tool by the National Association of Community Legal Centres was ‘fraught with issues and delays’, 27 raising doubts about whether this work could feasibly be undertaken by CLCs or their peak body.
The second challenge is the difficulty of ensuring the accuracy of applications based on LLMs. It is now widely recognised that LLMs ‘hallucinate’, meaning that they ‘produce seemingly credible but incorrect responses’. 28 These systems could misstate the proposition for which a case stands, or even make up authorities. 29 In perhaps the most widely publicised example of this problem to date, a US lawyer ‘cited’ non-existent cases fabricated by ChatGPT. 30 Even diagnostic tools, such as Justice Connect’s impressive model, have not yet surpassed 90 per cent accuracy, 31 raising doubts about the capabilities of more ambitious applications. In jurisdictions with larger training datasets and significantly more investment in the development of legal technology – the US, for instance – it might well be possible to achieve greater accuracy, but hallucinations cannot be eliminated. 32
Some might argue that the risk of misinformation is acceptable in cases where the application purports to provide only ‘information’, not ‘advice’. Yet as Tan and colleagues argue, when it comes to legal information, ‘accuracy above the “pass mark” is not enough because legal self-help tools will serve a wide range of lay people, and every minor information error may potentially lead to harmful decisions being made’. 33
In any case, the distinction between ‘advice’ and ‘information’ is superficial in this context. It is obvious that static information on a web page was not prepared specifically for the reader and should not be relied upon to the same extent as advice from a lawyer. However, information produced by LLM-based chatbots has numerous hallmarks of legal advice – for example, it is curated in response to the question of the user rather than for a general audience 34 – and might reasonably be interpreted accordingly.
The third challenge is that the quality of outputs produced by LLM-based chatbots depends on the quality of the prompts – inputs – provided by the user. Yet clients who ‘utilise legal self-help tools may lack a legal background and therefore have difficulty articulating their situation clearly’; 35 indeed, ‘[i]n some cases, they may not even be sure what type of legal information they need’. 36 This means that even if the model were trained on reliable data, there is no guarantee that it will generate relevant information. This issue can be understood in terms of Meno’s paradox: a person will obtain better results from an LLM-based chatbot if they know roughly what they are looking for (the more legal knowledge the user has, the more precisely they can frame their legal issues) yet if they knew what they were looking for, then their need for the application would be diminished. 37 While this problem can be partly addressed by eliciting information from users before inviting them to formulate their own questions, it cannot be completely solved: structured multiple-choice screening questions could be answered incorrectly.
Finally, it is not clear whether LLM-based chatbots that generate legal content would be compliant with Australia’s Legal Profession Uniform Law. One issue is whether autonomous software agents – or non-lawyers who develop or operate them – can be said to have committed the offence of unqualified legal practice. 38 While there is no direct authority on the point in Australia, claims that legal technology companies have engaged in the ‘unauthorised practice of law’ – the US version of the offence – have succeeded in some instances. 39 Another issue concerns legal professional privilege. 40 It is worth asking whether the decision to upload a document into a system amounts to an inadvertent waiver of privilege in cases where the application is powered by a third-party model. Some commentators have suggested so. 41 My aim here is not to resolve these issues but to highlight legal uncertainties that may function as a handbrake on the deployment of client-facing legal information chatbots.
Offering free legal information through a chatbot is an aspiration worth pursuing; well-designed systems could increase preparedness to seek legal assistance and provide an efficient way of discovering legal rights and obligations. However, even assuming that CLCs could obtain the resourcing needed to create and safely operate client-facing legal information chatbots, risks like misinformation and illegality may prove difficult to manage.
Client-facing intake chatbots
Most technology deployed in CLCs has been used to support general organisational needs rather than execute legal tasks. 42 Systems that have become commonplace include ‘practice management software, cloud computing and automated data collection’. 43 Perhaps LLM-based chatbots will conform to this trend by helping CLCs perform non-legal tasks, like capturing client data.
There are already numerous LLM-based client-facing intake chatbots being used in the commercial law setting to speed up client intake processes. Settify, for instance, enables law firms to obtain data from prospective clients using an interactive chatbot rather than a passive form, providing a greater chance that all background information required from a client is obtained during their first contact with the legal practice. 44 Another example is Justice Connect’s AI model, which helps users to classify their legal problems and directs them to relevant resources. 45 But what opportunities do these systems offer CLCs?
Some might argue that automating intake processes using LLM-based chatbots would create significant cost savings and enable CLCs to allocate a higher proportion of their revenue to the provision of legal advice and representation. However, it is not clear that intake processes generate significant costs in the first place. While we cannot infer from publicly available data what percentage of operating revenue is spent on intake processes – financial reports of CLCs do not disaggregate wages into legal and non-legal staff costs – it is widely reported that CLCs rely significantly on free labour. 46 The potential cost savings are limited to the salaries of supervisors who train and oversee intake volunteers and the cost of equipment used by volunteers (such as office space, computers and software licences).
The opportunity at hand should also be understood in light of the capabilities of chatbots that are not based on LLMs. Consider, for example, NALA (New Age Legal Assistant), which is used by Marrickville Legal Centre to ‘streamline first contact with clients at intake, gathering vital information needed before consultation’. 47 NALA is a rule-based chatbot, 48 relying on predefined scripts to handle client interactions. There is no doubt that a well-functioning LLM could outperform NALA at certain tasks. At the time of writing, NALA does not appear to engage intelligently with inputs provided by users: if it asks for an address and the user responds with ‘Test’, it does not recognise this as an invalid address, nor can it classify the tone of the input and generate a seemingly empathetic response. Still, if systems in use can achieve most of the benefits of emerging LLM-based systems, we might wonder whether it is worth investing in an upgrade.
Let us assume, however, that it could be shown that a client-facing intake chatbot would, contrary to my suggestions, reduce intake costs significantly and enable an increase in expenditure on the provision of legal advice. Which barriers would we have to overcome when attempting to deploy the technology?
The first is digital exclusion. In order to achieve efficiency gains, client-facing intake chatbots would have to replace – not merely augment – existing processes, which entails forcing clients to use them. 49 Some studies provide a basis for being sceptical that mandating engagement with LLM-based tools would have an exclusionary effect. In his latest book on legal innovation, Susskind refers to two relevant findings of Oxford University’s Internet Institute: first, that 95 per cent of the British population are internet users; and second, that just 1 in 30 do not know someone who can help them access the internet. 50
However, even assuming that the findings would be replicated if the Internet Institute’s study were run in Australia, we should refrain from taking them as a licence to automate intake processes. Clients who rely on CLCs for support might be in the ‘1 in 30’ to which Susskind refers – it is well documented that some clients of CLCs live in regional areas with poor internet connections. 51 Further, access to the internet is only part of the problem. As a recent study on the emergence of a ‘digital underclass’ in both Great Britain and Sweden notes, the causes of digital exclusion extend beyond incapacity, encompassing a broad range of motivational and cultural factors. 52 It is important to remember that some clients, even if they are competent internet users, may have had adverse experiences with technology. For instance, the recent Robodebt 53 scandal may have instilled anti-technology attitudes among some Australian residents. Additionally, Australian research that considers digital inclusion holistically – looking beyond internet access to questions such as whether individuals can make effective use of the internet – has suggested that almost 10 per cent of Australians are digitally excluded. 54 In sum, there seems to be tension between the well-documented capacity for technology to entrench inequality, 55 and the mission of CLCs to support individuals who, in many cases, are disadvantaged and excluded.
Another concern is whether compelling individuals to use digital systems is compatible with legal ethics. Australian solicitors have a duty to ‘act in the best interests of a client’ and ‘avoid any compromise to their integrity and professional independence’. 56 Delegating intake functions to a machine could constitute a breach of these standards. Courts, on numerous occasions, have emphasised the importance of taking instructions with care, including through questionnaire forms. 57 Given the well-known limitations of LLM-based chatbots, as well as the complex needs of some clients of CLCs, outsourcing intake functions to a machine may clash with ethical duties and perhaps even amount to professional negligence. 58
Yet another potential barrier worth considering is the difficulty of building and maintaining intake applications. Just as laws change, so too do intake criteria that must be incorporated into client-facing intake chatbots (such as salary caps, geographic restrictions, matters on which the centre offers advice, and disadvantage metrics). The instability of these factors means that the application would have to be updated regularly. Further, systems would ideally be available in multiple languages – not all clients of CLCs are proficient in English – which adds to the complexity of developing and maintaining them.
Finally, it is worth contemplating whether CLCs would want to take away opportunities for volunteers to experience the law in action. Volunteer experience can inspire the next generation of community lawyers, and this should be factored into decisions to automate the work they presently perform.
Client-facing intake chatbots may enable only marginal efficiency gains, yet they could pose significant challenges.
Lawyer-facing chatbots
The third kind of application that could help CLCs increase access to justice – this time by boosting the productivity of legal practitioners – is lawyer-facing chatbots, which I define as LLM-based systems that lawyers interact with through a chatbot interface in order to accomplish legal tasks more efficiently. These applications can accomplish many kinds of tasks, but this article focuses on two potential uses: document preparation (encompassing both generation and modification) and research support.
There are numerous LLM-based document preparation applications on the market. Ivo, which is built on OpenAI’s GPT-4, promises to ‘reduce the time, effort, and cost spent towards negotiating agreements’ 59 by assessing those agreements against user-provided criteria and recommending changes. 60 Another example is LegalRobot, which can simplify legal text and compare different versions of documents. 61 But how useful would these applications be to CLCs?
It is not obvious that LLM-based document preparation applications would offer material advantages over tools already in use within CLCs (such as Clio 62 and Actionstep, which offer traditional template-based document preparation features). 63 The overwhelming majority of services provided through CLCs take the form of advisory sessions, 64 not representation, which reduces the volume of documents that need to be produced (community lawyers draft statements of claim less frequently than commercial litigators). If document production is only a small part of what CLCs do, we can reasonably assume that the opportunity for efficiency gains is small.
Another potential use for lawyer-facing chatbots is carrying out research tasks, such as providing an overview of a particular area of law, identifying relevant authorities and summarising legal materials. Products developed for this purpose include Airlie (a research tool built from scratch by law firm Allens) 65 and CoCounsel (a GPT-4-based tool built by a Thomson Reuters subsidiary). 66
However, it is not clear whether this technology would materially increase the efficiency of community lawyers. For the most part, they are competent, well-trained professionals who have an excellent working knowledge of their areas of practice. Further, given that CLCs generally aim to help as many clients as possible, CLC lawyers only spend a small percentage of their time working on research-intensive projects (eg, scouring case law for obiter dicta that might pave the way to a novel doctrine). 67 These examples point to a relatively limited scope of opportunity for legal chatbots to increase efficiency.
Let us assume, however, that community lawyers could accomplish certain tasks faster when using lawyer-facing chatbots. Are there any barriers to rolling out these systems in CLCs? As noted already, LLMs can hallucinate, creating the risk of misinformation. Some might argue that this is not a problem because lawyers are ‘better equipped to spot errors in the AI-generated legal contents’ than clients. 68 However, as the outputs of LLM-based systems are the product of millions of non-linear computations, they are difficult to explain using concepts that can be held in the human mind (the ‘explainability problem’). 69 This means that lawyers are faced with a choice: they can either hope that the outputs of the system are accurate (an assumption that could constitute professional negligence) or spend a significant amount of time verifying them, which may negate efficiency gains. The risk of hallucination together with the explainability problem may be persistent barriers to deriving benefit from lawyer-facing chatbots.
Conclusion
While it is doubtful that client-facing intake chatbots and lawyer-facing chatbots will help CLCs significantly reduce Australia’s justice gap, client-facing legal information chatbots have greater potential to do so. However, there are numerous barriers to developing and deploying these systems within the community law sector, including the risk of misinformation, small legal innovation budgets and uncertainty about how Australia’s Legal Profession Uniform Law applies to autonomous software systems.
Footnotes
Acknowledgment
I am indebted to José-Miguel Bello y Villarino, Kimberlee Weatherall, Ben Mostyn, Simon Rice, Nick Manning and Chanel Tattler for numerous thought-provoking conversations about legal technology and access to justice; and to the Alternative Law Journal’s editorial team and peer reviewers for their valuable feedback on the manuscript.
Declaration of conflicting interests
The author(s) declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
