Abstract
General principles of contract law govern contractual relationships. However, technology is not only changing legal acts – it is revolutionizing the way people and machines act and communicate. Therefore, it is relevant to ask whether technology is also transforming the foundations of contract law, particularly its general principles. In this paper, I will discuss the use of AI models in contracting and explore the questions their use might raise in light of the principle of good faith and fair dealing. Good faith has traditionally guided the contractual behaviour of parties and has an established meaning across various legal instruments and practices, both nationally and internationally. In the age of AI and automated contracting, it becomes important to ask what role such behavioural norms should play when assessing the conduct of machines.
Introduction
Electronic contracts and communication by electronic means have become widespread in contracting. Electronic communications have influenced contract law doctrine for years. For example, in 1996, the United Nations Commission on International Trade Law (UNCITRAL) published the Model Law on Electronic Commerce, 1 aiming to set international standards and principles for practices in electronic contracting.
Going a step further from electronic communications, technological development has enabled businesses to move towards contract digitization and digital contract management methods. In recent years, artificial intelligence (AI) technology has been at the forefront of these developments. The increased amount of data on business activities, coupled with the rapid advancement of AI technologies for processing vast amounts of data, has created new possibilities for contracting. For example, these developments have enabled AI-based systems and models to be used in contract negotiations. Jim Shaughnessy, Chief Legal Officer at DocuSign, writes that future developments may lead in a direction where contracts transform into digital assets that act in connection with real life. Such digital assets could then provide real-time information for use in contract negotiations. 2
AI systems not only open new avenues for electronic communication between parties but also give ample opportunity to automate contracting, document drafting and contract management. In essence, the use of AI enables the processing of large amounts of contract-related data in a fast and reliable manner. This can reduce the amount of work traditionally performed by humans, especially document review. Algorithms can identify issues or terms that are relevant for a particular contract or a particular party. AI systems can be further trained on data collected from contracts and contract negotiations, improving their accuracy over time.
Systems based on AI algorithms bring multiple benefits compared to human-led contract drafting and management. However, the use of AI systems also raises questions about possible risks. For example, the accuracy that an AI system can deliver in practice depends on the quality of the training data, and there have been many discussions on the issues and effects caused by biased training data. 3 The challenge may not only be to adequately train the AI but also to implement the AI model. A lot depends on humans and their ability to use AI systems. Humans might be needed not only to oversee the quality of AI systems’ tasks and outcomes but also to contribute their expertise alongside the AI. 4
Socio-legal method on law and technology
In this paper, the aim is first to discuss how the general contract law principle of good faith and fair dealing, and its applicability to AI-based contracting, has been viewed in the academic literature. In the second part of the paper, the aim is to assess how the good faith and fair dealing principle and its behavioural standards could be integrated into AI models and systems.
The author approaches the subject both through an examination of existing law and its principles and through socio-legal research. The doctrinal method is used to examine the doctrine of contract law, with the focus being on the principle of good faith and fair dealing. In considering the application of this traditional principle to AI, the author seeks to examine its relationship to technology as a phenomenon rather than as an interpretation of the norms applicable to AI in a particular case.
The author uses the socio-legal method to understand the technological revolution and its implications for legal tradition through a broader perspective than simply observing how the regulatory environment is developing to meet the needs of new technology. My aim is to observe how technology changes and affects human behaviour and whether this behavioural change impacts traditional contract law and its general principles. The law should be seen as one functional part of a bigger picture in society, not as a single player.
The socio-legal method makes it possible to examine traditional and fundamental contract law principles in the new era of the global digital world. The impact of digitalization is not limited to the legal profession but brings law closer to all other professions. Legal tech start-ups have backgrounds in different professions and are not just creations of lawyers. This means that legal questions, which before were asked only by lawyers, are now being asked and solved by entrepreneurs and governments, with a different focus than in a lawyers-only setting. In principle, it is a good development that legal services are coming closer to clients and are developed with the client experience in mind. However, so long as we have a legal system, and to the extent that such a system represents the values that society is willing to promote, it is necessary for the law and the legal system as a whole to be examined and reorganized by legal scholarship in connection with other disciplines.
When it comes to contract law, the implication is that there is a need to evaluate which parts of the legal system are necessary to serve its very purpose. Those essential parts or elements should not be buried under the technological revolution or in any way adversely affected by technological developments. This necessitates that scholars take steps to better understand the changes that technology is bringing to the legal system and whether these changes serve the objectives of the legal system. 5
Legal scholars should define the questions they are asking in order to identify what kinds of solutions they are facing and why. New interpretative questions challenge the existing legal system, for example, in situations where AI creates content that was previously created by humans and protected by copyright. The role of traditional legal institutions is changing, and by the same token, the paradigms of the old legal structures are being challenged and put to the test. 6 This article is an attempt to shed light on some of these challenging interactions between law and technology and, more specifically, between contract law and AI.
AI and the contract law tradition
Definition of AI
AI does not have a common and standardized definition in legal research. For example, AI can be defined as ‘a series of different kinds of general purpose digital technologies modelling functioning and reasoning of human brains’. 7
Another definition of AI in legal research is that AI is ‘…a human artefact. Artificial yet so real. It replicates and represents an augmented version of intelligence and sentience, attributes that signify a human being. Simultaneously amalgamating itself with humans through digital identities and the futuristic human-machine interface.’ 8
AI systems can be complex, as they do not necessarily consist of a single program or component but can be delivered by several different suppliers. 10 An AI system can therefore be either a product or a service. The logic used by the system may not be transparent, and the more autonomous the system becomes, the more difficult it may be to understand the reasoning behind its outcomes (the so-called ‘black-box’ problem). As a result, AI systems cannot be considered a single coherent set.
Regulation (EU) 2024/1689 (better known as the EU AI Act), 11 which is the first wide-ranging legal framework in the world for regulating AI systems within the European Union (EU), provides a legal definition of AI systems in the context of EU law. However, defining AI systems in the regulatory process was not an easy task. The Commission gave its proposal on the definition of AI systems, 12 and the European Parliament, in turn, amended the definition. 13 The adopted version of the EU AI Act defines an AI system in Article 3(1) as follows:
‘AI system’ means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
In the context of this paper, AI and AI models and systems should be understood widely, as covering all solutions that use any form of AI in their programming either to assist or to automate processes related to contracting, including communication between parties and contract management.
Contract law tradition and technology
Traditionally, law governs relationships between natural persons (individuals) and legal entities, holding accountable the individuals responsible for the actions of those entities. In the context of technology and AI, the question is whether law also regulates relationships between humans and AI. 14 In general, contract law doctrine, in both civil and common law, is seen as quite compatible with the use of technology, data or AI in contracts and contracting. 15 As long as AI tools are used to assist humans, for example, where an AI tool drafts a legal document that is further modified in negotiations by the human parties, it is relatively easy to say that current legal doctrine recognizes electronic contracting and that an agreement has been negotiated and concluded following contract law tradition. However, when contracts and negotiations are automated, it is not so clear what the intention of the parties is and whether it corresponds to the outcome of the agreement concluded by an algorithm.
AI systems used in the area of contracts could also raise the question of whether the AI is acting as an individual actor or as an agent. AI that is clearly in an assisting role, while humans are responsible for leading negotiations and drafting final contract terms, should not have any status beyond that of other software used in the process. However, when an AI model acts more autonomously and automatically, it is submitted here that the AI may become an agent within the meaning of contract law rules. Being an agent entails that the AI has the ability to act intentionally. 16 Thus, an AI that acts as an agent would be able to make independent decisions on behalf of the party using the AI system in contracting.
Principle of good faith and fair dealing
In this paper, the principle of good faith and fair dealing is mainly discussed in the meaning of a behavioural rule for parties to act during the contract negotiation phase.
The principle of good faith appeared in Roman times to set rules for sales. Today, the principle of good faith is a recognized doctrine covering contractual relationships in national, international and transnational legal systems, and it is found in both civil law and common law jurisdictions. The principle of good faith has long roots in contract law doctrine, but that does not mean that its meaning is clear. Good faith is open to many interpretations, and therefore its definition is still somewhat vague, leaving the precise requirements it imposes on the parties open. However, the doctrine of good faith has a generally accepted purpose of securing the expectations of the parties, and it therefore sets standards for cooperation. 17 Some examples of how open a norm good faith is can be deduced from comparative law studies. For example, ‘good faith and fair dealing’ is translated from English into French simply as ‘good faith’. In German law, the terminology differs depending on whether good faith refers to contract interpretation or to someone's mistaken belief. The concept of good faith is not limited to contract law either: there are, for example, jurisdictions where good faith can be applied in most areas of private law. 18
The UNIDROIT Principles of International Commercial Contracts 2016 (UNIDROIT Principles) provide a general duty to act in accordance with good faith and fair dealing. The Introduction to the 1994 UNIDROIT Principles states that the Principles are intended to be flexible with regard to technological development and constantly changing circumstances. The aim of the UNIDROIT Principles is to ensure that fairness is promoted in the changing circumstances affecting commercial relations by imposing on the parties the duty to act in good faith. 19
The general duty to act in accordance with good faith and fair dealing is provided in Article 1.7 in the UNIDROIT Principles:
(1) Each party must act in accordance with good faith and fair dealing in international trade. (2) The parties may not exclude or limit this duty.
The first paragraph of Article 1.7 sets a clear duty to act in good faith, starting from the negotiations and continuing for the duration of the contract. Article 1.7 is formulated as a mandatory provision. More precisely, although the parties are free to negotiate on the use of the UNIDROIT Principles and to modify or vary their default provisions, the good faith and fair dealing provision is seen as of such great importance that the parties are barred from excluding it. However, even though the UNIDROIT Principles set an unequivocal duty to act in good faith, they at the same time underline that this duty should be understood in the context of international trade. Thus, the concept or concrete meaning of the duty to act in good faith should not be understood, interpreted or applied by using criteria taken from any specific national law. 20
Defining bad faith can help to concretize what it means to act in good faith. The UNIDROIT Principles Article 2.1.15 is concerned with negotiations in bad faith:
A party is free to negotiate and is not liable for failure to reach an agreement. However, a party who negotiates or breaks off negotiations in bad faith is liable for the losses caused to the other party. It is bad faith, in particular, for a party to enter into or continue negotiations when intending not to reach an agreement with the other party.
The Commentary on Article 2.1.15 gives further examples and illustrations of bad faith behaviour. Thus, a party negotiates in bad faith if it, either on purpose or through negligence, misleads the other party by giving false facts or hiding facts in order to conclude the contract. 21
The principle of good faith in the context of international sales of goods can also be found in the United Nations Convention on Contracts for the International Sale of Goods (CISG) jurisprudence. Article 7(1) of the CISG states as follows:

In the interpretation of this Convention, regard is to be had to its international character and to the need to promote uniformity in its application and the observance of good faith in international trade. 22

Good faith considerations are also reflected in Article 29(2) of the CISG:

[A] contract in writing which contains a provision requiring any modification or termination by agreement to be in writing may not be otherwise modified or terminated by agreement. However, a party may be precluded by his conduct from asserting such a provision to the extent that the other party has relied on that conduct. 23
In the CISG context, it seems that good faith and fair dealing forms a guiding principle whose content depends on the actual circumstances of each individual case. 24 The court will examine the circumstances on a case-by-case basis and decide if, for example, a contractual term ought to be held invalid under the principle of good faith. As an example of interpreting the good faith and fair dealing principle in the context of the CISG, the court in Audiencia Provincial de Navarra 25 stated that the principle of good faith means that the content of the contract should correspond to the reasonable expectations of the parties.
Evaluating behaviour of AI
In this part, we shall discuss how the behavioural rules established by the principle of good faith and fair dealing might be viewed in the light of AI models and systems used in contracting. In this context, it becomes necessary to discuss the said behavioural rules irrespective of whether the AI model is designed to act as an assistant or to act autonomously, like an agent or even as an individual machine-based party.
Objective and subjective good faith – intention of AI
Good faith can be divided into objective good faith and subjective good faith. Objective good faith refers to the standard of acting in good faith, while subjective good faith relates to the information available to the parties. In civil law doctrine, the behavioural standard of acting in good faith refers to acts that show honesty and faithfulness. The behavioural standard of good faith thus mainly covers the actions of a party's inner mind and intention, and objective good faith can therefore be linked to the parties’ intention. Subjective good faith is closely connected to transparency, which covers the use of AI, the data that the AI uses and the transparency of algorithms, including information on how the AI transforms the data it uses into the outcomes it presents.
In the case of autonomous AI, a system may be programmed in such a way that it interprets the goal and purpose of a party. On that basis, the AI system makes independent interpretations of what the agreement should aim to achieve in order to fulfil that interpreted goal and purpose, and possibly even extends it, for example, to enable business opportunities that the party itself has not been able to identify and thus has not intentionally pursued.
An intention to be bound by a contract is one of the key elements of a legally binding contract in many legal systems. Thus, contracting parties need to have the necessary understanding and capability to enter into an agreement in order for that agreement to become an enforceable contract under the applicable contract law. 26 In civil law countries, the intention to be bound by a contract is recognized as a prerequisite for a legally binding contract. For example, Article 2:101 of the Principles of European Contract Law (PECL) states that the intention of the parties to be legally bound by a contract is a prerequisite for the conclusion of a contract. 27 In common law countries, intention is not a requirement of a legally binding contract as such, but this does not mean that intention is not required in common law jurisdictions. In the United States and England, for example, a contract must be supported by consideration. In the United States, intention is not generally said to be necessary, but the enforcement of a preliminary contract requires the intention of the parties to be bound by the contract. 28 It is also worth noting that the United States is a party to the CISG. 29
Autonomously acting AI tools and systems might also raise questions of liability. Situations where a contract is negotiated, concluded and terminated automatically by AI could raise questions about how to identify the party liable for breach of contract. There is thus a need to further develop regulatory tools concerning these AI-related liability issues. AI-specific principles and standards could also be needed to further guide the use of AI and limit possible risks related to AI in contracting. 30 For example, liability issues could arise in situations where an autonomous AI tool is used to draft the contract on behalf of both contracting parties. Such a situation is likely to arise where one party has the power to set the terms and conditions and the other party either accepts or rejects the offer.
Approaching AI as an agent in the meaning of the contract law tradition, Linarelli argues that, in the end, it is humans who give intention to AI by developing AI solutions for particular purposes. Therefore, it does not seem very meaningful to debate whether AI has its own intention; the intention of an AI tool is the result of decisions taken in its development and design process. 31 Thus, if an AI is programmed to develop intelligence to automate the contracting process and to act independently (e.g. if the AI tool sees the need to revise an existing contract or to negotiate a new one), the intention to contract and to be bound by agreements has been given when that AI tool was developed and implemented as part of the business process.
Protection of weaker party
Some concerns related to the use of AI in contracting are, for example, that the rights and obligations of the parties may become more unbalanced and that the weaker party may bear greater risks related to the contractual relationship. 32
Examples of evaluating AI behaviour in contractual relationships can be found in the field of investment law and the algorithms used for international investments. When evaluating the contractual behaviour of AI, it might be necessary to distinguish between cases where an infringement occurred because the AI had no possibility to comply and cases where the AI rejects its obligations on purpose. Behaviour in cases where the AI is unable to act in compliance with its obligations would preferably be viewed more favourably than in cases where the AI acts negatively and rejects the set standards. 33
When negotiating and concluding a contract with the help of AI, it could be useful to document more carefully the parties’ intention behind the negotiated terms. This could be helpful in situations where contract terms need to be interpreted afterwards. Documenting the intention could, for instance, involve keeping a log of the actions taken during the negotiations (e.g. deleting or rewriting certain terms in the contract). 34
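To illustrate, such documentation could take the form of a simple audit log that records who acted on which clause, when, and with what stated rationale. The following Python sketch is only a minimal illustration of this idea; the class, field and method names are the author's assumptions, not the interface of any existing contracting tool:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class NegotiationLog:
    """Chronological record of actions taken on contract terms during negotiation."""
    entries: list = field(default_factory=list)

    def record(self, actor, action, clause, rationale=""):
        # Store who changed what, when, and the stated intention behind the change.
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "actor": actor,          # e.g. "seller's AI tool" or a named negotiator
            "action": action,        # e.g. "delete", "rewrite", "accept"
            "clause": clause,
            "rationale": rationale,  # the documented intention behind the action
        })

    def history(self, clause):
        """Return all recorded actions concerning a given clause, in order."""
        return [e for e in self.entries if e["clause"] == clause]

log = NegotiationLog()
log.record("seller's AI tool", "rewrite", "limitation of liability",
           rationale="cap aligned with seller's standard risk policy")
log.record("buyer", "accept", "limitation of liability",
           rationale="cap acceptable given the contract value")
```

A log of this kind would allow the history of any single clause to be reconstructed afterwards, which is precisely the kind of evidence of intention that could assist in later contract interpretation.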
Martin-Bariteau and Pavlovic also argue that further protection of vulnerable parties is needed, as the use of AI may result in favouring parties with stronger negotiating positions. Therefore, transparency and accountability should be addressed more closely in the use of AI models. 35
Behaviour of AI
The question of whether an AI is behaving in good faith requires an analysis that, based on Weitzenböck's suggestions, should mainly focus on compliance with an objective good faith behavioural standard and on how the AI behaves in relation to that standard. The first question in analysing AI behaviour is whose state of mind and intention should be analysed: the AI tool itself or the human behind the AI. It seems appropriate that both the AI and the human are covered. This would involve an analysis that takes into account how and under what terms the AI tool was established and created, including the design and programming of the software, as it is in the design process that the aims of the software and its behaviour patterns are formulated. The behaviour of the AI itself includes the analysis of how the AI determines its parameters in order to fulfil the set tasks. 36
Analysing good faith behaviour of AI
In this part, we shall combine the approaches of legal scholars and AI business developers to draft a summary of aspects that should be further analysed in order to observe the behaviour of AI and how it respects the general behavioural rules set out in the principle of good faith and fair dealing in the context of international trade.
In general, it is always important to assess how the AI will work with a completely new data set. 37 Thus, it is important to understand how the AI tool will act when it is working based on the contract data collected from the user and for the user's needs. In the context of this paper, it is also important to understand how the AI model will behave during contract negotiations and whether it is possible for the AI to learn to act against good faith behavioural rules.
The aims and values of an AI model should be set based on the problem the model is designed to solve. Training an AI model is a key element in setting the model's aims, and in the training phase there are elements that need to be considered more closely. Thus, it is relevant to analyse where the training data is collected and where new data may be collected. Based on these data sources, the AI model can practise actions aimed at solving the problem. It is also relevant to note what features are created from the data used, how they are maintained during the use of the AI tool, and how they create solutions that respect the purpose and values set for the model. When putting the AI system into practice, it is relevant to continue evaluating the AI model (e.g. determining the role of the human using the tool). 38
Based on the needs arising from the contract law doctrine and the business needs of using AI systems in contracting, the following relevant questions can be summarized to guide the further analysis of behaviour of AI in the light of good faith and fair dealing principle:
Requirements of objective good faith:

Intent:
- Level of human intervention versus autonomously acting AI (e.g. what kind of decisions can or should the AI make independently?)
- Aims and values of the AI model, the problem it is designed to solve and its design (e.g. behavioural rules and principles)
- Quality and use of the training data (e.g. accuracy and relevant features)

Acts:
- Use of new data and models (e.g. how does the AI determine its parameters to fulfil the intent of its task?)

Requirements of subjective good faith:
- Transparency of algorithms and data
- Protection of the weaker party
Examples on existing AI tools for contracting
There are several existing AI systems that can be applied to contracting and contract management for different types of contracts. In order to understand how the previously discussed theoretical approach to good faith and the behavioural requirements for AI in contracting, derived from the contract law tradition, faces the reality of international business relationships and contracting, we shall briefly present three examples of existing AI models. The examples used in this part of the paper were selected by giving the following prompt to OpenAI ChatGPT-3.5: ‘Are there existing artificial intelligence programs that negotiate contracts?’. 39 From the six examples given, three were randomly selected for a short introduction in this paper: LawGeex, LexCheck and Kira.
LawGeex AI automatically reviews contracts and reports the issues it identifies. Based on predefined policies, LawGeex can understand the needs and position of the party. The AI can, for example, be used during negotiations to diagnose and redline possible problems with the negotiated terms. As a reference, the AI uses either a corporate legal playbook or already existing contracts, which provide guidelines for new contracts. In addition, LawGeex AI has been trained on massive amounts of contracts and has its own legal brain. 40
LexCheck provides a platform with AI that can automatically review and revise contracts. The corporate legal playbook is used to train the AI. The standards set in the digital playbook can be enforced to ensure that contractual risks are minimized. The digital playbook can set pre-approved terms that can be compared to the terms being negotiated. 41 Kira AI, in turn, focuses on contract analysis, identifying and analysing the content of contracts and documents. The AI automatically provides highlights in documents that can help complete further analysis of the data. Kira provides a centralized platform that can be used to facilitate communication and to promote transparency in projects. 42
The three examples of AI systems used in contracting and contract negotiation appear to share common functions in managing the practices of the user organizations. For example, LawGeex and LexCheck take the organization's playbook and determine the needs and practices that are important to that specific organization in the contractual relationships where the AI system is used. Therefore, it seems that the playbook is key to determining the intention and good behaviour of AI systems in practice.
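The playbook mechanism described above can be made concrete with a deliberately simplified sketch: a check that compares negotiated clauses against pre-approved playbook terms and flags deviations. This is the author's illustration only, not the actual implementation of LawGeex, LexCheck or Kira (which rely on trained models rather than literal text comparison), and the function and clause names are hypothetical:

```python
def review_against_playbook(negotiated_terms, playbook):
    """Flag negotiated clauses that deviate from the pre-approved playbook terms.

    negotiated_terms / playbook: dicts mapping clause names to clause text.
    Returns a report per clause: 'approved', 'deviates' or 'not in playbook'.
    """
    report = {}
    for clause, text in negotiated_terms.items():
        if clause not in playbook:
            report[clause] = "not in playbook"
        elif text.strip().lower() == playbook[clause].strip().lower():
            report[clause] = "approved"
        else:
            report[clause] = "deviates"  # candidate for redlining / human review
    return report

playbook = {"governing law": "This agreement is governed by the laws of Finland."}
terms = {"governing law": "This agreement is governed by the laws of Delaware.",
         "audit rights": "Customer may audit once per year."}
print(review_against_playbook(terms, playbook))
# {'governing law': 'deviates', 'audit rights': 'not in playbook'}
```

Even in this toy form, the sketch shows why the playbook can be seen as encoding the party's intention: whatever the organization places in it becomes the standard against which the system judges negotiated terms.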
Conclusion
The principle of good faith and fair dealing is a set of behavioural rules that the parties are to respect in contractual negotiations. In the UNIDROIT Principles, for example, such behavioural rules are of such great importance to the contractual relationship that the parties cannot exclude the application of the good faith and fair dealing principle. 43 The contract law tradition and its general principles are not only relevant but also generally quite future-proof with regard to technology. 44 The development and use of AI solutions in contracting are changing the way humans and machines act, with cooperation between them being established in order to negotiate and conclude contracts. However, at present, in spite of the frequent reliance on AI models, the behavioural rules guiding parties in contractual relationships are still valid and applicable. Therefore, it seems only natural to require good faith behaviour also when using AI systems, whether they assist in or automate contract negotiations.
Algorithmic transparency and transparency in the use of AI tools are seen as crucial to promoting fairness. However, transparency about the algorithms used may not be enough to ensure that the AI is acting in good faith. The AI might, for example, use discriminatory practices to achieve the aims set for the tasks it is performing, and transparency alone would not likely prevent AI systems from benefitting from practices that could be seen as discriminatory or as acting in bad faith. In principle, the other party would have the option, especially in cases where there is a significant imbalance of power, not to enter into an agreement with the party using AI. Still, one cannot help but ask whether there should be ethical guidelines setting out good faith and fair dealing algorithms to ensure that AI systems act in accordance with good faith and fair dealing during negotiations.
In order to evaluate whether an AI system respects the existing good faith behavioural guidance, it is necessary to carefully analyse how the AI model is designed and trained. Design and training are the basis of how the AI will work in practice, but it is equally necessary to continue analysing how the AI acts and learns during the actual tasks it performs during contract negotiations.
However, as there is no clear existing concept of good faith and fair dealing, there are also no clear behavioural rules that humans or machines should follow in order to act in good faith. Therefore, whether someone is acting in good or bad faith must be assessed on a case-by-case basis.
Declaration of conflict of interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
