In this article, we investigate the issue of representing total preorders by ranking functions in belief revision scenarios. Both total preorders and Spohn’s ranking functions are among the most popular semantic structures in nonmonotonic reasoning and belief revision. While each ranking function uniquely induces a total preorder, total preorders can usually be represented by infinitely many ranking functions which only differ in the position of their empty layers. In order to work towards representation invariance of total preorders by ranking functions, we first introduce the notion of revision equivalence which postulates that equivalence is preserved during (most general) revision operations. Moreover, we single out so-called linearly equivalent ranking functions as prototypes of ranking functions with regularly inserted empty layers. We show that, in general, revision equivalence is hard to achieve, which prompts us to take a more operator-focused perspective. We introduce the postulates Preservation of Equivalence (PoE) and Preservation of Scale (PoS) in order to axiomatize operators which can guarantee that equivalence resp. linear equivalence (including the scaling factor) between ranking functions is preserved. Afterwards, we evaluate various iterated revision operators from the literature with respect to these postulates. While (PoE) and (PoS) do not generally hold for revision operators in the Darwiche–Pearl framework, we show that strategic c-revisions with suitably chosen strategies are scale-preserving with respect to linearly equivalent ranking functions. Furthermore, we present approaches to defining equivalence- and scale-preserving revision operators for ranking functions from arbitrary existing revision operators.
Both in belief revision theory and in nonmonotonic reasoning, total preorders (TPOs) on possible worlds are commonplace since they essentially encode conditional beliefs. Many revision operators based on TPOs satisfy the popular AGM postulates (Alchourrón et al., 1985; Katsuno & Mendelzon, 1991), and in nonmonotonic reasoning, TPOs induce inference relations of high quality (Kraus et al., 1990; Lehmann & Magidor, 1992; Makinson, 1989). Ranking functions assigning natural numbers to possible worlds, so-called ordinal conditional functions (OCFs) (Spohn, 1988), implement a convenient representation of total preorders that makes it easy to specify changes and validate inferences. Moreover, recent research (Schwind et al., 2022) has shown that they are powerful enough to function as a canonical representation of epistemic states in the popular Darwiche–Pearl framework for belief revision (Darwiche & Pearl, 1997).
While each ranking function uniquely induces a TPO, TPOs can usually be represented by infinitely many ranking functions. This is because ranking functions can have empty layers, that is, not every natural number needs to be assigned to some possible world. The benefit of such empty layers that naturally arise when working with ranking functions is not clear prima facie, and they appear to be somewhat arbitrary and sometimes even obsolete when no representation of quantitative information is necessary. Therefore, when facing a revision scenario, it is not immediately clear which ranking function would be most adequate to represent a TPO, and in which cases the chosen ranking representation may even be irrelevant.
In this article, we show that in general, ranking functions that differ (only) in the seemingly arbitrary position of their empty layers are considered as representations of entirely different epistemic states from the perspective of OCF-revision operators, leading to significantly different revision results. We introduce the formal notion of revision equivalence to establish a strong equivalence relation among ranking functions that guarantees the equivalence of ranking functions also under (arbitrary) revisions. We axiomatize OCF-revision operators that preserve equivalence in general via the postulate Preservation of Equivalence (PoE). Afterwards, we consider linear equivalence as another strong notion of equivalence among ranking functions, which holds if one ranking function is a scaled-up version of another. Moreover, we introduce the postulate Preservation of Scale (PoS) for revision operators which are able to preserve even the scaling factor between two linearly equivalent ranking functions. All these notions can be utilized when representation invariance is desired, that is, whenever one wants to revise a TPO on possible worlds, which is encoded as a ranking function, without depending on a specific ranking representation.
While general iterated revisions according to the Darwiche–Pearl framework (Darwiche & Pearl, 1997) do not satisfy (PoE) or (PoS), we show that strategic c-revisions (Kern-Isberner, 2004; Kern-Isberner et al., 2023) with suitably chosen strategies are scale-preserving. Furthermore, we propose methods for modifying (arbitrary) other revision operators to ensure these postulates. Our results reveal that representation invariance of TPOs by ranking functions can be achieved when the arithmetic of ranking functions is thoroughly exploited by the revision operator, as is done by c-revisions and by operators constructed using the approaches we propose in this article.
In summary, the main contributions of this article are:
We introduce the notion of revision equivalence between OCFs, as well as the postulate (PoE) for equivalence-preserving OCF-revision operators.
We introduce the postulate (PoS) to define OCF-revision operators that are able to preserve the scaling factor $c$ between linearly equivalent ranking functions $\kappa$ and $c \cdot \kappa$ with $c \geq 1$.
We evaluate several revision operators from the literature with respect to (PoE) and (PoS), with a focus on strategic c-revisions.
We propose approaches for constructing revision operators that satisfy (PoE) and (PoS) based on existing TPO- and OCF-revision operators. As a practical example, we define (equivalence-preserving) OCF-versions of the well-known elementary revision operators (Chandler & Booth, 2023): Natural (Boutilier, 1993), lexicographic (Nayak et al., 2003), and restrained revision (Booth & Meyer, 2006).
We introduce parametrized transformations from TPOs to OCFs and show that they can be utilized to define scale-preserving revision operators.
This article is a revised and largely extended version of the conference paper (Kern-Isberner et al., 2024) presented at KR 2024. In particular, we offer a new perspective on revision equivalence by introducing a new meta-postulate (PoE) for equivalence-preserving revision operators in this article, whereas in Kern-Isberner et al. (2024) revision equivalence was only defined as a formal property for pairs of ranking functions, and preservation of equivalence was only considered informally. Similar to (PoE), the property of preserving linear equivalence and the corresponding scaling factor between OCFs is now expressed via the meta-postulate (PoS). Moreover, we propose new methods for making OCF-revisions scale-preserving, and introduce parametrized TPO-to-OCF transformations in the process. We also clarify the definition of OCF revision operators induced from TPO revision operators and the more general case of mimicking a TPO revision operator by an OCF revision operator, as well as the relationship between different methods for defining equivalence-preserving revision operators. We investigate additional operators and illustrate our results with several new examples, showing, for example, that the downward compatibility of the different notions of equivalence presented is indeed one-directional. Furthermore, the discussion and technical results with respect to OCF-revision operators defined from TPO-revision operators have been significantly extended.
We now outline the rest of this article. Section 2 recalls formal basics and notations which are used throughout the article. We briefly point out links to related work in Section 3, and give a comprehensive description of previous work on iterated belief revision in Section 4. Section 5 introduces our concept of revision equivalence for ranking functions, and Section 6 switches to a more operator-focused perspective and introduces the new postulate (PoE). Section 7 sharpens the concept of revision equivalence to linear revision equivalence and introduces (PoS). In Section 8, we present approaches for defining equivalence- and scale-preserving revision operators from existing OCF-revision operators, and in Section 9, we present similar methods to define OCF-revision operators from TPO-revision operators. Section 10, where we define OCF-versions of the elementary TPO-revision operators, showcases a practical application of these methods. Towards the end of this article, we discuss both the utility of our results and more technical connections to related work in Section 11. Finally, we conclude and point out future work in Section 12.
Formal Preliminaries
Let $\mathcal{L}$ be the finitely generated propositional language over a finite alphabet $\Sigma$ with atoms $a, b, c, \ldots$ and with formulas $A, B, C, \ldots$, equipped with the standard connectives $\wedge, \vee, \neg$. For conciseness of notation, we will sometimes omit the logical $\wedge$-connector, writing $AB$ instead of $A \wedge B$, and overlining formulas will indicate negation, that is, $\overline{A}$ means $\neg A$. The symbol $\top$ denotes an arbitrary propositional tautology, and $\bot$ denotes an arbitrary propositional contradiction. The set of all propositional interpretations resp. possible worlds over $\Sigma$ is denoted by $\Omega$. $\omega \models A$ means that the propositional formula $A$ holds in the possible world $\omega$; then $\omega$ is called a model of $A$, and the set of all models of $A$ is denoted by $\mathit{Mod}(A)$. Similarly, for sets of propositions $\mathcal{A}$, $\mathit{Mod}(\mathcal{A})$ denotes the set of possible worlds that satisfy all elements of $\mathcal{A}$. For propositions $A, B$, $A \models B$ holds iff $\mathit{Mod}(A) \subseteq \mathit{Mod}(B)$, as usual. Analogously, for sets of propositions $\mathcal{A}$, $\mathcal{A} \models B$ holds iff $\mathit{Mod}(\mathcal{A}) \subseteq \mathit{Mod}(B)$. By slight abuse of notation, we will use $\omega$ both for the model and the corresponding conjunction of all positive or negated atoms. Since $\omega \models A$ means the same for both readings of $\omega$, no confusion will arise. Logical equivalence is denoted by $\equiv$, and the set of classical logical consequences of $\mathcal{A}$ is denoted by $\mathit{Cn}(\mathcal{A})$.
A conditional $(B|A)$, with $A, B \in \mathcal{L}$, expresses the statement “If $A$ then plausibly $B$.” The set of all conditionals is denoted by $(\mathcal{L} \mid \mathcal{L})$. A conditional belief base $\Delta$ is a finite set of conditionals. For this article, we presuppose that each considered conditional $(B|A)$ is not self-contradictory, that is, that $AB \not\equiv \bot$ holds. Although considering contradictions within conditionals is interesting and relevant in principle, we focus on non-contradictory formulas and conditionals here to make the general techniques clearer, leaving the general case for future work. Note that we identify conditional statements $(B|\top)$ having a tautological antecedent with the (plausible) proposition $B$.
Semantics for conditionals are provided by epistemic states, which represent both the propositional beliefs of an agent and meta-information about how the agent may change her beliefs in the light of new information. In the original paper by Darwiche and Pearl (1997), which introduced epistemic states, no specific structure is assumed for an epistemic state $\Psi$ apart from a function $\mathit{Bel}$ that maps the epistemic state to its associated propositional beliefs $\mathit{Bel}(\Psi)$. In order to categorize and compare different kinds of conditional semantics, more structured approaches have emerged in the literature, for example formalizing conditional logics as institutions (Beierle & Kern-Isberner, 2009, 2012), which are based on category theory (Goguen & Burstall, 1992), or, recently, epistemic spaces (Sauerwald & Thimm, 2024; Schwind et al., 2022), defined as tuples consisting of a set of epistemic states and a belief function. Since in this article we restrict ourselves to one global propositional language (and do not need, for example, epistemic states which work on different signatures), but require a notion of conditional acceptance, we make use of the following simple umbrella structure, which can be seen as an extension of the idea of epistemic spaces from Schwind et al. (2022).
(Epistemic Space)
An epistemic space is a tuple $(\mathcal{E}, \models_{\mathcal{E}})$ where $\mathcal{E}$ is a non-empty set whose elements are called epistemic states, and $\models_{\mathcal{E}} \, \subseteq \, \mathcal{E} \times (\mathcal{L} \mid \mathcal{L})$ is an acceptance relation with respect to conditionals.
We slightly abuse notation and use $\mathcal{E}$ to denote both the set of epistemic states and the respective epistemic space. Note that the acceptance of propositional beliefs $A$ in an epistemic state $\Psi$ is also covered by $\models_{\mathcal{E}}$ due to the identification of $A$ with $(A|\top)$. Therefore, we also write $\Psi \models_{\mathcal{E}} A$ instead of $\Psi \models_{\mathcal{E}} (A|\top)$. Since it will always be clear from context which acceptance relation is meant, we will henceforth omit the subscript and simply use the symbol $\models$ for all acceptance relations.
A conditional belief base $\Delta$ is accepted by an epistemic state $\Psi$ (resp. a ranking function $\kappa$) if every conditional in $\Delta$ is accepted. We denote this as $\Psi \models \Delta$. We call $\Delta$ consistent if there is an epistemic state $\Psi$ such that $\Psi \models \Delta$. This corresponds to a strong notion of consistency for ranking functions as described in Goldszmidt and Pearl (1996) and Haldimann et al. (2023a).
Epistemic states naturally induce inference relations $\mathrel{|\!\sim}_\Psi$ via their acceptance behavior: $A \mathrel{|\!\sim}_\Psi B$ iff $\Psi \models (B|A)$ or $A \models B$. Note that the latter condition $A \models B$ ensures supraclassicality. Two epistemic states $\Psi_1, \Psi_2$ are called (inferentially) equivalent, denoted as $\Psi_1 \equiv \Psi_2$, iff they induce the same inference relation, which is the case if and only if they accept exactly the same conditionals, that is, $\Psi_1 \models (B|A)$ iff $\Psi_2 \models (B|A)$ for all $(B|A) \in (\mathcal{L} \mid \mathcal{L})$.
A revision operator for an epistemic space $\mathcal{E}$ is a function $\ast \colon \mathcal{E} \times \mathcal{I} \to \mathcal{E}$, with $\mathcal{I}$ being a non-empty set, and such that $\Psi \ast \iota \models \iota$ holds for all $\Psi \in \mathcal{E}$ and $\iota \in \mathcal{I}$. We call $\mathcal{I}$ the input domain of the revision operator $\ast$.
In this article, we consider two commonly used kinds of representations for epistemic states: TPOs on the possible worlds $\Omega$, and ordinal conditional functions (OCFs) on $\Omega$, which will both be defined next. The corresponding epistemic spaces will be called $\mathcal{E}_{\mathit{TPO}}$ and $\mathcal{E}_{\mathit{OCF}}$, respectively.
Total preorders $\preceq$ are transitive and reflexive total relations, and stand for plausibility orderings on the set of possible worlds. As usual, $\omega_1 \prec \omega_2$ if $\omega_1 \preceq \omega_2$, but not $\omega_2 \preceq \omega_1$, and $\omega_1 \approx \omega_2$ if both $\omega_1 \preceq \omega_2$ and $\omega_2 \preceq \omega_1$. The most plausible worlds are located in the lowermost layer of $\preceq$, which we denote by $\min(\preceq)$. More generally, if $\Omega' \subseteq \Omega$ is a subset of possible worlds, $\min(\Omega', \preceq)$ denotes the set of minimal worlds in $\Omega'$ according to $\preceq$. The preorder $\preceq$ is lifted to a relation between propositions in the usual way: $A \preceq B$ if there is $\omega \in \mathit{Mod}(A)$ with $\omega \preceq \omega'$ for all $\omega' \in \mathit{Mod}(B)$; equivalently, whenever $\mathit{Mod}(A)$ and $\mathit{Mod}(B)$ are both not empty, $A \preceq B$ if $\omega \preceq \omega'$ for $\omega \in \min(\mathit{Mod}(A), \preceq)$ and $\omega' \in \min(\mathit{Mod}(B), \preceq)$. A conditional $(B|A)$ is accepted by $\preceq$, denoted by $\preceq \, \models (B|A)$, if $AB \prec A\overline{B}$.
OCFs, which we often simply call ranking functions, are functions $\kappa \colon \Omega \to \mathbb{N}$ with $\kappa^{-1}(0) \neq \emptyset$; they were introduced (in a more general form) by Spohn (1988) and implement TPOs by ranks in the ordinals, here natural numbers. These ranks express degrees of implausibility, or surprise, with higher ranks being considered less plausible. The degree of (im)plausibility of a formula $A$ is defined by $\kappa(A) = \min\{\kappa(\omega) \mid \omega \models A\}$. Hence, due to $\kappa^{-1}(0) \neq \emptyset$, at least one of $\kappa(A), \kappa(\overline{A})$ must be 0. A proposition $A$ is believed in $\kappa$, denoted by $\kappa \models A$, if $\omega \models A$ for all $\omega$ such that $\kappa(\omega) = 0$; this is equivalent to saying that $\kappa(\overline{A}) > 0$. This notion can be extended in a natural way to assign ranks to sets of formulas $\mathcal{A}$ via $\kappa(\mathcal{A}) = \min\{\kappa(\omega) \mid \omega \models \mathcal{A}\}$. A conditional $(B|A)$ is accepted by $\kappa$ if $\kappa(AB) < \kappa(A\overline{B})$. For a subset of possible worlds $\Omega' \subseteq \Omega$, $\min(\Omega', \kappa)$ denotes the set of minimal worlds in $\Omega'$ according to their ranks in $\kappa$. Note that these definitions are in full compliance with the corresponding definitions for TPOs and epistemic spaces. The symbol $\kappa_u$ denotes the uniform OCF, which is uniquely defined as the only OCF with just one non-empty layer containing all possible worlds, that is, $\kappa_u(\omega) = 0$ for all $\omega \in \Omega$.
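The basic OCF notions above are easy to operationalize. The following sketch is our own illustration (not from the article): OCFs are encoded as dictionaries mapping world labels to ranks, formulas as sets of their models, and the apostrophe in a world label marks a negated atom.

```python
OMEGA = ["ab", "ab'", "a'b", "a'b'"]  # possible worlds over {a, b}; ' marks negation

kappa = {"ab": 0, "ab'": 3, "a'b": 2, "a'b'": 1}  # an example OCF

def rank(kappa, models):
    """kappa(A) = minimal rank of any model of A."""
    return min(kappa[w] for w in models)

def believes(kappa, models):
    """kappa |= A iff kappa(not-A) > 0, i.e. every rank-0 world is a model of A."""
    complement = [w for w in kappa if w not in models]
    return not complement or rank(kappa, complement) > 0

def accepts(kappa, mod_a, mod_b):
    """kappa |= (B|A) iff kappa(A and B) < kappa(A and not-B)."""
    ab = [w for w in mod_a if w in mod_b]
    a_not_b = [w for w in mod_a if w not in mod_b]
    if not a_not_b:
        return True   # A entails B: the conditional holds trivially
    if not ab:
        return False  # A and B is contradictory (excluded in the article)
    return rank(kappa, ab) < rank(kappa, a_not_b)

mod_a = ["ab", "ab'"]                # models of a
mod_b = ["ab", "a'b"]                # models of b
print(believes(kappa, mod_a))        # a is believed: kappa(not-a) = 1 > 0
print(accepts(kappa, mod_b, mod_a))  # (a|b): kappa(ab) = 0 < kappa(a'b) = 2
```

Both calls print `True`; the encoding as sets of models sidesteps propositional parsing while keeping the definitions faithful.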
Both ranking functions and TPOs are organized in layers, that is, subsets of worlds which are equivalent with respect to the TPO or which have the same rank, respectively.
(Layers)
The layers of a TPO $\preceq$ are the equivalence classes of worlds with respect to $\approx$. The layers of an OCF $\kappa$ are correspondingly the sets $\kappa^{-1}(i)$ for each rank $i \in \mathbb{N}$.
If $\Omega_0, \Omega_1, \ldots$ are the layers of a TPO or an OCF, respectively, then by convention $\Omega_0$ resp. $\kappa^{-1}(0)$ is the lowermost layer containing the most plausible worlds. By slight abuse of notation, we sometimes write $\Omega_0 \prec \Omega_1 \prec \ldots$ to indicate the ordering of the layers. In contrast to TPOs, ranking functions can have empty (in-between) layers, that is, there may be ranks $i$ with $\kappa^{-1}(i) = \emptyset$ while there is $j > i$ with $\kappa^{-1}(j) \neq \emptyset$. Ranking functions without (in-between) empty layers are called convex (Sezgin, 2023).
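Detecting in-between empty layers is a one-liner on the dictionary encoding of OCFs; the helpers below are our own sketch, not from the article.

```python
def layers(kappa):
    """Map each used rank to its layer (the set of worlds with that rank)."""
    out = {}
    for w, r in kappa.items():
        out.setdefault(r, set()).add(w)
    return out

def is_convex(kappa):
    """Convex = the used ranks are exactly 0, 1, ..., k with no gaps."""
    used = sorted(layers(kappa))
    return used == list(range(len(used)))

kappa1 = {"ab": 0, "ab'": 3, "a'b": 2, "a'b'": 1}  # ranks 0..3, no gaps
kappa2 = {"ab": 0, "ab'": 6, "a'b": 4, "a'b'": 2}  # empty layers at 1, 3, 5
print(is_convex(kappa1), is_convex(kappa2))        # True False
```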
TPOs and ranking functions both provide convenient representations of epistemic states for nonmonotonic reasoning and belief revision. They are fully compatible in that each ranking function induces (uniquely) a TPO, and each TPO can be associated with some ranking function. This relationship between TPOs and ranking functions was made precise via transformations in Haldimann et al. (2023b), Kern-Isberner et al. (2023). Here, we recall the corresponding definitions and technical results in an adapted form from Kern-Isberner et al. (2023).
Let $\kappa$ be a ranking function on $\Omega$. The transformation operator $\tau$ maps $\kappa$ to its induced TPO $\preceq_\kappa \, = \tau(\kappa)$ such that $\omega_1 \preceq_\kappa \omega_2$ iff $\kappa(\omega_1) \leq \kappa(\omega_2)$ holds for all $\omega_1, \omega_2 \in \Omega$. The transformation operator $\sigma$ maps each TPO $\preceq$ to the minimal induced ranking function $\kappa_\preceq = \sigma(\preceq)$ by setting $\kappa_\preceq(\omega) = i$ iff $\omega \in \Omega_i$, where $\Omega_0 \prec \Omega_1 \prec \ldots$ are the layers of $\preceq$.
While the definition of $\tau$ is straightforward, a transformation operator for the other direction needs to make a choice since $\tau^{-1}(\preceq)$ is a set of (equivalent) ranking functions. The default in the literature is often to use the minimal possible ranks, which corresponds to the definition of $\sigma$ above and yields a convex ranking function as a result. The transformation operator $\sigma$ is a well-defined operator such that $\tau(\sigma(\preceq)) = \, \preceq$ holds, that is, $\sigma(\preceq) \in \tau^{-1}(\preceq)$. An equivalent but more constructive definition of $\sigma$ can be found in Haldimann et al. (2023b). For our purposes, the indirect but simpler definition above suffices.
Note the following relationships between the transformation operators.
For every OCF $\kappa$, it holds that $\sigma(\tau(\kappa)) \equiv \kappa$, that is, applying $\sigma \circ \tau$ at least preserves equivalence.
Any two OCFs $\kappa_1, \kappa_2$ are equivalent if and only if $\tau(\kappa_1) = \tau(\kappa_2)$.
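Both transformation directions can be sketched in a few lines (our own encoding: a TPO is a list of layers from most to least plausible). Re-transforming an OCF with empty layers yields the convex, minimally ranked representative of the same TPO.

```python
def induced_tpo(kappa):
    """OCF -> TPO: return the layers of the induced TPO, most plausible first."""
    ranks = sorted(set(kappa.values()))
    return [frozenset(w for w in kappa if kappa[w] == r) for r in ranks]

def minimal_ocf(tpo_layers):
    """TPO -> OCF: assign rank i to every world in the i-th layer (minimal ranks)."""
    return {w: i for i, layer in enumerate(tpo_layers) for w in layer}

kappa = {"ab": 0, "ab'": 6, "a'b": 4, "a'b'": 2}  # has empty layers
kappa_min = minimal_ocf(induced_tpo(kappa))       # convex representative
print(kappa_min)
print(induced_tpo(kappa) == induced_tpo(kappa_min))  # same TPO: equivalent
```

The round trip does not return the original ranks (6, 4, 2 become 3, 2, 1), illustrating that the composed transformation preserves equivalence but not identity.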
Related Work
The fact that epistemic states inducing the same TPO, which includes equivalent ranking functions, do not necessarily behave the same under iterated revision is a known problem in the literature (Aravanis et al., 2019; Schwind et al., 2022). The goal of this article is to focus on the relationship between TPOs and ranking functions, as well as to propose some possible solutions. For a more in-depth discussion of connections between these two cited works and our approach, we refer to Section 11.
The idea of revision equivalence, which we propose in this article, is similar in spirit to uniform equivalence resp. strong equivalence in answer set programming (ASP) (Eiter et al., 2005), which postulates that logic programs yield the same (respective parts of) answer sets even if further facts resp. rules are added. However, while in ASP the role of a stronger notion of equivalence is to ensure modularity of (parts of) a logic program, we are more interested in guaranteeing representation invariance regarding the TPO structure in the context of this article.
Our axiomatization of revision equivalence for ranking functions is similar to the equivalence axiom (E) in forgetting (Gonçalves et al., 2016; Kern-Isberner et al., 2019). In this article, we explore equivalence in the framework of iterated revision operators.
In this article, we also present general methodologies for obtaining revision operators for ranking functions from revision operators for TPOs to broaden the scope of applicability of our notion of representation invariance. Similar methodologies have been already intuitively applied in Sezgin (2023), Sezgin and Kern-Isberner (2023). In this article, we extend these methodologies and underpin them with formal properties. Moreover, we apply them to more TPO-operators from the literature, namely the so-called elementary operators from Chandler and Booth (2020).
We make use of transformation functions between TPOs and OCFs (Kern-Isberner et al., 2023), and introduce parametrized transformations from TPOs to OCFs. Such transformations are special cases of so-called epistemic state mappings which have been studied in a more general form in Haldimann et al. (2023b).
The idea of revision equivalence elaborated in this article and the novel postulates presented here can be used to evaluate approaches to revise ranking functions by (sets of) propositions and/or conditionals. Parallel revision (Delgrande & Jin, 2012) and the specific revision operator defined in Darwiche and Pearl (1997) are such operators and are considered in this article. Most prominently, strategic c-revisions (Beierle & Kern-Isberner, 2021; Kern-Isberner, 2004) are used in this article to illustrate the approach and obtain basic results.
AGM-based Iterated Revision
In this section, we recall basics of iterated revision according to the seminal paper of Darwiche and Pearl (1997), which extends the original AGM framework (Alchourrón et al., 1985). Afterwards, we present several revision operators from the literature for the revision of ranking functions.
The AGM/DP Revision Framework
The original AGM framework for belief revision (Alchourrón et al., 1985) consists of eight postulates for revising belief sets, which are deductively closed sets of propositional formulas, by single propositions. This framework is now considered a reasonable baseline for rational belief revision operators, and has been extended in various directions. In the following, we recall a version of the AGM postulates by Darwiche and Pearl (1997) for the revision of epistemic states by single propositions, adapted for the notation we use in this paper.
(AGM Revision Operator)
Let $(\mathcal{E}, \models)$ be an epistemic space. A revision operator $\ast$ is called an AGM-revision operator for $\mathcal{E}$ if it satisfies the following postulates for each $\Psi \in \mathcal{E}$ and all $A, B \in \mathcal{L}$.
(R$\ast$1) $\Psi \ast A \models A$.
(R$\ast$2) If $\mathit{Bel}(\Psi) \wedge A \not\equiv \bot$, then $\mathit{Bel}(\Psi \ast A) \equiv \mathit{Bel}(\Psi) \wedge A$.
(R$\ast$3) If $A \not\equiv \bot$, then $\mathit{Bel}(\Psi \ast A) \not\equiv \bot$.
(R$\ast$4) If $A \equiv B$, then $\mathit{Bel}(\Psi \ast A) \equiv \mathit{Bel}(\Psi \ast B)$.
(R$\ast$5) $\mathit{Bel}(\Psi \ast A) \wedge B \models \mathit{Bel}(\Psi \ast (A \wedge B))$.
(R$\ast$6) If $\mathit{Bel}(\Psi \ast A) \wedge B \not\equiv \bot$, then $\mathit{Bel}(\Psi \ast (A \wedge B)) \models \mathit{Bel}(\Psi \ast A) \wedge B$.
The following representation theorem is an adaptation of the famous result by Katsuno and Mendelzon (1991), which highlights the importance of TPOs in the AGM/DP revision framework.
A revision operator $\ast$ that assigns a posterior epistemic state $\Psi \ast A$ to a prior state $\Psi$ and a proposition $A$ is an AGM-revision operator for epistemic states iff there exists a TPO $\preceq_\Psi$ on $\Omega$ with $\min(\preceq_\Psi) = \mathit{Mod}(\mathit{Bel}(\Psi))$ such that $\mathit{Mod}(\mathit{Bel}(\Psi \ast A)) = \min(\mathit{Mod}(A), \preceq_\Psi)$ holds for every proposition $A$.
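The right-hand side of the representation theorem is directly computable: the models of the revised beliefs are the minimal input models under the faithful TPO. A minimal sketch in our layer-list encoding of TPOs:

```python
def revise_models(tpo_layers, mod_a):
    """Models of Bel(Psi * A) = the minimal models of A under the TPO."""
    for layer in tpo_layers:        # scan layers from most plausible downwards
        hits = {w for w in layer if w in mod_a}
        if hits:
            return hits             # first layer intersecting Mod(A)
    return set()                    # A is inconsistent: no models

tpo = [{"ab"}, {"a'b'"}, {"a'b"}, {"ab'"}]  # ab < a'b' < a'b < ab'
print(revise_models(tpo, {"a'b", "a'b'"}))  # revising by not-a
```

The call prints `{"a'b'"}`: among the models of $\overline{a}$, the world in the second layer is the most plausible one, so it alone carries the revised beliefs.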
Beyond the theorem above, Darwiche and Pearl proposed postulates for the iterated revision of epistemic states. The semantic version of these postulates is given below, concerning the revision of an epistemic state $\Psi$ equipped with a TPO $\preceq_\Psi$ by a proposition $A$ (Darwiche & Pearl, 1997).
(DP1) If $\omega_1, \omega_2 \models A$, then $\omega_1 \preceq_\Psi \omega_2$ iff $\omega_1 \preceq_{\Psi \ast A} \omega_2$.
(DP2) If $\omega_1, \omega_2 \models \overline{A}$, then $\omega_1 \preceq_\Psi \omega_2$ iff $\omega_1 \preceq_{\Psi \ast A} \omega_2$.
(DP3) If $\omega_1 \models A$ and $\omega_2 \models \overline{A}$, then $\omega_1 \prec_\Psi \omega_2$ implies $\omega_1 \prec_{\Psi \ast A} \omega_2$.
(DP4) If $\omega_1 \models A$ and $\omega_2 \models \overline{A}$, then $\omega_1 \preceq_\Psi \omega_2$ implies $\omega_1 \preceq_{\Psi \ast A} \omega_2$.
An AGM revision operator for epistemic states that satisfies the postulates (DP1)–(DP4) will be called a DP-revision operator in this article. A crucial insight from Darwiche and Pearl (1997) is that TPOs (and in particular ranking functions) encode conditional information and therefore, they are adequate representations of epistemic states in the context of (iterated) AGM revision.
The axiomatic base of DP revision has become the leading paradigm for AGM revision under iterated change. Many approaches to revising TPOs, and richer structures like ranking functions which are able to induce TPOs, have been presented since then. In the following subsections, we recall major approaches to revising ranking functions, starting with a simple but effective operator that was already proposed in Darwiche and Pearl (1997).
The DP-Operator
Darwiche and Pearl proposed a revision operator for the revision of ranking functions in Darwiche and Pearl (1997) to illustrate the feasibility of their axiomatic framework.
($\ast_{\mathit{DP}}$)
Let $\kappa$ be an OCF and let $A \in \mathcal{L}$ be consistent. Then the revision of $\kappa$ by $A$ via $\ast_{\mathit{DP}}$ is defined as follows:
$$(\kappa \ast_{\mathit{DP}} A)(\omega) = \begin{cases} \kappa(\omega) - \kappa(A) & \text{if } \omega \models A, \\ \kappa(\omega) + 1 & \text{if } \omega \models \overline{A}. \end{cases}$$
The operator $\ast_{\mathit{DP}}$ is a DP revision operator compliant with the axiomatic framework presented in Darwiche and Pearl (1997) and sketched above. Essentially, $\ast_{\mathit{DP}}$ shifts the plausibility of models of $A$ compared to all models of $\overline{A}$, such that the lowermost layer of $\kappa \ast_{\mathit{DP}} A$ contains only models of $A$, while the plausibility of all models of $\overline{A}$ is decreased by one rank. The internal ordering and rank differences among models of $A$ (resp. models of $\overline{A}$) are not altered in the process.
We use the general term DP revision for any AGM-based approach to revision of TPOs or ranking functions that satisfies the postulates (DP1)–(DP4) from above. The specific revision operator
from Definition 5 is referred to as the DP-operator.
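The DP-operator is a one-line case distinction in our dictionary encoding; the sketch below assumes the standard shift-by-$\kappa(A)$ / push-up-by-one formulation of the operator.

```python
def dp_revise(kappa, mod_a):
    """Darwiche-Pearl OCF revision by a proposition given as its set of models:
    models of A drop by kappa(A); models of not-A move up by one rank."""
    k_a = min(kappa[w] for w in mod_a)  # kappa(A)
    return {w: (kappa[w] - k_a if w in mod_a else kappa[w] + 1)
            for w in kappa}

kappa1 = {"ab": 0, "ab'": 3, "a'b": 2, "a'b'": 1}
mod_not_a = {"a'b", "a'b'"}             # models of not-a
print(dp_revise(kappa1, mod_not_a))     # {'ab': 1, "ab'": 4, "a'b": 1, "a'b'": 0}
```

After the revision, the lowermost layer contains only models of the input, while the relative ordering inside each of the two partitions is untouched, exactly as described above.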
The DJ-Operator
Another operator for the revision of OCFs is the parallel OCF-revision operator proposed in Delgrande and Jin (2012), which we denote by $\ast_{\mathit{DJ}}$ here. Unlike the operator $\ast_{\mathit{DP}}$ and most other revision operators in the literature, parallel OCF revision supports revision by not just one but multiple propositional formulas simultaneously.
In order to define $\ast_{\mathit{DJ}}$, we first need to introduce two operations on sets of formulas.
(Completion and Reduction)
Let
be a set of formulas and
. Then the completion of
with respect to
is defined as
Furthermore, the reduction of
with respect to a possible world
is defined as
The revision operator $\ast_{\mathit{DJ}}$ can now be defined via three conditions, which can be utilized to successively set the ranks of different sets of possible worlds in the resulting OCF.
Let $\kappa$ be an OCF and let $S$ be a (finite and consistent) set of propositions. Then $\kappa \ast_{\mathit{DJ}} S$ is constructed as follows.
For all
, set
.
For
(successively): Let
with
. For all
, set
For
, set
The first construction rule in the definition above ensures that minimal models of the new information $S$ are most plausible in $\kappa \ast_{\mathit{DJ}} S$. The second rule sets the rank of minimal models of subsets of the new information $S$, ensuring that minimal models of larger subsets are more plausible than minimal models of smaller subsets. The third construction rule sets the rank of all remaining worlds, that is, those that are not minimal models of any subset of $S$, based on the rank of the respective minimal models defined in the second rule and the original rank difference between the minimal and non-minimal models in $\kappa$.
The revision operator $\ast_{\mathit{DJ}}$ satisfies extended versions of the AGM postulates for multiple revision, as well as extended versions of (DP1), (DP3), and (DP4), but not (DP2). Furthermore, the construction of $\kappa \ast_{\mathit{DJ}} S$ ensures the satisfaction of additional rationality postulates for multiple OCF-revision that are defined in Delgrande and Jin (2012).
For revisions by single propositions, that is, if the set $S$ contains only a single element, the revision operator $\ast_{\mathit{DJ}}$ essentially coincides with $\ast_{\mathit{DP}}$.
Let $\kappa$ be an OCF and let $A$ be a proposition. Then $\kappa \ast_{\mathit{DJ}} \{A\} = \kappa \ast_{\mathit{DP}} A$.
Let
and
.
For , we have
. Note that
since
is a minimal model of
. Hence,
.
Since , the second construction rule from Definition 7 can only be applied to the set
. We have
. Hence,
. Note that the only possible choice for the set
such that
is
. Furthermore,
. Now we obtain:
If
, we have
. The maximum expression above then evaluates to
. If
, we have
. Again, the maximum expression evaluates to
. Since
is a minimal model of
in
, we have
. Therefore, we obtain
.
For
we make a case distinction on whether or not
is a model of
. If
, then
, that is,
. Then
If
, then
, that is,
. Now we have
which is equivalent to
. Since
, also
, that is,
remains a non-minimal model of
in
. Therefore, the rank of
in
has already been determined by the previous construction rule, that is,
. Hence, we obtain
In every case, the revision via $\ast_{\mathit{DJ}}$ leads to the same result as the revision via $\ast_{\mathit{DP}}$.
Let $\kappa$ be an OCF and $\Delta = \{(B_1|A_1), \ldots, (B_n|A_n)\}$ a set of conditionals. Then a c-revision of $\kappa$ by $\Delta$ is an OCF $\kappa \ast \Delta$ of the form
$$\kappa \ast \Delta(\omega) = \kappa_0 + \kappa(\omega) + \sum_{\substack{1 \leq i \leq n \\ \omega \models A_i \overline{B_i}}} \gamma_i \qquad (4)$$
with nonnegative integers $\gamma_i$ for each $i \in \{1, \ldots, n\}$, called impact factors, satisfying
$$\gamma_i > \min_{\omega \models A_i B_i} \Big( \kappa(\omega) + \sum_{\substack{j \neq i \\ \omega \models A_j \overline{B_j}}} \gamma_j \Big) - \min_{\omega \models A_i \overline{B_i}} \Big( \kappa(\omega) + \sum_{\substack{j \neq i \\ \omega \models A_j \overline{B_j}}} \gamma_j \Big). \qquad (5)$$
The constraint satisfaction problem for c-revisions of $\kappa$ by $\Delta$, denoted by $CR(\kappa, \Delta)$, is given by the set of constraints (5), for $i = 1, \ldots, n$, where the $\gamma_i$ are constraint variables taking values in $\mathbb{N}$.
Constraint (5) ensures that $\kappa \ast \Delta \models \Delta$, and the integer $\kappa_0$ is a normalizing term, that is, ensuring that $(\kappa \ast \Delta)^{-1}(0) \neq \emptyset$. Each c-revision is an iterated revision in the sense of Darwiche and Pearl (1997) because the DP-postulates are implied by the principle of conditional preservation (Kern-Isberner, 2004, 2018). A solution of $CR(\kappa, \Delta)$ is an $n$-tuple $(\gamma_1, \ldots, \gamma_n)$ of natural numbers, and each solution defines a proper c-revision $\kappa \ast \Delta$ according to (4) (Kern-Isberner et al., 2023).
Since (4) and (5) provide a general schema for revision operators, many c-revisions are possible. With a selection strategy (Beierle & Kern-Isberner, 2021), we can select single, well-defined solutions for any revision problem:
(Selection Strategy, Strategic c-revision)
A selection strategy (for c-revisions) is a function $s$ assigning to each pair of an OCF $\kappa$ and a (consistent) set of conditionals $\Delta$ an impact vector $s(\kappa, \Delta) = (\gamma_1, \ldots, \gamma_n)$ that solves $CR(\kappa, \Delta)$. If $s(\kappa, \Delta) = (\gamma_1, \ldots, \gamma_n)$, the c-revision of $\kappa$ by $\Delta$ determined by $s$ is the OCF obtained from (4) with these impact factors, denoted by $\kappa \ast_s \Delta$, and $\ast_s$ is a strategic c-revision.
The case of $\Delta = \{(B|A)\}$, that is, revising a ranking function by a single conditional, is particularly interesting in the context of this article because it allows us to study the simplest case of conditional revision. Schema (4) yields in this case
$$\kappa \ast (B|A)(\omega) = \begin{cases} \kappa_0 + \kappa(\omega) + \gamma & \text{if } \omega \models A\overline{B}, \\ \kappa_0 + \kappa(\omega) & \text{otherwise}, \end{cases} \qquad (6)$$
where $\gamma > \kappa(AB) - \kappa(A\overline{B})$. By identifying (plausible) propositions $A$ with conditionals $(A|\top)$, we can derive from (6) the schema for c-revisions by propositions:
$$\kappa \ast A(\omega) = \begin{cases} \kappa_0 + \kappa(\omega) + \gamma & \text{if } \omega \models \overline{A}, \\ \kappa_0 + \kappa(\omega) & \text{otherwise}, \end{cases} \qquad (7)$$
where $\gamma > \kappa(A) - \kappa(\overline{A})$.
As before, the impact factor $\gamma$ in (6) and (7) can be specified by an appropriate selection strategy. Note that (7) generalizes the DP-operator $\ast_{\mathit{DP}}$ from Definition 5.
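The propositional schema can be sketched directly (our own encoding; choosing the minimal admissible impact $\gamma = \kappa(A) - \kappa(\overline{A}) + 1$ as the selection strategy is our illustrative assumption, not the article's prescription):

```python
def c_revise_by_prop(kappa, mod_a, gamma=None):
    """c-revision of kappa by a proposition given as its set of models:
    every model of not-A gets the impact gamma, then ranks are normalized."""
    mod_not_a = [w for w in kappa if w not in mod_a]
    k_a = min(kappa[w] for w in mod_a)          # kappa(A)
    k_not_a = min(kappa[w] for w in mod_not_a)  # kappa(not-A)
    if gamma is None:
        gamma = k_a - k_not_a + 1               # minimal admissible impact
    assert gamma > k_a - k_not_a, "impact too small: success would fail"
    raw = {w: kappa[w] + (gamma if w not in mod_a else 0) for w in kappa}
    shift = -min(raw.values())                  # normalizing term kappa_0
    return {w: r + shift for w, r in raw.items()}

kappa = {"ab": 0, "ab'": 3, "a'b": 2, "a'b'": 1}
revised = c_revise_by_prop(kappa, {"a'b", "a'b'"})  # revise by not-a
print(revised)  # {'ab': 1, "ab'": 4, "a'b": 1, "a'b'": 0}
```

With this particular strategy the result coincides with the DP-operator's output on the same input, illustrating that the schema for c-revisions by propositions subsumes it; larger admissible impacts yield other c-revisions.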
Revision Equivalence
We start this section by reviewing (static) inferential equivalence and then broaden our view to take revision operations into account. We also present some first results on when to expect revision equivalence, that is, equivalent ranking functions staying equivalent even after revision.
Inferential Equivalence of Ranking Functions
As can be found, for example, in Beierle and Kutsch (2019), two ranking functions $\kappa_1, \kappa_2$ are (inferentially) equivalent, $\kappa_1 \equiv \kappa_2$, iff for all conditionals $(B|A)$ it holds that $\kappa_1 \models (B|A)$ iff $\kappa_2 \models (B|A)$. In particular, $\kappa_1 \equiv \kappa_2$ iff $\kappa_1$ and $\kappa_2$ induce the same TPO, that is, equivalent ranking functions have exactly the same layers, but may assign different ranks to possible worlds resp. formulas. Nevertheless, for a given $A$, in both $\kappa_1$ and $\kappa_2$ the same worlds are minimal models of $A$.
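Since equivalence reduces to equality of the induced TPOs, it can be checked by comparing layer structures. A sketch in our dictionary encoding:

```python
def equivalent(kappa1, kappa2):
    """Inferential equivalence of OCFs = equality of the induced TPOs."""
    def tpo(kappa):
        return [frozenset(w for w in kappa if kappa[w] == r)
                for r in sorted(set(kappa.values()))]
    return tpo(kappa1) == tpo(kappa2)

kappa1 = {"ab": 0, "ab'": 3, "a'b": 2, "a'b'": 1}
kappa2 = {w: 2 * r for w, r in kappa1.items()}  # same layers, empty in-between layers
print(equivalent(kappa1, kappa2))               # True
```

Doubling every rank leaves the layer structure untouched, so the two OCFs accept exactly the same conditionals despite assigning different ranks.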
Let $\kappa_1 \equiv \kappa_2$ be two equivalent OCFs, and let $A \in \mathcal{L}$ be consistent. Then a model $\omega$ of $A$ is a minimal model of $A$ with respect to $\kappa_1$ iff it is a minimal model of $A$ with respect to $\kappa_2$. In particular, $\kappa_1^{-1}(0) = \kappa_2^{-1}(0)$, and for any model $\omega$ of $A$, $\kappa_1(\omega) = \kappa_1(A)$ iff $\kappa_2(\omega) = \kappa_2(A)$.
Let $\omega$ be a minimal model of $A$ with respect to $\kappa_1$, that is, $\kappa_1(\omega) \leq \kappa_1(\omega')$ for all models $\omega'$ of $A$. Since equivalent OCFs induce the same TPO, then also $\kappa_2(\omega) \leq \kappa_2(\omega')$ for all models $\omega'$ of $A$ holds, and therefore $\omega$ is also a minimal model of $A$ with respect to $\kappa_2$.
Since $\kappa_1^{-1}(0)$ and $\kappa_2^{-1}(0)$ are the respective sets of minimal models of a tautology $\top$, $\kappa_1^{-1}(0) = \kappa_2^{-1}(0)$ follows as a special case.
Finally, observing that $\kappa_i(\omega) = \kappa_i(A)$ holds if and only if $\omega$ is a minimal model of $A$ with respect to $\kappa_i$, $i = 1, 2$, this yields $\kappa_1(\omega) = \kappa_1(A)$ iff $\kappa_2(\omega) = \kappa_2(A)$.
As an immediate consequence of Proposition 4, we obtain that inferentially equivalent ranking functions have the same belief sets.
If $\kappa_1 \equiv \kappa_2$ for OCFs $\kappa_1, \kappa_2$, then $\mathit{Bel}(\kappa_1) = \mathit{Bel}(\kappa_2)$.
Some characterizations of inferential equivalence via transformations can already be found in Lemma 1. The following lemma provides additional useful characterizations of inferential equivalence in terms of relationships between the numerical ranks.
For any ranking function $\kappa$, define $\mathit{rg}(\kappa) = \{\kappa(\omega) \mid \omega \in \Omega\}$. Let $\kappa_1, \kappa_2$ be two ranking functions. Then the following statements are equivalent:
(1) $\kappa_1$ and $\kappa_2$ are inferentially equivalent.
(2) There is a strictly increasing bijection $f \colon \mathit{rg}(\kappa_1) \to \mathit{rg}(\kappa_2)$ such that $\kappa_2(\omega) = f(\kappa_1(\omega))$ for all $\omega \in \Omega$.
(3) There is a transformation $F \colon \mathbb{N} \to \mathbb{N}$, strictly increasing on $\mathit{rg}(\kappa_1)$, such that $\kappa_2(\omega) = F(\kappa_1(\omega))$ for all $\omega \in \Omega$.
We prove this lemma in a cyclic way by showing (1) $\Rightarrow$ (2) $\Rightarrow$ (3) $\Rightarrow$ (1).
(1) $\Rightarrow$ (2): Let $\kappa_1, \kappa_2$ be two (inferentially) equivalent ranking functions, that is, $\kappa_1 \models (B|A)$ iff $\kappa_2 \models (B|A)$ for all conditionals $(B|A)$. Let $\mathit{rg}(\kappa_l) = \{r_0^l, \ldots, r_{m_l}^l\}$ with $r_0^l < \ldots < r_{m_l}^l$ for $l = 1, 2$. Note that due to equivalence, $\kappa_1$ and $\kappa_2$ have the same number $m + 1 = m_1 + 1 = m_2 + 1$ of non-empty layers, and the layers $\kappa_1^{-1}(r_i^1)$ and $\kappa_2^{-1}(r_i^2)$ contain exactly the same worlds for all $i \in \{0, \ldots, m\}$. Define $f \colon \mathit{rg}(\kappa_1) \to \mathit{rg}(\kappa_2)$ by $f(r_i^1) = r_i^2$. Then $f$ is a bijection and strictly increasing, and it holds that $f(\kappa_1(\omega)) = r_i^2 = \kappa_2(\omega)$ for all $\omega \in \kappa_1^{-1}(r_i^1)$. Hence $\kappa_2(\omega) = f(\kappa_1(\omega))$ for all $\omega \in \Omega$.
(2) $\Rightarrow$ (3): Let $\kappa_1, \kappa_2$ be two ranking functions, and assume there is a bijection $f \colon \mathit{rg}(\kappa_1) \to \mathit{rg}(\kappa_2)$, strictly increasing, such that $\kappa_2(\omega) = f(\kappa_1(\omega))$ for all $\omega \in \Omega$. We extend $f$ to a function $F \colon \mathbb{N} \to \mathbb{N}$ by setting
$$F(n) = \begin{cases} f(n) & \text{if } n \in \mathit{rg}(\kappa_1), \\ 0 & \text{otherwise}. \end{cases}$$
Then clearly, $F$ coincides with $f$ on $\mathit{rg}(\kappa_1)$ and hence is strictly increasing on $\mathit{rg}(\kappa_1)$. Due to $\kappa_1(\omega) \in \mathit{rg}(\kappa_1)$, for all $\omega \in \Omega$ we have that $F(\kappa_1(\omega)) = f(\kappa_1(\omega)) = \kappa_2(\omega)$, which was to be proven.
(3) $\Rightarrow$ (1): Let $\kappa_1, \kappa_2$ be two ranking functions, and assume there is a transformation $F \colon \mathbb{N} \to \mathbb{N}$, strictly increasing on $\mathit{rg}(\kappa_1)$, such that $\kappa_2(\omega) = F(\kappa_1(\omega))$ for all $\omega \in \Omega$. In particular, this means that $\mathit{rg}(\kappa_2) = F(\mathit{rg}(\kappa_1))$. We have to show that $\kappa_1, \kappa_2$ are (inferentially) equivalent, for which it suffices to show that $\kappa_1(\omega_1) \leq \kappa_1(\omega_2)$ iff $\kappa_2(\omega_1) \leq \kappa_2(\omega_2)$ for all $\omega_1, \omega_2 \in \Omega$, since then both ranking functions induce the same TPO. First, let $\kappa_1(\omega_1) \leq \kappa_1(\omega_2)$. Due to $\kappa_1(\omega) \in \mathit{rg}(\kappa_1)$ for all $\omega \in \Omega$, and since $F$ is strictly increasing on $\mathit{rg}(\kappa_1)$, this yields $\kappa_2(\omega_1) = F(\kappa_1(\omega_1)) \leq F(\kappa_1(\omega_2)) = \kappa_2(\omega_2)$. Conversely, $\kappa_2(\omega_1) \leq \kappa_2(\omega_2)$ implies $\kappa_1(\omega_1) \leq \kappa_1(\omega_2)$, since otherwise $\kappa_1(\omega_2) < \kappa_1(\omega_1)$ would yield $\kappa_2(\omega_2) < \kappa_2(\omega_1)$ because $F$ is strictly increasing on $\mathit{rg}(\kappa_1)$.
This concludes the proof.
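The bijection criterion of the lemma can be tested constructively: pairing the $i$-th used rank of one OCF with the $i$-th used rank of the other is the only candidate for a strictly increasing bijection. A sketch in our encoding:

```python
def rank_bijection(kappa1, kappa2):
    """Return the strictly increasing bijection f between the used ranks
    with kappa2 = f(kappa1) as a dict if it exists, else None."""
    r1 = sorted(set(kappa1.values()))
    r2 = sorted(set(kappa2.values()))
    if len(r1) != len(r2):
        return None                      # different numbers of layers
    f = dict(zip(r1, r2))                # strictly increasing by construction
    if all(kappa2[w] == f[kappa1[w]] for w in kappa1):
        return f
    return None

kappa1 = {"ab": 0, "ab'": 3, "a'b": 2, "a'b'": 1}
kappa2 = {"ab": 0, "ab'": 6, "a'b": 4, "a'b'": 2}
print(rank_bijection(kappa1, kappa2))    # {0: 0, 1: 2, 2: 4, 3: 6}
```

Here the bijection doubles every rank, so the two OCFs are equivalent, indeed linearly equivalent with scaling factor 2.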
Figure 1 illustrates the bijection mentioned in the lemma. If the ranks of the worlds lie along a strictly increasing curve in the plot, then the ranking functions are inferentially equivalent (Figure 1(a)). Otherwise, if no such bijection can be found, the ranking functions are not equivalent (Figure 1(b)).
Illustration of Lemma 6. The dots represent possible worlds (whose ranks in
resp.
would be given by the values along the axes). (a)
and (b)
.
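To make Lemma 6's criterion concrete, the following Python sketch (representing OCFs as dictionaries from possible worlds to natural-number ranks, an encoding chosen only for illustration) checks inferential equivalence by comparing the ordered layer partitions, which amounts to the existence of a strictly increasing bijection between the rank sets:

```python
def layers(kappa):
    """Group worlds into non-empty layers, ordered by ascending rank."""
    return [frozenset(w for w, r in kappa.items() if r == rank)
            for rank in sorted(set(kappa.values()))]

def inferentially_equivalent(k1, k2):
    """Two OCFs induce the same TPO iff their non-empty layers coincide in order."""
    return layers(k1) == layers(k2)
```

Empty layers are skipped automatically, since only ranks that actually occur are enumerated; this is exactly why equivalent OCFs may still assign different numerical ranks.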
Revision Equivalence of Ranking Functions
Equivalent ranking functions are usually not identical. They induce the same TPO but can assign different ranks to possible worlds because their empty layers sit at different positions. For the (static) conditional/nonmonotonic inferences they yield, these empty layers have no effect. However, when revising the ranking functions by the same information, empty layers may crucially influence the revised ranking function. This can be observed in the following example.
Let
be ranking functions over
as given in Table 1. It is straightforward to see that
, that is, the ranking functions are inferentially equivalent, with
. Now, we revise both ranking functions by
using the DP-operator from Definition 5, yielding
(see again Table 1).
Ranking Functions
and their DP-Revisions for Example 1.
prior 1   prior 2   revised 1   revised 2
   0         0          1           1
   3         6          4           7
   2         4          1           2
   1         2          0           0
(rows: the four possible worlds; columns: the two prior ranking functions and their DP-revisions)
Observe that for both prior ranking functions, we have
, that is,
,
. The acceptance of the conditional
still holds for
, but not for
. In more detail, we have
, but
. Therefore,
and
are no longer equivalent because they do not accept the same conditionals anymore. The position of empty layers in
has affected the revision outcome.
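Assuming the DP-operator of Definition 5 is the standard Darwiche–Pearl OCF revision by a proposition A (models of A are shifted down by the rank of A, all other worlds are shifted up by 1), Example 1 can be replayed in a few lines; the world names w1–w4 and the set of A-worlds are hypothetical labels matching the values in Table 1:

```python
def dp_revise(kappa, a_worlds):
    """Darwiche-Pearl style OCF revision by a proposition A (given by its models):
    A-worlds are shifted down by kappa(A), all other worlds are shifted up by 1."""
    k_a = min(kappa[w] for w in a_worlds)  # kappa(A) = rank of A's minimal models
    return {w: (r - k_a if w in a_worlds else r + 1) for w, r in kappa.items()}

# The two priors of Example 1; kappa2 = 2 * kappa1, so they are
# inferentially (even linearly) equivalent.
kappa1 = {"w1": 0, "w2": 3, "w3": 2, "w4": 1}
kappa2 = {w: 2 * r for w, r in kappa1.items()}
a_worlds = {"w3", "w4"}  # hypothetical models of the revision input A

r1 = dp_revise(kappa1, a_worlds)  # {"w1": 1, "w2": 4, "w3": 1, "w4": 0}
r2 = dp_revise(kappa2, a_worlds)  # {"w1": 1, "w2": 7, "w3": 2, "w4": 0}
# r1 ranks w1 and w3 equally, while r2 ranks w3 strictly above w1:
# the revised ranking functions are no longer equivalent.
```

The empty layers of kappa2 are precisely what separates w1 from w3 after revision, illustrating how the position of empty layers influences the revision outcome.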
The influence of empty layers on revision operators has important consequences for the representation of TPOs by ranking functions in belief revision scenarios. Both ranking functions in Example 1 represent the TPO
but revision of
by
yields
, while the revision of
by
produces
. Note that both revisions are DP-revisions, that is, the DP-revision framework cannot guarantee equivalent revision outcomes for equivalent ranking functions. Since this framework lifts AGM revision (Alchourrón et al., 1985) to the epistemic level, it is clear that also AGM revision cannot address this point.
Towards establishing a representation invariance of TPOs (which are fundamental to AGM belief change theory (Katsuno & Mendelzon, 1991)) with respect to ranking functions under belief revision, we introduce a stronger notion of equivalence, which we call revision equivalence. With this notion in place, we will be able to study classes of equivalent ranking functions which do not lose their equivalence during revision. As a general prerequisite and as indicated in Section 2, we presuppose that all occurring propositions and conditionals, respectively sets thereof, are consistent. This helps us to focus on the core ideas of the methodology while the more general case of allowing inconsistencies is left for future work.
(Revision Equivalence)
Let be two ranking functions (over the same signature), let
be an iterated revision operator taking ranking functions and (sets of) propositions resp. conditionals as input, and returning a revised ranking function as output.
are (propositionally) revision equivalent with respect to
, in symbols
, if
for all
.
are conditionally revision equivalent with respect to
, in symbols
, if
for all
.
are universally propositionally/conditionally revision equivalent with respect to
, in symbols
resp.
, if
for all
, resp.
for all
.
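Over a finite signature, propositional revision equivalence (Definition 10) can be checked by brute force, since consistent propositions correspond to non-empty sets of possible worlds. A sketch, again with OCFs encoded as rank dictionaries and the revision operator passed in as a function:

```python
from itertools import chain, combinations

def inferentially_equivalent(k1, k2):
    """Two OCFs induce the same TPO iff their non-empty layers coincide in order."""
    layers = lambda k: [frozenset(w for w in k if k[w] == r)
                        for r in sorted(set(k.values()))]
    return layers(k1) == layers(k2)

def revision_equivalent(k1, k2, revise):
    """Brute-force check of (propositional) revision equivalence: the revisions
    by every consistent proposition, i.e. every non-empty set of worlds, must
    again be inferentially equivalent."""
    worlds = sorted(k1)
    props = chain.from_iterable(combinations(worlds, n)
                                for n in range(1, len(worlds) + 1))
    return all(inferentially_equivalent(revise(k1, set(p)), revise(k2, set(p)))
               for p in props)
```

This is exponential in the number of worlds and serves purely as an executable reading of the definition, not as a practical decision procedure.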
In spite of the many postulates that, for example, the AGM theory or the DP framework provide, it is clear that (particularly) iterated revision operators dealing with ranking functions and TPOs can be quite arbitrary. To restrict arbitrariness a bit, we often postulate that any revision operator
to be considered for revision equivalence in the following satisfies a stability postulate which is inspired by AGM revision resp. expansion:
Let
be a proper input for a revision operator
for ranking functions, that is,
may be a proposition, or a conditional, or a set thereof. If
, then
.
(Stability) claims that the prior ranking function is not changed unnecessarily, that is, it remains unchanged if it already satisfies the success condition of the revision. For strategic c-revisions, (Stability) can be ensured by postulating the following for strategies:
If
then
.
Since our Definition 10 of revision equivalence is most general and can be applied to a wide range of revision operators, we also presuppose a compatibility of revision by conditionals with (multiple) propositional revision via the following postulate.6
for all
and all
.
In this way, we ensure that conditional revision operators can be used to define (iterated) propositional revisions. For example, the (strategic) c-revisions recalled in Sec. 4.4 can also be taken as (multiple) propositional revision operators.
Before we explore the technical properties of revision equivalence in more detail, we want to show that (propositional) revision equivalence generalizes (inferential) equivalence, and that conditional revision equivalence generalizes propositional revision equivalence, when the two postulates above are presupposed.
Let
be an iterated revision operator taking ranking functions and propositions resp. conditionals as input, and returning a revised ranking function as output.
If
satisfies (Stability), then revision equivalent ranking functions with respect to
are (inferentially) equivalent.
If
satisfies (PC), then conditionally revision equivalent ranking functions with respect to
are (propositionally) revision equivalent with respect to
.
The proof of this proposition is straightforward, and it is clear that (PC) can also be applied to sets of propositions, that is, to multiple revisions (Delgrande & Jin, 2012; Kern-Isberner & Huvermann, 2017). This implies that Proposition 7 also holds analogously for forms of universal equivalence.
This proposition shows that the notions of conditional revision equivalence, propositional revision equivalence, and inferential equivalence are downwards compatible. It helps us in our investigation of which ranking functions are revision equivalent with respect to a given revision operator, because we can focus on (inferentially) equivalent ranking functions.
The converse implications of Proposition 7 do not hold, that is, the different notions of equivalence are not “upwards compatible”. This is shown in the example below.
Consider a strategic c-revision operator
which employs the following selection strategy:
Given (PC), we can use this operator for propositional revision:
. Let
be inferentially equivalent ranking functions as given in Table 2. The table also shows that conditional revision by
via
yields different results for the two ranking functions. We have
, and also
in both revision scenarios. This results in
, but
.
Ranking Functions
and their Revisions by
for Example 2.
prior 1   prior 2   revised 1   revised 2
   1         1          0           0
   0         0          2           2
   2         2          1           1
   3         4          2           3
(rows: the four possible worlds; columns: the two prior ranking functions and their revisions by the conditional)
In the next subsection, we will investigate classes of revision-equivalent ranking functions.
Classes of Revision-Equivalent Ranking Functions
When representation invariance of TPOs is the goal, it is clear that (conditional) revision equivalence of ranking functions is highly desirable because it allows us to take any ranking function that implements a given TPO (via the transformation operator
) for purposes of revision, and any such revision would yield an OCF-representation of the same revised TPO. Hence, revision operators for ranking functions could be unambiguously used for defining (maybe complex) revisions of TPOs. This would not just be a matter of convenience, but enrich the realm of possible TPO revisions: For instance, c-revisions are able to revise ranking functions by sets of conditionals, which is far beyond what is currently possible for TPOs (see, e.g., Kern-Isberner et al. (2023) for first approaches).
However, the next proposition shows that this is not realistic in full generality, not even in simple cases when revising by just one proposition, and not even when we focus on strategic c-revisions which allow for specifying connections between different revision problems explicitly.
Let
be two different, but (inferentially) equivalent ranking functions such that their lowermost layer
has more than one element. Then there is a strategic c-revision operator
and
such that
.
Let
be as specified in the proposition. Since the uniform ranking function
defined by
for all
is the only ranking function with just one non-empty layer, and
are presupposed to be equivalent but different, both must have at least two (non-empty) layers. Then we can choose
, and
in another layer, which means that
and
for both ranking functions (
). We c-revise by
, that is, we investigate
for
with a strategic c-revision
such that
and
. Both
have the form (4), more precisely, for
,
and both constraint variables
comply with the constraint (5) which yields
in this case. Now we consider models of
. Due to the prerequisite that
has more than one element,
. For all
, we have
,
. According to the selection of
, we now have
, but
. Therefore,
and
cannot be equivalent.
Proposition 8 states that any two ranking functions as specified in the proposition cannot be (propositionally) revision equivalent under all c-revisions. Since any c-revision fully complies with the DP-postulates, such ranking functions are also not (propositionally) revision equivalent under all DP-revisions.
Nevertheless, exploring the prerequisites of Proposition 8 and its proof, we find a specific type of ranking functions that are (even conditionally) revision equivalent with respect to c-revisions.
If
are two equivalent ranking functions having exactly two non-empty layers
such that
contains exactly one element, then
for any
, and any strategic c-revision
whose strategy
satisfies (Stab).
Let
be equivalent ranking functions with exactly two non-empty layers
such that
. Let
. Each strategic c-revision
with a strategy
such that
has the form
We consider three cases, regarding whether
is a model of
,
, or
.
Case 1: If then
, and we have
,
. Because of
satisfying (Stab), this implies
for both
, and both revised ranking functions are still equivalent.
Case 2: If , we have that
holds, and also
,
hold,
. From this, we obtain for
which due to
for
, yields a unique TPO
such that
for both ranking functions. Therefore,
.
Case 3: Let . Then we have
and
for
. Hence also
. Furthermore,
yielding again a unique TPO
such that
Hence
also in this case.
The motivating Example 1 has clearly shown that revision equivalence cannot be expected in general. Moreover, Propositions 8 and 9 have shown that revision equivalence of ranking functions under any c-revision can be expected only in very special cases. Broadly speaking, the reason for these negative results is that OCF-revision operators in general can make use of all the information that is specified in an OCF, including the empty layers, even if we may not want them to do so.
Therefore, we shift to a more operator-focused perspective in the next section. Instead of trying to find specific (classes of) ranking functions, whose equivalence is preserved by arbitrary revision operators, we investigate specific (classes of) revision operators, which are able to preserve the equivalence of arbitrary ranking functions.
Preservation of Equivalence from the Perspective of Revision Operators
The results of the last section have shown that revision equivalence between ranking functions is not easy to achieve. We therefore turn to properties of revision operators which guarantee the preservation of inferential equivalence, and refine our research questions accordingly, focusing on the perspective of revision operators:
Given a specific revision operator
for ranking functions, is it ensured that
preserves equivalence in general, that is, for any two ranking functions
, and for any suitable input
, it holds that
implies
?
Given a specific class
of revision operators for ranking functions, can preservation of equivalence be ensured in general, that is, for any
, for any two ranking functions
, and for any suitable input
, it holds that
implies
?
These research questions motivate the following postulate.
(Preservation of Equivalence)
Let be a revision operator for ranking functions with input domain
. Then we say that
preserves equivalence among ranking functions if for any two ranking functions
and for any suitable input
, the following postulate holds:
If
, then
.
A revision operator satisfying (PoE) guarantees true representation invariance of TPOs: Regardless of which specific OCF
is chosen as a representation for a given TPO
, the revision of
would yield the same result up to inferential equivalence, that is, always the same induced ordering on possible worlds.
Propositions 8 and 9 yield negative answers to (RQ2) for the class of strategic c-revisions, and hence also for the broader class of DP-revision operators. Therefore, in the next section, we refine both the notion of revision equivalence and the strategies for c-revisions further so that (PoE) can be guaranteed in special cases.
But first, we focus on (RQ1) and investigate whether specific revision operators satisfy (PoE). The research question is answered negatively for
from Definition 5 and
from Definition 7.
The revision operators
and
do not satisfy (PoE).
We already know from Example 1 that the operator
cannot satisfy (PoE). Since the revision in that counterexample is also a DJ-revision (according to Proposition 3), the operator
also does not satisfy (PoE).
For strategic c-revisions by (single) conditionals, (RQ1) can be answered positively in clearly specified cases.
A strategic c-revision operator
for revising ranking functions by a single conditional satisfies (PoE) if for all ranking functions
and all conditionals
,
We show the sufficiency of the condition
for all
and all conditionals
to ensure (PoE). Let
be (inferentially) equivalent ranking functions and
any conditional. We set
,
. Let
be a selection strategy such that
,
. We need to show that
. According to (6), we have
for
. Note that
is trivially fulfilled because of
.
Let . If for both worlds,
or
, schema (9) yields immediately
iff
because of
. Let us now consider the other two cases.
If and
, according to schema (9), we have
iff
for all
. Since
for all
, this holds true in any case. Therefore,
for both
, and hence the equivalence holds.
In the fourth case and
, we can never have
for any
because this would mean
and hence
, a contradiction. In summary, this proves the proposition.
The lower bound
specified for impact factors
of c-revisions in Proposition 11 is tight in the sense that there are equivalent ranking functions
and conditionals
where the violation of this lower bound leads to loss of equivalence under revision, as the next example shows.
Consider the following ranking functions
over the signature
in Table 3. Clearly,
. We revise both ranking functions by
and obtain, according to schema (9),
with
for
; please note that
and
for both
. For the maximal ranks, we compute
and
, where
,
. We choose
and
. The resulting revisions are shown in the rightmost two columns of Table 3, and it can be clearly seen that
and
are no longer equivalent.
Ranking Functions
and their Revisions by
for Example 3.
prior 1   prior 2   revised 1   revised 2
   1         1          1           1
   1         1          3           2
   2         5          2           5
   0         0          0           0
(rows: the four possible worlds; columns: the two prior ranking functions and their revisions)
In line with results from the previous section, the postulate (PoE) cannot be expected to hold in general. However, strategic c-revisions can comply with (PoE) by using suitable general revision strategies. In the subsequent sections, we will move forward in two directions: First, we will work with a stronger notion of equivalence in the next section, with the goal of obtaining a more generally applicable postulate. Afterwards, in Section 8 and Section 9, we will propose fruitful approaches towards designing revision operators (from existing operators for the revision of TPOs and OCFs, respectively) with the goal of representation invariance in mind.
Linear Revision Equivalence and Preservation of Scale
As the previous sections have shown, representation invariance for TPOs by ranking functions is hard to achieve in general. In this section, we equip the notion of equivalence of ranking functions with more numerical structure by introducing linear revision equivalence. We define this novel concept and show how strategic c-revisions can be made compatible with it. Afterwards, we also evaluate other approaches from the literature regarding linear revision equivalence.
Linear Equivalence of Ranking Functions
Revision equivalence is a property of ranking functions, but is crucially parameterized by the used revision operator. If the (type of) revision operator is fixed, then the research question is in which cases equivalence of ranking functions can be preserved under revision. Example 1 showed that DP-revision operators are too general to guarantee revision equivalence, and Proposition 8 revealed that also c-revisions cannot preserve equivalence of ranking functions in general. Nevertheless, Proposition 9 gives rise to some hope that we can have revision equivalence under c-revisions for specific classes of ranking functions. Therefore, we focus on c-revisions first, and we also narrow down the notion of equivalence of ranking functions to a specific case that we have already observed in Example 1: There, we have
, yielding a specific form of equivalence where empty layers are inserted in a very regular way. We call this linear equivalence.
Two OCFs
over
are linearly equivalent, in symbols
, if there is a positive rational number
such that
.
Note that in Definition 12, we allow
to be rational, but
must be an OCF yielding only natural numbers for ranks. For an example of a rational scaling factor
, see Table 4.
Linearly Equivalent Ranking Functions
with a Scale of
as an Example for Definition 12.
first OCF   second OCF
    0           0
    3           2
    6           4
    9           6
(rows: the four possible worlds)
It is obvious that
and any of its multiples
are equivalent. Clearly,
is also an equivalence relation on OCFs. Suitable representatives of the equivalence classes are, for example, those
where the greatest common divisor of all
is
. Note that
is a subrelation of
, so all statements of Proposition 4 also hold for linearly equivalent OCFs. Linear equivalence corresponds to transformations among OCFs of the type
which are shown to be especially well-behaved in Haldimann et al. (2023b).
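Linear equivalence (Definition 12) is easy to decide numerically. The following sketch recovers the rational scaling factor if it exists (OCFs again encoded as rank dictionaries over the same set of worlds):

```python
from fractions import Fraction

def linear_factor(k1, k2):
    """Return the positive rational r with k2 = r * k1 if the OCFs are
    linearly equivalent (Definition 12), and None otherwise."""
    nonzero = [w for w, r in k1.items() if r != 0]
    if not nonzero:  # k1 is uniform: linearly equivalent only to itself
        return Fraction(1) if k1 == k2 else None
    w0 = nonzero[0]
    if k2[w0] == 0:
        return None
    r = Fraction(k2[w0], k1[w0])
    return r if all(k2[w] == r * k1[w] for w in k1) else None
```

Using `Fraction` keeps the factor exact, which matters because the factor may be a non-integer rational while both OCFs still assign only natural-number ranks.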
Proposition 4 states that for any equivalent OCFs
and for any model
of
,
iff
. From this, however, we cannot derive a relationship between
and
in general, mainly because of empty layers being possibly present in the OCFs. For the special case of linearly equivalent OCFs, a useful result can be shown here.
If
, then
for any formula
.
Let
be a minimal model of
with respect to
, that is,
. Due to Proposition 4,
, and the statement of the lemma follows immediately.
A crucial property of linear equivalence is that empty layers are inserted in a very regular way. For the special case of a convex ranking function
, the distance between any two successive layers of any linearly equivalent ranking function
, that is, the number of empty layers between them, is constant. The following lemma makes this more precise; its proof is straightforward.
Let
be two inferentially equivalent ranking functions with non-empty layers
,
, that is,
for all
,
,
.
are linearly equivalent iff there is
such that
In particular, if
is a convex ranking function and
, then we have
See Figure 2 for a visualization of a linear equivalence between two ranking functions in the style of Figure 1. Clearly, the ranks of the non-empty layers in one ranking function rise proportionally to the other, which is the core message of Lemma 13. Moreover, a comparison between Figure 1(a) and Figure 2 illustrates why the notion of linear equivalence is a strengthening of inferential equivalence: Linear equivalence comes with a stronger restriction of the possible bijections between ranks.
Illustration of the bijection between ranks for linearly equivalent ranking functions
.
Strategies for Linear Revision Equivalence
Regarding the negative result with respect to DP-revision and even linearly equivalent ranking functions from Example 1, a first hypothesis might be that c-revisions in general do a better job preserving linear equivalence under revision. However, the next example shows that this is not the case.
Let
be the same ranking functions over
as in Example 1, that is, we have that
, hence
. Now, we strategically c-revise both ranking functions by
, choosing
, and
. However, we again lose equivalence under revision, as Table 5 clearly shows. Now we have
, but
.
Ranking Functions
and their Strategic c-Revisions for Example 4.
prior 1   prior 2   revised 1   revised 2 (initial strategy)   revised 2 (adjusted strategy)
   0         0          1                  3                              2
   3         6          4                  9                              8
   2         4          1                  2                              2
   1         2          0                  0                              0
(rows: the four possible worlds)
Nevertheless, one might object that our selection strategy
is too arbitrary in setting
. We know that
, so choosing
seems to be more suitable. And indeed, as Table 5 shows, we now have equivalence of the revised ranking functions if we keep
. Even more, we find that the revised ranking functions are still linearly equivalent with the same factor
, that is,
.
The crucial insight from Example 4 is that indeed, strategic c-revisions yield revision equivalence of linearly equivalent ranking functions if the strategies respect multiples of ranking functions. This motivates the following postulate for strategic c-revisions:
.
Using this postulate, we are now able to formulate a first general positive result which even holds for universal conditional revision:
Linearly equivalent ranking functions are universally conditionally revision equivalent under any strategic c-revision
where the strategy
satisfies (Mult
). More precisely, if
, and
satisfies (Mult
), then it holds that
for any (consistent) set of conditionals
.
Let
and let
satisfy (Mult
). Now let
and
for a set of conditionals
. Furthermore, let
and
denote the
-th element of
and
, respectively, belonging to the
-th conditional
. Then for all possible worlds
it holds that
where
is the normalizing integer defined as
By substituting
with
and applying (Mult
) as well as distributivity (with respect to
) in both equations above, we obtain
where
is the corresponding normalizing integer for
defined as
Therefore,
, which was to be shown.
This theorem shows that a strategic c-revision
where the strategy
satisfies (Mult
) respects linear equivalence in a perfect way, that is, the factor
which is crucially related to number and position of empty layers (see Lemma 13) is the same for the revised ranking functions.
For strategic c-revisions
with a single conditional, the converse also holds, that is, the preservation of linear equivalence together with the factor
is only possible if the strategy
satisfies (Mult
).
Let
be a conditional, let
be an OCF, and let
be a strategic c-revision. Then
holds if and only if
satisfies (Mult
).
Direction
: Follows directly from Theorem 14.
Direction
: Assume
. This means, for
and
:
In the case that
the equality holds. In the case
it needs to hold that
That means it needs to hold that
. Thus,
must hold and, therefore,
satisfies (Mult
).
Proposition 15 does not extend to revisions with sets of multiple conditionals due to possible overlap among conditionals. For a trivial example, consider a revision with
. Then
holds by choosing
and
, which is a violation of (Mult
).
Preservation of Scale
The property of (Mult
)-strategic c-revisions expressed in Theorem 14, namely that the scaling factor of ranking functions is respected, may also be interesting for other revision scenarios. It yields a clear invariance property that can be checked in order to ensure revision equivalence of ranking functions in a precise numerical manner.
(Preservation of Scale)
A revision operator
is scale-preserving if for any ranking functions
and for any proper input
, it holds that
If
, then
.
Theorem 14 then states that strategic c-revisions
where the strategy
satisfies (Mult
) are scale-preserving under revision by sets of conditionals.
The other two revision operators we have considered for OCFs,
and
, are not scale-preserving. Since the OCFs
in Example 1 are linearly equivalent, this example also acts as a counterexample for (PoS).
Note that while (PoS) strengthens (PoE) significantly for linearly equivalent ranking functions, neither postulate implies the other in general. (PoE) only forces linearly equivalent ranking functions to remain equivalent under revision, but linear equivalence may not hold anymore, or the scaling factor might change. On the other hand, (PoS) does not say anything about preserving equivalence of ranking functions in general.
An example for revision operators that satisfy (PoE) but not (PoS) will be provided in Section 10 (see Example 7). The following counterexample shows that (PoS) does not imply (PoE).
Consider the two ranking functions
given in Table 6. Moreover, let
be a selection strategy satisfying (Mult
). It then follows from Theorem 14 that
must satisfy (PoS).
Ranking Functions
and their Strategic c-Revisions for Example 5.
prior 1   prior 2   revised 1   revised 2
   0         0          2           1
   1         1          3           2
   2         3          0           0
   3         5          1           2
(rows: the four possible worlds; columns: the two prior ranking functions and their strategic c-revisions)
Note that there exists no
such that
since
. Therefore, (Mult
) is not enough to restrict the choice of
based on
. Consequently, we may choose
without
violating (Mult
) or
violating (PoS).
However, the results of the revisions via
show that
because
while
. Hence,
violates (PoE).
In summary, with the stronger notion of linear equivalence, the power of strategic c-revisions can be used to preserve even the scaling factor between two linearly equivalent ranking functions, but this does not necessarily lead to the preservation of inferential equivalence. It seems to be necessary to design or modify a revision operator explicitly with the preservation postulates in mind, as can be done with strategic c-revisions by restricting selection strategies to those satisfying (Mult
).
For other revision operators from the literature, we will propose methods for achieving satisfaction of (PoE) and (PoS) in the following sections.
Modifying OCF-Revision Operators for Equivalence- and Scale-Preserving Revision
In the previous sections we introduced the postulates (PoE) for equivalence-preserving revision, and (PoS) for scale-preserving revision of ranking functions. We also showed that these postulates are not easy to satisfy in general. In this section, we will therefore propose simple methods for constructing revision operators that satisfy these postulates based on (arbitrary) existing OCF-revision operators.
Convex Equivalents
A first idea towards guaranteeing (PoE) would be to ignore empty layers completely and treat every ranking function
exactly the same as one representative of its equivalence class (with respect to
). The canonical representative of the equivalence class of
is its convex equivalent, that is the minimal ranking function that represents the same TPO as
. In this way, the revision of all ranking functions from the equivalence class of
leads to exactly the same result, that is, we have a perfect representation invariance for the TPO
.
(Convex Equivalent, )
Let
be an OCF. Then the convex equivalent of
is the OCF
.
Note that the output of
is convex by definition, and
is indeed equivalent to
because of Lemma 1. As an example, consider again Table 6: The OCF
is the convex equivalent of
. We now define a revision operator which revises
instead of the actual input
, and show that it preserves equivalence.
Let
be a revision operator for OCFs with input domain
. Then the revision operator
defined by
satisfies (PoE).
If
are equivalent, then
. Therefore,
.
In the construction above, the convex equivalent of
may of course be replaced by any other equivalent ranking function, as long as the same representative is picked for all ranking functions from the equivalence class of
.
It should be noted that
does not satisfy (Stability) in general, not even if
satisfies (Stability). This is an artifact due to the loss of all information about empty layers of the previous ranking function when applying
.
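The construction of Proposition 16 can be sketched directly: compute the convex equivalent by replacing each rank with its layer index, and wrap an arbitrary revision operator so that it always revises this canonical representative (a minimal sketch, with OCFs as rank dictionaries):

```python
def convex_equivalent(kappa):
    """Each world gets the index of its layer: the minimal OCF
    (no empty layers) inducing the same TPO as kappa."""
    index = {r: i for i, r in enumerate(sorted(set(kappa.values())))}
    return {w: index[r] for w, r in kappa.items()}

def poe_wrapper(revise):
    """Proposition 16: revising the convex equivalent instead of the input
    itself yields an operator satisfying (PoE), since equivalent OCFs share
    the same convex equivalent."""
    return lambda kappa, x: revise(convex_equivalent(kappa), x)
```

As noted above, the wrapped operator forgets all empty layers of the prior, which is why (Stability) may be lost even if the underlying operator satisfies it.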
We only need to modify Proposition 16 slightly to achieve preservation of linear equivalence as well. Since the revision result is exactly the same for every ranking function from an equivalence class, and
implies
, scaling this revision result by the prior scale of
resp.
would again yield linearly equivalent ranking functions. This prior scale can be computed in various ways, for example by calculating the distance between the most plausible worlds and the first non-empty layer, which we call the base significance level:
(Base Significance Level, )
Let be an OCF. Then the base significance level of
is defined as
Now scaling the revision result obtained via
from Proposition 16 additionally guarantees (PoS).
Let
be a revision operator for OCFs with input domain
. Then the revision operator
defined by
satisfies both (PoE) and (PoS).
The satisfaction of (PoE) follows directly from Proposition 16. For (PoS), let
be OCFs with
(for some
). Hence,
. Since
, we obtain
, that is, (PoS) holds.
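Corollary 17 then amounts to rescaling the wrapped revision result by the prior scale. The sketch below assumes the base significance level is the smallest non-zero rank; the convention of returning 1 for the uniform ranking function is an assumption of this sketch:

```python
def base_significance_level(kappa):
    """Distance from the most plausible worlds (rank 0) to the first
    non-empty layer above them; 1 for the uniform OCF (sketch convention)."""
    positive = [r for r in kappa.values() if r > 0]
    return min(positive) if positive else 1

def convex_equivalent(kappa):
    """Replace each rank by its layer index."""
    index = {r: i for i, r in enumerate(sorted(set(kappa.values())))}
    return {w: index[r] for w, r in kappa.items()}

def pos_wrapper(revise):
    """Corollary 17: revise the convex equivalent, then rescale the result
    by the prior base significance level; this yields (PoE) and (PoS)."""
    def revised(kappa, x):
        s = base_significance_level(kappa)
        return {w: s * r for w, r in revise(convex_equivalent(kappa), x).items()}
    return revised
```

For linearly equivalent priors, the convex equivalents coincide and the base significance levels differ exactly by the scaling factor, so the revision results scale accordingly.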
One side effect of the approach above is that
may introduce arbitrary empty layers, which are completely unrelated to the position of the empty layers in the original
. This effect can be eliminated by again scaling the convex equivalent of the revision result instead of the revision result itself.
Let
be a revision operator for OCFs with input domain
, which satisfies (PoE). Then the revision operator
defined by
satisfies both (PoE) and (PoS).
For OCFs
with
we have
since
satisfies (PoE). Therefore, we have
and consequently
, that is, (PoE) holds.
If , then there is some
such that
. Hence,
. Since
, we obtain
, that is, (PoS) holds.
Since
is always convex, the result of the revision by
from Proposition 18 always has a constant number of empty layers between each pair of successive non-empty layers (due to Lemma 13).
In Corollary 17 and Proposition 18, suitable transformations defined on natural numbers help to ensure (PoS). Such transformations and their usefulness for preserving both equivalence and scale among ranking functions will be investigated in more detail in the following.
Parametrized Transformations
The idea of Proposition 18 is to rescale the result of the transformation via
in order to arrive at a more suitable OCF representation of the TPO
. This idea can be generalized to the concept of parametrized transformations, which we will define in the following.
In general, a parametrized transformation between two sets of epistemic states
and
can be seen as a mapping
that considers some additional context
in order to transform the input epistemic state without altering the (conditional) beliefs. In the case of OCFs, the effect of this additional context can be expressed via a function on natural numbers.
(Parametrized -transformation)
Let be a set of strictly increasing functions on natural numbers with
for all
. Furthermore, let
be a non-empty set, and let
be a function called context function. Then the parametrized -transformation is defined via
Note that the standard transformation
can be seen as instantiation of
with
, that is, when the identity function is chosen in all contexts. The requirement of the functions in
to be strictly increasing ensures that the order of the possible worlds is not changed during the transformation. Otherwise the operation would not only be a transformation of the belief state’s representation, but a belief revision.
The next proposition shows that
is indeed a suitable generalization of
in the sense that the order of possible worlds is preserved, that is that
holds for all TPOs
.
Let
be a parametrized transformation according to Definition 16. Then
.
Let
be a TPO. Due to Lemma 6,
and
are inferentially equivalent. Therefore, due to Lemma 1, we have
.
Definition 16 allows for different scales of the resulting OCFs. The functions
may be chosen in a way that aligns best with the intended drop-off in plausibility for each subsequent layer in the original TPO. For example, one may choose functions of the form
for a linear or
for an exponential decrease in plausibility (for
in order to ensure that
is strictly increasing). For linear functions, we define the following notation.
(Linear -transformation)
A linear -transformation is a parametrized
-transformation using
with the context set
and the context function
, that is,
With Definition 17 in place, the revision operator from Proposition 18 can be expressed as follows:
Moreover, parametrized transformations will prove to be very useful in Section 9 when we consider TPO-revision operators.
Minimal Linear Equivalents
In the previous definitions of
and
in Equations (14) and (15), respectively, we utilized the convex equivalent of a ranking function. Next we will consider the smallest linear equivalent of a ranking function instead. For this we construct a function that lets us scale an OCF to its smallest linear equivalent by employing the greatest common divisor of all populated layers:
This definition ensures that
, defined as
, is always also an OCF. Now we construct the revision operator.
(Scale-preserving Counterpart)
Let be an OCF,
a revision operator with input domain
, and
a proper input for
. Let
be the greatest common divisor of all non-empty layers of
. We define the scale-preserving counterpart to
via
Next we show that any operator constructed according to Definition 18 preserves linear equivalence.
Let
be an OCF,
a revision operator with its scale-preserving counterpart
and
a proper input for
. Then
satisfies (PoS).
Let
be some OCF with
. Let
and
. Because
, we have
. Then we have
. Thus also
. Multiplying both sides with
yields
. By substituting
we get
and by applying the definition of
we get
as required.
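Under the same dict-based representation of OCFs, the construction of Definition 18 can be sketched as follows (the world names and the stand-in operator `toy_revise` are hypothetical): normalize by the gcd d of the populated layers, revise the minimal linear equivalent, and multiply the result back by d.

```python
from math import gcd
from functools import reduce

def layer_gcd(kappa):
    """gcd of all populated non-zero ranks (1 if only rank 0 is populated)."""
    ranks = [r for r in set(kappa.values()) if r > 0]
    return reduce(gcd, ranks) if ranks else 1

def scale_preserving(revise, kappa, inp):
    """Revise the minimal linear equivalent kappa/d, then restore the scale d."""
    d = layer_gcd(kappa)
    small = {w: r // d for w, r in kappa.items()}
    return {w: d * r for w, r in revise(small, inp).items()}

def toy_revise(kappa, models):
    """Hypothetical stand-in operator: best models of the input to rank 0."""
    m = min(kappa[w] for w in models)
    return {w: 0 if (w in models and kappa[w] == m) else kappa[w] + 1
            for w in kappa}

# Two linearly equivalent OCFs (factor 3) yield results that keep the factor 3.
r1 = scale_preserving(toy_revise, {"a": 0, "b": 2, "c": 4}, {"b", "c"})
r2 = scale_preserving(toy_revise, {"a": 0, "b": 6, "c": 12}, {"b", "c"})
```

Both inputs are scaled down to the same minimal linear equivalent, so the revised results differ exactly by the original factor, as (PoS) demands.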
While (PoS) is satisfied by
without any prerequisites for the underlying revision operator
, (PoE) is not necessarily satisfied unless
already satisfies (PoE).
If
satisfies (PoE), then
satisfies (PoE).
Let
be OCFs with
and let
and
. Clearly
. Therefore, if
satisfies (PoE), then also
for any fitting input
. Finally then
.
It should be noted that the construction according to (18) is stronger than (PoS) demands. Not only is the scale preserved, but also empty layers that are not part of the scaling are respected by (18). This is in contrast to
and
which only consider convex equivalents, thus getting rid of all empty layers, not only those that are part of the scaling. We illustrate this with an example.
Let
be a strategic c-revision. Let
be an input for
, such that
. Then
satisfies (PoE) (cf. Proposition 11). Let
and
be the revision operators obtained by applying (15) and (18) to
respectively. Let
and
be ranking functions with
as seen in Table 7. The results of the revision of
and
with
according to
and
are shown in Table 7. The revision result of
is the same as
. While both revisions preserve the scale between
and
, the additional empty layer in
is not preserved by
while
respects it during revision.
Ordinal Conditional Functions (OCFs)
and
with Revisions by
and
from Example 6.
1 2 0 0 0 0
3 6 2 4 1 2
0 0 3 6 2 4
1 2 4 8 3 6
On the other hand,
always satisfies (PoE) even if the underlying revision operator does not, which is not the case for
and unlike
,
will retain potential arbitrary empty layers introduced by
and cannot be characterized in terms of a parametrized transformation.
In the previous sections, we have investigated revision of ranking functions and their arithmetic properties, and shown how accurately revision equivalence can be ensured and established by making use of linear equivalence. In the rest of this article, we broaden our view to focus more on the qualitative structure of ranking functions by studying connections between TPO-revision operators and OCF-revision operators.
Defining OCF-Revision Operators From TPO-Revision Operators
In this section, we define OCF-revision operators from TPO-revision operators, with the goal of preserving inferential equivalence of ranking functions. A first approach to this via explicit transformations is pursued in Subsection 9.1. In Subsection 9.2, we elaborate on a generalization of the first approach which we call mimicry of TPO operators.
OCF Revisions Via Explicit Transformations
In principle, every revision operator
for TPOs can be used to define a revision operator
for ranking functions by utilizing the transformation functions
and
. First, the ranking function
is transformed into a TPO via
, then the TPO revision is performed directly on
, and finally the result is transformed back into an OCF via
.
()
Let
be a revision operator for TPOs. We can define an OCF-revision operator
induced by
via
where
is an OCF and
is an appropriate input for
.
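A minimal sketch of this pipeline, assuming OCFs as dicts from hypothetical world names to ranks and TPOs as lists of layers (the toy operator `promote` is an assumption for illustration, not an operator from the paper):

```python
def ocf_to_tpo(kappa):
    """Induced TPO as a list of layers, most plausible layer first."""
    return [sorted(w for w in kappa if kappa[w] == r)
            for r in sorted(set(kappa.values()))]

def tpo_to_ocf(layers):
    """Standard transformation: layer i receives rank i (a convex OCF)."""
    return {w: i for i, layer in enumerate(layers) for w in layer}

def induced_revision(tpo_revise, kappa, inp):
    """OCF -> TPO, revise on the TPO side, transform back to an OCF."""
    return tpo_to_ocf(tpo_revise(ocf_to_tpo(kappa), inp))

def promote(tpo, w):
    """Toy TPO-revision: make world w the single most plausible world."""
    rest = [[x for x in layer if x != w] for layer in tpo]
    return [[w]] + [layer for layer in rest if layer]
```

Since equivalent OCFs induce the same TPO, two equivalent inputs pass through identical TPO revisions and produce identical convex results, which is how this construction achieves (PoE).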
The definition is quite similar in spirit to the OCF-revision of convex equivalents from Proposition 16 in the sense that differences between equivalent OCFs are eliminated via a
-transformation; but instead of an OCF-revision operator, a TPO-revision operator is used in Definition 19. Just like
from Proposition 16, the revision operator
does not satisfy (Stability) in general, even if
does not alter the preorder when revising with information that is already accepted. Again, this is a side effect of the loss of information about the position of the empty layers in the prior OCF. Moreover, the following proposition shows that revision via
preserves equivalence among ranking functions during belief revision.
Let
be a revision operator for TPOs, and let
be an appropriate input for
. Then the OCF-revision operator
induced by
via Definition 19 satisfies (PoE).
Let
be equivalent OCFs. Definition 3 immediately implies that
. Hence, the revision using
delivers not just equivalent, but identical results for both OCFs, that is,
.
Proposition 22 allows us to obtain OCF-revision operators from TPO-revision operators via Equation (19) where the revision result on the side of the TPO does not depend on the chosen ranking representation of the TPO.
Because of the similarity between Definition 19 and the approaches described in Subsection 8.1, it is immediate that parametrized transformations from Definition 16 can be incorporated here as well. By simply replacing the
-transformation in Equation (19) with a parametrized transformation, we obtain the following definition.
()
Let be a revision operator for TPOs and let
be an appropriate input for
. We can define an OCF-revision operator
constructed from
via
where
is a parametrized transformation as defined in Definition 16.
Definition 19 now amounts to a special case of Definition 20 where
. Note that the operator
does not necessarily transform every TPO-revision result in the same way, but may utilize
in the context of the parametrized transformation. Therefore, analogously to Proposition 18 (and Equation (17)), we obtain the following result for OCF-revision operators defined from TPO-revision operators using parametrized transformations.
The OCF-revision operator
(obtained from any TPO-revision operator
via Definition 20) using the linear transformation
for each prior OCF
satisfies both (PoE) and (PoS).
Let
be equivalent OCFs, let
be defined as in the proposition, and let
be a suitable input. Then
follows via Lemma 6, that is, (PoE) holds.
If there is a
such that
, then
. Hence,
, that is, (PoS) holds.
The advantage of the approach described in this subsection is the simplicity of Definition 19. It is quite obvious how this operator manages to achieve equal results for equivalent inputs, and it can be easily adapted to every TPO revision operator.
However, one may argue that this approach—while elegant and convenient for theoretical considerations—is not very constructive for practical implementations since the operator
relies on directly applying a TPO revision operator
, which needs to be implemented as well. Therefore, in the next section we will look at how we can generalize this approach and stay within the framework of ranking functions without needing to explicitly transform OCFs into TPOs.
OCF Revisions Via Mimicry of TPO Operators
The idea for generalizing the approach from the previous subsection is to design an OCF-revision operator
from scratch such that it mimics the characteristic behavior of
as is outlined in the following definition.
(Mimic Operator )
Let be a revision operator for TPOs with an arbitrary input domain
. Then an OCF-revision operator
mimics if it has the same input domain, and for every OCF and every
it holds that
Note that (21) above is equivalent to
. Moreover, it should be noted that this equation does not have a unique solution, that is, there is more than one possible mimic operator; in general, there are infinitely many mimic operators for a given TPO-revision operator.
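The mimicry condition can be checked instance-wise by comparing induced TPOs. The following sketch assumes OCFs as dicts and TPOs as lists of layers; the operators `nat_ocf` and `nat_tpo` are hypothetical toy implementations in the spirit of natural revision, used only to exercise the check:

```python
def ocf_to_tpo(kappa):
    """Induced TPO as a list of layers, most plausible layer first."""
    return [sorted(w for w in kappa if kappa[w] == r)
            for r in sorted(set(kappa.values()))]

def mimics_on(ocf_revise, tpo_revise, kappa, inp):
    """One instance of the mimicry condition: the TPO induced by the
    OCF-revision result equals the TPO-revision of the induced TPO."""
    return ocf_to_tpo(ocf_revise(kappa, inp)) == tpo_revise(ocf_to_tpo(kappa), inp)

def nat_ocf(kappa, models):
    """Toy natural-style OCF operator: minimal input-models to rank 0, rest +1."""
    m = min(kappa[w] for w in models)
    return {w: 0 if (w in models and kappa[w] == m) else kappa[w] + 1
            for w in kappa}

def nat_tpo(tpo, models):
    """Toy natural-style TPO operator: minimal input-models to a new bottom layer."""
    best = next(sorted(set(layer) & models) for layer in tpo if set(layer) & models)
    rest = [[w for w in layer if w not in best] for layer in tpo]
    return [best] + [layer for layer in rest if layer]
```

A full mimicry proof must of course cover all OCFs and inputs; this check only tests single instances.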
A mimic operator
needs to (implicitly) abstract from the numerical ranks since it is tied to a TPO revision operator which cannot differentiate between equivalent ranking functions. Therefore,
yields equivalent revision results for equivalent input ranking functions, as is shown in the following proposition.
Let
be an OCF-revision operator that mimics a TPO-revision operator
. Then
satisfies (PoE).
Let
be a suitable input for
, and let
be OCFs with
. Note that Equation (21) implies
for all
. Moreover,
holds iff
. Therefore, the following holds:
Hence,
, that is, (PoE) holds.
Moreover, the converse of Proposition 24 also holds. In other words, the OCF-revision operators which satisfy (PoE) are exactly those that behave like TPO-revision operators.
Let
be an OCF-revision operator that satisfies (PoE). Then there exists a TPO-revision operator
which is mimicked by
.
We show the result by contraposition. Let
be an arbitrary TPO-revision operator. If (PoE) does not hold for
, then also Equation (22) cannot hold in general for OCFs
with
. Therefore, Equation (21) cannot hold either, and
does not mimic
.
The conceptual difference between Definition 21 and the definitions from Subsection 9.1 lies in the intentional vagueness of the implementation of
. Instead of performing transformations explicitly (like, e.g.,
does), the operator
might achieve the same qualitative result, that is, the same order of possible worlds, in a purely arithmetic way within the OCF framework. Figure 3 illustrates the difference in the explicit behavior of the operators, and examples for applications of mimicry operators will be provided later in Section 10.
Illustration of a revision via explicit transformations (Definition 19) compared to revision via a mimic operator (Definition 21). Solid arrows indicate explicit actions, while dashed arrows indicate implicit relationships. The symbol
stands for any
-transformation (
or
). (a) Revision via explicit transformations and (b) revision via mimicry.
Observe that the approach from the previous subsection actually implements mimicry. More precisely, the operator
from Definition 19 mimics the operator
it is induced by.
Let
be a TPO-revision operator. Then the OCF-revision operator
induced by
via Definition 19 mimics
, that is, for every TPO
, every
, and every new information
, it holds that
.
Although Equations (19) and (21) look very similar, and
holds if both operators
and
are constructed from the same TPO-revision operator
, these approaches do not necessarily yield the same result. It is important to note that the operator
may introduce in-between empty layers during revision, while
always yields a convex OCF, so
in general. Figure 4 illustrates the relationships between a TPO-revision operator
, the OCF-revision operator
induced by
via Definition 19, and other OCF-revision operators
mimicking
.
Illustration of the relationship between
,
, and
, as established in Lemma 26.
OCF Versions of Elementary Revision Operators
After outlining approaches for the definition of OCF-revision operators from TPO-revision operators in Section 9, we are now going to provide practical examples based on the three well-known elementary revision operators.
Elementary Revision Operators for TPOs
The term elementary revision operators (Chandler & Booth, 2020, 2023) stands for a group of three basic operators for iterated belief revision of epistemic states
, which are equipped with TPOs
, with a single proposition
: Natural revision
(Boutilier, 1993), lexicographic revision
(Nayak et al., 2003), and restrained revision
(Booth & Meyer, 2006). We will use their characterization from Chandler and Booth (2020) by the following properties.
iff (1)
, or (2)
and
.
iff (1) and , or (2) ( iff ) and
.
iff (1)
, or (2)
and either (a)
or (b)
and (
or
).
Figure 5 illustrates the behavior of the elementary revision operators. Roughly, natural revision ensures that only the minimal models of
are in the bottom layer while everything else is left unchanged. Lexicographic revision preserves the respective preordering among the models of
and the models of
but shifts all models of
above the models of
. Restrained revision also ensures that only the minimal models of
are in the bottom layer and preserves the respective preordering among the models of
and the models of
. Moreover, strict relations
are preserved in any case. However, if models of
and models of
are at the same level in the prior
, then models of
are strictly preferred to models of
in the posterior TPO
.
Illustration of the result of revising a TPO
using the elementary TPO-revision operators. Lower worlds are more plausible. Dashed arrows indicate how the orderings are changed by the revision operators. (a) Natural revision, (b) lexicographic revision, and (c) restrained revision.
When revising by
, both lexicographic and restrained revision make models of
more plausible than models of
, at least in cases when they are equally plausible before the revision. This behavior of revision operators was recognized under the name enforcement (Enf) in Kern-Isberner and Krümpelmann (2011) where general revisions of ranking functions by sets of conditionals were investigated. Reduced to the case of revising TPOs by single propositions or conditionals, the postulate of enforcement reads as follows:
Let
.
propositional revision: Let
. If
then
.
conditional revision: Let
. If
then
.
The propositional version of (Enf) is also known in the literature as the semantic version of the postulate (P) from Booth and Meyer (2006), and of the postulate (Ind) from Jin and Thielscher (2007), respectively. Both lexicographic and restrained revision satisfy the propositional version of (Enf) while natural revision does not.
Elementary OCF-Revision Operators Via Mimicry
In this section we are going to define OCF versions of the elementary TPO-revision operators, and verify that these OCF-revision operators actually mimic their respective TPO-counterparts in the sense of Definition 21.
As can be seen in the example applications of the three elementary revision operators in Figure 5, for all three operators, the minimal models of the new information are pulled down to the lowest layer. This can be implemented in OCFs via
for all
with
, and
for all other worlds
which are not minimal models of
.
For natural revision, note that the order on worlds is only changed if the new information
is indeed not yet believed in the original epistemic state. This immediately leads us to the following definition of natural revision for OCFs.
()
Let be an OCF and let
be a proposition. The natural OCF-revision operator
is defined as follows:
If
, then
.
If
, then
The definition for the case
ensures (Stability), that is, nothing changes if
already accepts the new information
. The following result is immediate because
iff
.
If
then
, that is,
satisfies (Stability).
In the general case, that is, if
, the minimal models of
are put in the lowermost layer, and all other layers are shifted by
.
Therefore, all non-minimal models of
and all models of
have a rank above
, while still keeping their internal ordering (since all ranks are shifted by the same constant value).
The operator
mimics
.
Let
be an epistemic state equipped with a TPO
and let
. Furthermore, let
be a proposition and let
be possible worlds. Let
and let
. For the case
, the proof is immediate from Proposition 27. For the case
, we prove that
, that is
, by showing that
complies with the equivalence stated in (NR).
“”: (1) If
, we have
. This results in
for all
. Therefore,
. (2) If
and
, we have
. Therefore,
and
. Hence,
.
“
”: We prove the other direction by contraposition. Let
. This results in
. (i) If
, we have
and consequently
. (ii) If
, the negation of the second condition in (NR) means we have
. As a result,
and
.
In lexicographic revision, all models of
become more plausible than all non-models of
, while both groups of worlds keep their respective internal ordering. This can be implemented by increasing the rank of all non-models of
by the same constant such that the minimal model of
is ranked higher (i.e. is less plausible) than the least plausible world satisfying
. Note that, even if the information
is already believed in the original epistemic state, the ordering of the possible worlds is adjusted. Only when all models of
are already more plausible, nothing is changed. This can be implemented on OCFs by checking whether the maximal rank among the models of
is below
.
()
Let be an OCF and let
be a proposition. The lexicographic OCF-revision operator
is defined as follows:
If
, then
.
If
, then
In the first case of Definition 23, that is, if
, all models of
have a lower rank than any model of
, so the goal of lexicographic revision is already fulfilled a priori, and
does not change
. This special case is dealt with in the following proposition.
If for all models
of
,
holds, then
.
Note that Proposition 29 does not necessarily guarantee (Stability). This is not an artifact, since
deliberately makes all models of the new information
more plausible, even if
is already believed.
In the general case of Definition 23, that is, if
, the ranks of models of
are shifted by
so that the minimal models of
are now on the lowermost layer, and the models of
are shifted up beyond all models of
.
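Under the same dict representation, Definition 23 can be sketched as follows; the exact shift constant for the non-models is an assumption (any constant pushing them strictly above all input-models induces the same TPO):

```python
def lex_revise(kappa, models):
    """Lexicographic OCF revision: input-models slide down so their minimum
    is 0; all other worlds move strictly above the worst input-model."""
    others = [w for w in kappa if w not in models]
    if others and max(kappa[w] for w in models) < min(kappa[w] for w in others):
        return dict(kappa)           # all input-models already strictly below
    k_phi = min(kappa[w] for w in models)
    top = max(kappa[w] for w in models) - k_phi  # worst input-model after shift
    return {w: kappa[w] - k_phi if w in models else kappa[w] + top + 1
            for w in kappa}

kappa = {"w1": 1, "w2": 3, "w3": 0, "w4": 2}
revised = lex_revise(kappa, {"w1", "w2"})  # {"w1": 0, "w2": 2, "w3": 3, "w4": 5}
```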
The operator
mimics
.
Let
be an epistemic state equipped with a TPO
and let
. Furthermore, let
be a proposition and let
be possible worlds. Moreover, let
be the maximal rank of models of
in
. Let
and let
. We prove that
, that is
, by showing that
complies with the equivalence stated in (LR).
“”: (1) If
and
, we have
. Therefore,
. Hence,
and
. (2) If (
iff
and
, we have
. During the revision, the same constant value is added to both sides of this inequation. Therefore,
and
.
“”: We prove this direction by contraposition. Let
and
. Analogously to the first case in the other direction, the revision results in
.
For restrained revision, all layers containing models and non-models of
are split up into two layers where the lower layer contains the models of
and the upper layer contains the non-models of
. This dynamic insertion of new layers does not require special effort in TPOs. But in OCFs (without, e.g., rational ranks), when we want to insert a new layer, we need to shift the ranks of all layers above. This can be achieved either by an iterative process (shifting ranks when necessary) or by scaling the whole OCF to create empty in-between layers that may be filled when necessary. We opt for the latter option here since it leads to a more concise definition.
()
Let be an OCF and let
be a proposition. The restrained OCF-revision operator
is defined by
The first case in the definition above again concerns itself with the minimal models of
. The second case preserves the ordering among all non-minimal models of
. The third case preserves the ordering among all models of
while also making them slightly less plausible than models of
which were at the same rank before, such that layers which contain both models of
and
are split. After revision, layers with even ranks only contain models of
while layers with odd ranks only contain models of
. Note that, technically, the operator
scales up the ranking function even if it already accepts
and already consists of alternating layers of
-models and
-models. However, the order of the worlds does not change in that case. Therefore, on the qualitative level, this operator behaves just like restrained revision.
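A sketch of Definition 24 under the dict representation; the concrete constants (doubled ranks for input-models, the odd rank just above for non-models, rank 0 for minimal input-models) are an assumption chosen to match the qualitative description above:

```python
def res_revise(kappa, models):
    """Restrained OCF revision: double all ranks to create empty in-between
    layers; input-models take the even rank, non-models the odd rank just
    above, and the minimal input-models drop to rank 0."""
    m = min(kappa[w] for w in models)
    def rank(w):
        if w in models and kappa[w] == m:
            return 0                   # minimal input-models to the bottom
        if w in models:
            return 2 * kappa[w]        # other input-models: even ranks
        return 2 * kappa[w] + 1        # non-models: odd ranks, above ties
    return {w: rank(w) for w in kappa}

kappa = {"w1": 1, "w2": 3, "w3": 0, "w4": 2}
revised = res_revise(kappa, {"w1", "w2"})  # {"w1": 0, "w2": 6, "w3": 1, "w4": 5}
```

Prior ties between an input-model and a non-model become 2k versus 2k + 1, so the input-model ends up strictly more plausible, exactly as restrained revision requires.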
The operator
mimics
.
Let
be an epistemic state equipped with a TPO
and let
. Furthermore, let
be a proposition and let
be possible worlds. Let
and let
. We prove that
, that is
, by showing that
complies with the equivalence stated in (RR).
“”: (1) If
, then
. Hence,
, resulting in
for all
. (2) Let
. (a) If
, then
. Therefore,
. Hence,
. (b) If
, then
. If
, then
. Because
, we have
. Again, this means
. If
, then
. Because
, we have
. Again, this means
.
“”: Let
, that is
. If
, then
must hold, that is, case (1) of
is fulfilled. If
, then also
, and
holds, which is the first part of case (2). Now we look at the relative ranks of
and
in this case. (i) If
, then
, that is, (2a) holds. (ii) If
, then
. Now (2b) must hold, since (
and
) would have resulted in
, contrary to our assumption. (iii) The third option would be
. If this were the case, then
would have to hold. Because of
and
, we would have
, contradicting our initial assumption of
. Therefore, the case
is impossible, and the only possible cases are (1), (2a), and (2b).
Propositions 28, 30, and 31 show that the operators
,
, and
are indeed suitable OCF-realizations of the TPO-revision operators
,
, and
, respectively.
Moreover, all three operators ,
, and
preserve equivalence among OCFs. The proof for this is immediate from Propositions 28, 30, and 31 together with Proposition 24.
The operators
,
, and
satisfy (PoE).
However, although they preserve inferential equivalence, the three OCF operators
,
, and
do not preserve linear equivalence among OCFs, as is shown in the following example.
Table 8 shows two linearly equivalent ranking functions
with
. We can see that
and
and
. However, there is no
such that
or
or
.
Ranking Functions
and their Natural, Lexicographic, and Restrained OCF-Revisions with
for Example 7.
1 2 0 0 0 0 0 0
3 6 4 7 2 4 6 12
0 0 1 1 3 5 1 1
2 4 3 5 5 9 5 9
Although this counterexample means that the revision operators
,
, and
do not satisfy (PoS), it is straightforward that scale-preserving versions of these operators can be constructed via the methods described in Section 8.
It was shown in Chandler and Booth (2023) that the elementary revision operators for TPOs can be characterized by a strong invariance postulate which also connects equivalent revision scenarios. Therefore, we briefly explore the following final research question:
If
satisfies (PoE), is then
an elementary revision operator?
In general, this question has to be answered negatively, since, for example, the “do nothing” operator
preserves equivalence, but
is not even a DP revision operator. But even if we presuppose the DP postulates, we can find equivalence-preserving revision operators which do not behave like elementary revision operators.
Recall
from Definition 5. While
does not satisfy (PoE), we can construct the revision operator
according to (12) which satisfies (PoE). Clearly,
also satisfies the postulates (DP1)–(DP4). Now consider the ranking functions
and their DP-revisions from Table 9. The revision result
coincides with
since
is its own convex equivalent. However, this revision does not coincide with either lexicographic, natural or restraint revision, as can be seen by comparing Table 9 with the revision results in Table 8.
Ranking Functions
and their Revisions with
via
for Example 8.
1 2 0 0
3 6 2 4
0 0 1 1
2 4 3 5
Discussion
In this section we discuss the results of this article. The first subsection is focused on the utility of our proposed postulates. In the second subsection, we will compare some notions from our article to similar concepts in the literature.
Utility of the Preservation Postulates
Whenever a postulate is proposed, a natural question to ask is whether—or in which cases—the postulate is a reasonable requirement for revision operators. As expected, the utility of (PoE) and (PoS) cannot be established prima facie, but rather comes down to the specific requirements of a revision scenario.
OCFs model quantitative degrees of belief, and the quantitative information may be important for a revision scenario. However, sometimes OCFs are simply used for convenience, since it may be easier to specify revision operators using arithmetic operations on ranks, even though the user only cares about the qualitative ordering among worlds. In this case, one would want the operator to satisfy (PoE) to make sure that the TPO-to-OCF transformation has no unwanted side-effects, that is, that the revision operator does not use more information from the OCF than just the qualitative ordering among worlds. In other words, the revision result should be representation-invariant. However, if the quantitative information represented by the OCF has concrete meaning, that is, when it is abstracted from real-world data, one may want that information to influence the revision result. In such a case, (PoE) is unnecessarily restrictive.
For (PoS), a stronger case can be made when the ranks are assumed to have meaning: If one OCF is considered to be more fine-grained than another, that is, when the distances between the OCFs' layers reflect different orders of magnitude with respect to the plausibility of the possible worlds, one may want to keep this property in the revision result. This applies especially when ranks are abstractions of real-world data, for example from probabilities: In such a case, the scale of an OCF carries information about the relationship between the abstraction and the underlying data. On the other hand, when the ranks have no practical meaning whatsoever, of course, (PoS) is not necessary.
Connections to Similar Concepts From the Literature
The concept of revision equivalence that was introduced in Section 5 is similar to “strong equivalence” from Schwind et al. (2022).
Let be epistemic spaces, and let
be revision operators for
and
, respectively, with
as their input domain. Then
and
are strongly equivalent with respect to the pair
if
and for each finite sequence
with
,
, it holds that
It is clear that strong equivalence is more general than propositional revision equivalence. On the one hand, it is a notion of equivalence between epistemic states from different epistemic spaces, and on the other hand, it considers sequences of inputs, that is, strongly equivalent epistemic states always stay equivalent after a finite number of revision steps. In contrast to that, revision-equivalent ranking functions stay equivalent for one revision step, but may diverge after consecutive revision steps. However, the postulate (PoE) is a way to guarantee strong equivalence between ranking functions in the sense of Definition 25.
Let
be a revision operator for OCFs that satisfies (PoE). Then any two ranking functions
with
are strongly equivalent with respect to
.
If
, then
holds (see Corollary 5). Let
with
. Let
, and
for
and
. The postulate (PoE) guarantees that
for all
. In particular,
, which means that
and
are strongly equivalent.
The concept of mimicry introduced in Section 9 is very similar to the notions of “translation” from Schwind et al. (2022) and to “simulation” from Aravanis et al. (2019), which also capture the idea that a revision operator for one epistemic space can behave like another operator for a different epistemic space. We only give the definition of translation below since it is the more general notion. For the definition, let
denote the set of consistent propositions from
.
Let be epistemic spaces and let
be a revision operator for
. A translation of
into
is a pair
consisting of a mapping
and
a revision operator for
such that for all
and all consistent formulas
it holds that
.
The mapping
in the definition above functions as a transformation operator between epistemic states. When comparing Definition 26 to the definition of mimic operators (see Definition 21), it becomes quite clear that the concept of translation can be seen as a generalization of mimicry to arbitrary epistemic spaces (for revision by consistent propositions).
Let
be a propositional TPO-revision operator and let
be a propositional OCF-revision operator. Then
mimics
if and only if
is a translation of
into
.
Recall that
mimics
if and only if for every
and
it holds that
, which is equivalent to
.
Furthermore, the results from Schwind et al. (2022) imply that for every propositional DP-revision operator for TPOs, corresponding mimicry operators can be found.
Conclusions and Future Work
In this article, we introduced the concept of revision equivalence for ranking functions, and the postulate (PoE) which ensures that empty layers of ranking functions do not affect the induced qualitative TPO after revision. This allows for using the convenient and powerful framework of ranking functions for revising TPOs, as then any ranking representation of a TPO can be used for revision by equivalence-preserving revision operators. However, as our investigations showed, such a kind of representation invariance is not easy to achieve in general, for example, under iterated revisions according to the DP-framework (Darwiche & Pearl, 1997). Therefore, we proposed methods for constructing equivalence-preserving revision operators both from existing OCF-revision operators by considering convex equivalents as representatives for equivalence classes of ranking functions, and by mimicking the behavior of TPO-revision operators.
Furthermore, we introduced more structural information into the notion of equivalence by considering linearly equivalent ranking functions and found that c-revisions (Kern-Isberner, 2004) are perfectly adequate to preserve linear equivalence even for revising by sets of conditionals. For a general approach to scale-preserving revision operators, we introduced the postulate (PoS) and proposed methods for modifying revision operators in order to satisfy this postulate. In particular, minimal linear equivalents were shown to be useful representatives for classes of linearly equivalent OCFs, and parametrized TPO-to-OCF transformations were introduced as a tool for making revisions scale-preserving.
In order to demonstrate our methods, we showed how equivalence- and scale-preserving OCF-versions of the well-known elementary TPO-revision operators can be constructed.
As part of our future work, we plan to evaluate and make use of more approaches to iterated revision from the literature, both revision operators for ranking functions and for TPOs, and also to consider iterated revision frameworks beyond the basic DP-framework. In particular, we will investigate implications of the axioms (Ind) (Jin & Thielscher, 2004, 2007) and (P) (Delgrande & Jin, 2012). Additionally, we would like to explore connections between the postulates and approaches from this article and those axioms that have recently been used for characterizing the elementary revision operators in Chandler and Booth (2023). Moreover, the problem of ensuring (PoE) is also relevant for improvement operators (Konieczny & Pino Perez, 2008), whose behavior may heavily depend on empty layers as well.
Footnotes
Acknowledgments
We would like to thank the anonymous reviewers for their extensive and valuable feedback.
ORCID iDs
Alexander Hahn
Lars-Phillip Spiegel
Gabriele Kern-Isberner
Jonas Haldimann
Christoph Beierle
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation), project number 512363537, grant KE 1413/15-1 awarded to Gabriele Kern-Isberner and grant BE 1700/12-1 awarded to Christoph Beierle. Alexander Hahn was supported by grant KE 1413/15-1, and Lars-Phillip Spiegel was supported by grant BE 1700/12-1.
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
References
Alchourrón, C. E., Gärdenfors, P., & Makinson, D. (1985). On the logic of theory change: Partial meet contraction and revision functions. Journal of Symbolic Logic, 50(2), 510–530. https://doi.org/10.2307/2274239
Aravanis, T., Peppas, P., & Williams, M. (2019). Observations on Darwiche and Pearl’s approach for iterated belief revision. In Proceedings of the Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI’19) (pp. 1509–1515). International Joint Conferences on Artificial Intelligence Organization. https://doi.org/10.24963/ijcai.2019/209
Beierle, C., & Kern-Isberner, G. (2009). Formal similarities and differences among qualitative conditional semantics. International Journal of Approximate Reasoning, 50(9), 1333–1346. https://doi.org/10.1016/j.ijar.2009.04.006
Beierle, C., & Kern-Isberner, G. (2012). Semantical investigations into nonmonotonic and probabilistic logics. Annals of Mathematics and Artificial Intelligence, 65(2–3), 123–158. https://doi.org/10.1007/S10472-012-9310-1
Beierle, C., & Kern-Isberner, G. (2021). Selection strategies for inductive reasoning from conditional belief bases and for belief change respecting the principle of conditional preservation. In The International FLAIRS Conference Proceedings (Vol. 34). https://doi.org/10.32473/flairs.v34i1.128459
Beierle, C., & Kutsch, S. (2019). Computation and comparison of nonmonotonic skeptical inference relations induced by sets of ranking models for the realization of intelligent agents. Applied Intelligence, 49(1), 28–43. https://doi.org/10.1007/s10489-018-1203-5
Booth, R., & Meyer, T. (2006). Admissible and restrained revision. Journal of Artificial Intelligence Research, 26, 127–151. https://doi.org/10.1613/jair.1874
Boutilier, C. (1993). Revision sequences and nested conditionals. In Proceedings of the Thirteenth International Joint Conference on Artificial Intelligence (IJCAI’93) (pp. 519–525).
Chandler, J., & Booth, R. (2020). Revision by conditionals: From hook to arrow. In D. Calvanese, E. Erdem & M. Thielscher (Eds.), Proceedings of the Seventeenth International Conference on Principles of Knowledge Representation and Reasoning (KR’20) (pp. 233–242). https://doi.org/10.24963/kr.2020/24
Eiter, T., Fink, M., Tompits, H., & Woltran, S. (2005). Strong and uniform equivalence in answer-set programming: Characterizations and complexity results for the non-ground case. In M. M. Veloso & S. Kambhampati (Eds.), Proceedings of the Twentieth National Conference on Artificial Intelligence and the Seventeenth Innovative Applications of Artificial Intelligence Conference, July 9–13, 2005, Pittsburgh, Pennsylvania, USA (pp. 695–700). AAAI Press / The MIT Press. http://www.aaai.org/Library/AAAI/2005/aaai05-109.php
Goguen, J., & Burstall, R. (1992). Institutions: Abstract model theory for specification and programming. Journal of the ACM, 39(1), 95–146.
Goldszmidt, M., & Pearl, J. (1996). Qualitative probabilities for default reasoning, belief revision, and causal modeling. Artificial Intelligence, 84, 57–112.
Gonçalves, R., Knorr, M., & Leite, J. (2016). The ultimate guide to forgetting in answer set programming. In Principles of Knowledge Representation and Reasoning: Proceedings of the Fifteenth International Conference, KR 2016 (pp. 135–144). AAAI Press. http://www.aaai.org/ocs/index.php/KR/KR16/paper/view/12849
Haldimann, J., Beierle, C., Kern-Isberner, G., & Meyer, T. (2023a). Conditionals, infeasible worlds, and reasoning with system W. In S. A. Chun & M. Franklin (Eds.), Proceedings of the Thirty-Sixth International Florida Artificial Intelligence Research Society Conference (Vol. 36). https://doi.org/10.32473/flairs.36.133268
Haldimann, J. P., Beierle, C., & Kern-Isberner, G. (2023b). Epistemic state mappings among ranking functions and total preorders. Journal of Applied Logics – IfCoLog Journal of Logics and their Applications, 10(2), 155–192.
Jin, Y., & Thielscher, M. (2004). Representing beliefs in the fluent calculus. In Proceedings of the 16th European Conference on Artificial Intelligence (ECAI’04) (pp. 823–827).
Kern-Isberner, G. (2001). Conditionals in nonmonotonic reasoning and belief revision. Lecture Notes in Artificial Intelligence (LNAI). Springer.
Kern-Isberner, G. (2004). A thorough axiomatization of a principle of conditional preservation in belief revision. Annals of Mathematics and Artificial Intelligence, 40(1–2), 127–164. https://doi.org/10.1023/a:1026110129951
Kern-Isberner, G. (2018). Axiomatizing a qualitative principle of conditional preservation for iterated belief change. In M. Thielscher, F. Toni & F. Wolter (Eds.), Principles of Knowledge Representation and Reasoning: Proceedings of the Sixteenth International Conference, KR 2018 (pp. 248–256). AAAI Press.
Kern-Isberner, G., Bock, T., Beierle, C., & Sauerwald, K. (2019). Axiomatic evaluation of epistemic forgetting operators. In R. Bartak & K. Brawner (Eds.), Proceedings of the 32nd International FLAIRS Conference, FLAIRS-32 (pp. 470–475). Palo Alto, CA: AAAI Press.
Kern-Isberner, G., Hahn, A., Haldimann, J., & Beierle, C. (2024). Total preorders vs ranking functions under belief revision – the dynamics of empty layers. In Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning (pp. 498–508). https://doi.org/10.24963/kr.2024/47
Kern-Isberner, G., & Huvermann, D. (2017). What kind of independence do we need for multiple and iterated revision? Journal of Applied Logic, Special Issue on Uncertain Reasoning, 22, 91–119.
Kern-Isberner, G., & Krümpelmann, P. (2011). A constructive approach to independent and evidence retaining belief revision by general information sets. In T. Walsh (Ed.), Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI’11) (pp. 937–942). Menlo Park, CA: AAAI Press.
Konieczny, S., & Pino Perez, R. (2008). Improvement operators. In G. Brewka & J. Lang (Eds.), Principles of Knowledge Representation and Reasoning: Proceedings of the Eleventh International Conference, KR 2008, Sydney, Australia (pp. 177–187). AAAI Press.
Makinson, D. (1989). General theory of cumulative inference. In M. Reinfrank et al. (Eds.), Non-monotonic Reasoning (pp. 1–18). Berlin: Springer, Lecture Notes in Artificial Intelligence 346.
Sauerwald, K., & Thimm, M. (2024). The realizability of revision and contraction operators in epistemic spaces. In Proceedings of the 21st International Conference on Principles of Knowledge Representation and Reasoning (pp. 665–670). https://doi.org/10.24963/kr.2024/62
Schwind, N., Konieczny, S., & Pino Pérez, R. (2022). On the representation of Darwiche and Pearl’s epistemic states for iterated belief revision. In Proceedings of the 19th International Conference on Principles of Knowledge Representation and Reasoning (pp. 320–330). https://doi.org/10.24963/kr.2022/32
Sezgin, M. (2023). A conditional perspective of belief revision. PhD thesis, TU Dortmund University, Dortmund, Germany. https://doi.org/10.17877/DE290R-24062
Sezgin, M., & Kern-Isberner, G. (2023). Implementing bounded revision via lexicographic revision and c-revision. In B. Williams, Y. Chen & J. Neville (Eds.), Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 37, pp. 6525–6532). AAAI Press. https://doi.org/10.1609/aaai.v37i5.25802
Spohn, W. (1988). Ordinal conditional functions: A dynamic theory of epistemic states. In W. Harper & B. Skyrms (Eds.), Causation in Decision, Belief Change, and Statistics, II (pp. 105–134). Kluwer Academic Publishers.