Abstract
Trust can be used to improve automated online recommendation within a given domain, and trust transitivity is what makes this possible. But trust transitivity has different interpretations. Trust, and trust transitivity, are human mental phenomena, and for this reason there is no such thing as objective transitivity. Trust transitivity and trust fusion are both important elements in computational trust. This paper analyses the parameter dependence problem in trust transitivity and proposes definitions that take the effects of the base rate into account. It also proposes belief functions based on subjective logic to analyse trust transitivity in three specific cases, with base rate sensitive and insensitive operators. Finally, it presents a quantitative analysis of the effects of the unknown dependence problem in an interconnected network environment such as the Internet.
1. Introduction
In online environments, people may communicate with others they have never previously known or interacted with in any way. There is always a possibility that a contract made over the network will be broken. Computational models of trust therefore play an important role in determining with whom it is safe to interact. Trust transitivity is the most explicit form of computational trust, meaning for example that if Alice trusts Bob, and Bob trusts Claire, then by transitivity it can be computed that Alice will also trust Claire. This assumes that Bob recommends Claire to Alice. This simple principle, which is essential for human interaction in business and everyday life, manifests itself in many different forms (Jøsang and Bhuiyan, 2008). In uncertain probability theory (Shafer, 1976, Jøsang, 2001), the metric which expresses belief is called an opinion. Under a defined scope, trust relationships can support transitivity (Xiao-Yong et al. 2007, Sundaresan, 2007, Syavash et al. 2008). The use of trust in transitive chains requires the existence of a common purpose, which needs somehow to be derived from or given by a specific transitive chain (Lifen, 2008, Wang and Vassileva, 2007, Wang et al. 2008). Subjective logic takes both the uncertainty and the individuality of beliefs into account while still being compatible with standard logic and probability calculus. The migration from the assumed towards the perceived world is achieved by adding an uncertainty dimension to the single-valued probability measure, and by taking the individuality of beliefs into account. Trust can be interpreted as a belief about the reliability of an object, and as a decision to depend on an object (Jøsang, 2002b). In this paper, trust is interpreted as a belief about reliability. As a calculus of beliefs, subjective logic can therefore be used for trust reasoning.
Although such a model can never be perfect, nor able to reflect all the nuances of trust, it can be shown to respect the main intuitive properties of trust and trust propagation. As soon as one attempts to perform computations with input parameters in the form of subjective trust measures, parameter dependence becomes a major issue (Christianson and Harbison, 2003). If Alice, for example, wants to know whether tomorrow will be sunny, she can ask her friends, and if they all say it will be sunny she will start believing the same. However, her friends might all have based their opinions on the same weather forecast, so their opinions are dependent, and in that case asking only one of them would be sufficient. It would in fact be wrong of Alice to take all her friends' opinions into account as if they were independent, because that would strengthen her opinion without any good reason. Being able to identify cases of dependent opinions is therefore important, though difficult.
This paper investigates the parameter dependence problem in trust transitivity and proposes possible formal computational models that can be implemented using belief reasoning based on subjective logic. With adequate computational trust models, the principles of trust propagation can be ported to online communities of people, organizations and software agents, with the purpose of enhancing the quality of those communities. Section 2 introduces the trust computational model based on subjective logic. Section 3 identifies two problems of trust transitivity. Section 4 shows the effect of not being aware of dependence between opinions: with an appropriate example, we show that it is possible for recommended opinions to return to their originator through feedback loops, resulting in ever more exaggerated beliefs, and that with repeated loops this may create mass hysteria. Section 5 concludes the paper.
2. Computing Trust
Trust has become an important topic of research in many fields, including sociology, psychology, philosophy, economics, business, law and IT. It is not a new topic (Pujol, 2002, Mui, 2002); in fact, it has been the subject of hundreds of books and scholarly articles over a long period of time. Trust is a complex concept with multiple dimensions. A vast literature on trust has grown in several areas of research, but it is relatively confusing and sometimes contradictory, because the term is used with a variety of meanings (McKnight and Chervany, 2002, Sabater and Sierra, 2005). There is also a lack of coherence among researchers in the definition of trust. Though dozens of proposed definitions are available in the literature, a complete, formal, unambiguous definition of trust is rare; on many occasions, trust is used as a word or concept with no real definition. The most cited definition of trust is given by Dasgupta, who defines trust as "the expectation of one person about the actions of others that affects the first person's choice, when an action must be taken before the actions of others are known" (Dasgupta, 1990). This definition captures both the purpose of trust and its nature in a form that can be reasoned about.
Deutsch (2004) states that "trusting behaviour occurs when a person encounters a situation where she perceives an ambiguous path. The result of following the path can be good or bad and the occurrence of the good or bad result is contingent on the action of another person" (Hussain and Chang, 2007). Another definition of trust, by Gambetta, is also often quoted in the literature: "trust (or, symmetrically, distrust) is a particular level of the subjective probability with which an agent assesses that another agent or group of agents will perform a particular action, both before he can monitor such action (or independently of his capacity ever to be able to monitor it) and in a context in which it affects his own action" (Gambetta, 2002). But trust can be more complex than these definitions suggest. Trust is at the root of almost any personal or economic interaction. Keser describes trust as "the expectation of other persons' goodwill and benign intent, implying that in certain situations those persons will place the interests of others before their own" (Keser, 2003). Golbeck and Hendler (2006) have proposed a definition of trust suitable for use in web-based social networks, with a discussion of the properties that influence its use in computation, and have also presented two algorithms for inferring trust relationships between individuals that are not directly connected in the network (Ziegler and Golbeck, 2007). Trust is a concept that crosses disciplines as well as domains, and the focus of a definition differs depending on the goal and scope of each project. Jøsang (Jøsang et al. 2007) gives two generalized definitions of trust, called reliability trust and decision trust respectively; since the term "evaluation trust" is more widely used by other researchers than "reliability trust", we adopt the terms evaluation trust and decision trust in this work. Evaluation trust can be interpreted as the reliability of something or somebody.
It can be defined as the subjective probability by which an individual, A, expects that another individual, B, performs a given action on which A's welfare depends. Decision trust, on the other hand, captures a broader concept: it can be defined as the extent to which one party is willing to depend on something or somebody in a given situation with a feeling of relative security, even though negative consequences are possible.
Subjective logic is a belief calculus specifically developed for modeling trust relationships (Jøsang, 2002a). In subjective logic, beliefs are represented on binary state spaces, where each of the two possible states can consist of sub-states. Belief functions on binary state spaces are called opinions. An opinion is denoted ωAx = (b, d, u, a), where b, d and u represent A's belief, disbelief and uncertainty about the truth of proposition x, with b + d + u = 1, and where the base rate a represents the a priori probability of x in the absence of evidence. The probability expectation value of an opinion is E(ωAx) = b + au.
The fact that subjective logic is compatible with binary logic and probability calculus means that whenever corresponding operators exist in probability calculus, the probability expectation value E(ω) of an opinion ω that has been derived with subjective logic, is always equal to the probability value that would have been derived had simple probability calculus been applied. Similarly, whenever corresponding binary logic operators exist, an absolute opinion (i.e. equivalent to binary logic TRUE or FALSE) derived with subjective logic, is always equal to the truth value that can be derived with binary logic.
Subjective logic has a sound mathematical basis and is compatible with binary logic and traditional Bayesian analysis. Subjective logic defines a rich set of operators for combining subjective opinions in various ways (Jøsang and Knapskog, 1998, Jøsang, 2001, Jøsang and Presti, 2004, Jøsang et al., 2003–2008). Some operators represent generalizations of binary logic and probability calculus, whereas others are unique to belief calculus because they depend on belief ownership. With belief ownership it is possible to explicitly express that different agents have different opinions about the same issue. The advantage of subjective logic over probability calculus and binary logic is its ability to explicitly express and take advantage of ignorance and belief ownership (Jøsang and Bhuiyan, 2008). Subjective logic can be applied to all situations where probability calculus can be applied, and to many situations where probability calculus fails precisely because it cannot capture degrees of ignorance. Subjective opinions can be interpreted as probability density functions, making subjective logic a simple and efficient calculus for probability density functions.
Here we describe only the transitivity and fusion operators. The transitivity operator can be used to derive trust from a trust path consisting of a chain of trust edges, and the fusion operator can be used to combine trust from parallel trust paths. These operators are described below.
Transitivity is used to compute trust along a chain of trust edges. Assume two agents A and B, where A has referral trust in B, denoted by ωAB, for the purpose of judging the truth of proposition x, and where B has functional trust in x, denoted by ωBx. Agent A can then derive her trust in x by discounting B's trust in x with A's trust in B, denoted by ωA:Bx.
The effect of discounting in a transitive chain is that uncertainty increases, not disbelief.
Cumulative fusion is used to combine trust from parallel paths. Assume two agents A and B who hold the opinions ωAx = (bAx, dAx, uAx, aAx) and ωBx = (bBx, dBx, uBx, aBx) about the same proposition x, based on independent evidence. Their cumulatively fused opinion ωA⋄Bx is defined by:

bA⋄Bx = (bAx uBx + bBx uAx) / κ
dA⋄Bx = (dAx uBx + dBx uAx) / κ
uA⋄Bx = (uAx uBx) / κ

where κ = uAx + uBx − uAx uBx, and where it is assumed that κ ≠ 0 and that both opinions have the same base rate, so that aA⋄Bx = aAx = aBx.
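For illustration, cumulative fusion can be sketched in a few lines of Python. Opinions are represented here as plain (belief, disbelief, uncertainty, base rate) tuples; the representation and the numerical values are illustrative assumptions, not taken from this paper.

```python
def cumulative_fuse(wA, wB):
    """Cumulative fusion of two independent opinions about the same proposition.

    Each opinion is a (belief, disbelief, uncertainty, base_rate) tuple.
    Assumes uA + uB - uA*uB != 0 and that both opinions share the same base rate.
    """
    bA, dA, uA, aA = wA
    bB, dB, uB, aB = wB
    k = uA + uB - uA * uB
    return ((bA * uB + bB * uA) / k,
            (dA * uB + dB * uA) / k,
            (uA * uB) / k,
            aA)

# Two independent agents with identical, moderately confident opinions:
w = cumulative_fuse((0.8, 0.1, 0.1, 0.5), (0.8, 0.1, 0.1, 0.5))
print(w)  # fused belief is stronger and uncertainty smaller than either input
```

Note how fusing two independent bodies of evidence reduces uncertainty; as Section 4 shows, this is precisely the effect that becomes harmful when the inputs are in fact dependent.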
3. Analysing Trust Transitivity
Assume two agents A and B, where A trusts B, and where B believes that proposition x is true. Then by transitivity, agent A will also believe that x is true. This assumes that B recommends his belief in x to A. Fig. 1 illustrates this principle.

Fig. 1. Principle of trust transitivity
Trust transitivity, like trust itself, is a human mental phenomenon, so there is no such thing as objective transitivity, and trust transitivity therefore lends itself to different interpretations. We have identified two main difficulties: the first is related to the effect of the base rate on the discounting of recommendations, and the second to dependence between the opinions being combined.
3.1. Uncertainty Favoring Trust Transitivity
Definition 2 (Uncertainty favoring discounting). Let A and B be two agents where A's referral trust in B is ωAB = (bAB, dAB, uAB, aAB), and let x be a proposition where B's functional trust in x is ωBx = (bBx, dBx, uBx, aBx). Let ωA:Bx = (bA:Bx, dA:Bx, uA:Bx, aA:Bx) be the opinion defined by:

bA:Bx = bAB bBx
dA:Bx = bAB dBx
uA:Bx = dAB + uAB + bAB uBx
aA:Bx = aBx

then ωA:Bx is called the uncertainty favoring discounted opinion of A. By using the symbol ⊗ to designate this operation, we get ωA:Bx = ωAB ⊗ ωBx.
It is easy to prove that this operator is associative but not commutative. This means that the combination of opinions can start at either end of the path, and that the order in which opinions are combined is significant. In a path with more than one recommending entity, opinion independence must be assumed, which for example translates into not allowing the same entity to appear more than once in a transitive path. Fig. 2 illustrates an example of applying the discounting operator for independent opinions.
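The uncertainty favoring discounting operator can be sketched in Python as follows (a minimal sketch; the Opinion class and the example values are illustrative assumptions):

```python
from typing import NamedTuple

class Opinion(NamedTuple):
    b: float  # belief
    d: float  # disbelief
    u: float  # uncertainty
    a: float  # base rate

def discount(w_AB: Opinion, w_Bx: Opinion) -> Opinion:
    """A's derived opinion about x via B's recommendation (uncertainty favoring).

    Only A's belief mass in B is taken at face value; A's disbelief and
    uncertainty in B are both converted into uncertainty about x.
    """
    return Opinion(
        b=w_AB.b * w_Bx.b,
        d=w_AB.b * w_Bx.d,
        u=w_AB.d + w_AB.u + w_AB.b * w_Bx.u,
        a=w_Bx.a,
    )

# Example: A trusts B strongly, and B believes x strongly.
w_AB = Opinion(0.9, 0.0, 0.1, 0.5)
w_Bx = Opinion(0.8, 0.1, 0.1, 0.5)
w_ABx = discount(w_AB, w_Bx)
print(w_ABx)  # uncertainty grows along the chain, disbelief stays low
```

Running the example confirms the property noted above: discounting increases uncertainty rather than disbelief.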

Fig. 2. Example of applying the discounting operator for independent opinions
3.2. Opposite Belief Favoring
This operator models the principle that A's disbelief in recommender B means that A thinks B will recommend the opposite of his real opinion about x, so that A's disbelief in B contributes to belief in the opposite of what B recommends. The opposite belief favoring discounted opinion ωA:Bx is defined by:

bA:Bx = bAB bBx + dAB dBx
dA:Bx = bAB dBx + dAB bBx
uA:Bx = uAB + (bAB + dAB) uBx
aA:Bx = aBx
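A sketch of this operator in Python, with opinions again as (b, d, u, a) tuples and assumed example values:

```python
def discount_opposite(w_AB, w_Bx):
    """Opposite belief favoring discounting: A's disbelief in recommender B
    turns B's recommendation into evidence for its opposite."""
    bAB, dAB, uAB, _ = w_AB
    bBx, dBx, uBx, aBx = w_Bx
    return (bAB * bBx + dAB * dBx,
            bAB * dBx + dAB * bBx,
            uAB + (bAB + dAB) * uBx,
            aBx)

# A strongly distrusts B, while B claims that x is true:
w = discount_opposite((0.0, 0.9, 0.1, 0.5), (0.8, 0.1, 0.1, 0.5))
print(w)  # A ends up mostly disbelieving x
```

Note the contrast with the uncertainty favoring operator, which would have mapped A's distrust in B into uncertainty about x instead of disbelief in x.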
3.3. Base Rate Sensitive Transitivity
In the transitivity operators defined in Sec.3.1 and Sec.3.2 above, only the belief and disbelief masses of A's trust in B determine the discounting; the base rate has no influence on the result. The following example shows that ignoring the base rate can be counter-intuitive.
Imagine a stranger coming to a town which is known for its citizens being honest. The stranger is looking for a car mechanic, and asks the first person he meets to direct him to a good one. The stranger receives the reply that there are two car mechanics in town, David and Eric, where David is cheap but does not always do quality work, and Eric might be a bit more expensive, but always does a perfect job. Translated into the formalism of subjective logic, the stranger has no other information about the person he asks than the base rate that the citizens of the town are honest. The stranger is thus ignorant, but the expectation value of receiving good advice is still very high. Without taking the base rate into account, the discounting operators of the previous sections would produce a completely uncertain derived opinion about the two mechanics, which seems counter-intuitive.
An intuitive approach would then be to let the expectation value of the stranger's trust in the recommender be the discounting factor for the recommended trust.

Definition 3 (Base rate sensitive discounting). The base rate sensitive discounted opinion ωA:Bx is defined by:

bA:Bx = E(ωAB) bBx
dA:Bx = E(ωAB) dBx
uA:Bx = 1 − E(ωAB)(bBx + dBx)
aA:Bx = aBx

where E(ωAB) = bAB + aAB uAB is the probability expectation value of A's trust in B.
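With a base rate sensitive operator of this kind, the stranger example can be reproduced numerically. The following is a sketch; the base rate 0.9 for honesty and the recommender's opinion are assumed values:

```python
def expectation(w):
    b, d, u, a = w
    return b + a * u  # probability expectation value E(w)

def discount_base_rate(w_AB, w_Bx):
    """Base rate sensitive discounting: the recommendation is weighted by
    the expectation value of the trust in the recommender."""
    e = expectation(w_AB)
    bBx, dBx, uBx, aBx = w_Bx
    return (e * bBx, e * dBx, 1 - e * (bBx + dBx), aBx)

# The stranger is totally ignorant about the recommender (b=d=0, u=1),
# but the base rate for honesty in the town is high (a=0.9):
w_stranger = (0.0, 0.0, 1.0, 0.9)
w_advice = (0.8, 0.1, 0.1, 0.5)  # the recommender's opinion about mechanic Eric
print(discount_base_rate(w_stranger, w_advice))
# the advice carries significant weight despite the stranger's total ignorance
```

With the uncertainty favoring operator the same inputs would yield a vacuous opinion (b = 0), since the stranger's belief mass in the recommender is zero.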
However, this operator must be applied with care. Assume again the town of honest citizens, and let the stranger receive recommendations through a long chain of recommenders about whom he knows nothing except the high base rate. With base rate sensitive discounting, the derived trust would remain strong no matter how long the chain becomes, even though no actual evidence about any of the recommenders has been collected.
There might be other principles that better reflect human intuition for trust transitivity, but we will leave this question to future research. It would be fair to say that the base rate insensitive discounting operator of Def.2 is safe and conservative, and that the base rate sensitive discounting operator of Def.3 can be more intuitive in some situations, but must be applied with care.
4. The Effects of Unknown Dependence
One of the strengths of this work lies in its analytical capabilities. As an example, consider how mass hysteria can be caused by a group of people not being aware of dependence between opinions. Take for example a person A who holds an opinion about a statement x and passes it on to B, C, D, E and F, who in turn each recommend it to G. Fig. 3 illustrates the situation.

Fig. 3. The effects of unknown dependence
The arrows represent trust between nodes, so that for example an arrow from G to B means that G trusts B's recommendations about x. Because B, C, D, E and F have all based their opinions on A's single opinion, the recommendations that reach G are dependent.
As an example, let A hold a moderately confident opinion about x, and let each of B, C, D, E and F pass A's opinion on to G, discounted by G's trust in them. If G applies the consensus operator to the five recommendations as if they were independent, his derived opinion about x becomes far more confident than any single recommendation, even though all of them carry the same underlying evidence. For comparison, if G only took the recommendation from A into account (as he should), his derived opinion would simply be A's opinion discounted once, and thus considerably more uncertain.
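The effect can be simulated end to end with the two operators. This is a self-contained sketch; all trust and opinion values are assumed for illustration:

```python
# Opinions as (belief, disbelief, uncertainty, base_rate) tuples.

def discount(w_AB, w_Bx):
    """Uncertainty favoring discounting of a recommendation."""
    bAB, dAB, uAB, _ = w_AB
    bBx, dBx, uBx, aBx = w_Bx
    return (bAB * bBx, bAB * dBx, dAB + uAB + bAB * uBx, aBx)

def fuse(wA, wB):
    """Cumulative fusion, valid only for INDEPENDENT opinions."""
    bA, dA, uA, aA = wA
    bB, dB, uB, _ = wB
    k = uA + uB - uA * uB
    return ((bA * uB + bB * uA) / k, (dA * uB + dB * uA) / k, uA * uB / k, aA)

w_Ax = (0.6, 0.1, 0.3, 0.5)     # A's actual opinion about x
w_trust = (0.9, 0.0, 0.1, 0.5)  # G's trust in each intermediary

# A's opinion reaches G via five intermediaries (B..F); every path
# carries the same underlying evidence, so the five copies are dependent.
paths = [discount(w_trust, w_Ax) for _ in range(5)]

fused = paths[0]
for w in paths[1:]:
    fused = fuse(fused, w)  # G wrongly treats the paths as independent

print("single path:      ", paths[0])
print("fused (dependent):", fused)  # belief inflated, uncertainty shrunk
```

Each additional "independent" fusion of the same evidence drives G's belief up and his uncertainty down, which is exactly the self-amplification discussed below.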
In real situations it is possible for recommended opinions to return to their originator through feedback loops, resulting in even more exaggerated beliefs. When this process continues, an environment of self-amplifying opinions, and thereby hysteria, is created.
5. Conclusion
Trust propagation does not manifest itself as a physical phenomenon in nature; it exists only on the mental and cognitive level. It is therefore difficult to assess whether computational models for trust propagation are adequate and reflect the way people reason about trust. A number of principles have been described to model the propagation of trust. We have discussed a shortcoming of computational trust based on subjective logic, namely the risk of over-counting trust evidence when opinions are not independent, and we have described a set of computational trust principles that reflect intuitive trust propagation constructs. Different situations require different trust models. With appropriate computational trust models, the principles of trust propagation can be ported to online communities of people, organizations and software agents, with the purpose of enhancing the quality of those communities. The specific computational trust operators must therefore be selected as a function of the situation to be modelled.
