Abstract
Trustworthy consumer evaluations are an important prerequisite for sound decision-making. A trust evaluator must therefore recognize biased information (referred to as false recommendation) and do so dynamically. Drawing on the sociological concept of trust fusion, a new trust evaluation model is proposed, built upon (i) Bayesian updating of the trust value with each transaction, and (ii) the identification and correction of deliberately misleading evaluations using improved evidence theory. Simulations show that the algorithm's trust value increases slowly with successful transactions but drops rapidly after a failed transaction, capturing the notion that trust is hard to establish yet easy to destroy. Further simulations demonstrate that the model is robust and error-tolerant against false recommendations at varying levels of deception, effectively compensating for deception.
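The asymmetric dynamic described above (slow gain, rapid loss) can be sketched with a Beta-distribution Bayesian update in the spirit of the abstract. This is an illustrative sketch only, not the authors' exact model; the failure weight `w_fail` is an assumed parameter chosen to make a single failure count as several successes' worth of negative evidence.

```python
def update_trust(alpha, beta, success, w_fail=5.0):
    """Return updated Beta(alpha, beta) parameters after one transaction.

    A success adds one unit of positive evidence; a failure adds
    w_fail units of negative evidence (assumed asymmetry factor).
    """
    if success:
        alpha += 1.0          # slow gain: trust is hard to establish
    else:
        beta += w_fail        # rapid loss: trust is easy to destroy
    return alpha, beta

def trust_value(alpha, beta):
    """Posterior mean of the Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Ten successful transactions raise trust gradually ...
a, b = 1.0, 1.0               # uniform Beta(1, 1) prior
for _ in range(10):
    a, b = update_trust(a, b, success=True)
print(trust_value(a, b))      # 11/12 ≈ 0.917

# ... but a single failure drops it sharply.
a, b = update_trust(a, b, success=False)
print(trust_value(a, b))      # 11/17 ≈ 0.647
```

The posterior mean makes the asymmetry concrete: ten successes lift trust to about 0.92, while one weighted failure pulls it back down to about 0.65 in a single step.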
