Abstract
In this article we consider a situation in which an item (e.g., some type of equipment) can be in one of several states defined in terms of the attributes of the item. Some states are desirable and some are not. The goal is to take some action designed to improve the state of the item. A policy is a rule defining what action to take depending upon the state that the item currently occupies. Thus, we encounter a situation that can be modeled by a finite-state Markov chain. However, due to uncertainty concerning the effectiveness of any course of action, precise transition probabilities are replaced with linguistic terms, so the matrix of transition probabilities is fuzzy. For the same reason, the initial state probability vector can also be fuzzy. Furthermore, associated with each course of action is a cost that may in part depend on the ultimate outcome of the action taken. Again, due to uncertainty, costs may also be expressed linguistically, making the cost matrix fuzzy. The methodology described here establishes a procedure for associating a fuzzy expected cost with each policy. Then, after defuzzification, the policy with the smallest expected cost is selected.
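The policy-selection pipeline described above can be sketched in code. The sketch below is an illustrative assumption, not the article's actual procedure: it encodes linguistic terms as triangular fuzzy numbers `(l, m, u)`, approximates fuzzy arithmetic componentwise (valid for nonnegative values), computes a fuzzy expected cost for each policy from fuzzy state probabilities and fuzzy per-state costs, defuzzifies by the centroid, and picks the policy with the smallest defuzzified cost. All numeric values and policy names are hypothetical.

```python
# Triangular fuzzy number (TFN) represented as a tuple (l, m, u).

def tfn_add(a, b):
    # Fuzzy addition of two TFNs is exact componentwise addition.
    return tuple(x + y for x, y in zip(a, b))

def tfn_mul(a, b):
    # For nonnegative TFNs, componentwise multiplication is the
    # standard triangular approximation of the fuzzy product.
    return tuple(x * y for x, y in zip(a, b))

def defuzzify(a):
    # Centroid of a triangular fuzzy number: (l + m + u) / 3.
    return sum(a) / 3.0

def fuzzy_expected_cost(probs, costs):
    # Fuzzy expected cost = sum over states of (fuzzy prob) * (fuzzy cost).
    total = (0.0, 0.0, 0.0)
    for p, c in zip(probs, costs):
        total = tfn_add(total, tfn_mul(p, c))
    return total

# Two states; linguistic probabilities and costs encoded as assumed TFNs.
probs = [(0.2, 0.3, 0.4), (0.6, 0.7, 0.8)]
policies = {
    "A": [(4, 5, 6), (8, 10, 12)],  # fuzzy cost per state under policy A
    "B": [(6, 7, 8), (5, 6, 7)],    # fuzzy cost per state under policy B
}

scores = {name: defuzzify(fuzzy_expected_cost(probs, costs))
          for name, costs in policies.items()}
best = min(scores, key=scores.get)  # policy with smallest defuzzified cost
```

Replacing the centroid with another defuzzification method (e.g., mean of maxima) only changes the final ranking step; the fuzzy-arithmetic core stays the same.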
