Abstract
This paper proposes a method for the semantic evaluation of collections of decision rules, whether acquired automatically or constructed manually. The method is intended to complement existing evaluation methods, which rely on various empirical error rates. Such methods are often criticized for evaluating only the “predictive power” of decision rules, with no indication of how appropriate the rules are in the context of a given domain. The proposed method directly addresses the issue of the domain relevance of decision rules by measuring their “semantic power.”
In the proposed method, the evaluation is conducted using background knowledge in the form of a concept library containing both primary and complex concepts, which are acquired from or provided by domain experts. Primary concepts represent the basic notions of a given domain that are directly related to the problem described by the collection of decision rules under evaluation. Complex concepts are defined as combinations of at least two primary concepts; they are classified into several categories according to their complexity, measured by the number of primary concepts used in their definition.
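The distinction between primary and complex concepts can be sketched as a small data structure. This is an illustrative sketch only, not the authors' implementation; the class names, the example medical concepts, and the `complexity` property are hypothetical, reflecting the paper's stated rule that a complex concept combines at least two primary concepts and that its complexity is the number of primary concepts in its definition.

```python
class PrimaryConcept:
    """A basic domain notion directly related to the problem, e.g. 'fever'."""
    def __init__(self, name):
        self.name = name


class ComplexConcept:
    """A concept defined by combining two or more primary concepts."""
    def __init__(self, name, primaries):
        # Per the paper, a complex concept uses at least two primary concepts.
        if len(primaries) < 2:
            raise ValueError("a complex concept combines at least two primary concepts")
        self.name = name
        self.primaries = list(primaries)

    @property
    def complexity(self):
        # Complexity category = number of primary concepts in the definition.
        return len(self.primaries)


# Hypothetical example of a small concept library:
fever = PrimaryConcept("fever")
cough = PrimaryConcept("cough")
fatigue = PrimaryConcept("fatigue")

flu_like = ComplexConcept("flu-like syndrome", [fever, cough, fatigue])
print(flu_like.complexity)  # 3
```

Under this reading, a concept library would group complex concepts by their complexity value when scoring how well a rule's conditions align with the domain's concepts.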
The paper presents the method's basic assumptions and procedure, and describes the design of an evaluation system based on it. The method was applied to two collections of decision rules that are equivalent in terms of their “predictive power.” Surprisingly, the numerical results and the constructed semantic learning curves revealed a significant difference in their “semantic power.”
The experiments demonstrated the feasibility of the proposed method and showed that it can produce results offering additional, previously unavailable insight into decision rules and, ultimately, into the symbolic learning systems that produced them.
