A framework for validation of rule-based systems

Authors

    R. Knauf; A. J. Gonzalez; T. Abel

    Abbreviated Journal Title

    IEEE Trans. Syst. Man Cybern. Part B-Cybern.

    Keywords

    expert system validation; rule-based systems; test case validation; VERIFICATION; Automation & Control Systems; Computer Science, Artificial Intelligence; Computer Science, Cybernetics

    Abstract

    This paper describes a complete methodology for the validation of rule-based expert systems. This methodology is presented as a five-step process that has two central themes: 1) the creation of a minimal set of test inputs that adequately cover the domain represented in the knowledge base and 2) a Turing Test-like methodology that evaluates the system's responses to the test inputs and compares them to the responses of human experts. The development of the minimal set of test inputs takes into consideration various criteria, both user-defined and domain-specific. These criteria are used to reduce the potentially very large set of test inputs to one that is practical, keeping in mind the nature and purpose of the developed system. The Turing Test-like evaluation methodology makes use of only one panel of experts to evaluate each set of test cases and to compare the results with those of the expert system, as well as with those of the other experts. The hypothesis presented here is that much can be learned about the experts themselves by having them anonymously evaluate each other's responses to the same test inputs. Thus, we are better able to determine the validity of an expert system. Depending on its purpose, we introduce various ways to express validity as well as a technique to use the validity assessment for the refinement of the rule base. Lastly, the paper describes a partial implementation of the test input minimization process on a small but nontrivial expert system. The effectiveness of the technique was evaluated by seeding errors into the expert system, generating the appropriate set of test inputs, and determining whether the errors could be detected by the suggested methodology.
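    The test input reduction described in the abstract can be sketched informally as follows. This is not the authors' algorithm, only an illustrative Python sketch of the general idea: enumerate the combinatorial set of possible inputs, then prune it with user-defined or domain-specific criteria expressed as predicates. The variable names and the toy domain are hypothetical.

    ```python
    from itertools import product

    def generate_test_inputs(domains):
        """Enumerate the full combinatorial set of test inputs,
        one value per input variable."""
        names = list(domains)
        return [dict(zip(names, values)) for values in product(*domains.values())]

    def reduce_test_inputs(inputs, criteria):
        """Keep only inputs that satisfy every reduction criterion
        (each criterion is a predicate over one test input)."""
        return [t for t in inputs if all(c(t) for c in criteria)]

    # Hypothetical toy domain with two input variables.
    domains = {"temperature": ["low", "normal", "high"],
               "pressure": ["low", "high"]}

    full_set = generate_test_inputs(domains)  # 6 combinations
    # Example domain-specific criterion: exclude an implausible pairing.
    criteria = [lambda t: not (t["temperature"] == "low" and t["pressure"] == "high")]
    reduced = reduce_test_inputs(full_set, criteria)  # 5 combinations
    ```

    In a realistic knowledge base the full combinatorial set grows exponentially with the number of input variables, which is why criteria-driven pruning of this kind is needed before any expert panel can feasibly evaluate the test cases.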

    Journal Title

    IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)

    Volume

    32

    Issue/Number

    3

    Publication Date

    1-1-2002

    Document Type

    Article

    Language

    English

    First Page

    281

    Last Page

    295

    WOS Identifier

    WOS:000175449800004

    ISSN

    1083-4419
