Performance Evaluation of a Large Diagnostic Expert System Using a Heuristic Test Case Generator
Abbreviated Journal Title: Eng. Appl. Artif. Intell.
knowledge-based systems; validation and verification (V&V); heuristics; software testing; Automation & Control Systems; Computer Science, Artificial Intelligence; Engineering, Multidisciplinary; Engineering, Electrical & Electronic
Validating the performance of a knowledge-based system is a critical step in its commercialization. Without exception, buyers of systems intended for serious purposes require a certain level of guarantees about system performance. This is particularly true for diagnostic systems. Yet many problems exist in the validation process, especially as it applies to large knowledge-based systems. One of the biggest challenges facing the developer when validating a system's performance is knowing how much testing is sufficient to show that the system is valid. Exhaustive testing is almost always impractical because of the enormous number of possible test cases, many of which are not useful. It would thus be highly desirable to have a means of defining a representative set of test cases that, if executed correctly by the system, would provide high confidence in the system's validity. This paper describes the experiences of the development team in validating the performance of a large commercial diagnostic knowledge-based system. The description covers the procedure employed to carry out this task, as well as the heuristic technique used for generating the representative set of test cases. Copyright (C) 1996 Elsevier Science Ltd
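The abstract's idea of a representative test set can be illustrated with a common heuristic (this is a hypothetical sketch, not the paper's actual generator): greedy coverage-based selection, where each candidate test case is labelled with the diagnostic rules it exercises and cases are picked until every rule is covered at least once. All case and rule names below are invented for illustration.

```python
# Hypothetical sketch of coverage-based test case selection for a
# diagnostic knowledge base. Each candidate case maps to the set of
# diagnostic rules it exercises; a greedy loop picks the case that
# covers the most still-uncovered rules until all rules are covered.

def select_representative_cases(cases):
    """cases: dict mapping case name -> set of rule ids it exercises.

    Returns a list of case names that together cover every rule
    reachable by at least one case.
    """
    uncovered = set().union(*cases.values())
    selected = []
    while uncovered:
        # Greedy choice: the case covering the most uncovered rules.
        best = max(cases, key=lambda c: len(cases[c] & uncovered))
        if not cases[best] & uncovered:
            break  # no remaining case covers any uncovered rule
        selected.append(best)
        uncovered -= cases[best]
    return selected

if __name__ == "__main__":
    # Invented example: four candidate cases over five diagnostic rules.
    cases = {
        "case_a": {"r1", "r2", "r3"},
        "case_b": {"r3", "r4"},
        "case_c": {"r4", "r5"},
        "case_d": {"r1"},
    }
    print(select_representative_cases(cases))  # e.g. ['case_a', 'case_c']
```

Under this heuristic the selected subset is typically far smaller than the exhaustive case space while still exercising every rule, which is the kind of trade-off the paper's validation procedure targets.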
Engineering Applications of Artificial Intelligence
"Performance Evaluation Of A Large Diagnostic Expert System Using A Heuristic Test Case Generator" (1996). Faculty Bibliography 1990s. 1626.