Bruce G. Buchanan and Richard O. Duda, 1982

1. INTRODUCTION: WHAT IS AN EXPERT SYSTEM?

An expert system is a computer program that provides expert-level solutions to "important problems" and is:
The key ideas have been developed within Artificial Intelligence (AI) over the last fifteen years, but in the last few years more and more applications of these ideas have been made. The purpose of this article is to familiarize readers with the architecture and construction of one important class of expert systems, called rule-based systems.
In this overview, many programs and issues are necessarily omitted, but we attempt to provide a framework for understanding this advancing frontier of computer science.

  IF B IS TRUE             B
  AND B IMPLIES C,   or    B --> C
  THEN C IS TRUE.          --------
                           C

Conceptually, the basic framework of a rule-based system is simple; the variations needed to deal with the complexities of real-world problems make the framework interestingly more complex. For example, the rule B --> C is often interpreted to mean "B suggests C", and strict deductive reasoning with rules gives way to plausible reasoning. Other methodologies are also mentioned and briefly discussed.

1.1 Example: The MYCIN Program

MYCIN is a rule-based system developed in the mid-to-late 1970's at Stanford University. Its representation and architecture are described in detail in [Davis77b] and [Shortliffe76]. Although it is now several years old, it is
representative of the state of the art of expert systems in its external behavior, which is shown in the following excerpt from a dialogue between the MYCIN program and a physician. It illustrates the interactive nature of most rule-based systems and provides a single example for the rest of the discussion.

1.2 Key Components

The example showed some of the characteristic features of an expert system -- the heuristic nature of MYCIN's rules, an explanation of its line of reasoning, and the modular form of rules in its knowledge base. We postponed a discussion of the general structure of a system until after the example, and defer entirely specific questions about implementation. For
discussing the general structure, we describe a generalization of MYCIN, called EMYCIN [vanMelle80] for "essential MYCIN". It is a framework for constructing and running rule-based systems, like MYCIN.
We turn now to two basic issues that have become the central foci of work on expert systems: (a) how knowledge of a task area is represented in the computer program, and (b) how knowledge is used to provide expert-level solutions to problems.
2. REPRESENTATION OF KNOWLEDGE

A representation is a set of conventions for describing the world. In the parlance of AI, the representation of knowledge is the commitment to a vocabulary, data structures, and programs that allow knowledge of a domain to be acquired and used. This has long been a central research topic in AI (see [Amarel81, Barr81, Brachman80, Cohen82] for reviews of relevant work).

Extendability -- The data structures and access programs must be flexible enough to allow extensions to the knowledge base without forcing substantial revisions. The knowledge base will contain heuristics that are built out of experts' experience. Not only do the experts fail to remember all relevant heuristics they use, but their experience gives them new heuristics and forces modifications to the old ones. New cases require new distinctions. Moreover, the most effective way we have found for building a knowledge base is by incremental improvement. Experts cannot define a complete knowledge base all at once for interesting problem areas, but they can define a subset and then refine it over many weeks or months of examining its consequences. All this argues for treating the knowledge base of an expert system as an open-ended set of facts and relations, and keeping the items of knowledge as modular as possible.

Simplicity -- We have all seen data structures that were so baroque as to be incomprehensible, and thus unchangeable. The flexibility we argued for above requires conceptual simplicity and uniformity so that access routines can be written (and themselves modified occasionally as needed). Once the syntax of the knowledge base is fixed, the access routines can be fixed to a large extent. Knowledge acquisition, for example, can take place with the expert insulated from the data structures by access routines that make the knowledge base appear simple, whether it is or not. However, new reasons will appear for accessing the knowledge base, as in explanation of the contents of the knowledge base, analysis of the links among items, display, or tutoring. With each of these reasons, simple data structures pay large benefits.
From the designer's point of view, there are two ways of maintaining conceptual simplicity: keeping the form of knowledge as homogeneous as possible, or writing special access functions for non-uniform representations. There is another sense of simplicity that needs mentioning as well. That is the simplicity that comes from using roughly the same terminology as the experts use. Programmers often find ingenious alternative ways of representing and coding what a specialist has requested, a fact that sometimes makes processing more "efficient" but makes modifying the knowledge base a nightmare.

Explicitness -- The point of representing much of an expert's knowledge is to give
the system a rich enough knowledge base for high-performance problem solving. But because a knowledge base must be built incrementally, it is necessary to provide means for inspecting and debugging it easily. With items of knowledge represented explicitly, in relatively simple terms, the experts who are building knowledge bases can determine what items are present and (by inference) which are absent.
2.1 Rule-Based Representation Frameworks

2.1.1 Production Systems

Rule-based expert systems evolved from a more general class of computational models known as production systems [Newell73]. Instead of viewing computation as a prespecified sequence of operations, production systems view computation as the process of applying transformation rules in a sequence determined by the data. Where some
rule-based systems [McDermott80] employ the production-system formalism very strictly, others such as MYCIN have taken great liberties with it. However, the production system framework provides concepts that are of great use in understanding all rule-based systems. In general, the left-hand-side (LHS) or condition part of a rule can be any pattern that can be matched against the database. It is usually allowed to contain variables that might be
bound in different ways, depending upon how the match is made. Once a match is made, the right-hand-side (RHS) or action part of the rule can be executed. In general, the action can be any arbitrary procedure employing the bound variables. In particular, it can result in addition of new facts to the database, or modification of old facts in the database.

2.1.2 EMYCIN Viewed as a Production System

To see how EMYCIN uses the production system formalism to represent knowledge, we must see how it represents facts about the current problem in its database, and how it represents
general knowledge in its rules. In the MYCIN dialog shown above, fact triples are shown in the explanations as
individual clauses of rules. For example, after Question #35, one fact that has been established is "the type of the infection is bacterial". It can also be seen that each question is asking for a value to be associated with an attribute of an object. In Question #35, for example, MYCIN is asking whether or not the infection of the patient is hospital-acquired. The basic EMYCIN syntax for a rule is:

  PREMISE: ($AND (clause-1) ... (clause-n))
  ACTION:  (CONCLUDE fact certainty)
where the "$" prefix indicates that the premise is not a logical conjunction, but a plausible conjunction that must take account of the certainty factors associated with each of the clauses. The action taken by this rule is merely the addition of a new fact to the database (or, if the fact were already present, a modification of its certainty). EMYCIN also provides some mechanisms to allow the execution of more complicated actions. For example, in MYCIN we find the following rule (stated in English):

RULE160
If:   1) The time frame of the patient's headache is acute,
      2) The onset of the patient's headache is abrupt, and
      3) The headache severity (using a scale of 0 to 4; maximum is 4) is greater than 3
Then: 1) There is suggestive evidence (.6) that the patient's meningitis is bacterial,
      2) There is weakly suggestive evidence (.4) that the patient's meningitis is viral, and
      3) There is suggestive evidence (.6) that the patient has blood within the subarachnoid space

Thus, this rule has three conclusions. It is represented internally in LISP as follows:

PREMISE: ($AND (SAME CNTXT HEADACHE-CHRONICITY ACUTE)
               (SAME CNTXT HEADACHE-ONSET ABRUPT)
               (GREATERP* (VAL1 CNTXT HEADACHE-SEVERITY) 3))
ACTION:  (DO-ALL (CONCLUDE CNTXT MENINGITIS BACTERIAL-MENINGITIS TALLY 600)
                 (CONCLUDE CNTXT MENINGITIS VIRAL-MENINGITIS TALLY 400)
                 (CONCLUDE CNTXT SUBARACHNOID-HEMORRHAGE YES TALLY 600))

These examples illustrate the basic techniques for representing facts and knowledge within the EMYCIN framework.
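To make the representation concrete, the following Python sketch holds a simplified RULE160 as pure data and applies it to a database of attribute-object-value triples. This is an illustration, not EMYCIN code: the function names, the omission of the severity clause, and the "minimum of the clause certainties" reading of $AND are our simplifying assumptions; the -1000..1000 scale mirrors the TALLY values shown above.

```python
# Illustrative sketch (not actual EMYCIN code): an attribute-object-value
# rule held as data, with its premise evaluated against a database of
# fact triples. Certainties are scaled -1000..1000, as in TALLY above.

def sand(clause_cfs):
    """Plausible conjunction ($AND): here taken as the minimum of the
    clause certainties (an assumption for this sketch)."""
    return min(clause_cfs)

def same(db, context, attribute, value):
    """SAME: certainty with which (context, attribute) has the value."""
    return db.get((context, attribute, value), 0)

# Simplified RULE160 (severity clause omitted), stored as data:
rule160 = {
    "premise": [("HEADACHE-CHRONICITY", "ACUTE"),
                ("HEADACHE-ONSET", "ABRUPT")],
    "conclusions": [("MENINGITIS", "BACTERIAL-MENINGITIS", 600),
                    ("MENINGITIS", "VIRAL-MENINGITIS", 400),
                    ("SUBARACHNOID-HEMORRHAGE", "YES", 600)],
}

def apply_rule(db, context, rule):
    tally = sand([same(db, context, a, v) for a, v in rule["premise"]])
    if tally > 200:                      # EMYCIN-style 0.2 threshold
        for attribute, value, cf in rule["conclusions"]:
            # CONCLUDE: add the fact, attenuated by the premise certainty
            db[(context, attribute, value)] = cf * tally // 1000

db = {("PATIENT-1", "HEADACHE-CHRONICITY", "ACUTE"): 1000,
      ("PATIENT-1", "HEADACHE-ONSET", "ABRUPT"): 1000}
apply_rule(db, "PATIENT-1", rule160)
```

Because the rule is data rather than code, the same structure can be read by an interpreter, an explanation module, or an acquisition editor, which is the modularity argument made above.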
Similar examples could be given for each of the several framework systems that have been developed to facilitate the construction of rule-based expert systems, including:

  OPS      Carnegie-Mellon University [Forgy77]
  EMYCIN   Stanford University [vanMelle80]
  AL/X     University of Edinburgh
  EXPERT   Rutgers University [Weiss79a]
  KAS      SRI International [Reboh81]
  RAINBOW  IBM Scientific Center (Palo Alto) [Hollander79]

These framework systems provide important tools (such as editors) and facilities (such as explanation systems) that are beyond the scope of this paper to discuss. They also vary considerably in the syntax and the rule interpreters they employ. For example, in some of them all attributes must be binary. In some, uncertainty is expressed more formally as probabilities, or less formally as "major" or "minor" indicators, or cannot be expressed at all. And in some, additional structure is imposed on the rules to guide the rule interpreter. Despite these variations, these systems share a commitment to rules as the primary method of knowledge representation. This is at once their greatest strength and their greatest weakness, providing uniformity and modularity at the cost of imposing some very confining constraints.

2.2 Alternatives to Rule-Based Representation of Knowledge

There are alternatives to representing task-specific knowledge in rules. Naturally, it is sometimes
advantageous to build a new system in PASCAL, FORTRAN, APL, BASIC, LISP, or another language, using a variety of data structures and inference procedures, as needed for the problem. Coding a new system from scratch, however, does not allow concentrating primarily on the knowledge required for high performance. Rather, one tends to spend more time on debugging the procedures that access and manipulate the knowledge.

2.2.1 Frame-Based Representation Languages

One approach to representing knowledge that allows rich linkages between facts is a generalization of semantic nets [Brachman77] known as frames [Minsky75]. A frame is an encoding of knowledge about an object, including not only properties (often called "slots") and values, but pointers to other frames and attached procedures for computing values. The pointers indicate semantic links to other concepts, e.g., brother-of, and also indicate more general concepts from which properties may be inherited and more specialized concepts to which its properties will be manifested. Programming with this mode of representation is sometimes called object-centered programming because knowledge is tied to objects and classes of objects. Some well-known frame-based representation languages are:

  KRL     Xerox PARC [Bobrow77]
  OWL     M.I.T. [Szolovits77]
  UNITS   Stanford University [Stefik79]
  FRL     M.I.T. [Roberts77]
  AIMDS   Rutgers University [Sridharan80]
  KL-ONE  Bolt, Beranek & Newman [Brachman78]

2.2.2 Logic-Based Representation Languages

A logic-based representation scheme is one in which knowledge about the world is represented as assertions in logic, usually first-order predicate logic or a variant of it. This mode of representation is normally coupled with an inference procedure based on theorem proving. Logic-based languages allow quantified statements and all other well-formed formulas as assertions. The rigor of logic is an advantage in specifying precisely what is known and knowing how the knowledge will be used.
A disadvantage has been dealing with the imprecision and uncertainty of plausible reasoning. To date there have been few examples of logic-based expert systems, in part because of the newness of the languages. Some logic-based representation languages are:

  PLANNER  M.I.T. [Hewitt72]
  PROLOG   Edinburgh University [Warren77]
  ALICE    University of Paris [Lauriere78]
  FOL      Stanford University [Weyhrauch80]

2.2.3 Generalized Languages

There is research in progress on general tools for helping a designer construct expert systems of various sorts. Designers specify the kind of representation and control and then add the task-specific knowledge within those constraints. The main advantage of such an approach is freedom -- designers specify their own constraints. The main disadvantage is complexity -- designers must be very knowledgeable about the range of choices and must be very patient and systematic about specifying them. These tools look even more like high-level programming languages, which they are. The best known are:

  ROSIE        Rand Corp [Fain81]
  AGE          Stanford University [Nii79]
  RLL          Stanford University [Greiner80]
  HEARSAY-III  USC/ISI [Erman81]
  MRS          Stanford University [Genesereth81a]
2.3 Knowledge Representation Issues

Regardless of the particular choice of representation language, a number of issues are important in the construction of
knowledge bases for expert systems. We mentioned extendability, simplicity and explicitness as three global criteria. In addition, the issues of consistency, completeness, robustness and transparency are major design considerations for all systems. For specific problems, it may be essential to represent and reason with temporal relations, spatial models, compound objects, possible worlds, beliefs, and expectations. These are discussed below.
3. INFERENCE METHODS IN EXPERT SYSTEMS

3.1 Logical and Plausible Inference

Although the performance of most expert systems is determined more by the amount and organization of the knowledge possessed than by the inference strategies employed, every expert system needs inference methods to apply its knowledge. The resulting deductions can be strictly logical or merely plausible. Rules can be used to support either kind of deduction. Thus, a rule such as

  Has(x, feathers) OR (Able(x, fly) & Able(x, lay-eggs)) --> Class(x, bird)

amounts to a definition, and can be used, together with relevant facts, to deduce logically whether or not an object is a bird. On the other hand, a rule such as

  State(engine, won't turn over) & State(headlights, dim) --> State(battery, discharged)

is a "rule-of-thumb" whose conclusion, though plausible, is not always correct. Clearly, uncertainty is introduced whenever such a judgmental rule is
employed. In addition, the conclusions of a logical rule can also be uncertain if the facts it employs are uncertain. Both kinds of uncertainty are frequently encountered in expert systems applications. However, in either case we are using the rule to draw conclusions from premises, and there are many common or analogous issues. In this section we temporarily ignore the complications introduced by uncertainty, and consider methods for using rules when everything is certain.

3.2 Control

In this section we describe three commonly used control strategies: (1) data-driven, (2) goal-driven, and (3) mixed. Since control concerns are procedural, we shall describe these strategies semi-formally as if they were programs written in a dialect of PASCAL, a "Pidgin PASCAL." It is important to note at the outset, however, that these procedures are formal, employing no special knowledge about the problem domain; none of them possesses an intrinsic power to prevent combinatorial explosions. This has led to the notion of incorporating explicitly represented control knowledge in the rule interpreter, an idea that we discuss briefly at the end of this section.

3.2.1 Data-Driven Control

With data-driven control, rules are applied whenever their left-hand-side conditions are satisfied. To use this strategy, one must begin by entering information about the current problem as facts in the database. The following simplified procedure, which we shall call "Respond," can then be used to execute a basic data-driven strategy.

  Procedure Respond;
    Scan the database for the set S of applicable rules;
    While S is non-empty and the problem is unsolved do
    begin
      Call Select-Rule(S) to select a rule R from S;
      Apply R and update the database;
      Scan the database for the set S of applicable rules
    end.

Here we assume that a rule is applicable whenever there are facts in the
database that satisfy the conditions in its left-hand side. If there are no applicable rules, there is nothing to be done, except perhaps to return to the user and ask him or her to supply some additional information. (And, of course, if the problem is solved, there is nothing more to do.)

3.2.2 Goal-Driven Control

A goal-driven control strategy focuses its efforts by only considering rules that are applicable to some particular goal. Since we are limiting ourselves to rules that can add simple facts to the database, achieving a goal G is synonymous with showing that the fact statement corresponding to G is true. In nontrivial problems, achieving a goal requires setting up and achieving subgoals. This can also lead to fruitless wandering if most of the subgoals are unachievable,
but at least there is always a path from any subgoal to the original goal.

  Procedure Achieve(G);
    Scan the knowledge base for the set S of rules that determine G;
    If S is empty then ask the user about G
    else While G is unknown and rules remain in S do
    begin
      Call Choose-Rule(S) to choose a rule R from S;
      G' <-- condition(R);
      If G' is unknown then call Achieve(G');
      If G' is true then apply R
    end.

Thus, the first step is to gather together all of the rules whose right-hand sides can establish G. If there is more than one relevant rule, procedure Choose-Rule receives the problem of making the choice. Once a rule R is selected, its left-hand side G' is examined to see if R is applicable. If there is no information in the database about G', the determination of its truth or falsity becomes a new subgoal, and the same procedure Achieve is applied to G' recursively.

3.2.3 Mixed Strategies

Data-driven and goal-driven strategies represent two extreme approaches to control. Various mixtures of these approaches have been investigated in an attempt to secure their various advantages while minimizing their disadvantages. The following simple procedure combines the two by alternating between the two modes.

  Procedure Alternate;
    Repeat
      Let user enter facts into global database;
      Call Respond to deduce consequences;
      Call Select-Goal to select a goal G;
      Call Achieve(G) to try to establish G
    until the problem is solved.

Here Respond and Achieve are the data-driven and goal-driven procedures described previously. Select-Goal, which we do not attempt to specify, uses the partial conclusions obtained from the data-driven phase to determine a goal for the goal-driven phase. Thus, the basic idea is to
alternate between these two phases, using information volunteered by the user to determine a goal, and then querying the user for more information while working on that goal.

3.3 Explicit Representation of Control Knowledge

The advantages of making the task-specific knowledge modular and explicit extend to control knowledge as well. The strategy by which an expert system reasons about a task depends
on the nature of the task and the nature of the knowledge the system can use. Neither data-driven, goal-driven, nor any particular mixed strategy is good for every problem. Different approaches are needed for different problems. Indeed, one kind of knowledge possessed by experts is knowledge of procedures that are effective for their problems.
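The Respond and Achieve procedures of Section 3.2 can be sketched in executable form. The following Python version is a minimal illustration under stated simplifications: rules are (premises, conclusion) pairs over atomic facts, Select-Rule and Choose-Rule degenerate to "take the first", and the "ask the user" step is reduced to returning failure. The example rules are the car-diagnosis rules-of-thumb from Section 3.1, extended by one hypothetical repair rule.

```python
# Minimal sketch of data-driven (Respond) and goal-driven (Achieve)
# control over rules of the form (set-of-premises, conclusion).
# Conflict resolution is reduced to "first applicable rule".

RULES = [
    ({"engine won't turn over", "headlights dim"}, "battery discharged"),
    ({"battery discharged"}, "recharge battery"),   # hypothetical rule
]

def respond(facts):
    """Data-driven control: repeatedly fire any rule whose premises are
    all in the database, until no rule adds anything new."""
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)        # apply R, update the database
                changed = True
    return facts

def achieve(goal, facts):
    """Goal-driven control: establish the goal directly from the database,
    or recursively achieve the premises of a rule that determines it.
    (Assumes the rule set is acyclic; a real system would also query
    the user when no rule determines the goal.)"""
    if goal in facts:
        return True
    for premises, conclusion in RULES:       # rules that determine G
        if conclusion == goal and all(achieve(p, facts) for p in premises):
            facts.add(goal)
            return True
    return False

facts = {"engine won't turn over", "headlights dim"}
respond(facts)   # forward chaining derives both conclusions
```

Note how the same two rules serve both strategies unchanged; only the interpreter differs, which is the point of keeping control knowledge separate from the rules themselves.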
4. REASONING WITH UNCERTAINTY

The direct application of these methods of deduction to real-world problems is complicated by the fact that both the data and the expertise are often uncertain. This fact has led the designers of expert systems to abandon the pursuit of logical completeness in favor of developing effective heuristic ways to exploit the fallible but valuable judgmental knowledge that human experts bring to particular classes of problems. Thus, we now turn to comparing methods that have been used to accommodate uncertainty in the reasoning.

4.1 Plausible Inference

Let A be an assertion about the world, such as an attribute-object-value triple. How can one treat the uncertainty that might be associated with this assertion? The classical formalism for quantifying uncertainty is probability theory, but other alternatives have been proposed and used. Among these are certainty theory, possibility theory, and the Dempster/Shafer theory of evidence. We shall consider all four of these approaches in turn, with emphasis on the first two.

4.2 Bayesian Probability Theory

With probability theory, one assigns a probability value P(A) to every assertion A. In expert systems applications, it is usually assumed that P(A) measures the degree to which A is believed to be true, where P(A) = 1 if A is known to be true, and P(A) = 0 if A is known to be false. In general, the degree of belief in A will change as new information is obtained. Let P(A) denote our initial or prior belief in A, and let the conditional probability P(A|B) denote our revised
belief in A upon learning that B is true. If this change in probability is due to the application of the rule B --> A in a rule-based system, then some procedure must be invoked to change the probability of A from P(A) to P(A|B) whenever this rule is applied. One convenient procedure, based on the odds-likelihood form of Bayes' Rule, multiplies the prior odds on A by the likelihood ratio

  L = P(B|A) / P(B|~A),

where P(B|~A) is the probability of observing effect B when cause A is absent. If we think of the link between B and A as being expressed by a rule of the form B --> A, then we can think of the logarithm of the likelihood ratio L as representing the strength or weight of the rule; rules with
positive weights increase the probability of A, and rules with negative weights decrease it.

4.2.1 Combining Rules

Suppose that we have n plausible rules of the form Bi --> A, each with its own weight. Formally, the generalization of Bayes' Rule is simple. We merely consider B to be the conjunction B1 & B2 & ... & Bn, and use the likelihood ratio

  L = P(B1, B2, ..., Bn | A) / P(B1, B2, ..., Bn | ~A).

The problem with this solution is that it implies that we not only have weights for the individual rules connecting the Bi to A, but that we also have weights for the pairs Bi and Bj, and so on, not to mention combinations involving negations when the evidence is known to be absent. This not only leads to extensive, nonintuitive computations, not directly related to the rules, but also requires forcing the expert to estimate a very large number of weight values. The usual escape is to assume conditional independence of the evidence, which amounts to replacing the pair of rules {(B1 --> A with weight L1), (B2 --> A with weight L2)} with the rule (B1 & B2 --> A with weight L1*L2). Thus, rather than viewing probability theory as a paradigm that prescribes how information should be processed, the knowledge engineer employs it as a tool to obtain the desired behavior.

4.2.2 Uncertain Evidence

There are two reasons why an assertion B might be uncertain: (1) the user
might have said that B is uncertain, or (2) the program might have deduced B using a plausible rule. If we want to use B in a rule to compute P(A|B), the question then arises as to how to discount the conclusion because of the uncertainty associated with B. If E denotes the evidence bearing on B, one standard answer is

  P(A|E) = P(A|B)*P(B|E) + P(A|~B)*[1 - P(B|E)].

This formula certainly works in the extreme cases of complete certainty. That is, if we know that B is true we obtain P(A|B), and if we know that B is false we obtain P(A|~B). Unfortunately, a serious problem arises in intermediate cases. In particular, suppose that E actually supplies no information about B, so that P(B|E) is the same as the prior probability P(B). While the formula above promises to yield the prior probability P(A), when the computation is based on numerical values obtained from the expert, the resulting value for P(A|E) will usually not agree with the expert's estimate for the prior probability P(A). That is, the four quantities P(A), P(B), P(A|B) and P(A|~B) are not independent, and the expert's subjective estimates for them are almost surely inconsistent.
4.3 Certainty Theory

We have seen several problems that arise in using traditional probability theory to quantify uncertainty in expert systems. Not the least of these is the need to specify numerical values for prior probabilities. While an expert may be willing to say how certain he or she feels about a conclusion A when evidence B is present, he or she may be most reluctant to specify a probability for A in the absence of any evidence,
particularly when rare but important events are involved. Indeed, some of the problems that are encountered in obtaining consistent estimates of subjective probabilities may well be due to the fact that the expert is not able to separate probability from utility or significance, and is really expressing some unspecified measure of importance.

4.3.1 Combining Evidence

Suppose that (1) the present certainty of an event A is CA (which may be nonzero because of the previous application of rules that conclude A), (2) there is an unused rule of the form B --> A with a certainty factor CF, and (3) B is observed to be true. Then the EMYCIN formula for updating CA = C(A) to C(A|B) is

  C(A|B) = CA + CF*(1 - CA)                     if CA >= 0 and CF >= 0
         = CA + CF*(1 + CA)                     if CA <= 0 and CF <= 0
         = (CA + CF) / (1 - min(|CA|, |CF|))    otherwise.
This is the EMYCIN analog of the procedure of multiplying likelihood ratios to
combine "independent" evidence. By applying it repeatedly, one can combine the conclusions of any number of rules. Aside from being easy to compute, it has several other desirable properties. First, the resulting certainty C(A|B) always lies between -1 and 1, being +1 if CA or CF is +1, and -1 if CA or CF is -1. When contradictory conclusions are combined (so that CA
= -CF), the resulting certainty is 0. Except at the singular points (1, -1) and (-1, 1), C(A|B) is a continuous function of CA and CF, increasing monotonically in each variable. The formula is symmetric in CA and CF, and the results it yields when more than two pieces of evidence are combined are independent of the order in which they are considered.

4.3.2 Uncertain Evidence

When the evidence B for a rule B --> A is itself uncertain, it is clear that the strength of the conclusion must be reduced. The EMYCIN procedure is to multiply the certainty factor CF for the rule by the certainty of B, provided that the certainty of B is positive. If the certainty of B is negative, the rule is considered to be inapplicable, and is not used. EMYCIN assumes that a rule cannot be employed unless the certainty of its antecedent is greater than a threshold value of
0.2. This heuristic -- which implies that the certainty of a conclusion is not a strictly continuous function of the certainty of the evidence -- saves time by inhibiting the application of many marginally effective rules, and saves confusion by making explanations provided by the system more understandable. These formulas are essentially the same as the corresponding formulas of possibility theory, which is discussed briefly in the next section.

4.4 Possibility Theory

Probability theory captures only some of the important aspects of uncertainty, and a variety of alternative approaches, such as certainty theory, have
been developed to overcome its limitations. One of the most interesting of the recent alternatives is Zadeh's theory of possibility [Zadeh78]. It is based on his earlier development of the theory of fuzzy sets [Zadeh65], much as probability theory is based on measure theory.

  Poss{X=s OR Y=t} = max[ Poss{X=s}, Poss{Y=t} ]
  Poss{X=s & Y=t}  = min[ Poss{X=s}, Poss{Y=t} ]

and

  Poss{X ≠ s} = 1 - Poss{X=s}.

For most of the concepts of probability theory there is a corresponding concept in possibility
theory. For example, it is possible to define multivariate possibility distributions, marginal possibility distributions, and conditional possibility distributions (see [Zadeh78]). Thus, in principle one can use fuzzy possibility theory much like probability theory to quantify the uncertainty introduced by vagueness, whether the vagueness comes from the data or from the rules.

4.5 The Dempster/Shafer Theory of Evidence

We conclude this overview of formalisms for treating uncertainty with a brief consideration of
a generalization of probability theory created by Dempster and developed by Shafer that has come to be known as the Dempster/Shafer theory of evidence [Shafer76, Barnett81].
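Before leaving uncertainty, the MYCIN-style certainty-factor machinery of Section 4.3 is small enough to sketch in full. The following Python functions are an illustration, not MYCIN source: combine implements the standard piecewise certainty-combination formula from the MYCIN literature, and attenuate implements the discounting of a rule's CF by the certainty of its antecedent, with the 0.2 applicability threshold.

```python
# Sketch of MYCIN/EMYCIN-style certainty-factor arithmetic (Section 4.3).
# Certainties lie in [-1, 1]; the combination is symmetric, order-
# independent, and singular only at (1, -1) and (-1, 1).

def combine(ca, cf):
    """Merge the existing certainty CA of a conclusion with the certainty
    factor CF contributed by a newly applied rule."""
    if ca >= 0 and cf >= 0:
        return ca + cf * (1 - ca)
    if ca <= 0 and cf <= 0:
        return ca + cf * (1 + ca)
    # Mixed signs; contradictory evidence pulls the result toward 0.
    return (ca + cf) / (1 - min(abs(ca), abs(cf)))

def attenuate(cf, cb, threshold=0.2):
    """Discount a rule's CF by the certainty cb of its antecedent.
    At or below the threshold the rule is treated as inapplicable."""
    return cf * cb if cb > threshold else 0.0

# Two rules concluding A with CF 0.6 and 0.4 on certain evidence:
c = combine(0.0, 0.6)
c = combine(c, 0.4)          # 0.6 + 0.4*(1 - 0.6) = 0.76
```

A quick check of the properties claimed for the formula: combine(0.5, -0.5) yields 0 (contradiction), combine(a, b) equals combine(b, a) (symmetry), and repeated combination never leaves [-1, 1].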
5. KEY CONCEPTS

In the previous three sections we focused on three central issues in the design of expert systems, with special attention to
rule-based systems. The representation, inference methods and methods for reasoning under uncertainty are the elements of the design of rule-based systems that give them power. We turn now to a broader look at several less technical aspects of building an expert system. These are observations derived from our own experience and constitute suggestions for designing an expert system. They also reflect the current state of the art.

NATURE OF THE PROBLEM:

Narrow scope -- The task for the system must be carefully chosen to be narrow enough that the relevant expertise can be encoded, and yet complex enough that expertise is required. This limitation is more because of the time it takes to engineer the knowledge into a system, including refinement and debugging, than because of the space required for the knowledge base.

Existence of an expert -- There are problems so new or so complex that no one ranks as an expert in the problem area. Generally speaking, it is unwise to expect to be able to construct an expert system in areas where there are no experts.

Agreement among experts -- If current problem solving expertise in a task area leaves room for frequent and substantial disagreements among experts, then the task is not appropriate for an expert system.

Data available -- Not only must the expertise be available, but test data must be available (preferably online). Since an expert system is built incrementally, with knowledge added in response to observed difficulties, it is necessary to have several test cases to help explore the boundaries of what the system knows.

Milestones definable -- A task that can be broken into subtasks, with measurable milestones, is better than one that cannot be demonstrated until all the parts are working.

REPRESENTATION:

Separation of task-specific knowledge from the rest of the program -- This separation is essential to maintain the flexibility and understandability required in expert systems.
Attention to detail -- Inclusion of very specific items of knowledge about the domain, as well as general facts, is the only way to capture the expertise that experience adds to textbook knowledge.

Uniform data structures -- A homogeneous representation of knowledge makes it much easier for the system builder to develop acquisition and explanation packages.

INFERENCE:

Symbolic reasoning -- It is commonplace in AI, but not elsewhere, to regard symbolic, non-numeric reasoning as a powerful method for problem solving by computers. In application areas where mathematical methods are absent or computationally intractable, symbolic reasoning offers an attractive alternative.

Combination of deductive logic and plausible reasoning -- Although deductive reasoning is the standard by which we measure correctness, not all reasoning -- even in science and mathematics -- is accomplished by deductive logic. Much of the world's expertise is in heuristics, and programs that attempt to capture expert-level knowledge need to combine methods for deductive and plausible reasoning.

Explicit problem solving strategy -- Just as it is useful to separate the domain-specific knowledge from the inference method, it is also useful to separate the problem solving strategy from both. In debugging the system it helps to remember that the same knowledge base and inference method can produce radically different behaviors with different strategies. For example, consider the difference between "find the best" and "find the first over threshold".

Interactive user interfaces -- Drawing the user into the problem solving process is important for tasks in which the user is responsible for the actions recommended by the expert system, as in medicine. For such tasks, the inference method must support an interactive style in which the user contributes specific facts of the case and the program combines them in a coherent analysis.
EXPLANATION:

Static queries of the knowledge base -- The process of constructing a large knowledge base requires understanding what is (and is not) in it at any moment. Similarly, using a system effectively depends on assessing what it does and does not know.

Dynamic queries about the line of reasoning -- As an expert system gathers data and makes intermediate conclusions, users (as well as system builders) need to be able to ask enough questions to follow the line of reasoning. Otherwise the system's advice appears as an oracle from a black box and is less likely to be acceptable.

KNOWLEDGE ACQUISITION:

Bandwidth -- An expert's ability to communicate his/her expertise within the framework of an expert system is limited by the restrictions of the framework, the degree to which the knowledge is already well-codified, and the speed with which the expert can create and modify data structures in the knowledge base.

Knowledge engineer -- One way of providing help to experts during construction of the knowledge base is to let the expert communicate with someone who understands the syntax of the framework, the rule interpreter, the process of knowledge base construction, and the practical psychology of interacting with world-class experts. This person is called a "knowledge engineer".

VALIDATION:

Level of performance -- Empirical measures of adequacy are still the best indicators of performance, even though they are not sufficient for complete validation by any means. As with the testing of new drugs by the pharmaceutical industry, testing expert systems may best be accomplished by randomized studies and double-blind experiments.

Static evaluation -- Because the knowledge base may contain judgmental rules as well as axiomatic truths, logical analysis of its completeness and consistency will be inadequate. However, static checks can reveal potential problems, such as one rule subsuming another and one rule possibly contradicting another.
Areas of weakness in a knowledge base can sometimes be found by analysis as well.
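The static checks mentioned above can be sketched in a few lines. Here is a hypothetical example (rule names, premises, and the list of opposing conclusions are all invented) that flags one rule subsuming another and a pair of rules that may contradict each other:

```python
# Hypothetical knowledge base: rule name -> (frozenset of premises, conclusion).
RULES = {
    "R1": (frozenset({"fever"}), "infection"),
    "R2": (frozenset({"fever", "cough"}), "infection"),
    "R3": (frozenset({"fever", "cough"}), "no_infection"),
}

def subsumptions(rules):
    """Rule A subsumes rule B if they share a conclusion and A's
    premises are a strict subset of B's: B can never add anything new."""
    return [(a, b)
            for a, (pa, ca) in rules.items()
            for b, (pb, cb) in rules.items()
            if a != b and ca == cb and pa < pb]

def contradictions(rules, opposites={("infection", "no_infection")}):
    """Flag rule pairs with identical premises but opposing conclusions."""
    return [(a, b)
            for a, (pa, ca) in rules.items()
            for b, (pb, cb) in rules.items()
            if a < b and pa == pb
            and ((ca, cb) in opposites or (cb, ca) in opposites)]

print(subsumptions(RULES))    # R1 subsumes R2
print(contradictions(RULES))  # R2 and R3 may conflict
```

A real static analyzer would have to account for certainty factors and intermediate conclusions, but even this naive subset check surfaces the two problem classes the text names.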
5.1 Classes of Problems for Expert Systems

The first of the key concepts listed above was the nature of the problem. We examine this issue in somewhat more detail in this and the next two sections. While there are many activities an expert performs, the activities for which expert systems have been built fall into three categories: analysis, synthesis, and interface problems.
An expert system working on one of these problems analyzes a description of a situation, and provides plausible interpretations of what the data seem to indicate. The data may come from a variety of sources ranging from subjective opinion
to precise readings of instruments.
In addition to analysis and synthesis problems, expert systems have been built to provide advice on how to use a complex system [Anderson76, Bennett79, Genesereth78, Hewitt75, Krueger81, Rivlin80, Waterman79] or to tutor a novice in the use or understanding of a body of knowledge [Brown82, Clancey79, O'Shea79]. These problems are partly analytic, since the advice or tutorial must be guided by an analysis of the context, and partly synthetic, since the advice must be tailored to the user and the problem at hand.

5.2 The Data

One of the central concerns in choosing a task for an expert system is the nature of the
data. In problems of analysis, the data require interpretation by means of some model or theory. Yet in many interesting problems, the data are not as complete or "clean" as the theory seems to require. In applying a theory to individual cases, the data are not always available to "plug into" formulas and principles. In the absence of perfect data, however, experts can still provide good suggestions when a novice cannot. We have identified several important concerns, briefly discussed below: incompleteness, noise, and non-independence.

5.3 The Expertise

The proficiency of an expert system is dependent on the amount of domain-specific expertise it contains. But expertise about interesting problems is not always neatly codified and waiting for transliteration
into a program's internal representation. Expertise exists in many forms and in many places, and the task of knowledge engineering includes bringing together what is known about a problem as well as transforming (not merely transcribing) it into the system.
6. CONCLUSIONS

Expert systems represent an important set of applications of Artificial Intelligence to problems of commercial as well as scientific importance. There appear to be three main motivations for building an expert system, apart from research purposes:
Rule-based systems are
currently the most advanced in their system-building environments and explanation capabilities, and have been used to build many demonstration programs. Most of the programs work on analysis tasks such as medical diagnosis, electronic troubleshooting, or data interpretation. The capability of current systems is difficult to define. It is clear, however, that they are specialists in very narrow areas and have very limited (but not totally missing) abilities to acquire new knowledge or explain
their reasoning.

Which of the following methods can be used for handling uncertainty in rule-based expert systems?

Fuzzy logic is a method of choice for handling uncertainty in some expert systems.
How is uncertainty managed in an expert system?

In expert systems, uncertainty relates to working with inexact data, imprecise information, the handling of similar situations, the reliability of results, and so on. An expert system allows the user to assign probabilities, certainty factors, confidence levels, and many other such measures to any or all input data.
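The certainty factors mentioned above can be combined with the classic MYCIN-style rule: certainty factors range from -1 (certainly false) to +1 (certainly true), and evidence from separate rules supporting the same hypothesis is merged incrementally. The sketch below uses the standard combination formula; the numeric CF values themselves are invented for illustration:

```python
def combine_cf(a: float, b: float) -> float:
    """MYCIN-style combination of two certainty factors for the
    same hypothesis. Each CF lies in [-1, 1]."""
    if a >= 0 and b >= 0:          # both supportive: diminishing returns
        return a + b * (1 - a)
    if a < 0 and b < 0:            # both disconfirming (symmetric case)
        return a + b * (1 + a)
    # mixed evidence: pull the net belief toward zero
    return (a + b) / (1 - min(abs(a), abs(b)))

# Two rules each lend partial support to the same conclusion:
print(combine_cf(0.6, 0.4))    # -> 0.76
# Conflicting evidence reduces the net belief:
print(combine_cf(0.6, -0.4))   # -> about 0.33
```

A useful property of this scheme is that the result never exceeds 1 and the order in which same-sign evidence arrives does not change the final value.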
What is a rule-based expert system?

A rule-based expert system is the simplest form of artificial intelligence and uses prescribed knowledge-based rules to solve a problem. The aim of the expert system is to take knowledge from a human expert and convert it into a number of hardcoded rules to apply to the input data.
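A minimal sketch of that idea (the rules here are a made-up troubleshooting example, not from any real system): the hardcoded rules are applied repeatedly to the input facts until no new conclusion can be derived, i.e. simple forward chaining.

```python
# Hypothetical rules elicited from an expert: (set of premises, conclusion).
RULES = [
    ({"engine_cranks", "no_start"}, "fuel_or_spark_problem"),
    ({"fuel_or_spark_problem", "fuel_gauge_ok"}, "check_ignition"),
]

def forward_chain(facts, rules):
    """Apply rules to the fact set until a fixed point is reached."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

derived = forward_chain({"engine_cranks", "no_start", "fuel_gauge_ok"}, RULES)
print(sorted(derived))  # includes the two derived conclusions
```

Note that the second rule only becomes applicable after the first one fires, which is why the loop runs to a fixed point rather than making a single pass.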
What are the four components of an expert system?

An expert system generally consists of four components: a knowledge base, the search or inference system, a knowledge acquisition system, and the user interface or communication system.
What are the weaknesses of a rule-based system?

The disadvantages of a rule-based (RB) system are as follows. Lots of manual work: the RB system demands deep knowledge of the domain as well as a lot of manual work. Time consuming: generating rules for a complex system is quite challenging and time consuming.