# Nonmonotonic Temporal Reasoning



Andrew B. Baker and Yoav Shoham
Department of Computer Science, Stanford University, Stanford, California 94305
July 14, 1989

1 Introduction

Both nonmonotonic reasoning and temporal reasoning are rich and multifaceted subjects, with relevance to many areas of artificial intelligence. The Handbook deals with each in detail, but, with the exception of the present article, in isolation. Our purpose here is to focus on the interaction between nonmonotonic reasoning and temporal reasoning, an interaction that has been the subject of much recent research. In Section 2, we review three problems that arise when one tries to formalize temporal information: the frame problem, the ramification problem, and the qualification problem. It has often been suggested that these problems can be solved by the use of nonmonotonic reasoning. Unfortunately, the naive use of nonmonotonic logics for temporal reasoning leads to a serious difficulty. The difficulty was pointed out by Hanks and McDermott [8] by way of an example that has come to be called the Yale shooting problem. The Yale shooting problem is discussed in Section 3, and Section 4 covers the various solutions that have been proposed to it. Even after solving the Yale shooting problem, other difficulties will of course remain; some of these are reviewed in Section 5. Finally, there is the issue of whether we should be using logic at all in this endeavor; Section 6 mentions an alternative approach.

2 Frame, ramification, and qualification problems

The problems that we will be discussing are the result of trying to tractably formalize temporal information. The best-known one is the frame problem, described by McCarthy and Hayes in [20]. Although the problem was defined in the context of a particular temporal formalism, the situation calculus, it was recognized that it had a universal nature transcending that particular framework. The same is true of two other related problems, the qualification problem (introduced first by McCarthy in [17]) and the ramification problem (so named by Finger in [4]). In [32], Shoham and McDermott attempt to define the problems in a formalism-independent manner, although the examples they provide of these problems have come under recent criticism [27]. Certainly all solutions to the various problems have been couched in a particular temporal framework, usually in a variant of the situation calculus, but occasionally also in a temporal logic. For concreteness, in this article we will discuss explicitly the situation calculus, but also point to how the discussion applies to temporal logics as well.¹ The situation calculus and other formalisms of change are discussed in Galton's Handbook article.

Recall the following definitions. A situation is the state of the world at a particular time. Given an action a and a situation s, Result(a, s) denotes the new situation after action a is performed in situation s. Given a property p and a situation s, Holds(p, s) indicates that the property p is true in situation s. With these conventions, one might use standard first-order logic to formalize the effects of various actions. For instance, the blocks-world axiom

Holds(Clear(x), s) ∧ Holds(Clear(y), s) → Holds(On(x, y), Result(Stack(x, y), s))

indicates that if both blocks x and y are clear in some situation, then x will be on y after the performance of the Stack action. There are three classical problems, however, with this approach.

The frame problem is that we would need to write down a great many frame axioms specifying those properties that are unchanged by each action. For the stacking example, we would need axioms stating that the stack action does not change the colors of the blocks, the sizes of the blocks, the number of planets in the solar system, and so on. This is both a representational and a computational problem. From a representational standpoint, it would be rather tedious to write all of these axioms; from a computational standpoint, it would be time consuming for the system to reason with them. We have explained the frame problem in terms of the situation calculus, but the problem itself is more general, and it arises in every temporal logic. Consider a property that holds at some point in time. In the situation calculus, we wanted to conclude that unless this property had a reason to change, it would continue to hold after the performance of an action. An arbitrary temporal logic need not have the concept of an action (and if time is dense, there will not even be a "next" time point). Still, we would like to conclude that the property persists up to some future time, unless there is a reason for it not to persist. So we see that the basic nature of the frame problem transcends the differences between the various temporal formalisms.

¹ The generality of the frame problem has also drawn the attention of cognitive scientists not directly interested in either temporal reasoning or nonmonotonic reasoning. Their much broader interpretations of the frame problem are collected in [26].


The ramification problem is the difficulty of formalizing all of the things that do change as the result of an action, given that the effects of an action may be very complex. For instance, if you drive your car from A to B, then as a result the car is at B, and so is its engine, its tires, etc. Rather than listing all of these effects explicitly, we would like to simply say that if the position of the car changes, then everything in it moves as well. In general, when formalizing an action, it is not feasible to explicitly list all of its effects. Therefore, we would like to list only some of the action's effects, and let the other effects be logical consequences of these primary effects and of our other knowledge. Like the frame problem, the ramification problem is not limited to the situation calculus.

Finally, the qualification problem is that actions may have many preconditions. Normally the result of turning the ignition key of the car is to start the engine, but there are many exceptions, or qualifications, to this rule: the battery may be dead, the gas tank may be empty, there may be a potato in the tailpipe, and so on. When doing everyday reasoning, we do not want to waste time worrying about all of these unlikely possibilities. Instead, we would like to jump to the conclusion that the action will succeed, and only think about the qualifications if something goes wrong. It should be clear that, like the frame and ramification problems, the qualification problem does not depend on the details of which particular temporal formalism we use.

3 The Yale shooting problem

Of the three problems, the frame problem is probably the most well-defined. Rather than writing out all of the frame axioms, we would prefer instead to specify only the positive effects of the actions, and then somehow say that nothing else changes. The hope was that nonmonotonic logics would provide the technical means of achieving this effect; indeed, the frame problem was a large part of the motivation behind the development of nonmonotonic logics. The various nonmonotonic logics are discussed elsewhere in the Handbook. In the following we will use McCarthy's circumscription [18, 19, 12], although we could have instead used one of the other well-known logics, such as Reiter's default logic [28] or Moore's autoepistemic logic [22]. The intuitive idea was to use a default frame axiom of the following form:

¬Ab(p, a, s) → [Holds(p, s) ≡ Holds(p, Result(a, s))].

This axiom says that the truth value of a property persists from one situation to the next unless something is abnormal (Ab is McCarthy's "abnormality" predicate). In order to assume that there are as few abnormalities as possible, we now have to minimize the extent of the Ab predicate. To be precise, we must circumscribe Ab with Holds varied. It was hoped that this minimization of abnormality would ensure that a property would persist unless a specific action caused this property to change. Unfortunately, the approach is flawed. Hanks and McDermott [8] demonstrated this fact with their famous Yale shooting problem. In the Yale shooting problem, which we are simplifying slightly from the Hanks and McDermott version, there are two properties, Loaded and Alive, and two actions, Wait and Shoot. The story is that if the gun is shot while it is loaded, a certain person (typically named Fred) dies. In the initial situation S0, the gun is loaded and Fred is alive. Formally, we have the following domain axioms:

Holds(Loaded, S0),
Holds(Alive, S0),
Holds(Loaded, s) → ¬Holds(Alive, Result(Shoot, s)),

along with the default frame axiom. We are interested in what would happen if the actions Wait and then Shoot were performed in succession. Since there are no axioms about Wait, we would expect that the Wait action would have no effect, and thus the gun would remain loaded, and the shooting would kill Fred. Circumscription, however, is not so cooperative. Another possibility according to circumscription is that the gun mysteriously becomes unloaded during the waiting action, and Fred survives. This second model contains an abnormality during the waiting that was not present in the first model, but there is no longer an abnormality during the shooting (since the gun is unloaded, Fred does not change from Alive to not Alive). Therefore, both models are minimal, and the formalization must be altered in some way to rule out the anomalous model.

Although the Yale shooting problem involves the frame problem, similar difficulties arise with default approaches to the qualification problem. One such example, discovered by Lifschitz (independently of Hanks and McDermott) and reported by McCarthy [19], is explained in terms of a simple blocks world. Suppose we state that normally the action of stacking x on y succeeds:

¬Ab-q(Stack(x, y), s) → Holds(On(x, y), Result(Stack(x, y), s)),

but the action is qualified if anything is on x (the robot is too weak to lift more than a single block):

Holds(On(w, x), s) → Ab-q(Stack(x, y), s).

Suppose we try to first move block A onto block B, and then move B onto something else. Intuitively, the first action should succeed, and then, since B would no longer be clear, the second action would fail. But we cannot reach this conclusion by circumscribing Ab-q, because there will be an alternate minimal model in which the first action fails so that the second can succeed.
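The competing minimal models can be exhibited mechanically. The sketch below is an illustrative propositional grounding (not from the original papers) of the shooting axioms over the three situations S0, S1 = Result(Wait, S0), and S2 = Result(Shoot, S1); it includes the Ab literal from Hanks and McDermott's original causal axiom (discussed again in Section 4.3), which records an abnormality whenever a loaded gun is fired. Minimizing Ab then leaves exactly the two incomparable models described above.

```python
from itertools import product

# Propositional grounding of the shooting scenario (illustrative encoding):
# S0 --Wait--> S1 --Shoot--> S2.  A model fixes Loaded/Alive at S1 and S2;
# the domain axioms fix S0.
STEPS = [("Wait", "S0", "S1"), ("Shoot", "S1", "S2")]

def models():
    for l1, a1, l2, a2 in product([True, False], repeat=4):
        h = {("Loaded", "S0"): True, ("Alive", "S0"): True,
             ("Loaded", "S1"): l1, ("Alive", "S1"): a1,
             ("Loaded", "S2"): l2, ("Alive", "S2"): a2}
        if h[("Loaded", "S1")] and h[("Alive", "S2")]:
            continue                          # Loaded at S1 implies dead at S2
        # Ab(p, a, s) holds whenever p's truth value changes across the action.
        ab = {(p, act, s) for act, s, s2 in STEPS
              for p in ("Loaded", "Alive") if h[(p, s)] != h[(p, s2)]}
        if h[("Loaded", "S1")]:
            ab.add(("Alive", "Shoot", "S1"))  # forced by the original causal axiom
        yield h, frozenset(ab)

ms = list(models())
# A model is minimal if no other model has a strictly smaller Ab set.
minimal = [(h, ab) for h, ab in ms if not any(ab2 < ab for _, ab2 in ms)]
for h, ab in minimal:
    print(sorted(ab), "-> Fred alive at S2:", h[("Alive", "S2")])
```

Running the check prints both the intended model (abnormality at the shooting, Fred dead) and the anomalous one (abnormality at the waiting, Fred alive), with neither Ab set a subset of the other.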

4 Solutions to the shooting problem

Because the Yale shooting problem is such a fundamental roadblock in the path of nonmonotonic temporal reasoning, there has been a great deal of research into various ways around it.

We will discuss the solutions in roughly the order in which they were proposed. Although the original shooting story involved temporal prediction, its variants have been used to explore temporal explanation (i.e., reasoning backward in time), ramifications, etc. Thus, for each of the solutions to the shooting problem, we will also remark on how the solution fares on these more complex temporal reasoning problems.

4.1 Chronological minimization

The first idea was that we should reason forward in time; that is, apply the default assumptions in temporal order. Following this principle in the shooting scenario, we would first conclude that the waiting action is not abnormal. Then, since the gun would remain loaded, we would be able to conclude that Fred dies. Three separate authors formalized this principle. Kautz [10] used the logic of persistence, Lifschitz [15] used pointwise circumscription, and Shoham [31] used chronological minimization and then chronological ignorance. All of these solutions basically amount to modifying circumscription's partial order so as to prefer models in which the abnormalities happen as late in time as possible. In other words, they minimize the abnormalities in earlier situations at a higher priority than those in later situations. We will use Shoham's "chronological minimization" as a generic term for all of these solutions.

While this approach does in fact give the intuitively correct answer to the Yale shooting problem, it is nevertheless highly problematic. Its applicability seems to be limited to temporal prediction problems, or in other words, problems in which, given the initial conditions, we are asked to predict what will be the case at a later time. For more complicated temporal reasoning problems, it can give unintuitive answers. Suppose we are told that if we wait twice, the gun becomes unloaded for some reason or other. One of the waiting actions must have been abnormal, and chronological minimization forces us to break this ambiguity by concluding that the gun must have become unloaded at the last possible moment. This is obviously an unreasonable conclusion (the example is due to Kautz [10]).

The above example, while it is instructive, is also a bit misleading. The purpose of the default frame axiom was to get the same answer as if we had written down all of the monotonic frame axioms. For this example, the monotonic frame axioms would state that nothing changes when the Wait action is performed. This, of course, contradicts the fact that the gun became unloaded after two waiting actions. So, while chronological minimization gets a strange answer here, the monotonic theory would be outright inconsistent. Nonmonotonic reasoning about "surprises" evidently requires an additional layer of complication beyond that needed to handle the frame problem. Since all of the solutions to the Yale shooting problem have difficulty with this example, we will postpone further discussion of it until Section 5.

Unfortunately, chronological minimization also goes astray on examples that do not involve surprises, examples for which the monotonic axioms would be completely consistent. Suppose we know that Fred is initially alive:

Holds(Alive, S0),

but we do not know whether the gun is loaded or not. What happens if we shoot Fred? Chronological minimization will conclude that the gun must be unloaded, so that Fred will not die. While Fred may be happy to hear this, it is an unjustified conclusion, and it would not follow from the monotonic frame axioms. Suppose, in addition, we are told that Fred dies after being shot. It now follows that there was an abnormality:

Ab(Alive,Shoot,S0).

Was the gun originally loaded? Intuitively, it must have been, as this is the only explanation for Fred's death, and again this conclusion would follow from the monotonic frame axioms. But here again, the intuitive conclusion does not follow from chronological minimization. We should note that these last two examples are similarly mishandled by ordinary circumscription; our point is merely that chronological minimization does not help. Finally, it is not difficult to construct examples on which chronological minimization does strictly worse than ordinary circumscription (see the "murder mystery" in [1]). Because of its inability to handle these simple temporal explanation problems, the basic version of chronological minimization is not a completely satisfactory solution.

Sandewall [30] discusses two modifications to chronological minimization that together allow it to handle temporal explanation. The first modification redefines the partial order on models. In his new definition of chronological minimization, model M1 is preferred to model M2 if there is some time point t such that the abnormalities at t in M1 are a strict subset of the abnormalities at t in M2, and M1 and M2 agree on which properties held for all times up to t. (The old definition of chronological minimization did not require that M1 and M2 agree on the properties for previous time points; it merely required that M1 and M2 agree on the abnormalities for previous time points.) This brings it close to the spirit of chronological ignorance [31], which chronologically minimizes knowledge of all propositions. Note how this handles the example in which we are told that Fred is originally alive, but we are not told whether the gun is loaded or not. The original chronological minimization preferred that the gun be unloaded so that the aliveness of Fred would persist; with the new definition, these two possibilities are not comparable.
The second modification delimits the set of axioms to which chronological minimization is applied. Suppose Γ is the set of causal axioms (i.e., those axioms specifying the laws of motion), and Γo is the set of observation axioms (i.e., those axioms specifying which properties hold at various times). Ordinarily, chronological minimization is applied to all of the axioms: CHRONMIN(Γ ∧ Γo). Sandewall's suggested revision is to apply it only to the causal axioms, and then to conjoin the observation axioms: CHRONMIN(Γ) ∧ Γo. In some cases, this will entail stronger conclusions. Consider the example in which we are told that Fred dies after being shot, but we are not told whether the gun was originally loaded or not; recall that ordinary chronological minimization would not let us conclude that the gun was originally loaded even though this is the only available explanation for Fred's death. With the new approach, CHRONMIN(Γ) implies that shooting will not kill someone unless the gun is loaded; when we conjoin the fact that Fred does in fact die, it will follow that the gun must have been loaded. Sandewall calls this new method filter preferential entailment, since it first selects the preferred models of the causal axioms, and then filters out those models that disagree with the observations.

Finally, we note that approaches similar to chronological minimization have been explored by other authors as well. Pearl [25], for example, represents defaults as probabilistic statements with probabilities very close to 1. He then shows that one can get the right answer to the Yale shooting problem by making certain assumptions regarding conditional independence; Pearl argues that his logic of probabilistic independence gives chronological minimization "its operational and probabilistic legitimacy."

4.2 Causal minimization

Another approach, developed independently by Haugh [9] and Lifschitz [14], is that of causal minimization. This method modifies the original domain description in a manner that captures the notion of causality directly. The intuition here is that there is a crucial difference between the abnormality of a gun becoming unloaded while waiting, and the abnormality of Fred dying when shot with a loaded gun: there is a cause for the second, while the first is totally arbitrary. For concreteness, we will discuss Lifschitz's version. He begins by stating explicitly that shooting causes the person to die:

Causes(Shoot,Alive,False),

and that the Shoot action has the single precondition of the gun being loaded:

Precond(Loaded,Shoot).

Axioms are added stating that a property changes its value if and only if a successful action causes it to do so (an action succeeds in some situation if all of the action's preconditions hold). Finally, both Causes and Precond are circumscribed with Holds varied (the circumscription of Precond is an attempt to deal with the qualification problem). Now, if the gun becomes unloaded during the waiting, it must be because the waiting caused it to do so:

Causes(Wait,Loaded,False).

This would not eliminate Causes(Shoot,Alive,False), however; therefore, this new model is not causally minimal, since it has more causal facts than are necessary. It is easy to see that in all causally minimal models, the gun remains loaded and Fred dies.

The main drawback of this proposal is that it does not allow us to write our domain axioms in unrestricted situation calculus. Instead, we must use the Causes predicate. There is no way, for instance, to use the Causes predicate to express context-dependent effects; that is, we cannot say that an action causes one thing in one circumstance, and something else in another circumstance. More seriously, causal minimization does not allow any of the effects of an action to be described implicitly by means of domain constraints. A simple example of this limitation of causal minimization (due to Ginsberg) can be obtained from Hanks and McDermott's original version of the Yale shooting problem, in which the gun was initially unloaded and there is a Load action that loads it:

Causes(Load,Loaded,True).

Suppose we know that in order to be breathing, you have to be alive (this is the domain constraint):

Holds(Breathing, s) → Holds(Alive, s).

It would be nice if by minimizing Causes, we could conclude that not only does shooting make Fred not alive; it also makes him not breathing:

Causes(Shoot,Breathing,False).

Unfortunately, this does not follow; there is another causally minimal model in which Load kills Fred. In this model, there is no situation in which the gun is loaded while Fred is alive; therefore, Causes(Shoot,Breathing,False) does not have to be added. We see that unwanted minimal models are likely to arise if some of the effects of an action are left implicit. For causal minimization to work reliably, we will have to explicitly record all of the effects of the actions. In short, causal minimization solves the frame problem by ignoring the ramification problem.

4.3 Nonnormal defaults

Morris [24] and Gelfond [5] tried to solve the problem by using nonnormal default rules. They start with the following domain axioms:

Holds(Loaded, S0),
Holds(Alive, S0),
Holds(Loaded, s) → ¬Holds(Alive, Result(Shoot, s)) ∧ Ab(Alive, Shoot, s),

These differ from the ones presented earlier in this paper only in the Ab on the right-hand side of the causal axiom (this Ab was also present in Hanks and McDermott's original formulation). For nonmonotonicity, Morris and Gelfond used default logic rather than circumscription. Recall the definitions of default logic, as given in Ginsberg's Handbook article. A default rule of the form

α : Mβ / γ

means, roughly, that if β is consistent with our knowledge, then conclude γ; the rule is nonnormal if β is different from γ. One builds up an extension of a default theory by starting with the axioms, and then applying the various default rules in some arbitrary order until all the remaining rules are inapplicable. If any of the later default rules undermines an earlier one, then this ordering does not yield a valid extension. The frame axiom was rewritten as a default rule in Reiter's default logic [28]:

Holds(p, s) : M¬Ab(p, a, s) / Holds(p, Result(a, s))

(This is actually a slightly modified version of Morris's default rule; Gelfond used autoepistemic logic [22] instead, but autoepistemic logic and default logic are intimately related [11].) The frame default rule is a schema representing all its ground instances. In the current example there are two ground instances that are of interest:

Holds(Loaded, S0) : M¬Ab(Loaded, Wait, S0) / Holds(Loaded, S1)

and

Holds(Alive, S1) : M¬Ab(Alive, Shoot, S1) / Holds(Alive, S2),

where S1 is an abbreviation for Result(Wait,S0) and S2 abbreviates Result(Shoot,S1). If we initially apply the first default rule, its justification will be consistent, so we will conclude Holds(Loaded,S1). Then, from the causal axiom it follows that Fred dies, and thus Ab(Alive,Shoot,S1). At this point, the second default rule is disabled, so our extension is complete. This is the intuitive extension. If we instead initially apply the second default rule, its justification ¬Ab(Alive,Shoot,S1) will be consistent, so we will conclude that Fred survives. The first default rule, however, will still be viable at this point (¬Ab(Loaded,Wait,S0) is also consistent), so we conclude that the gun remains loaded, and thus from the causal axiom it follows that Ab(Alive,Shoot,S1). Since this fact contradicts the justification of the previously applied default rule, this fails to be a valid extension. Therefore, the only extension is the one in which Fred dies.
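The order-sensitivity just described can be simulated directly. The following sketch is a hypothetical propositional encoding: `close` hard-codes the single ground causal axiom, a third persistence default (not listed above) carries Alive across the Wait action, and each permutation of the ground defaults is applied greedily, keeping only the saturated sets whose justifications survive.

```python
from itertools import permutations

FACTS = {"Loaded@S0", "Alive@S0"}

def close(lits):
    """Deductive closure under the ground causal axiom:
    Loaded@S1 implies not-Alive@S2 and Ab(Alive,Shoot,S1)."""
    lits = set(lits)
    if "Loaded@S1" in lits:
        lits |= {"-Alive@S2", "Ab(Alive,Shoot,S1)"}
    return lits

# Ground persistence defaults: (prerequisite, Ab atom that blocks it, consequent).
DEFAULTS = [("Loaded@S0", "Ab(Loaded,Wait,S0)", "Loaded@S1"),
            ("Alive@S0",  "Ab(Alive,Wait,S0)",  "Alive@S1"),
            ("Alive@S1",  "Ab(Alive,Shoot,S1)", "Alive@S2")]

extensions = set()
for order in permutations(DEFAULTS):
    lits, applied = close(FACTS), []
    progress = True
    while progress:
        progress = False
        for pre, ab, con in order:
            if pre in lits and ab not in lits and con not in lits:
                lits = close(lits | {con})
                applied.append(ab)
                progress = True
                break
    # A valid extension: consistent, and no applied justification later defeated.
    consistent = not any("-" + l in lits for l in lits)
    if consistent and all(ab not in lits for ab in applied):
        extensions.add(frozenset(lits))

print(len(extensions), "extension(s)")
for ext in extensions:
    print(sorted(ext))
```

Orderings that fire the Alive-persistence default before the Loaded-persistence default end up defeating their own justification and are discarded, so the unique surviving extension is the one in which Fred dies.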
To the credit of the nonnormal default approach, it is extremely simple; in fact, Gelfond's version requires but a single change to Hanks and McDermott's axioms: changing ¬Ab(p, a, s) to ¬L Ab(p, a, s), where L is the belief operator in autoepistemic logic. Furthermore, this solution is not restricted to temporal reasoning; both Morris and Gelfond show that it can handle some troublesome inheritance hierarchy examples. Unfortunately, the solution does not do very well on the additional examples that we introduced in Sections 4.1 and 4.2. In the first temporal explanation example, where we are not told whether the gun is loaded or not, the nonnormal default approach will mistakenly conclude that the gun must have been unloaded, and that Fred will survive being shot. In the second temporal explanation example, where we are then told that Fred dies after being shot, the system will fail to have any extensions, since there is nothing preventing the default rule for the persistence of Alive from being activated. Finally, the nonnormal default approach is unable to handle ramifications. If we merely tell it that

Holds(Breathing, s) → Holds(Alive, s)

without explicitly adding Ab(Breathing,Shoot,s) to the right-hand side of the causal axiom, then the system will again have no extensions, since the default rule for the persistence of Breathing will not be disabled. Since the nonnormal default approach can handle neither temporal explanation nor ramifications, it seems to share the disadvantages of both chronological and causal minimization.

4.4 Varying Result

Recall that in our discussion of the default frame axiom, the straightforward method of minimizing abnormality was to circumscribe Ab with Holds varied. Baker [1] suggests that the shooting problem can be solved by varying the Result function instead of the Holds predicate during this circumscription. This is an important difference because, in circumscription's partial order on models, model M1 is preferred to model M2 if the predicate being circumscribed has a smaller extent in M1 and if the only other predicates and functions that differ between M1 and M2 are the ones that are varied. The standard approach amounts to thinking of Result(Wait,S0) as a fixed situation, with circumscription being used to determine which properties hold in this situation. Baker's approach, on the other hand, assumes that for each consistent combination of properties, there is some situation in which these properties hold (an "existence of situations" axiom has to be added to achieve this). Circumscription is then used to determine which of these situations might be the result of waiting in S0. To see why this works, consider the model in which waiting unloads the gun. This model has the abnormality Ab(Loaded,Wait,S0). This abnormality can be eliminated by varying Result so that the result of waiting in S0 is equal to some situation in which the gun is still loaded. Since this operation need not introduce any new abnormalities, the gun will remain loaded (and Fred will die) in all minimal models.

This solution has the advantage that it can handle both temporal prediction and temporal explanation problems since, unlike chronological minimization, it does not impose temporal priorities. Furthermore, unlike causal minimization, it can handle ramifications successfully. The solution has the disadvantage that (at least at first glance) it seems to be limited to the situation calculus, since most temporal logics do not have a Result function. Also, varying a function, while conceptually fine, turns out to be awkward from the standpoint of automated reasoning. A follow-up paper by Baker and Ginsberg [2] attempts to address these difficulties.
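Baker's construction can be sketched concretely for the two-property shooting domain. In the toy encoding below (names illustrative, not from Baker's paper), situations are identified with the sets of properties holding in them, which supplies the existence-of-situations axiom; circumscription is imitated by searching over all candidate Result functions for those with the fewest abnormalities. (In this particular domain, cardinality-minimal and subset-minimal coincide, because every legal model must record the one forced abnormality of Fred's death.)

```python
from itertools import product

PROPS = ("Loaded", "Alive")
# Existence of situations: one situation per combination of properties,
# identified with the set of properties that hold in it.
SITS = [frozenset(p for p, v in zip(PROPS, bits) if v)
        for bits in product([False, True], repeat=len(PROPS))]
KEYS = [(a, s) for a in ("Wait", "Shoot") for s in SITS]

def legal(result):
    """Causal axiom: shooting in a loaded situation leaves Fred not alive."""
    return all("Alive" not in result[("Shoot", s)]
               for s in SITS if "Loaded" in s)

def ab(result):
    """Ab(p, a, s) holds iff p's truth value differs between s and Result(a, s)."""
    return {(p, a, s) for (a, s), s2 in result.items()
            for p in PROPS if (p in s) != (p in s2)}

# Vary the Result function itself: every mapping (action, situation) -> situation.
candidates = [r for vals in product(SITS, repeat=len(KEYS))
              for r in [dict(zip(KEYS, vals))] if legal(r)]
fewest = min(len(ab(r)) for r in candidates)
minimal = [r for r in candidates if len(ab(r)) == fewest]

s0 = frozenset(PROPS)               # initially the gun is loaded and Fred alive
for r in minimal:
    s1 = r[("Wait", s0)]            # in every minimal model: still loaded
    s2 = r[("Shoot", s1)]           # ... and Fred is dead afterwards
    print("after Wait:", sorted(s1), "| after Shoot:", sorted(s2))
```

Because Result can be redirected to an already-existing loaded situation, the anomalous "gun unloads while waiting" models all carry an avoidable abnormality, and the surviving minimal model is unique.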

5 Related problems

Besides the basic problems that we have discussed already, there are a number of related issues that we will now mention briefly. Recall the problem from Section 4.1 in which a property changed unexpectedly. A gun is loaded in the initial situation S0, and it is no longer loaded after two Wait actions have been performed. Given that the Wait action has no known effects, what should the system conclude? At a minimum, it should avoid collapsing into a contradiction, and with the right background knowledge, one would hope that the system could come up with a reasonable explanation. Lifschitz and Rabinov [16] handle this scenario by introducing "miracles," or in other words, unexpected changes. Their solution is built on top of Lifschitz's causal minimization [14], in which nothing could change without a cause. In the new solution, nothing can change without a cause unless a miracle occurred; their new predicate Miracle is circumscribed at a lower priority than Causes. Therefore, in the above example, Lifschitz and Rabinov's system would not mistakenly conclude that waiting caused the gun to unload; instead, it would simply conclude that a miracle must have happened. This approach does successfully avoid collapsing into a contradiction, and it is probably as good as one can do while sticking with the situation calculus.

Morgenstern and Stein [23] take a more general approach to this problem. They use a temporal logic that allows multiple actions to happen in parallel. When an unexpected change occurs, instead of positing a miracle, they can simply conclude that an extra action must have been performed. This assumes, of course, that one knows about the various actions; for the above example, we would need an Unload action. We also need to prevent the system from unnecessarily positing additional actions; they accomplish this by circumscribing "unmotivated" actions, where, roughly speaking, an unmotivated action is one that is not triggered by a previous action.

Most of the research on nonmonotonic temporal reasoning has used simple temporal ontologies, such as the situation calculus or temporal logics with discrete time. For many domains, however, it is necessary to use real-valued time so that the state of the world can evolve continuously.
Sandewall [29] has done some work on this problem. He assumes that the domain that is being axiomatized develops continuously in time, except at a number of "breakpoints." Differential equations are used for reasoning about the continuous behavior, and a nonmonotonic logic similar to chronological minimization is used to determine the times of the discontinuities.

6 Computational approaches

So far we have assumed that logic is the appropriate formalism for reasoning about change. In fact, most working AI systems have not followed this approach. When using logic, one is forced to attach an extra parameter to every fact indicating at what time (or situation) the fact is true. As we saw, this led to the frame problem, because we needed to worry about how the vast majority of facts would carry over to the next situation. One can avoid the frame problem by using the strips approach. The strips system of Fikes and Nilsson [3] used a single database containing all the properties that are true in some situation. Each action has associated with it an add list of the properties that it makes true, and a delete list of the properties that it makes false; the strips assumption is that no other properties are affected by the action. To compute the effect of the action, one simply updates the database by making the appropriate additions and deletions. Those properties in the database that are not deleted are left alone, and thus automatically carried over to the new situation. We do not have to worry about frame axioms, nonmonotonic or otherwise.

Since the strips approach is not logic-based, one has to be fairly careful to make sure that it "does the right thing." Lifschitz [13] has given a sufficient condition for the soundness of the updating scheme. Strips has the advantage over nonmonotonic reasoning that it actually handles the computational aspect of the frame problem. It has the disadvantage that it is not nearly as flexible as logic; for instance, it does not allow actions with context-dependent effects and ramifications, and in general does not allow full use of the representational power of the situation calculus, which is itself not as expressive as a full-fledged temporal logic. There has been more recent work on relaxing some of these restrictions. Ginsberg and Smith [6, 7] present a strips-style system that allows ramifications, and most modern planners (e.g., Wilkins's sipe [33]) continue to use the strips solution to the frame problem, again with added features such as context-dependency and ramifications. Still, none of these systems support arbitrarily complicated temporal reasoning. Perhaps, for more complicated problems, the discipline of logic will begin to yield performance gains over the more ad hoc approaches. Whether this will happen or not is, at this point, an open question. For some speculations on this issue see McDermott's article and responses to it in [21].
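The add/delete-list update rule can be sketched in a few lines (a toy reconstruction; the class and operator names are illustrative, not the original strips code):

```python
# A minimal STRIPS-style database sketch: a set of ground facts, updated
# by actions with add and delete lists.
class Action:
    def __init__(self, name, preconds, add, delete):
        self.name = name
        self.preconds, self.add, self.delete = set(preconds), set(add), set(delete)

    def apply(self, db):
        if not self.preconds <= db:
            raise ValueError(f"preconditions of {self.name} not satisfied")
        # The STRIPS assumption: every fact not explicitly deleted persists,
        # so no frame axioms are needed.
        return (db - self.delete) | self.add

stack_a_on_b = Action("Stack(A,B)",
                      preconds={"Clear(A)", "Clear(B)", "OnTable(A)"},
                      add={"On(A,B)"},
                      delete={"Clear(B)", "OnTable(A)"})

db = {"Clear(A)", "Clear(B)", "OnTable(A)", "OnTable(B)", "Red(A)"}
db = stack_a_on_b.apply(db)
print(sorted(db))   # Red(A) and OnTable(B) are carried over for free
```

The untouched facts Red(A) and OnTable(B) survive the update with no reasoning at all, which is precisely how strips sidesteps the frame problem.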

7 Summary

We have reviewed the problems that lie at the intersection of nonmonotonic reasoning and temporal reasoning, as well as the major proposals for overcoming these problems. The area of nonmonotonic temporal reasoning is still very active, and undoubtedly new results await us. It is possible, though, to make some observations already at this point. The first is that the renewed interest in the area has led to significantly increased attention to general temporal reasoning and (especially) general nonmonotonic reasoning. The second observation is that as solutions keep coming in, the pressure is mounting to better understand the problems that these solutions are supposed to be solving. It is not uncommon to encounter conflicting intuitions not about how to achieve a certain effect in logic, but about which effect should be achieved in the first place. Even in cases where the intuition is fairly unproblematic, it is not clear that a particular formal machinery should be responsible for capturing it. For example, we mentioned in Section 4.1 that chronological minimization yields the "wrong" answer for some temporal explanation problems. However, we then saw that by viewing explanation as a two-stage process (as suggested by Sandewall [30]), chronological minimization can be useful for one of these stages. This leads to our final observation, which is that most logically rigorous work on nonmonotonic temporal reasoning has ignored the reasoning process. The initial definition of the frame problem was constrained enough for implicit understanding to be sufficient. However, as the demands on the notation increase, so does the need for explicit discussion of the reasoning processes that are to make use of the notation.

References

[1] Andrew B. Baker. A simple solution to the Yale shooting problem. In Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, pages 11-20, 1989.

[2] Andrew B. Baker and Matthew L. Ginsberg. Temporal projection and explanation. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 1989.

[3] Richard E. Fikes and Nils J. Nilsson. STRIPS: A new approach to the application of theorem proving to problem solving. Artificial Intelligence, 2:189-208, 1971.

[4] J. J. Finger. Exploiting Constraints in Design Synthesis. PhD thesis, Stanford University, Stanford, CA, 1987.

[5] Michael Gelfond. Autoepistemic logic and formalization of common-sense reasoning. In Non-Monotonic Reasoning: Second International Workshop (Lecture Notes in Artificial Intelligence 346), pages 176-186. Springer-Verlag, 1989.

[6] Matthew L. Ginsberg and David E. Smith. Reasoning about action I: A possible worlds approach. Artificial Intelligence, 35:165-195, 1988.

[7] Matthew L. Ginsberg and David E. Smith. Reasoning about action II: The qualification problem. Artificial Intelligence, 35:311-342, 1988.

[8] Steve Hanks and Drew McDermott. Nonmonotonic logics and temporal projection. Artificial Intelligence, 33:379-412, 1987.

[9] Brian A. Haugh. Simple causal minimizations for temporal persistence and projection. In Proceedings of the Sixth National Conference on Artificial Intelligence, pages 218-223, 1987.

[10] Henry A. Kautz. The logic of persistence. In Proceedings of the Fifth National Conference on Artificial Intelligence, pages 401-405, 1986.

[11] Kurt Konolige. On the relation between default and autoepistemic logic. Artificial Intelligence, 35:343-382, 1988.

[12] Vladimir Lifschitz. Computing circumscription. In Proceedings of the Ninth International Joint Conference on Artificial Intelligence, pages 121-127, 1985.


[13] Vladimir Lifschitz. On the semantics of STRIPS. In Michael P. Georgeff and Amy L. Lansky, editors, Reasoning about Actions and Plans: Proceedings of the 1986 Workshop. Morgan Kaufmann Publishers, Los Altos, CA, 1986.

[14] Vladimir Lifschitz. Formal theories of action. In F. Brown, editor, The Frame Problem in Artificial Intelligence: Proceedings of the 1987 Workshop, pages 35-58. Morgan Kaufmann Publishers, Los Altos, CA, 1987.

[15] Vladimir Lifschitz. Pointwise circumscription. In Matthew L. Ginsberg, editor, Readings in Nonmonotonic Reasoning. Morgan Kaufmann Publishers, Los Altos, CA, 1987.

[16] Vladimir Lifschitz and Arkady Rabinov. Miracles in formal theories of action. Artificial Intelligence, 38:225-237, 1989.

[17] John McCarthy. Epistemological problems of artificial intelligence. In Proceedings of the Fifth International Joint Conference on Artificial Intelligence, pages 1038-1044, 1977.

[18] John McCarthy. Circumscription - a form of non-monotonic reasoning. Artificial Intelligence, 13:27-39, 1980.

[19] John McCarthy. Applications of circumscription to formalizing common-sense knowledge. Artificial Intelligence, 28:89-116, 1986.

[20] John McCarthy and Patrick J. Hayes. Some philosophical problems from the standpoint of artificial intelligence. In B. Meltzer and D. Michie, editors, Machine Intelligence 4, pages 463-502. Edinburgh University Press, 1969.

[21] Drew McDermott. A critique of pure reason. Computational Intelligence, 3:151-160, 1987. (Various responses are contained on pages 161-237 of the same issue of Computational Intelligence.)

[22] Robert C. Moore. Semantical considerations on nonmonotonic logic. Artificial Intelligence, 25:75-94, 1985.

[23] Leora Morgenstern and Lynn Andrea Stein. Why things go wrong: A formal theory of causal reasoning. In Proceedings of the Seventh National Conference on Artificial Intelligence, pages 518-523, 1988.

[24] Paul H. Morris. The anomalous extension problem in default reasoning. Artificial Intelligence, 35:383-399, 1988.
[25] Judea Pearl. On logic and probability. Computational Intelligence, 4:99-103, 1988.

[26] Zenon W. Pylyshyn, editor. The Robot's Dilemma: The Frame Problem in Artificial Intelligence. Ablex, Norwood, NJ, 1987.

[27] Manny Rayner. Did Newton solve the "extended prediction problem"? In Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, pages 381-385, 1989.

[28] Raymond Reiter. A logic for default reasoning. Artificial Intelligence, 13:81-132, 1980.

[29] Erik Sandewall. Combining logic and differential equations for describing real-world systems. In Proceedings of the First International Conference on Principles of Knowledge Representation and Reasoning, pages 412-420, 1989.

[30] Erik Sandewall. Filter preferential entailment for the logic of action in almost continuous worlds. In Proceedings of the Eleventh International Joint Conference on Artificial Intelligence, 1989.

[31] Yoav Shoham. Chronological ignorance: Experiments in nonmonotonic temporal reasoning. Artificial Intelligence, 36:279-331, 1988.

[32] Yoav Shoham and Drew McDermott. Problems in formal temporal reasoning. Artificial Intelligence, 36:49-61, 1988.

[33] David E. Wilkins. Practical Planning: Extending the Classical AI Planning Paradigm. Morgan Kaufmann Publishers, San Mateo, CA, 1988.
