Example 2: Dead Plant
July 13, 2017

Imagine that you come home from vacation and you find that your favorite plant is dead, wilted up from lack of moisture. That sucks. But wait, you told your neighbor Mary to water it while you were away, right? So who really is to blame? Let's use Etcetera Abduction to solve this problem.
There are three events that are indisputable facts in this murder mystery:
- You tasked Mary to water your plants while you were away,
- you went on vacation, and
- your plant is dead.
These three facts are the observables in this abduction problem. We could represent these three facts as follows:
;; One possible way to represent the observables
(tasked-to-water-while-away Me Mary Plant)
(vacation Me)
(dead Plant)
This is perfectly acceptable as first-order logic, but the first predicate "tasked-to-water-while-away" is pretty unwieldy. Conceptually, there are really two parts to the fact that we're trying to express. First, there is the real event of Me tasking Mary to do something. This really happened in the world of our problem. Second, there is the act of watering my Plant, which may or may not have happened in the world of our problem. The thing we're trying to represent as an observable is that the "something" that Mary was tasked to do was the watering of the plant.
Logicians, in the past, have invented all sorts of complicated ways to represent a "second-order" idea such as this, where the watering event is in some special modality defined by the speaking event. There's even a whole logic specially designed for obligations of this sort (deontic logic). Etcetera Abduction, however, is strictly limited to plain old first-order logic. It's actually super-restricted: it does not handle logical negation of literals, and all of the axioms must be written as definite clauses (an "if" whose antecedent is a conjunction of positive literals and whose consequent is a single positive literal).
Fortunately, there is no need to resort to specialized logics to represent the watering task. Instead, we just need to figure out how we can make the "watering" event an actual "thing" in the world of our problem, so that we can stick it in as an argument of some "tasking" literal. To do this, we're going to "reify" the watering event. This word "reify" comes from the Latin "res", meaning "thing", so instead we might say we're going to "thing-ify" the watering event.
Here's what a reified watering event might look like:
;; Reification of (water Mary Plant)
(water' W Mary Plant)
In this example, we have created a new predicate water', that takes three arguments, where the first one is "W". This "W" is something we've made up to help us represent this problem in first-order logic. It's the "reification" of a watering event involving Mary and my Plant. We've created a new "thing" in our problem world, which is the eventuality of Mary watering my Plant. That is, "W" is a constant that is the name of a watering event that may or may not have actually happened.
Now that we have reified the event, we can stick this same "W" in as an argument of a tasking literal, as follows:
;; Another possible way to represent the observables
(task Me Mary W)
(water' W Mary Plant)
(vacation Me)
(dead Plant)
That's better. This business of "reifying" a literal turns out to be pretty useful, and has been explored thoroughly by Jerry Hobbs and Donald Davidson. Some like to call it "Davidsonian" notation, but we will use the term "eventuality notation," and adhere to a set of conventions. When representing an eventuality, we make sure that the predicate and arguments are all the same as the relation it is supposed to be reifying, with two exceptions. First, the predicate gets a new last character, an apostrophe, which distinguishes it from the predicate used in the un-reified version of the literal. Second, one more argument is added, always in the first position, which is meant to represent the reified eventuality of the event.
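For example, applying both conventions to the literal (dead Plant) from above would give us something like this:

;; Reification of (dead Plant)
(dead' D Plant)

Here "D", like "W", is just a constant we've made up to name the eventuality of the plant being dead.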
Eventuality notation turns out to be so useful in knowledge representation that we may want to use it in other places, as well. We may need to write axioms that rely on second-order relations between literals. For example, we may want to say that the eventuality of the death of the plant should have been a concern of Mary, or that the eventuality of me tasking her should have obligated her to actually do the task.
For this reason, we might as well represent all of the literals in our problem in eventuality notation, both in the observables and in the knowledge base. That is, every predicate will get the apostrophe at the end, and every literal's first argument will be its reified eventuality. Here's what that would look like:
;; The best way to represent the observables
(task' E1 Me Mary E2)
(water' E2 Mary Plant)
(vacation' E3 Me)
(dead' E4 Plant)
Here, each reified eventuality argument gets a unique name: a constant composed of the capital letter "E" (for eventuality) and an integer. If there were 100 observables, we'd have "E100" as the last one. All of these observables are free to incorporate these eventualities among their own arguments, allowing for the representation of complex relational structures in plain old first-order logic.
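For instance, the second-order relations mentioned a moment ago could now be written directly as first-order literals over these eventuality constants. The predicates here are made up purely for illustration and are not part of this problem:

;; Hypothetical relations over eventualities (illustration only)
(concern' E5 Mary E4)  ; the death of the plant (E4) should be a concern of Mary
(obligate' E6 E1 E2)   ; my tasking of Mary (E1) obligates the watering (E2)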
Next we need to provide some prior probabilities for each of these observables. In the following four prior-probability axioms, the eventuality arguments are all universally quantified variables, so that they can be matched to the constants in our observables.
;; The priors of the observables
(if (etc0_task 0.1 e x y z) (task' e x y z))
(if (etc0_water 0.02 e x y) (water' e x y))
(if (etc0_vacation 0.01 e x) (vacation' e x))
(if (etc0_dead 0.01 e x) (dead' e x))
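Roughly, here is how the reasoning engine uses one of these axioms. Backward-chaining from the observable (water' E2 Mary Plant), the consequent of the second axiom unifies with it, binding e to E2, x to Mary, and y to Plant; what's left to assume is the etcetera literal, which carries the prior probability:

;; Backward-chaining on the prior axiom for water'
;; observable:       (water' E2 Mary Plant)
;; consequent:       (water' e x y)  with e=E2, x=Mary, y=Plant
;; assumed etcetera: (etc0_water 0.02 E2 Mary Plant)

You'll see exactly this etcetera literal in the solution below.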
And with that, we can get our first solution to the dead plant problem by invoking the Etcetera Abduction reasoning engine:
$ python -m etcabductionpy -i dead-plant.lisp
((etc0_dead 0.01 E4 Plant) (etc0_task 0.1 E1 Me Mary E2) (etc0_vacation 0.01 E3 Me) (etc0_water 0.02 E2 Mary Plant))
1 solutions.
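The probability of a solution is the product of the prior probabilities of the etcetera literals it assumes (treating them as independent), so this one comes out to 0.1 × 0.02 × 0.01 × 0.01 = 0.0000002, or 2 × 10⁻⁷. Keep that number in mind; we'll want to beat it.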
Given only knowledge of prior probabilities, the best that we can come up with is that the cause of the death of the plant is simply random chance. Plants sometimes die. That was its fate. It was possible, and it happened. Bad luck. Or more accurately, the conditions of the universe were just right to cause this plant to be dead. This is what the prior probability is meant to encode.
Now let's add some knowledge that can help us come up with a better explanation. Basically, we need to encode the following commonsense intuitions:
- Plants are likely to die if nobody waters them.
- People don't do something when they can't do it.
- People sometimes forget to do the tasks they are given.
- People sometimes can't do things because they are on vacation.
- People often task other people to do something when they can't do it themselves.
There are a lot of different ways that we might encode this knowledge, at various levels of detail. For this problem, I've tried to be as concise as possible:
;; Why dead? Nobody watered
(if (and (didnt' e1 y e2) (water' e2 y x) (etc1_dead 0.9 e e1 e2 x y)) (dead' e x))

;; Why didnt? Couldnt
(if (and (couldnt' e2 x e) (etc1_didnt 0.9 e e1 e2 x)) (didnt' e x e1))

;; Why didnt? Forgot
(if (and (forgot' e2 x e1) (task' e3 y x e1) (etc2_didnt 0.9 e e1 e2 e3 x y)) (didnt' e x e1))

;; Why couldnt? On vacation
(if (and (vacation' e1 x) (etc1_couldnt 0.9 e e1 e2 x)) (couldnt' e x e2))

;; Why task? Couldnt oneself
(if (and (couldnt' e2 x e1) (etc1_task 0.9 e e1 e2 x y)) (task' e x y e1))
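In English, the first axiom says: if there is an eventuality e1 of y not doing e2, and e2 is y watering x, then (given the etcetera conditions, which hold with probability 0.9) x is dead. The other axioms read the same way, with the etcetera literal in each one collecting all of the variables that appear in the axiom.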
Lastly, we'll need to provide the prior probabilities of the three new predicates that we've introduced in these axioms, namely "didnt", "couldnt", and "forgot".
;; Priors on assumptions
(if (etc0_didnt 0.01 e1 y e2) (didnt' e1 y e2))
(if (etc0_couldnt 0.01 e1 y e2) (couldnt' e1 y e2))
(if (etc0_forgot 0.05 e x e1) (forgot' e x e1))
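Notice that forgetting gets a somewhat higher prior (0.05) than the bare "didnt" and "couldnt" assumptions (0.01); forgetting an assigned task is, unfortunately, not that rare, and this difference will matter when the engine weighs competing explanations below.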
That should do it. Let's see what we get now when we run the Etcetera Abduction engine. We'll use the "--all" (or "-a") flag to say that we want to list all solutions, just to see how many we get -- even if there are a million of them.
$ python -m etcabductionpy -i dead-plant.lisp -a
((etc0_forgot 0.05 $1 Mary E2) (etc0_vacation 0.01 E3 Me) (etc0_water 0.02 E2 Mary Plant) (etc1_couldnt 0.9 $2 E3 E2 Me) (etc1_dead 0.9 E4 $3 E2 Plant Mary) (etc2_didnt 0.9 $3 E2 $1 E1 Mary Me) (etc1_task 0.9 E1 E2 $2 Me Mary))
((etc0_forgot 0.05 $1 Mary E2) (etc0_vacation 0.01 E3 Me) (etc0_water 0.02 E2 Mary Plant) (etc1_couldnt 0.9 $3 E3 E2 Me) (etc1_dead 0.9 E4 $4 E2 Plant Mary) (etc2_didnt 0.9 $4 E2 $1 $2 Mary Me) (etc1_task 0.9 $2 E2 $3 Me Mary) (etc1_task 0.9 E1 E2 $3 Me Mary))
((etc0_forgot 0.05 $1 Mary E2) (etc0_vacation 0.01 E3 Me) (etc0_water 0.02 E2 Mary Plant) (etc1_couldnt 0.9 $4 E3 E2 Me) (etc1_couldnt 0.9 $3 E3 E2 Me) (etc1_dead 0.9 E4 $5 E2 Plant Mary) (etc2_didnt 0.9 $5 E2 $1 $2 Mary Me) (etc1_task 0.9 $2 E2 $3 Me Mary) (etc1_task 0.9 E1 E2 $4 Me Mary))
((etc0_vacation 0.01 E3 Me) (etc0_water 0.02 $2 Me Plant) (etc0_water 0.02 E2 Mary Plant) (etc1_couldnt 0.9 $1 E3 E2 Me) (etc1_dead 0.9 E4 E2 $2 Plant Me) (etc1_didnt 0.9 E2 $2 $1 Me) (etc1_task 0.9 E1 E2 $1 Me Mary))
((etc0_vacation 0.01 E3 Me) (etc0_water 0.02 $3 Me Plant) (etc0_water 0.02 E2 Mary Plant) (etc1_couldnt 0.9 $1 E3 E2 Me) (etc1_couldnt 0.9 $4 E3 $2 Me) (etc1_dead 0.9 E4 $2 $3 Plant Me) (etc1_didnt 0.9 $2 $3 $4 Me) (etc1_task 0.9 E1 E2 $1 Me Mary))
((etc0_dead 0.01 E4 Plant) (etc0_vacation 0.01 E3 Me) (etc0_water 0.02 E2 Mary Plant) (etc1_couldnt 0.9 $1 E3 E2 Me) (etc1_task 0.9 E1 E2 $1 Me Mary))
((etc0_didnt 0.01 $2 Mary E2) (etc0_vacation 0.01 E3 Me) (etc0_water 0.02 E2 Mary Plant) (etc1_couldnt 0.9 $1 E3 E2 Me) (etc1_dead 0.9 E4 $2 E2 Plant Mary) (etc1_task 0.9 E1 E2 $1 Me Mary))
((etc0_couldnt 0.01 $3 Mary $2) (etc0_vacation 0.01 E3 Me) (etc0_water 0.02 E2 Mary Plant) (etc1_couldnt 0.9 $1 E3 E2 Me) (etc1_dead 0.9 E4 $2 E2 Plant Mary) (etc1_didnt 0.9 $2 E2 $3 Mary) (etc1_task 0.9 E1 E2 $1 Me Mary))
((etc0_vacation 0.01 $1 Mary) (etc0_vacation 0.01 E3 Me) (etc0_water 0.02 E2 Mary Plant) (etc1_couldnt 0.9 $2 E3 E2 Me) (etc1_couldnt 0.9 $4 $1 $3 Mary) (etc1_dead 0.9 E4 $3 E2 Plant Mary) (etc1_didnt 0.9 $3 E2 $4 Mary) (etc1_task 0.9 E1 E2 $2 Me Mary))
((etc0_forgot 0.05 $1 Mary E2) (etc0_task 0.1 E1 Me Mary E2) (etc0_vacation 0.01 E3 Me) (etc0_water 0.02 E2 Mary Plant) (etc1_dead 0.9 E4 $2 E2 Plant Mary) (etc2_didnt 0.9 $2 E2 $1 E1 Mary Me))
<...snip...>
87 solutions.
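One note on reading these solutions: the symbols that begin with a dollar sign ($1, $2, and so on) are fresh variables introduced during abduction; they name eventualities that an interpretation assumes but that were never matched to one of the observed constants E1 through E4. In the first solution, for example, $1 names the assumed eventuality of Mary forgetting.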
One million was a bit of an overestimate: there are only 87 interpretations of the observables given our knowledge base. Let's take a look at the graph of the most probable interpretation:
Let's look at the chain of explanation leading to each of the four observed literals.
- My tasking of Mary is because I couldn't do E2, because I was on vacation, for some unknown reason.
- The eventuality of watering (E2) exists for some unknown reason.
- My going on vacation is because of some unknown reason.
- The plant is dead because Mary didn't do E2, which is the watering of my plant. She didn't do it because she was tasked with doing it (see the first point above), but forgot. She forgot for some unknown reason.
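Doing the arithmetic as before, this interpretation assumes etcetera literals with probabilities 0.05, 0.01, 0.02, and four factors of 0.9, for a product of about 6.6 × 10⁻⁶. That's roughly 33 times more probable than the 2 × 10⁻⁷ we got from the prior probabilities alone; the knowledge base has bought us a much better story.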
In this interpretation, Mary is to blame. Her not watering the plant is the direct cause of its death. But if I hadn't tasked her with watering the plant, it would be a different story. We can see this alternative by commenting out the two observables having to do with the tasking of Mary.
;; The observables, but without the tasking of Mary
;(task' E1 Me Mary E2)
;(water' E2 Mary Plant)
(vacation' E3 Me)
(dead' E4 Plant)
When we graph the most probable interpretation now, a new person is to blame.
Now we have the following explanations for the observations:
- I went on vacation for some unknown reason.
- The plant is dead because I didn't water it, because I couldn't water it, because I was on vacation for some unknown reason.
In this example we can see that adding information to the situation can radically change its interpretation. In AI, this is called nonmonotonic reasoning, in that the pool of conclusions does not exclusively grow with each additional input. With abductive reasoning, the more you observe, the better your overall interpretation of the whole situation, and the better your explanation for any of its component observations.
This example also highlights the importance of choosing appropriate representations for your observations and axioms. In particular, the reification of watering plants (using eventuality notation) was super-important in figuring out who was to blame for the death of the plant. When Mary is specifically asked to water the plant, her failure to do so can be connected to the explanation of its death. Without this tasking, it's my not-doing-something (because of vacation) that is connected into the explanation. Both of these explanations beat the prior probabilities of the observables; it's more likely that one of us is to blame, given the evidence, than attributing the death to bad luck or random chance.