Example 5: Twenty Ticks
August 22, 2017

Imagine that you are lying in bed trying to sleep, but you hear a repetitive ticking sound. After listening to 20 ticks, you are awake enough to engage your brain and come up with some interpretation.
The observables are twenty unique ticks.
    ;; The observables
    (tick T1)
    (tick T2)
    (tick T3)
    (tick T4)
    (tick T5)
    (tick T6)
    (tick T7)
    (tick T8)
    (tick T9)
    (tick T10)
    (tick T11)
    (tick T12)
    (tick T13)
    (tick T14)
    (tick T15)
    (tick T16)
    (tick T17)
    (tick T18)
    (tick T19)
    (tick T20)
Ticks like this are pretty rare, and have a low prior probability.
    ;; The prior
    (if (etc0_tick 0.01 x) (tick x))
Armed with this knowledge only, the best interpretation is that these ticks are all unrelated random rare events.
    $ python -m etcabductionpy -i twenty-ticks.lisp
    ((etc0_tick 0.01 T1)
     (etc0_tick 0.01 T10)
     (etc0_tick 0.01 T11)
     (etc0_tick 0.01 T12)
     (etc0_tick 0.01 T13)
     (etc0_tick 0.01 T14)
     (etc0_tick 0.01 T15)
     (etc0_tick 0.01 T16)
     (etc0_tick 0.01 T17)
     (etc0_tick 0.01 T18)
     (etc0_tick 0.01 T19)
     (etc0_tick 0.01 T2)
     (etc0_tick 0.01 T20)
     (etc0_tick 0.01 T3)
     (etc0_tick 0.01 T4)
     (etc0_tick 0.01 T5)
     (etc0_tick 0.01 T6)
     (etc0_tick 0.01 T7)
     (etc0_tick 0.01 T8)
     (etc0_tick 0.01 T9))
    1 solutions.
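With only the prior axiom in the knowledge base, this is the only interpretation available, and its probability is just the product of the twenty etcetera literals. A quick back-of-the-envelope check (plain Python, not part of the engine):

```python
# Joint probability of the prior-only interpretation: twenty
# independent etc0_tick assumptions, each with prior probability 0.01.
prior = 0.01
joint = prior ** 20
print(joint)  # roughly 1e-40: an astronomically unlikely state of affairs
```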
As your brain starts to warm up a bit, it engages a tiny bit of associative knowledge about ticks - namely that they can be explained by clocks.
Here is what that axiom might look like, along with the prior for clocks.
    ;; Maybe a clock
    (if (and (clock c) (etc1_tick 0.9 c x)) (tick x))
    (if (etc0_clock 0.001 c) (clock c))
Armed with this additional axiom, let's see what the best interpretation might be:
    $ python -m etcabductionpy -i twenty-ticks.lisp
    <...silence...>
Uh oh! You broke it. Interpreting this input file seems to be taking forever. Even after a few hours of interpretation, you still don't have a good answer. You let it run for days, then weeks, then years, and still there is nothing. What went wrong?
The good news is that nothing is broken. The bad news is that you have just encountered the central challenge that makes interpretation problems computationally difficult to solve: combinatorial search. It turns out that there are quite a lot of solutions to the twenty-ticks problem, even with the minimal set of axioms that we've provided our reasoning engine.
To illustrate the problem, try commenting out nineteen of the twenty observable ticks, and see how many solutions there are to the problem. We'll use the "--all" flag (or just "-a") to ask the engine to show us all solutions.
    $ python -m etcabductionpy -i twenty-ticks.lisp -a
    ((etc0_tick 0.01 T1))
    ((etc0_clock 0.001 $1) (etc1_tick 0.9 $1 T1))
    2 solutions.
Here we see that there are two interpretations of a single tick. Either it was because ticks sometimes happen (the prior), or maybe there is a clock.
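It's worth checking why the prior wins here. Multiplying out the etcetera literals of each interpretation (a quick sketch, using the probabilities from the axioms above):

```python
# With a single tick, which interpretation scores higher?
prior_only = 0.01        # (etc0_tick 0.01 T1)
via_clock = 0.001 * 0.9  # (etc0_clock 0.001 $1) * (etc1_tick 0.9 $1 T1)
print(prior_only > via_clock)  # True: one lone tick is best left to the prior
```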
With just two ticks, there are now five solutions. Already, the best interpretation is that there must be a clock.
    $ python -m etcabductionpy -i twenty-ticks.lisp -a
    ((etc0_clock 0.001 $1) (etc1_tick 0.9 $1 T1) (etc1_tick 0.9 $1 T2))
    ((etc0_tick 0.01 T1) (etc0_tick 0.01 T2))
    ((etc0_clock 0.001 $1) (etc0_tick 0.01 T1) (etc1_tick 0.9 $1 T2))
    ((etc0_clock 0.001 $1) (etc0_tick 0.01 T2) (etc1_tick 0.9 $1 T1))
    ((etc0_clock 0.001 $1) (etc0_clock 0.001 $2) (etc1_tick 0.9 $1 T1) (etc1_tick 0.9 $2 T2))
    5 solutions.
But notice the other, less probable solutions to this problem. The second solution is that they are unrelated random tick events, as you might expect. The third and fourth solutions each hypothesize a clock, but attribute only one of the two ticks to it. The fifth solution is the oddest of all: perhaps there are two clocks, $1 and $2, each responsible for one of the ticks!
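We can verify this ranking by hand: the probability of each solution is just the product of its etcetera literals. A quick sketch (the labels are mine, not engine output):

```python
# Probabilities of the five two-tick interpretations, each computed as
# the product of the etcetera literals in that solution.
solutions = {
    "one clock explains both ticks": 0.001 * 0.9 * 0.9,
    "two unrelated random ticks": 0.01 * 0.01,
    "clock explains T2, T1 is random": 0.001 * 0.9 * 0.01,
    "clock explains T1, T2 is random": 0.001 * 0.9 * 0.01,
    "two separate clocks": 0.001 * 0.001 * 0.9 * 0.9,
}
for name, p in sorted(solutions.items(), key=lambda kv: -kv[1]):
    print(f"{p:.2e}  {name}")
```

The single-clock hypothesis scores 8.1e-4, beating the two-priors solution at 1.0e-4; the two-clock hypothesis trails far behind at 8.1e-7.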
    $ python -m etcabductionpy -i twenty-ticks.lisp -g -s 5 | util/dot2safari
As improbable as they are, each of these solutions is perfectly valid: each of these five sets of assumptions logically entails the observations given the knowledge base. We've asked the reasoning engine to exhaustively find all of the solutions, and that is exactly what it has done.
The trouble is that any time Etcetera Abduction finds a pair of assumptions that could actually be the same (i.e., they are unifiable), it considers two possibilities: either they are indeed the same and we've identified a common factor, or they are actually different and we have two similar but not identical assumptions in our solution. Either the assumptions unify or they do not, and it considers both cases.
Every time we consider that the tick might be caused by a clock, we have to wonder whether it was the same clock or a different clock than all of the other clocks that we're imagining in the set of solutions. The result is an explosion of imaginary clocks. Any subset of ticks might be explained by its own individual clock, giving us a powerset of clock explanations for these twenty ticks.
How bad is this problem? Let's continue to uncomment our ticks one-by-one, and see how many solutions are found.
| Ticks | Solutions |
|---|---|
| 1 | 2 |
| 2 | 5 |
| 3 | 15 |
| 4 | 52 |
| 5 | 203 |
| 6 | 877 |
| 7 | 4,140 |
| 8 | 21,147 |
| 9 | 115,975 |
| 10 | 678,570 |
| ... | too many |
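Incidentally, these counts are not arbitrary. A solution amounts to a way of grouping the ticks: pick a subset to be explained by the prior, and partition the rest among hypothesized clocks. Summing over these choices gives the Bell numbers, so n ticks yield Bell(n+1) solutions. This gloss on the table is my own, but it is easy to check:

```python
# Bell numbers via the Bell triangle. With n ticks, a solution picks a
# subset explained by priors and partitions the rest among clocks:
# sum over k of C(n,k) * Bell(k), which is exactly Bell(n+1).
def bell(n):
    row = [1]
    for _ in range(n):
        nxt = [row[-1]]
        for x in row:
            nxt.append(nxt[-1] + x)
        row = nxt
    return row[0]

for ticks in range(1, 11):
    print(ticks, bell(ticks + 1))  # reproduces the Ticks/Solutions table
```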
"Well that sucks!" you might be thinking.
Yes, indeed, this is cause for serious concern. The exponential growth in solutions means that Etcetera Abduction has little chance of exhaustively interpreting all 20 observable ticks: the problem is simply too big.
What are we going to do about it? Maybe we could modify Etcetera Abduction to look out for these cases and do something special? Sure, that would work for simple degenerate examples such as this one, but more realistic problems with large knowledge bases are going to have similar explosions. We need a more general solution.

Maybe we could optimize the search using fancy technologies like Integer Linear Programming? Sure, that helps when the problems are of modest size, but bigger problems see an explosion in the variables that encode unifications, crippling even the best linear solvers.

Maybe we could simply reimplement the algorithm we have in a fast, typed, compiled programming language? Yes, that helps quite a bit, but it only buys you a couple more interpretable ticks for a given amount of processing time.

Maybe we could try massive parallelization and GPU-optimized unification? Perhaps that would work, but nobody has figured out how to do that yet.
Unfortunately, for the time being, there is no fast way to find the globally optimal solution to big interpretation problems - the combinatorics will eventually catch up with you.

But wait! Don't give up hope. Instead, give up something else: globally optimal solutions. Do you really need a guarantee that you've found the absolute best interpretation in every situation? If we give up on optimality, we can make Etcetera Abduction fast.
One way to do it is to break up big interpretation problems into smaller problems, where the exponential explosion of possible solutions is still manageable. In the influential words of my undergraduate advisor:
"Small exponents make nice pets." - Ken Forbus, Northwestern University
We need to break up monster-sized interpretation problems into nice-sized pet problems that can be quickly solved, then combine good solutions for these smaller problems into solutions for the big monster problem.
Fortunately, there's a flag for that.
The special "--incremental" (or just "-c") flag tells the Etcetera Abduction engine to break up the input observations into a sequence of smaller chunks, and incrementally build up a global interpretation. It works like this:
1. Etcetera Abduction creates an empty "beam" of a certain size, which holds the running interpretations found so far.
2. A "window" of a certain number of uninterpreted observations is pulled from the larger set, and Etcetera Abduction finds the n-best interpretations of these observations combined with those in the beam, where "n" is the size of the beam.
3. The n-best solutions replace the contents of the beam.
4. Etcetera Abduction repeats the process starting at step 2, until there are no more uninterpreted observations.
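In plain Python, that loop looks something like the sketch below. This is an illustrative toy, not the engine's actual code: `interpret_chunk` is a stand-in interpreter that only knows about priors and a single shared clock, and the depth parameter is omitted for simplicity.

```python
# A toy sketch of the incremental strategy (illustrative only; the real
# engine's internals differ). interpret_chunk(chunk, partial) returns
# candidate (assumptions, probability) extensions for one window.

def incremental(observations, interpret_chunk, window=4, beam=10):
    beam_solutions = [([], 1.0)]  # one empty running interpretation
    while observations:
        chunk, observations = observations[:window], observations[window:]
        candidates = []
        for partial, p in beam_solutions:
            for extension, q in interpret_chunk(chunk, partial):
                candidates.append((partial + extension, p * q))
        # keep only the n best running interpretations (n = beam size)
        candidates.sort(key=lambda sol: sol[1], reverse=True)
        beam_solutions = candidates[:beam]
    return beam_solutions[0]

# Stand-in interpreter: each window of ticks is either all priors
# (0.01 each) or all attributed to one shared clock (0.9 each, plus
# 0.001 for the clock itself, assumed only once).
def toy_interpret(chunk, partial):
    need_clock = "clock" not in partial
    return [
        ([f"prior({t})" for t in chunk], 0.01 ** len(chunk)),
        ((["clock"] if need_clock else []) + [f"from-clock({t})" for t in chunk],
         (0.001 if need_clock else 1.0) * 0.9 ** len(chunk)),
    ]

best, prob = incremental([f"T{i}" for i in range(1, 21)], toy_interpret)
print(best[0], prob)  # the single-clock hypothesis survives the beam
```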
Effectively, this approach allows Etcetera Abduction to chip away at the interpretation problem in manageable-sized chunks. Let's try it out on the "twenty-ticks" problem:
    $ python -m etcabductionpy -i twenty-ticks.lisp -c -g | util/dot2safari
Fantastic! In just a couple of seconds, Etcetera Abduction finds a great interpretation of the twenty ticks: it must be a (single) clock! In fact, this is the optimal solution to this problem - the same one we would have got if we had let the previous call run for years or more.
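And it is easy to see why the single clock is optimal: just compare the products of the etcetera literals for the single-clock interpretation and the all-priors interpretation (a quick sketch):

```python
# Scores of the two competing global interpretations of twenty ticks.
all_priors = 0.01 ** 20           # twenty unrelated rare events
single_clock = 0.001 * 0.9 ** 20  # one clock, twenty likely ticks
print(single_clock / all_priors)  # the clock wins by about 36 orders of magnitude
```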
In real interpretation problems with large knowledge bases and many observations, we may not be so lucky. This incremental approach is most susceptible to missing the best solution when there is an extremely unlikely common factor between observations that are far away from each other in the input list. In those cases, that important factor may fall off the beam before it has a chance to unify with explanations of subsequent observations.
Because of this, you may find it necessary to tune the various parameters of incremental Etcetera Abduction to find acceptable solutions in your problem domain. The three important parameters to tune are these:
- BEAM: The number of running interpretations that should be kept around as subsequent chunks of the problem are considered. Set with the "--beam" (or just "-b") flag; defaults to 10.
- WINDOW: The number of observations in each chunk. Set with the "--window" (or just "-w") flag; defaults to 4.
- DEPTH: The depth of interpretations considered for each chunk. Set with the "--depth" (or just "-d") flag; defaults to 5.
It is interesting to note that if either beam or window were infinite, then incremental Etcetera Abduction would be exactly the same as regular exhaustive-search Etcetera Abduction. An infinite window size would mean that the whole problem would appear in the first chunk. An infinite beam size would mean that all possible solutions to previous chunks would be combined with all of those for the next one. The challenge is finding parameters that are less-than-infinite, but that still effectively meet the needs of a given interpretation task.
My advice is to first set the depth parameter, making it just large enough to get the interpretations you want on your smaller-sized interpretation problems. Then adjust the beam and window sizes in tandem, making one bigger while making the other smaller. Window sizes can go up to maybe 10 or so, and beam sizes can be very large: 10, 100, 1000, or even 10000 might work for you. If a particular set of parameters is too slow for you, try inching down either beam or window until the speed is acceptable. If you have more time to spare, or the interpretations are coming out blazingly fast, try inching up either beam or window so that you don't miss out on better interpretations of your problems.