Authored by: Anonymous on Sunday, May 06 2012 @ 06:57 PM EDT |
I noticed an odd thing about Turing-equivalence back when I was doing
my CS degree.
There's a type of automaton called a DFA (deterministic finite automaton). It
has a fixed amount of state and no other 'memory'. Non-deterministic
variants (NFAs) can all be transformed into equivalent DFAs. DFAs
correspond exactly to basic regular expressions (pattern-matching machines
with no memory of context).
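As a concrete sketch (a hypothetical example, not from the original post): here is a two-state DFA in Python that accepts strings over {a, b} containing an even number of 'a's. Note it carries nothing but its current state.

```python
def dfa_accepts(s):
    # Two states only: 'even' (accepting) and 'odd'.
    # The transition table IS the whole machine -- no other memory.
    transitions = {
        ('even', 'a'): 'odd',
        ('even', 'b'): 'even',
        ('odd', 'a'): 'even',
        ('odd', 'b'): 'odd',
    }
    state = 'even'
    for ch in s:
        state = transitions[(state, ch)]
    return state == 'even'
```

This recognizes the same language as the regular expression (b*ab*a)*b*.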
If you add an infinite stack to a DFA, you get a PDA (push-down
automaton). Where a DFA can only recognize regular languages, a PDA can
parse context-free languages (CFLs); it can remember context info on its
stack while it parses a subtree (i.e. it can "call a function" and then
"return" to the caller, and it can do this recursively).
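The classic illustration (again a hypothetical sketch, not from the post): balanced parentheses form a context-free language no DFA can handle, because matching requires remembering arbitrarily deep nesting. The stack does exactly the "call and return" bookkeeping described above.

```python
def balanced(s):
    # The stack remembers how deeply we are nested -- the PDA's memory.
    stack = []
    for ch in s:
        if ch == '(':
            stack.append(ch)   # "call": push a new context
        elif ch == ')':
            if not stack:
                return False   # a "return" with no matching caller
            stack.pop()        # "return": pop back to the caller
    return not stack           # accept only if every call has returned
```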
While the addition of the stack makes a PDA strictly more powerful than a
DFA (i.e. it can do computations on arbitrary-sized inputs that are
impossible for a DFA), it is still not as powerful as a Turing
machine. However, if you give the PDA _two_ stacks instead of one,
suddenly it is equivalent to a Turing machine! (One stack holds the tape
cells to the left of the head, with the rightmost at the top of the stack,
and the other stack holds the cells at and to the right of the head, with
the leftmost at the top.) The computational power of this two-stack
machine has been carefully studied and proved to be Turing-equivalent: it
can compute anything that a Turing machine can compute, and vice versa.
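The two-stack tape trick is easy to see in code. Here's a minimal sketch (hypothetical class name, '_' standing in for the blank symbol): the left stack holds cells to the left of the head with the rightmost on top, and the right stack holds the head cell and everything to its right with the leftmost on top, so moving the head is just popping one stack and pushing the other.

```python
class TwoStackTape:
    def __init__(self, contents=""):
        self.left = []                                   # cells left of the head
        self.right = list(reversed(contents)) or ['_']   # head cell is on top

    def read(self):
        return self.right[-1]

    def write(self, symbol):
        self.right[-1] = symbol

    def move_right(self):
        self.left.append(self.right.pop())   # head cell joins the left stack
        if not self.right:
            self.right.append('_')           # extend the tape with a blank

    def move_left(self):
        if not self.left:
            self.left.append('_')            # extend the tape with a blank
        self.right.append(self.left.pop())   # cell left of head becomes the head
```

A finite control driving read/write/move on this object is exactly a Turing machine, which is why two stacks suffice.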
All of which led me to the mind-blowing realization that these stack
machines fall into just three classes of computational power, indexed by
the number of stacks:
(0 stacks) DFA, NFA, regular languages
(1 stack) PDA, context-free languages
(2 stacks) Turing machine: can compute anything that is computable!
More than 2 stacks does not matter. Two are necessary for general
computation, and sufficient for every type of computation we know about.