Edward Felten writes on The Fallacy of the Almost-General-Purpose Computer. I get the impression I would know exactly why if I could put the correct pieces together, but I can't lay hands on them right now. Has anyone written well about this on the Internet? [markpasc.blog]
From the piece:
If you're designing a computer, you have two choices. Either you make a general-purpose computer that can do everything that every other computer can do; or you make a special-purpose device that can do only an infinitesimally small fraction of all the interesting computations one might want to do. There's no in-between.
I can tell you that this is true. And I can assure you that every well-educated computer scientist knows why it is true. But what I don't know how to do -- at least not yet -- is to give a simple, non-technical explanation for it. If anybody has a hint about how to do this, please, please let me know.
For the non-technical, a brief, breezy overview of why this is true: The theory started with Alan Turing's creation of the Turing Machine, a mathematical formalization of computing. The hypothesis (known as the Church-Turing thesis) is that, given enough time, the Turing Machine can do any computation that can be specified. It has not been proven, but
- No counter-example has been found in the last 70 years, combined with...
- the fact that the definition of a Turing Machine has, in practice, become the definition of "computable".
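To make the formalization concrete, here is a minimal sketch of a Turing Machine in Python: a finite table of rules plus an unbounded tape. The particular rule table is my own toy example (it flips every bit of a binary string), not anything from Turing's paper.

```python
# A toy Turing Machine: a finite state table plus an unbounded tape.
# Each table entry maps (state, symbol) -> (symbol to write, move, next state).
def run_tm(table, input_string, state="start"):
    tape = dict(enumerate(input_string))  # sparse, unbounded tape; blank = " "
    head = 0
    while state != "halt":
        symbol = tape.get(head, " ")
        write, move, state = table[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip()

# A table that flips every bit, then halts at the first blank.
FLIP = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", " "): (" ", "R", "halt"),
}

print(run_tm(FLIP, "1011"))  # -> 0100
```

The whole machine is just that loop; everything interesting lives in the rule table, which is what makes the "machine as data" trick later on possible.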
What's more interesting is that a great variety of mathematical constructs have been shown to be equivalent to Turing Machines, including some very odd ones. Consider Conway's famous game of Life, where little life forms live and die on a little grid, depending on how many neighbors they have. "Life" can be completely described with the following rules:
- In the next time step, cells with one or zero neighbors die.
- In the next time step, cells with four or more neighbors die.
- In the next time step, cells with two or three neighbors continue to live.
- In the next time step, an empty space with exactly three live neighbors becomes populated by a cell (reproduction).
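Those four rules fit in a few lines of code. Here is one sketch, representing the live cells as a set of (x, y) pairs; the "blinker" example at the end is a classic three-cell pattern that oscillates forever.

```python
# One generation of Conway's Life, directly from the four rules above.
# Live cells are (x, y) pairs in a set; everything else is empty space.
from itertools import product

def step(live):
    # Count how many live neighbors each candidate cell has.
    counts = {}
    for (x, y) in live:
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) != (0, 0):
                cell = (x + dx, y + dy)
                counts[cell] = counts.get(cell, 0) + 1
    # Survive with 2 or 3 neighbors; reproduce into empties with exactly 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker": three cells in a row flip between horizontal and vertical.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # -> [(1, 0), (1, 1), (1, 2)]
```

That this trivially simple system can nonetheless be wired up to simulate a Turing Machine is exactly the point.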
Nor is this unusual; a lot of crazy things have been proven to be equally powerful, if not always equally efficient, when it comes to computation.
Where this gets all nicely recursive is with the idea of a "Universal Turing Machine". A "UTM" is a Turing Machine that takes a Turing Machine as input, and actually executes that machine. In fact, that's basically what you have sitting in front of you: Your CPU is a Universal Turing Machine that understands a particular formulation of Turing Machines that you usually call programs, and your CPU, while running the program/Turing Machine, looks to all intents and purposes like the program itself is implemented directly.
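The "machine as data" idea can be shown in miniature with a toy interpreter: one program whose input is another program. The three-instruction set below is invented purely for illustration; a real CPU differs in scale, not in kind.

```python
# A toy interpreter: the "CPU" is this function, and the "program" is data.
# Invented instruction set: ("add", n), ("jnz", addr), ("halt", None).
def run(program):
    acc, pc = 0, 0  # accumulator and program counter
    while pc < len(program):
        op, arg = program[pc]
        if op == "add":
            acc += arg
        elif op == "jnz" and acc != 0:  # jump to addr if accumulator nonzero
            pc = arg
            continue
        elif op == "halt":
            break
        pc += 1
    return acc

# A program that counts 3 down to 0: inert data to us, behavior to run().
countdown = [("add", 3), ("add", -1), ("jnz", 1), ("halt", None)]
print(run(countdown))  # -> 0
```

While run() is executing countdown, the system in front of you behaves, to all intents and purposes, like a countdown machine; swap in a different list and it behaves like a different machine entirely.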
This also lies behind the ability of one computer to emulate another: If you've ever used an emulator ("Virtual PC", any number of gaming console emulators, old computer emulators for the Commodore 64, etc.), you've seen the power of your CPU, taking a Turing Machine in as input (the emulator), that itself describes another Turing Machine (the emulated computer), which will then run yet another Turing Machine in the form of the program you run on the emulated computer... and yes, this can recur indefinitely deeply if you have the inclination.
Nor is this entirely impractical and abstruse Computer Science academic egg-head stuff; Java programs work in the same way. The "Java Virtual Machine" that allows Java programs to be "Write Once, Run Anywhere" at all implements a standardized Turing Machine, so a Java program knows exactly what the Virtual Machine can do no matter what hardware it lands on.
Now, the problem: As much fun as this explanation is, the proof of all this is one of the most difficult and twisty proofs in the history of mankind (in my opinion), because of the way the Universal Turing Machine wraps back on itself to run itself. So that's virtually useless in any general description. Moreover, any such demonstration must necessarily get down to the nuts and bolts of what computability is, which means tossing around some very ugly-looking mathematical notation. Plus the discussion has to take place on the level of hardware, not software, which is even tougher in this case to explain.
I hate to be pessimistic, but I don't think this point can be explained to an actively hostile audience. All we can do is keep repeating it, and wait until someone foolishly tries to build an almost-general-purpose computer, so we have concrete examples to point at.
One other aspect worth considering is that the efforts to build a restricted computer have focused not on limiting the Turing Machine itself, but on restricting the flow of data in and out of the Turing Machine through various means. Why that's disastrous is probably somewhat easier to explain, in the style of my human justice essay, as a function of the inability of a computer to truly distinguish between "good" and "bad" uses of content. Restricted-data computers will cost the economy billions, if not trillions, by artificially hobbling businesses' ability to communicate, both internally and externally.