The stupidity of computers has become a minor running theme on this weblog over the past few weeks (and I've got another post on the topic on tap for Monday), so I couldn't resist posting this news from Slashdot. The conversational bot A.L.I.C.E., winner of the Loebner Prize in 2000 and 2001 for most human-like conversation bot, was hooked up to itself, and this is the result. Surprise, surprise: a very stupid conversation results, especially at the semantic level.
It turns out that what A.L.I.C.E. really does, as all conversational bots do, is reflect the intelligence of the human back at the human.
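The reflection trick is old and simple. Here's a minimal Eliza-style sketch (a toy illustration, not A.L.I.C.E.'s actual implementation) showing how a bot can swap pronouns and echo the speaker's own words back, so that whatever intelligence appears in the exchange is supplied by the human:

```python
# Toy Eliza-style "reflection" bot: swap first- and second-person words
# and echo the input back as a question. Any apparent insight in the
# conversation comes from the human, not the bot.
SWAPS = {"i": "you", "me": "you", "my": "your",
         "you": "I", "your": "my", "am": "are"}

def reflect(utterance: str) -> str:
    words = utterance.rstrip(".!?").split()
    swapped = [SWAPS.get(w.lower(), w) for w in words]
    return "Why do you say " + " ".join(swapped) + "?"

print(reflect("I am unhappy with my computer"))
# -> Why do you say you are unhappy with your computer?
```

Feed the output of one copy into another and the exchange degrades into nonsense within a turn or two, which is essentially what happened when A.L.I.C.E. talked to itself: with no human on either end, there's no intelligence left to reflect.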
Based on this result, I would suggest modifying the Turing Test to pit two contestants against each other, with a third party as judge trying to identify whether it's two humans, two machines, or one of each. The machine passes the test when two instances of the program (not necessarily identical; they can be seeded with different background knowledge) convince a majority of human judges that they are two humans communicating. It's too easy to reflect the intelligence of the other player; bots have been doing that since Eliza!