This story is based on the Turing Test, a test developed by Alan Turing to assess artificial intelligence. The test is carried out by a human judge who converses through text messages with participants, some human and some machines, and must decide which is which. This seems like a good idea, but it is thoroughly flawed. One of the main flaws is that humans regularly attribute intelligence to things that aren’t actually intelligent. Another flaw is that language is not something that is all pre-programmed at birth; it is also learned culturally. I guess we will end up having to answer all those hard problems about consciousness after all.
For over thirty years, Edward M. Lerner worked in the aerospace and information technology industries while writing science fiction part-time. He held positions at numerous companies such as Bell Labs, Hughes Aircraft, Honeywell, and Northrop Grumman. In February 2004, after receiving a book deal for Moonstruck, he decided to write science fiction full-time.
Corey • Jun 4, 2018 at 9:13 am
Even if the Turing test could be used to measure intelligence on any level, I definitely don’t think it’s viable for testing any modern technology. I don’t think an argument could be made that any “AI” we have today is capable of thinking at all. Rather, these systems are essentially just decision trees, or long strings of binary choices; the machine doesn’t understand anything other than “if x then y.” The best we do have are neural networks, which are much better than plain decision trees, but they’re still fairly limited to understanding the numbers behind a subject. Advancing to real thinking machines would probably require new technologies we haven’t even considered yet.
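The contrast this comment draws can be sketched as a toy example. The functions and numbers below are purely illustrative (not from any real AI system): one is a hand-written "if x then y" rule, and the other is a single artificial neuron whose behavior comes from numeric weights rather than an explicit branch.

```python
import math

def rule_based(x):
    # Hand-coded decision rule: the branch is fixed by the programmer,
    # exactly the "if x then y" behavior described above.
    if x > 0.5:
        return 1
    return 0

def one_neuron(x, w=4.0, b=-2.0):
    # A one-neuron "network" with made-up weight w and bias b.
    # The output is a number produced by arithmetic, not a branch a
    # human wrote -- it only "understands the numbers behind a subject."
    return 1 / (1 + math.exp(-(w * x + b)))  # sigmoid activation

print(rule_based(0.7))            # prints 1
print(round(one_neuron(0.7), 2))  # prints 0.69
```

In a real neural network, the weights would be learned from data rather than chosen by hand, but the basic point stands: both versions map inputs to outputs mechanically, and neither one involves understanding in the human sense.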