Eventually artificial machines will exhibit human-like intelligence, and even self-awareness. We are not there yet – not even close. Probably not in the next 35 years, and predicting the pace of technology beyond that is a fool's game. In The Monkey in the Mirror (2002), anthropologist Ian Tattersall writes that it is “vanishingly improbable that any machine would ever achieve an internal state that we would recognize as consciousness.” But many sober technologists are convinced it can happen, so it's worth examining why.
The first step in the argument is materialism, the notion that nothing exists except the physical – consciousness is some kind of emergent property of our electrochemical nervous system. Contrast this with dualism, which holds that consciousness exists separately from the physical body. This concept of the ‘soul’ is a lovely metaphor and a comforting myth, but the weight of scientific evidence points firmly to materialism. If you disagree so far, then probably nothing else I say will be convincing either. ;)
The next step is to investigate whether it's possible in principle to construct some physical thing that possesses intelligence (or consciousness, or whatever). To prove that some occurrence X is possible, it is sufficient to demonstrate that X has happened at least once. In our case, the process of evolution on this planet constructed – over hundreds of millions of years – physical organisms with elaborate nervous systems, some of which have this property we seek. That is our existence proof… QED.
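For the formally inclined (this gloss is mine, not Tattersall's): in modal-logic shorthand the move is just the principle that whatever is actual is possible,
\[ P \rightarrow \Diamond P \]
where P stands for "a purely physical system is conscious." Evolution supplies the actual instance, so the possibility claim follows.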
The final step is to determine whether the property is substrate-neutral. That is, does consciousness rely specifically on having an electrochemical system of axons, dendrites, synapses, and neurotransmitters? Or can it arise in some other sufficiently complex information-processing system made of different stuff, such as silicon, aluminum, and electrons? This is admittedly an open question, so I expect most scientifically-minded AI skeptics to focus their quarrels here. I have yet to hear a convincing argument against it. In the worst case, it should be possible in principle to simulate the essential components of a primate nervous system on powerful machines. This has already begun with insects and (parts of) rats, which are about as complex as current supercomputers can handle. It may be that the first ‘true AI’ begins as a simulation of a natural system.

Now all that remain are questions of interpretation. Tattersall admits that we are “incapable of imagining states of consciousness other than our own,” which means it's likely that we won't recognize machine consciousness as such. As neuroscientist Terry Sejnowski points out, the Internet could already be self-aware… how would we know? Hell, in many places people of a particular race, gender identity, sexuality, or ethnicity are not treated as fully human. Good luck with civil rights for artificials… that struggle won't be pretty, and the only way it would be short is if they destroy us.
But now I'll retreat from the brink of science fiction; dwelling on alarmist consequences detracts from the main argument. I have a few other thoughts on this topic – on the Turing/Loebner sideshow, and on the uncharitably narrow notion of ‘algorithmic’ used by Tattersall and others – but I'll save those for another day.