
Would you accept the Turing Test as to whether a machine can think?




Best Answer - Chosen by Asker: I think it would offer compelling evidence, but it would be neither a necessary nor a sufficient condition. It boils down to how you define thought, which is a much-debated topic. I argue that thought is an emergent process that occurs in our brains due to our complex neural networks. I tend to doubt that something that is not chaotic can reasonably be called thought, but I'm very uncertain about that. It is possible that we might say that a deterministic machine has a will, but not a free will, like we have.
If a machine could pass the Turing Test, it may alter our view of thought, either to include or to exclude the machine.

You should also note that there is a human analogue to the Turing Test. You can note not only how often a machine is mistaken for a human, but how often a human is mistaken for a machine. Some people, especially programmers, are more machinelike in their conversations than others. Perhaps human thought and whatever machines will one day be able to do that could also reasonably be called thought are different things, but overlap in some relevant way.

A machine may not need to pass the Turing Test in order to have thought. And a machine may pass the Turing Test with a complex program that does not involve anything we would consider to be thought at all.

I am skeptical, because machines are preprogrammed and can only do what they are programmed to do.

I don't (and I wonder if anyone does) understand where the line lies between a mindless automaton and an intelligent machine. Given enough time, a good programmer could program a machine to react to all sorts of different stimuli. But at what point is that reaction deemed to be proof of intelligence?

Also, intelligence goes beyond answering questions and reacting to a stimulus. For example, humans lie. A machine could lie too, but only because it was programmed to do so. Humans, on the other hand, lie as a means to an end.

Beyond that, the Turing Test defines a machine as intelligent if an unbiased party cannot distinguish between a human's answer and a machine's answer. I don't agree with this. Intelligence should be marked by the ability to solve extremely complex problems and to offer radical and unique solutions that would normally be considered outside the scope of the problem. Moreover, machines simply store and recount information that you give them. No machine has the ability to UNDERSTAND the meaning and implications of that information.

Perhaps, if I could also know a bit about how it thinks. It is possible to have a very sophisticated query-response program that could give realistic answers to many questions. However I would not consider this to be true sentience.
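To make the "sophisticated query-response program" point concrete, here is a deliberately crude sketch: a few lines of pattern matching in the style of ELIZA can produce superficially plausible replies with nothing resembling thought behind them. The patterns and canned replies below are invented purely for illustration.

```python
import re

# A tiny ELIZA-style responder: canned patterns, canned reply templates.
# It "answers" without any model of meaning, belief, or intention.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bare you (.+)\?", re.IGNORECASE), "Would it matter to you if I were {0}?"),
    (re.compile(r"\bI think (.+)", re.IGNORECASE), "What makes you think {0}?"),
]

def respond(utterance: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            # Echo the matched fragment back inside a stock template.
            return template.format(match.group(1).rstrip("?.!"))
    return "Tell me more."  # fallback when nothing matches

print(respond("I feel uncertain about machines"))
# -> Why do you feel uncertain about machines?
```

A judge chatting briefly with such a program might credit it with understanding it plainly does not have, which is exactly why a convincing conversation alone seems too weak a test of sentience.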

If, however, the machine's "mind" were based on a neural net, then I would be inclined to grant it sentient status. The reason is that our own brains are themselves neural nets, just made of proteins and running on electrochemical reactions instead of silicon and pure electricity. I don't believe it really matters what materials a mind is made of; what's important is its functionality. A silicon brain should be able to work as well as or, potentially, far better than a biological brain. I'm hoping to find out personally sometime in my lifetime. :-) (See 1st link below.)

A person who believes that there is a ghost, spirit, or soul which is separate from the physical brain and which is the "spark of life" is called a dualist. One who believes that the entire mind or consciousness of a person is explainable through purely physical laws of nature, and that there is no soul or spirit separate from the physical body, is called a monist or materialist. I think the dualist would argue that a machine A.I. could never be conscious, because, being a creation of humans, it did not come with a soul. However, all the evidence we have to date points strongly to the material side, namely that a mind is an entirely natural process. As such, an A.I. should indeed be capable of consciousness. There is an interesting debate between two neurosurgeons, one a dualist and the other a materialist, at the 2nd link below. Looking past the personal comments to just the relevant points, we can describe the brain's activity quite fully without any need for supernatural extras.

So, bottom line, yeah, if I knew an A.I. was running as a neural net, then I would most likely accept the results from a Turing test demonstrating that it was sentient or conscious.

Yes, but that's not it.

Yes, I would accept a machine passing the Turing Test as evidence of thought. But the real purpose of the test is to demonstrate intelligence, and there are a few points that bother me about that:

(1) It is very subjective to define the level of intelligence. Who sets the standard to say, "Ah, this is an intelligent machine!"?
(2) Animals with developed brains do have self-awareness. Would they pass the Turing Test? Most likely not (even though we have little doubt that they think and have some degree of intelligence).
(3) Modern psychology theorizes intelligence as a multi-faceted thing (read Howard Gardner's works). If we accept this view, which facets should the Turing Test cover? All of them? How?
(4) Intelligence does not imply good or straightforward communication. What if the machine were intelligent but not communicative? What if the machine were really intelligent and chose to be quiet about it?

I think a machine passing the Turing Test would be a remarkable breakthrough. But the test itself is too simplistic to really establish intelligence.

Great question.

Well, I would say that we have just two general ways of finding out whether something can think.

The first way comes when you attach to the meaning of "thought" a subjective aspect as a necessary condition. If you do so, then thinking is probably something for which only first-person approaches would suffice to find out whether a machine, an animal, or another human-like being can indeed think. An inconvenient consequence of relying exclusively on this type of methodology is that, in a Berkeleyan (solipsistic) mood, we have trustworthy evidence of our own thoughts only.

By contrast, if you do not attach a subjective aspect to the meaning of "thought," then you'll have some sort of Turing Test to discern between thinking agents and non-thinking agents. All this given that an essential component of the Turing Test (TT) is external, behaviouristic measurability.

(There can be some problems concerning the role of the judge in the original 1950 paper on the imitation game (or TT). Turing said that it could be one (any) human who decides whether a machine is indeed intelligent (and in that sense able to think), and this might pose some problems, but in principle public checkability would do the work just fine, so one, two, or n people can serve as judges of the rational skills of an artificial or natural agent through a normal conversation.)

For me, confidence in the TT to check psychological predicates is consistent with scientific methodology, and it is basically right for that reason alone. But there is probably no way to find out if this approach is wrong in a stricter sense; i.e., we cannot reliably and intersubjectively test whether a subjective component in the process of thinking is also an essential requirement of that process. To have such a test would mean that we have a way to solve the problem of other minds.

If I have to choose between the two general ways to understand the meaning of psychological predicates in general and thought in particular, then on the basis of prudence and caution alone I would rely on the TT and TT-like approaches. Discussions of this kind have had dramatic consequences in the past and can have them in the future, and in those cases I would point out that we do not seem to have an absolutely reliable method to test these things.

When Spain conquered the territory of Mexico, the question of whether the "Indians" had souls arose in the European courts. The result of that conceptual conflict would determine what kind of rights the Spanish empire would have to respect. If the Indians had a mental and spiritual life like Europeans, then they would automatically be entities protected by Christian laws; but if not, then the Indians would fit into the category of animals, which meant the Spanish did not need to respect the local inhabitants as humans.

In the end, the position that won out was that it didn't matter whether they had souls or not. What mattered was that the native people were not Christians and so had no right to own the lands that God had created for his followers. (!!)

This is the kind of discussion with severe consequences that I had in mind when I said it would be wiser and more prudent to keep open the possibility that machines, animals, or other human-like entities that pass a TT may actually have minds or thinking abilities, and to treat them accordingly.

All a Turing Test demonstrates is that something can pass for a human. That is a different standard from thought, but it is difficult to see how something could pass the test without thought.

Within the context of a normal conversation things like association, memory, and learning must usually occur (at least to a small degree). I am hard-pressed to think of a non-thought model that accomplishes such things.

So yes. I would accept a Turing Test as evidence of such.

I don't happen to think it's necessary, though. The standard I usually associate with thought is learning, a much lower bar. As with the Turing Test, I'll concede that even learning is probably not a necessary condition for thought; but, also as with the test, I am hard-pressed to see how learning could occur without thought.

Not only do most animals show an ability to learn; so do many kinds of programs. So I would say that computers are already capable of thought, whether or not any particular one happens to think.
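As a minimal sketch of what "a program that learns" can mean, here is a classic perceptron adjusting its weights from labeled examples; the task (learning the logical AND function) and all the numbers are chosen arbitrarily for illustration, not taken from the discussion above.

```python
# A perceptron that learns the AND function from labeled examples.
# It starts with zero weights and adjusts them after each mistake.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # one weight per input
bias = 0.0
rate = 0.1       # how strongly each mistake shifts the weights

def predict(x):
    return 1 if w[0] * x[0] + w[1] * x[1] + bias > 0 else 0

for _ in range(20):  # repeat over the examples until the rule is learned
    for x, target in examples:
        error = target - predict(x)
        w[0] += rate * error * x[0]
        w[1] += rate * error * x[1]
        bias += rate * error

print([predict(x) for x, _ in examples])  # -> [0, 0, 0, 1]
```

Whether this sort of error-driven weight adjustment deserves the word "thought" is exactly the question the answer above leaves open; the point is only that the program's behavior changes in response to experience, which is the low bar the author sets.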