Wednesday, September 12, 2012

Article Reaction: Chinese Room

I believe that John Searle's paper presents a very compelling argument that the chief difference between doing and understanding cannot be derived merely from physically correct outputs. I completely agree that no amount of window dressing, whether a lifelike moving robot, fluency in Chinese convincing enough that a native speaker believes the machine actually knows the language, or any change to the Chinese Room that makes it more "brain-like", will change the fact that simulated understanding is still simulated and not real. What I'm personally interested in knowing is whether any kind of man-made AI is even possible under the argument he presents. He believes that none of the notions of strong artificial intelligence accepted at the time were actually strong, but he doesn't seem to present any metric by which he would consider an implementation to be truly "understanding". He claims that the processes of understanding, will, belief, feeling, and so on are all inherent to the brain in incredibly complex, enigmatic ways that humans have not been able to untangle. Does that mean that, to him, nothing will ever achieve true understanding? Or does he believe that in some distant future people will eventually be able to divorce the true essence of mental understanding from the physical brain and create a man-made intelligence that truly understands rather than merely simulates understanding?

I personally believe so. I think that at some point, whether five years from now or five thousand, humans will be able to extract this "essence of the human mind" from the mind itself and recreate it artificially. If that happens, I don't believe Searle's argument would have any grounds left to declare the result a weak AI.

At the same time, however, I don't believe this would be of trivial difficulty. That much is obvious, of course, given that despite all the current psychological and physiological studies of the human brain, our species is nowhere near understanding the fundamentals of human rationality well enough to recreate them. But I also believe that mankind will never understand these fundamentals with one-hundred percent accuracy: first, because there will be no way of immediately knowing whether a proposed model is perfect, and second, because the concept is so complex that flaws will always be easy to find. We will most likely end up with models much like our models of light, where in some cases we treat light as a wave and in others as a particle, because neither model alone is inclusive enough to describe all the phenomena we have observed, yet excluding either one ignores important behaviors that cannot be passed up. I wonder whether Searle would raise significant objections should mankind ever create a relatively convincing model of understanding built on two simultaneous systems. Would an artificial intelligence based on two models at once that together constitute true understanding be considered a "simulation" by his logic? If so, would the dual model of light mentioned above likewise be considered by Searle a mere "simulation" of understanding light, and not real understanding?
