Thursday, December 06, 2007

We don't know what a machine is anymore


I was reading a great post on important differences between brains and computers. It is very interesting, as it summarizes the current understanding of what a computer is and what it can and can't do. As I was reading it, it reminded me of a wonderful story by Edgar Allan Poe. In "Maelzel's Chess-Player" (1836), Poe lays out all the arguments to debunk the supposed automaton chess player that Maelzel was exhibiting. The really interesting part is that through a couple of his arguments you can see how limited the understanding at that time was of what a machine could and could not do. Especially striking are two claims that were apparently true then, but not some ~150 years later:

"1. The moves of the Turk are not made at regular intervals of time, but accommodate themselves to the moves of the antagonist ... The fact then of irregularity, when regularity might have been so easily attained, goes to prove that regularity is unimportant to the action of the Automaton--in other words, that the Automaton is not a pure machine."

"3. The Automaton does not invariably win the game. Were the machine a pure machine this would not be the case--it would always win. The principle being discovered by which a machine can be made to play a game of chess, an extension of the same principle would enable it to win a game--a farther extension would enable it to win all games--that is, to beat any possible game of an antagonist. ..."

Some years later we could all see a machine that would play chess, and for a while we could see it both take varying amounts of time to answer and win as well as lose games. Now that chess computers are more sophisticated we might never see one lose a game again, and the answers may eventually come so instantly that they are perceived as arriving at regular intervals. But still, when I read this story for the first time, I could see clearly that the "current" notion of what a computer can or can't do might be, and most likely will be, wrong.


That's why I also loved this post on the differences between the brain and computers as we understand them today. Apart from being a very good, detailed article, I loved it because I think it might be the kind of story we (and note I'm not saying our grandchildren) can look back at in the future (the near future, I'd think, because of the law of accelerating returns, or the exponential growth of technology) and realize how technology is changing and how our notion of what technology is needs to change accordingly.

On the differences themselves, I believe brains and computers are different, and they will continue to be. Computers will have some abilities (specifically regarding memory capacity and possibilities) that brains will only acquire by merging with technology. That's why it's important to think about how we will merge and how we design intelligent technology sooner rather than later (although it might be out of our control some day).

4 comments:

Willy Coyote said...

The post that you link to has some interesting facts, but I am not sure about some of the other things it refers to as "facts". That, and Poe's story, in some way reminds me of the "Chinese Room", which kind of represents (or more like tries to refute) an approach to AI opposed to the one referred to in the post, called weak artificial intelligence...
The idea of this approach is that if we want to create a machine whose reasoning abilities match (or beat) those of the human brain, we shouldn't try to create an "electronic human brain", but something that responds to external stimuli in the same way (even when the process used to reach that answer is not even similar).
I think computers are going to keep moving in that direction: trying to rewrite problems they don't handle well (like language recognition, vision, etc.) in ways they do... even when humans don't work like that. Trying to create computer models of human behaviour may be interesting for cognitive psychologists, but usually doesn't make for a good solution.

Cecilia Abadie said...

You're right to bring up the "Chinese Room". It's one of those discussions that makes sense in our historical context and in the future will be of great historical value.
Human brains are the result of millions of years of evolution. Because of that, they were built incrementally from species to species, and they are designed to solve problems circumstantial to the era in which they were developing.
I agree with you and the weak artificial intelligence approach. We should concentrate on making computers that can "act" intelligently and not necessarily on computers that re-create the human brain in a digital medium. Consciousness should not be a goal, but self-awareness will probably be needed (to cite a couple of the strong AI pillars). Where I think the two might meet (strong and weak AI) is in emergent properties. It might happen that in the process of creating weak AI we realize that there are systemic emergent properties that end up being similar to the way our minds work (not necessarily our brains). This might be especially true if neural nets and other similar complex systems produce results by adding up all the massive parallel processing they will need.

Willy Coyote said...

True... although we can't really focus on triggering emergent properties, because they are hard to predict (kind of like "connecting the dots" looking backwards).

I think you may also be falling into one of the fallacies of AI, which is to think that what limits neural nets and that kind of system is only processing power. If you create a neural network the size of an animal brain (a human brain would be too much) and you train it for over a year, most likely you are not going to have an electronic version of a one-year-old animal (or human) mind.

Cecilia Abadie said...

I agree, emergent properties are unpredictable; they're just something that might happen.

The reason why I doubt the possibilities of neural nets is that I believe the neural networks developed so far are based on neural units that are way too simple for what they try to model. Neurons are usually downplayed in the biological model because connections are assumed to be the only relevant thing, but I think biological neurons are much more complex and rich than we tend to think, capable of different types, frequencies and intensities of connections; plus, we do not know the internal processes that trigger these connections either. Therefore, I believe the path of imitating the biological model of neural networks is really just at its beginning, and I give it some level of hope, which might border on the AI fallacy you mention.
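
Just to make concrete what I mean by "way too simple": the typical artificial neuron is little more than a weighted sum passed through one fixed activation function. Here is a minimal sketch in Python (the names and numbers are made up purely for illustration, not taken from any particular library or from the post):

    import math

    def sigmoid(x):
        # The unit's entire "internal process": one fixed squashing function.
        return 1.0 / (1.0 + math.exp(-x))

    def artificial_neuron(inputs, weights, bias):
        # Each connection is reduced to a single number (its weight); there
        # are no connection types, firing frequencies, or internal chemistry.
        activation = sum(i * w for i, w in zip(inputs, weights)) + bias
        return sigmoid(activation)

    # Made-up example values:
    print(artificial_neuron([0.5, -1.0, 2.0], [0.8, 0.2, -0.5], 0.1))

Compare that with everything a biological neuron does, and you can see how much of the biology the standard abstraction throws away.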