Wednesday, March 19, 2008

Notes on Artificial Intelligence

(from "Artificial Intelligence" by Blay Whitby1):

"Artificial Intelligence (AI) is the study of intelligent behaviour (in humans, animals and machines) and the attempt to find ways in which such behaviour could be engineered in any type of artifact"

In the paper "Intelligence Without Representation", Rodney Brooks (Director of the MIT AI Lab) gives an illustration of how trying to emulate human levels of intelligence at this early stage may be foolhardy. He suggests considering a group of scientists from 1890 who are trying to create artificial flight. If they were granted (by way of a time machine) the opportunity to take a flight on a commercial 747, they would be inspired by the discovery that flight is indeed possible.
Even if they got a good look under the hood, a turbofan engine would be essentially incomprehensible. So it is with the human mind: whilst it is an inspiration that intelligence is indeed attainable, direct emulation of so advanced a system would be counterproductive, a case of trying to fly before we can walk.
It should be clear from Whitby's formulation of what constitutes AI that if algorithmic computer science yields intelligent behaviour in a computer, then that is AI, not proof of its failure.

The problem is that there is no single fitness criterion for intelligence. When presented with a passage from Shakespeare, what should our system do: count the words, compare the spelling to today's, examine the meter, or write an essay about its relevance to modern life? All are varying forms of intelligence (and all might get thrown at you during an English lesson), yet quantifying how intelligent such an action is, and hence refining different solutions against that benchmark, seems impossible. We might be able to implement all of them given time, but getting them to play together nicely in a single system will probably need another level of insight, akin to the leap from algorithms to less predictable but more flexible methods such as neural nets.
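
As a loose illustration (a sketch only, not anything from the sources quoted here), the snippet below applies three unrelated measures to the same short passage in plain Python. Each is a reasonable analysis on its own terms, yet there is no obvious way to fold them into one benchmark score against which a system could be refined.

```python
import re

# Illustrative sketch only: three unrelated "measures" of the same passage.
# None of these is a fitness function for intelligence; the point is that
# each one optimises for something entirely different.

passage = ("Shall I compare thee to a summer's day? "
           "Thou art more lovely and more temperate.")

def word_count(text):
    """A lexical measure: how many words are there?"""
    return len(text.split())

def archaic_spellings(text):
    """A historical measure: which words differ from everyday modern usage?"""
    archaic = {"thee", "thou", "art", "hath", "doth"}
    words = re.findall(r"[a-z']+", text.lower())
    return sorted(set(words) & archaic)

def rough_syllable_count(text):
    """A crude metrical measure: count vowel groups as a proxy for syllables."""
    return len(re.findall(r"[aeiouy]+", text.lower()))

print(word_count(passage))            # 15
print(archaic_spellings(passage))     # ['art', 'thee', 'thou']
print(rough_syllable_count(passage))  # a rough syllable estimate
```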

This too is a goal of AI research, and for many it is the end goal: not to recreate ourselves, but to know just what it is that makes our intelligence special in the first place.

A module in a cognitive architecture must do more than process a single input to provide an output. It must function asynchronously and in parallel with many other components. Its inputs are dependent on other such modules, and its outputs may feed back into these whilst affecting the behaviour of yet more modules (a rough sketch of this kind of coupling follows below).
The majority of AI research in the past 30 years or so has ignored this. This is a major reason why no single technique has really made strides towards solving the problem AI is famous for. It is easy to highlight the limitations of AI when every solution to a small problem is viewed as an attempt at solving the big one. That said, the failure to address architectural issues is a very real limitation on the state of the art, and one that must be addressed for the field to start making real progress.
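
To make that module picture a little more concrete, here is a minimal sketch using nothing beyond Python's standard asyncio library. The module names ("perception" and "decision") and the messages they exchange are invented for illustration, not taken from any particular architecture; the only point is that the two components run in parallel and each one's output feeds back into the other's input.

```python
import asyncio

# A minimal sketch of asynchronously coupled modules with feedback.
# "perception" and "decision" are hypothetical module names; a real cognitive
# architecture would have many more components and far richer messages.

async def perception(to_decision: asyncio.Queue, from_decision: asyncio.Queue):
    """Emits observations; its behaviour is modulated by the feedback it receives."""
    bias = 0
    for t in range(5):
        observation = t + bias
        await to_decision.put(observation)
        # Feedback from the decision module alters what is perceived next.
        bias = await from_decision.get()

async def decision(from_perception: asyncio.Queue, to_perception: asyncio.Queue):
    """Consumes observations and feeds its decisions back to perception."""
    for _ in range(5):
        observation = await from_perception.get()
        choice = observation * 2          # stand-in for real decision-making
        print(f"observed {observation}, decided {choice}")
        await to_perception.put(choice)

async def main():
    a, b = asyncio.Queue(), asyncio.Queue()
    # Both modules run concurrently; neither is a simple input->output function.
    await asyncio.gather(perception(a, b), decision(a, b))

asyncio.run(main())
```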

source: http://everything2.com/?node_id=1522443

Searle makes a good case that the Turing test is not sufficient: a computer whose behavior is identical to that of a human need not have the same mental states going on underneath. He postulates an automaton that acts human but has none of the thought underneath, while AI research tends to suggest that a purely empirical approach is sufficient, that only by observing behavior can we make statements about a thinking machine's internal state. Searle does a good job of demolishing that notion.
But does Searle's example really matter?

source: http://everything2.com/e2node/Searle%2527s%2520Chinese%2520room