AI and Robotics
In the 1800s, a "computer" was a person hired to perform routine calculations for a business.
The person didn't need to understand what the calculations were for or what they meant; they just needed to crunch the numbers. It was a "mindless" task, one that required no creativity or insight on the part of the person.
But humans are slow, prone to error, and worst of all,
you have to feed them.
So we got started on the project of automating computation.
In 1837, Charles Babbage designed the Analytical Engine, the first Turing-complete automated computer.
Due to lack of funding and interest, the machine was never built in Babbage's lifetime; by the time a full Babbage engine (the Difference Engine No. 2) was finally constructed to his plans in 1991, electronic computers had long been in production. It worked exactly as designed.
In the 1840s, Babbage's colleague Ada Lovelace became interested in the engine. Using punched cards inspired by the Jacquard loom, Lovelace described ways of performing a variety of calculations with the Analytical Engine, earning her the title of the world's first computer programmer.
In 1936, Turing proved that the halting problem was undecidable.
His proof relied on a formal definition of a general computing device, what is now known as a "Turing machine".
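The shape of Turing's undecidability proof can be sketched in a few lines of code. This is only an illustration of the diagonal argument, not runnable logic: the decider `halts` is hypothetical, and the whole point is that no real implementation of it can exist.

```python
def halts(prog, arg):
    """Hypothetical total decider: would return True iff prog(arg) halts.
    Turing's proof shows no such function can exist, so this is a stub."""
    raise NotImplementedError("no halting decider exists")

def paradox(prog):
    """Built to contradict halts(): halts exactly when halts() says it doesn't."""
    if halts(prog, prog):
        while True:      # halts() said prog(prog) halts -> loop forever
            pass
    return               # halts() said prog(prog) loops -> halt immediately

# Ask: does paradox(paradox) halt?
#  - If halts(paradox, paradox) is True, paradox(paradox) loops forever.
#  - If it is False, paradox(paradox) halts.
# Either answer contradicts halts(), so the decider cannot exist.
```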
Finite state machine
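The finite-state control is the simplest component of the Turing-machine picture: a fixed table of transitions and no memory beyond the current state. A minimal sketch (the states and the even-parity language here are my own illustration, not from the talk):

```python
# Finite state machine that accepts binary strings containing an
# even number of 1s. Two states track the parity seen so far.
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd",  "0"): "odd",
    ("odd",  "1"): "even",
}

def accepts(bits):
    """Run the machine over the input and accept in the 'even' state."""
    state = "even"  # start state: zero 1s seen, which is even
    for b in bits:
        state = TRANSITIONS[(state, b)]
    return state == "even"
```

A Turing machine extends this picture with an unbounded tape the control can read and write, which is what lifts it from recognizing simple patterns to full computation.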
With a formal definition of "computation", we could now build automated computing machines.
The first such machines were called "mechanical brains" in the press.
Aside from human beings, these machines were the only systems in the known universe capable of carrying out formal calculations automatically.
The mechanical brains became an important resource in the war effort.
The Turing Test
We now have real world artifacts that dwarf Leibniz’s giant mill both in speed and intricacy. And we have come to appreciate that what is well nigh invisible at the level of the meshing of billions of gears may nevertheless be readily comprehensible at higher levels of analysis–at any of many nested "software" levels, where the patterns of patterns of patterns of organization (of organization of organization) can render salient and explain the marvelous competences of the mill.
In the first half of the century, many scientists and philosophers might have agreed with Leibniz about the mind, simply because the mind seemed to consist of phenomena utterly unlike the phenomena in the rest of biology.
Dennett, "The Zombie Hunch"
The inner lives of mindless plants and simple organisms (and our bodies below the neck) might yield without residue to normal biological science, but nothing remotely mindlike could be accounted for in such mechanical terms.
Or so it must have seemed until something came along in midcentury to break the spell of Leibniz’s intuition pump.
Computers are mindlike in ways that no earlier artifacts were: they can control processes that perform tasks that call for discrimination, inference, memory, judgment, anticipation;
they are generators of new knowledge, finders of patterns–in poetry, astronomy, and mathematics, for instance–that heretofore only human beings could even hope to find.
The sheer existence of computers has provided an existence proof of undeniable influence: there are mechanisms–brute, unmysterious mechanisms operating according to routinely well-understood physical principles–that have many of the competences heretofore assigned only to minds.
or the Imitation Game
The original question, "Can machines think?" I believe to be too meaningless to deserve discussion.
Heads in the sand
Argument from Consciousness
Interrogator: In the first line of your sonnet which reads "Shall I compare thee to a summer's day," would not "a spring day" do as well or better?
Witness: It wouldn't scan.
Interrogator: How about "a winter's day"? That would scan all right.
Witness: Yes, but nobody wants to be compared to a winter's day.
Interrogator: Would you say Mr. Pickwick reminded you of Christmas?
Witness: In a way.
Interrogator: Yet Christmas is a winter's day, and I do not think Mr. Pickwick would mind the comparison.
Witness: I don't think you're serious. By a winter's day one means a typical winter's day, rather than a special one like Christmas.
Nevertheless I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
The Lovelace Objection
"The Analytical Engine has no pretensions to originate anything. It can do whatever we know how to order it to perform"
First response: Machines sometimes surprise us
This is largely because I do not do sufficient calculation to decide what to expect them to do, or rather because, although I do a calculation, I do it in a hurried, slipshod fashion, taking risks.
A better variant of the objection says that a machine can never "take us by surprise." This statement is a more direct challenge and can be met directly.
Machines take me by surprise with great frequency.
Lovelace's objection does not deny the possibility of machine intelligence.
Machines can do whatever we are clever enough to design them to do.
Lovelace objects to the possibility of machine autonomy.
"Lady Lovelace's objection, which stated that the machine can only do what we tell it to do."
Instead of trying to produce a programme to simulate the adult mind, why not rather try to produce one which simulates the child's? If this were then subjected to an appropriate course of education one would obtain the adult brain.
Turing's second response: Build Learning machines
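Turing's proposal anticipates modern machine learning: give the machine a simple mechanism plus "education" by examples. As a minimal illustration (the perceptron rule and this tiny dataset are mine, not Turing's), a program can acquire behavior from corrections rather than explicit instructions:

```python
# A minimal "learning machine": a perceptron that learns the logical
# AND function from examples instead of being programmed with the rule.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0, 0]  # one weight per input
b = 0       # bias (threshold)

def predict(x):
    """Fire (1) if the weighted sum of inputs exceeds the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# "Education": repeatedly correct the machine's mistakes.
for _ in range(10):  # a few passes over the examples
    for x, target in examples:
        error = target - predict(x)  # -1, 0, or +1
        w[0] += error * x[0]
        w[1] += error * x[1]
        b += error
```

Nothing in the final weights was "ordered" directly; the rule for AND emerges from the training corrections, which is exactly the kind of behavior the Lovelace objection did not anticipate.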
The Chinese Room
My car and my adding machine, on the other hand, understand nothing: they are not in that line of business.
"Could a machine think?"The answer is, obviously, yes. We are precisely such machines.
... formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.
Objections and replies