AI: TWO COMPETING VIEWS
* Thinking = manipulation of symbols organized according to certain rules.
* Different medium, same procedure: what goes on in the brain goes on in a computer program.
* AI main goal = build architectures or models that simulate intelligence
Defenders we encountered:
Alan Turing, Marvin Minsky
* SG (symbol grounding) = GOFAI: the organism interacts with the environment via symbolic, internal representations.
* PG (physical grounding) = HEIDEGGERIAN: the organism interacts directly with the world via its sensori-motor abilities.
(AGAINST GOFAI)
* The computer is a mistaken metaphor for the human mind
* Mechanistic philosophy of mind (i.e., the idea that the mind works like a machine) should not be accepted uncritically.
* What is lacking? Dasein: dealing with objects in a goal-directed way, according to our interests and values (e.g. the hammer).
AI: COMPETING VIEWS
* Rodney Brooks argues that representation is not necessary for intelligence (1991)
* Attempt to build robots that interact directly with the world (using the world as its own model).
* Human-level intelligence is too complex and poorly understood: let's start by building "simpler level intelligences" (p. 1).
* "Complex" abilities (e.g. chess, logic) are easier to reproduce because they evolved more recently. Perceptual & motor abilities that we share with non-human animals are far more difficult to simulate.
* Brooks builds "Creatures" that cope appropriately and dynamically with a changing environment. They are not tested in a lab, but directly in the world. They always have a specific purpose -- e.g. Herbert.
* Subsumption architecture: simple modules connecting sensing with action (decomposition by activity). Complexity emerges from the interaction of layers.
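The layered sensing-to-action decomposition described above can be sketched in a few lines of Python. This is a toy illustration under my own assumptions: the layer names, the sensor dictionary, and the fixed-priority arbitration are invented for the example, and only approximate how Brooks's real subsumption layers suppress one another.

```python
# Toy subsumption-style controller: each layer maps raw sensing directly
# to an action; no central world model or planner is consulted.

def avoid(sensors):
    """Lowest layer: retreat from obstacles (safety always wins)."""
    if sensors.get("obstacle"):
        return "retreat"
    return None  # defer to other layers

def collect(sensors):
    """Higher layer: approach a soda can if one is visible (cf. Herbert)."""
    if sensors.get("can-visible") and not sensors.get("can-in-gripper"):
        return "approach-can"
    return None

def wander(sensors):
    """Default layer: fires whenever nothing more urgent does."""
    return "move-randomly"

# Priority order stands in for subsumption: lower layers can
# pre-empt higher ones whenever the situation demands it.
LAYERS = [avoid, collect, wander]

def step(sensors):
    for layer in LAYERS:
        action = layer(sensors)
        if action is not None:
            return action

print(step({"obstacle": True}))       # retreat
print(step({"can-visible": True}))    # approach-can
```

Note that complexity here would come from adding layers, not from enriching any single module -- each module stays a simple sensing-to-action link.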
* A mental state is a functional state, which can be described in terms of inputs, outputs and causal connections with other functional states.
* Again, the procedure and the function are more important than the medium.
* Functional states can be realized by multiple physical media (multiple realizability).
* Mind = function; brain = structure.
Defenders we encountered:
* Chinese Room: instantiating a computer program is not sufficient for understanding.
* Intelligence requires intentionality
* Example: a human agent can manipulate symbols according to specific rules w/out displaying understanding.
* Understanding and intentionality can be produced only by mechanisms displaying *causal powers* similar to the ones that the brain has.
* Computers may think, but not merely in virtue of their running a program: syntax (symbols + rules) is not sufficient; semantics (meaning) is necessary.
Defenders we encountered:
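Searle's scenario can be caricatured as a lookup table. The rulebook below is an invented toy, not Searle's own example: the program returns the right symbols by shape alone, with no grasp of what either string means.

```python
# A toy Chinese Room: pure symbol manipulation, syntax without semantics.
# The "operator" matches input shapes to output shapes via a rulebook.
RULEBOOK = {
    "你好吗": "我很好",   # the operator needn't know what either string means
}

def room(symbols):
    # Follows the rules mechanically; no understanding anywhere in the loop.
    return RULEBOOK.get(symbols, "???")

print(room("你好吗"))  # 我很好 -- a "correct" answer, zero understanding
```

On Searle's view, scaling this table up changes nothing: more rules yield more fluent behavior, never meaning.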
What does it mean to be multiply realizable (MR)?
MR in Philosophy of Mind: all mental kinds are MR by distinct physical kinds.
What Computers Can't Do
What Computers Still Can't Do
* Dreyfus argues against two core tenets of GOFAI:
a) GOFAI sees the mind as an information-processing mechanism that bridges the gap between the organism and the world via internal symbolic representations;
b) GOFAI conceives of human cognition as something that can be formalized and described in terms of explicit rules.
Both a) & b) are deeply mistaken.
* Dreyfus follows the phenomenological tradition in philosophy (Husserl, Heidegger, Merleau-Ponty).
* Our interaction with the environment cannot be reduced to the knowledge of linguistic facts, but rather requires an understanding of complex systems of interrelated practices.
* Heideggerian AI advocates a dynamic interaction between organisms and environment, and attempts to substitute representations with more immediate ways of engaging with the world.
E.g. Hammer! It's not an object with a certain size and shape, but something that we can use “in-order-to” hang nails, something that is “ready-to-hand” and connected to our life in a meaningful way.
* Brooks (TED Talk) "Robots will invade our lives" (2003)
* Vintage: Herbert, the soda can collecting robot (1989)
* Beth Preston (1993) works on the idea that human beings often act like proficient performers who carry out a huge variety of tasks without having to think about them or remember explicit rules. We are able to lock doors without having any idea of how locks work. We simply learn to use keys because “that’s what people do” in our culture, and this is all we need to know in order to engage in meaningful actions (e.g. going out, keeping the house safe).
* Our engagement with the world is ultimately grounded in action rather than in the knowledge of explicit rules.
* Lower-level activities such as locomotion and perception should be considered the hallmark of intelligence.
* Less *detached*, more absorbed in the environment = ORGANISM + ENVIRONMENT (unit of analysis).
AN EXAMPLE OF HEIDEGGERIAN AI: PENGI
What is it?
Pengi is a production system, with basic rules, a context and a method for determining which rules to apply.
What does it do?
Pengi plays a videogame called Pengo at the level of a human player who has a certain familiarity with the game.
How does it work?
Pengi plays the game w/out an explicit plan and w/out retaining any particular action in memory. It reacts directly to the situation.
No complex representations but indexical-functional entities -- e.g. "the-bee-that-chases-me" rather than "bee-3".
"If a block is next-to-me, then kick the block that is next-to-me"
as opposed to
"IF [(Goal = kick-blockX) AND (Adjacent(blockX))] THEN [Do (kick-blockX)]"
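The contrast between the two rule styles above can be sketched as a purely reactive step function. This is an assumed encoding for illustration, not Agre and Chapman's actual code: the situation keys are indexical-functional descriptions ("the-bee-that-chases-me", "the-block-next-to-me"), never uniquely named entities like "bee-3", and nothing is planned or remembered between steps.

```python
# Pengi-style reactivity: rules test the current situation directly
# and fire an action; no goals, plans, or memory are stored.

def pengi_step(situation):
    if situation.get("bee-chasing-me"):
        return "run-away"
    if situation.get("block-next-to-me"):
        return "kick-the-block-next-to-me"
    return "wander"

# Same situation in, same action out, every time.
print(pengi_step({"block-next-to-me": True}))  # kick-the-block-next-to-me
print(pengi_step({"bee-chasing-me": True}))    # run-away
```

The indexical-functional encoding is what spares the system a world model: it never has to re-identify *which* bee or block it is dealing with, only react to whatever currently fills that role.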
* Vintage: Pengi playing Pengo (1984)
* Try it here! http://www.penguingames.info/pengo-fandango.php
SYMBOL GROUNDING VS PHYSICAL GROUNDING
SG (linguistic abilities) vs PG (coping with the environment)