The Incomputable: dancing with pixies

Mark Bishop

on 15 October 2015


Transcript of The Incomputable: dancing with pixies

Ecological COGNITION as embodied, embedded, enactive interactions of my brain, in my body, in our world.
"... it is not sufficient that it [rational soul; mind] be lodged in the human body exactly like a pilot in a ship, unless perhaps to move its members, but that it is necessary for it to be joined and united more closely to the body, in order to have sensations and appetites similar to ours, and thus constitute a real man"

Descartes, Discourse on the Method, Part V (1637)
The CRA: 'syntax' is not sufficient for semantics ...
GOFAI robots: Shakey and the sense-plan-act cycle
The 'subsumption' architecture; the world as its best representation, (Brooks);
Kevin Warwick and the Seven Dwarfs (above); as conscious as a slug?
Evolutionary robotics - Sussex room centering robots
Cognitive Robotics: Rolf Pfeifer; moving the cognitive load to materials
Epigenetic robotics: the robot child where the goal is to model the development of cognition in natural and artificial systems.
Animats and Cyborgs

1. Opens ‘Yellow Pages’ and selects a restaurant to visit at random, so defining the agent’s restaurant hypothesis.
2. Partial hypothesis evaluation: at dinner that night the delegate selects a meal from the menu at random and subsequently decides if it was ‘good’ or ‘bad’.
3. Diffusion/communication: the next morning at breakfast …

IF <last night’s meal was ‘good’>
THEN maintain restaurant hypothesis and GOTO (2)
ELSE IF <last night’s meal was ‘bad’> THEN communicate with a random colleague:
IF <colleague’s meal was good>
THEN adopt colleague’s restaurant hypothesis and GOTO (2)
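The restaurant game above is the informal presentation of Stochastic Diffusion Search. A minimal simulation makes the test-diffuse cycle concrete; the agent counts, the meal probabilities, and the rule that a delegate whose colleague was also unhappy re-selects at random are illustrative assumptions, not details from the talk.

```python
import random

def restaurant_game(n_agents=50, n_restaurants=20, best=0, steps=200, seed=1):
    """Minimal Stochastic Diffusion Search sketch of the restaurant game.

    Restaurant `best` serves a good meal with probability 0.9, every other
    restaurant with probability 0.2 (illustrative values).
    """
    rng = random.Random(seed)
    # 1. Each delegate opens the 'Yellow Pages' and picks a restaurant at random.
    hyps = [rng.randrange(n_restaurants) for _ in range(n_agents)]
    for _ in range(steps):
        # 2. Partial evaluation: was last night's (random) meal good?
        happy = [rng.random() < (0.9 if h == best else 0.2) for h in hyps]
        # 3. Diffusion at breakfast: unhappy delegates poll a random colleague.
        for i in range(n_agents):
            if not happy[i]:
                j = rng.randrange(n_agents)
                if happy[j]:
                    hyps[i] = hyps[j]                       # adopt colleague's hypothesis
                else:
                    hyps[i] = rng.randrange(n_restaurants)  # re-select at random (assumed rule)
    return hyps

hyps = restaurant_game()
# After diffusion the population should cluster on the best restaurant.
print(hyps.count(0), "of", len(hyps), "delegates at restaurant 0")
```

The positive feedback is entirely in step 3: good hypotheses recruit unhappy agents faster than chance re-selection disperses them.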
Evolutionary Robotics, (Harvey)
Animats, (Warwick, Nasuto et al)
Mark Bishop
The trouble with computation ...
The themes and ideas highlighted in formulating these TROUBLEs WITH COMPUTATION emerged through many discussions with grad-students, post-docs, colleagues, friends and family.

For those interested, the evolution of the DwP reductio can be traced in the following papers:

Bishop, J.M., (2002), Dancing with Pixies, in Preston, J. & Bishop, J.M., (Ed's), Views into the Chinese Room: new essays on Searle and Artificial Intelligence, OUP.
Bishop, J.M., (2002), Counterfactuals can't count: a rejoinder to David Chalmers, Consciousness & Cognition, 11:4, pp: 642-652.
Bishop, J.M., (2004), Can computers feel?, AISB Quarterly, 119, pp. 6, UK.
Bishop, J.M., (2004), Mechanical bodies; mythical minds, Proc. Brain Inspired Cognitive Systems, BICS, 2004, CD ROM ISBN 1 85769 199 7, Stirling Scotland.
Bishop, J.M., (2009), Why computers can't feel pain, Minds and Machines, 19:4, pp. 507-516.
Bishop, J.M., (2009), A Cognitive Computing fallacy? Cognition, computations and panpsychism, Cognitive Computation, 1:3, pp. 221-233.

The move to formal accounts of cognition
Why do I think ?
Wherefore I feel ?
How do I see ?
Texts, marks, logos, names, graphics, images, photographs, illustrations, artwork, audio clips, video clips, and software copyrighted by their respective owners are used on these slides for non-commercial, educational and personal purposes only. Use of any copyrighted material is not authorized without the written consent of the copyright holder. Every effort has been made to respect the copyrights of other parties. If you believe that your copyright has been misused, please direct your correspondence to:
stating your position and I shall endeavour to correct any misuse as early as possible.

Stevan Harnad suggests that a better test of intelligence than the Turing Test would be one that requires responses to all of our inputs, and not merely to text-based questions; i.e. the appropriate goal for research in AI is to construct a robot with something like human sensorimotor capabilities (Harnad's 'Total Turing Test').
Computational 'machine consciousness':
a straw man?
Michael Wheeler [on the Dynamic System theory of Cognition (DSC)]
"conceptualises mental phenomena as state space evolution in certain forms of dynamical system".
Searle's response to connectionism
and dynamic systems
Bishop: the ‘Dancing with Pixies’ reductio
Dancing with Pixies (DwP) is a reductio ad absurdum argument that endeavours to demonstrate that:

IF “the execution of an appropriate computer programme really does 'bring-forth' genuine cognitive states” (assumed claim)

THEN “All matter instantiates an infinitude of subjective conscious aspects" (pixies)
However, against the backdrop of our scientific knowledge of the closed physical world, and the corresponding widespread desire to explain everything ultimately in physical terms, such panpsychism has come to seem an implausible view.

Hence if the reductio holds we are led to reject the assumed claim ...

"A system does not instantiate genuine mental states merely via the execution of an appropriate computer program".
Turing's 3-state Discrete State Machine (DSM)
Consider Alan Turing’s 3-state input-less Discrete State Machine, DSM, (1950).

At each time step the wheel-machine occupies one of a finite number of possible physical positions, P.

Each physical position, P, is mapped to a computational state, Q, hence in its operation the wheel cycles through a finite number of computational states {Q1, Q2, Q3}.

At each point in time the next computational state is entailed by the current state.

The output (e.g. light on) occurs when the machine is in a specific computational state (e.g. Q1).
How to implement a 3-state input-less DSM by a digital counter
Over the time interval [T1 to T6] a simple digital counter transits the states {C1, C2, C3, C4, C5, C6}.

Over the same time interval an input-less DSM (Q) generates the finite linear series of state transitions labeled {Q1, Q2, Q3, Q1, Q2, Q3}.

To implement the input-less DSM (Q) by the counter:
Map DSM state Q1 to the disjunction of counter states (C1 v C4).
Map DSM state Q2 to the disjunction of counter states (C2 v C5).
… and DSM state Q3 to (C3 v C6).

I.e. The above mapping simply defines the logical [computational] state of the system from the physical position of the counter.
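The counter implementation above can be written out directly. The encoding of counter states C1..C6 as the integers 1..6 is purely for illustration:

```python
# Putnam-style mapping: derive the DSM's computational state from the
# physical position of a digital counter (counter states C1..C6 as 1..6).
counter_states = [1, 2, 3, 4, 5, 6]          # C1 .. C6 over T1 .. T6

def dsm_state(c):
    # Q1 = (C1 v C4), Q2 = (C2 v C5), Q3 = (C3 v C6)
    return "Q%d" % ((c - 1) % 3 + 1)

trace = [dsm_state(c) for c in counter_states]
print(trace)   # ['Q1', 'Q2', 'Q3', 'Q1', 'Q2', 'Q3']
```

Nothing about the counter changes; only the labelling function `dsm_state` makes it an implementation of the wheel machine.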
Putnam: an ‘open physical system’ implements any input-less FSA
In the appendix to his 1988 volume ‘Representation and Reality’ Hilary Putnam argues that any Open Physical System (e.g. a cup of tea or a rock) can be characterised by a sequence of states that evolve over time {S1, S2, S3, S4, S5, S6, …}.

Putnam's 'Principle of non-cyclical behaviour' asserts this to be a 'non-repeating' state sequence due to the influence of cosmic rays, gravitational fields etc.

In this sense the behaviour of any open physical system effectively mirrors that of the hypothetical ‘infinite counter’ as it reliably generates a non-repeating sequence of state transitions.

Hence Putnam asserts that "a rock implements any input-less FSA".
Digital computation is not sufficient for 'consciousness'
David Chalmers: on input-less Finite State Automata
“… the state-space of an input-less FSA will consist of a single un-branching sequence of states ending in a cycle, or at best in a finite number of such sequences".

"The latter possibility arises if there is no state from which every state is reachable. It is possible that the various sequences will join at some point, but this is as far as the ‘structure’ of the state-space goes".

"This is a completely uninteresting kind of structure as is demonstrated by the ease with which it can be implemented by a simple digital counter".

"or, pace Putnam, any open physical system".
Dances with Pixies
Over a finite period, with the input to the FSA fixed (i.e. specified a priori), we can collapse the contingent branching state structure of the FSA state transition diagram to a simple linear path ...
... by simply replacing every contingent branch with a fixed state transition defined by current state and specified input.

With its input fully specified in this way the contingent branching state transition diagram of the FSA is transformed into a simple linear sequence.
I.e. With its input specified over a finite time period, an FSA with input functions like a simple clockwork device; effectively as an input-less FSA.

Hence, with input fixed over a finite time period, we can implement execution of any FSA with input via a simple digital counter or, pace Putnam, any open physical system ...
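The collapse can be sketched with a hypothetical three-state FSA: with the input string fixed a priori, the branching transition table reduces to one linear run. The machine and its transition function are invented for illustration.

```python
# A hypothetical three-state FSA with binary input:
# delta maps (current state, input symbol) -> next state.
delta = {
    ("A", 0): "A", ("A", 1): "B",
    ("B", 0): "C", ("B", 1): "A",
    ("C", 0): "B", ("C", 1): "C",
}

def collapse(delta, start, inputs):
    """With the input sequence fixed a priori, each contingent branch is
    replaced by the one transition actually taken, leaving a linear run -
    effectively an input-less machine."""
    states = [start]
    for sym in inputs:
        states.append(delta[(states[-1], sym)])
    return states

run = collapse(delta, "A", [1, 0, 0, 1])
print(run)   # ['A', 'B', 'C', 'B', 'A']
```

The resulting list of states is exactly the kind of un-branching sequence that a digital counter (or, pace Putnam, a rock) can be mapped onto.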
'There is something it is like' to be (i.e. there is a subjective aspect to) the dynamic execution of specific - 'machine conscious' - computer programs.
I.e. The physical execution of a specific computer program instantiates conscious states in the [computational] system.

There is an ‘International Journal of Machine Consciousness’; notable researchers in the field include:
Prof. Warwick and the ‘seven Dwarves’
The Reading cybernetic ‘learning’ robots;
These robots are, “... as conscious as a slug” (Warwick).

Prof Holland and the ‘CRONOS’ robot
Prof. Owen Holland’s research aims to develop and extend Cronos (a putative ‘conscious’ robot) via ‘internal models’.
"If a [band limited] function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart".
If sampling fast enough (i.e. meeting the Nyquist-Shannon criterion) we can fully characterise a continuous dynamic system as a set of discrete states evolving over time.
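A quick numerical check of this claim, using Whittaker-Shannon (sinc) interpolation on an illustrative band-limited signal; the chosen frequencies and sample window are arbitrary, and the finite window introduces a small truncation error.

```python
import math

B = 3.0                       # signal band-limit in Hz
fs = 9.0                      # sampling rate, above the Nyquist rate 2B = 6 Hz
T = 1.0 / fs

def x(t):
    # Band-limited test signal: components at 1 Hz and 3 Hz only.
    return math.sin(2 * math.pi * 1.0 * t) + 0.5 * math.cos(2 * math.pi * 3.0 * t)

# Discretisation: the continuous trajectory becomes a finite state sequence.
ns = range(-300, 300)
samples = [x(n * T) for n in ns]

def sinc(u):
    return 1.0 if abs(u) < 1e-12 else math.sin(math.pi * u) / (math.pi * u)

def reconstruct(t):
    # Whittaker-Shannon interpolation: x(t) = sum_n x(nT) * sinc(t/T - n)
    return sum(s * sinc(t / T - n) for n, s in zip(ns, samples))

err = abs(reconstruct(0.123) - x(0.123))
print(err)    # small truncation error
```

The discrete list `samples` carries everything needed to recover the continuous trajectory at any instant, which is what licenses treating the dynamical system as a sequence of discrete states.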

Over a finite time period we can map the states of the discretised dynamic system onto a counter, or, pace Putnam, any open physical system.

IF "mind and cognition are constituted by state space evolution trajectories in certain [continuous] dynamical systems" (i.e. DSC is TRUE)

THEN “All matter instantiates an infinitude of subjective conscious aspects";

'Pixies' dancing everywhere!
'Consciousness' as a form of analogue computation:
Dancing with Pixies (reprise)
Computational modes of 'understanding':
a straw man?
"The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states", (Searle, 1980, Minds, Brains, Programs), targeting the work of Schank and Abelson on 'understanding stories'.
Searle's counter-claim is that the mere execution of a computer program is not sufficient for either understanding or consciousness.
Professor of Cognitive Computing (Goldsmiths, UK)
Chair AISB:
the UK society for the study of Artificial Intelligence (A.I.)
and the Simulation of Behaviour (S.B.)

Searle targets this response in two ways, targeting both discrete and continuous connectionist [dynamical] systems:

1. The 'Chinese gym' targets discrete dynamical systems;

2. The 'network of water pipes' targets continuous dynamical systems.
Searle: "Computation is not an intrinsic property of matter."
Mechanical computation
Babbage’s Difference & Analytical Engines.

Michie’s MENACE: a noughts-and-crosses-playing machine, for which Donald Michie developed a general-purpose learning algorithm called BOXES, which could be hand-simulated using an assembly of matchboxes.

Weizenbaum’s 'toilet-roll and stones' computer programmed to play a simple game.
What computation is this?
It is arbitrary - a mere engineering practicality - how voltage levels are mapped to logical states. I.e.

CMOS: LOW (FALSE) as (0 V to VDD/2)
HIGH (TRUE) as (VDD/2 to VDD) [VDD = supply voltage]

TTL: LOW (FALSE) as (0V to 0.8V)
HIGH (TRUE) as (2V to VCC) [VCC = 5 V ±10%]

ECL: LOW (FALSE) as (VEE to −1.4 V)
HIGH (TRUE) as (−1.2 V to 0 V) [VEE ≈ −5.2 V]
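The arbitrariness of the mapping can be made vivid in a sketch: the same analogue voltage trace yields different logical states under the TTL and (idealised) 5 V CMOS conventions. The threshold values follow the list above; the trace itself is invented for illustration.

```python
def ttl_bit(v):
    """TTL convention: LOW in [0 V, 0.8 V], HIGH in [2.0 V, VCC]."""
    if v <= 0.8:
        return 0
    if v >= 2.0:
        return 1
    return None                       # forbidden region: no defined logic level

def cmos_bit(v, vdd=5.0):
    """CMOS convention (idealised): LOW below VDD/2, HIGH above."""
    return 0 if v < vdd / 2 else 1

trace = [0.3, 1.5, 2.2, 4.8]          # one and the same analogue voltage trace
print([ttl_bit(v) for v in trace])    # [0, None, 1, 1]
print([cmos_bit(v) for v in trace])   # [0, 0, 0, 1] - 2.2 V is HIGH for TTL, LOW for CMOS
```

One physical trajectory, two different computational state sequences: the computation read off the hardware depends on the mapping convention, not on the physics alone.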

This leads Oron Shagrir to assert that [contingent on the mapping between physical state and computational state]:
Computational states are ALWAYS mapped onto physical states of the computational system
"... some possible physical systems simultaneously implement different computational structures (or different states of the same computational structure) that suffices for cognition; hence [if the computational sufficiency thesis is true] these systems simultaneously possess different minds ...", (PT-AI Thessaloniki, 2011).
Cognition as autonomously EMBODIED
Evan Thompson's "Mind in Life" thesis...

With its allopoietic engineered body, even Nasuto, Warwick et al's 'cybernetic animat' - a modern 'brain in a vat' - remains unable to ground meaning; a richer [autopoietic] form of embodiment is required.

See Nasuto & Bishop (2011), "Of zombie mice and animats", PT-AI 2011, Springer (2012 forthcoming).
Cognition as ENACTIVE
Perception is an exploratory activity; something we do ...

... not something that passively happens.

C.f. Held and Hein on kittens.
We cannot study cognition and perception in a void; do you see what I see?

E.g. Jules Davidoff's work with Himba tribe suggests colour perception (categorisation and phenomenology) is contingent on culture.

Davidoff, Davies & Roberson, (1999), "Colour categories in a stone-age tribe", 1999, Nature 398, 203-204.
Cognition as EMBEDDED (in society)
On the relationship between mind
body and world
Cognition as a form of analogue computing
The Nyquist-Shannon criterion
Yes but ...
Doug Hofstadter (Indiana University)
[of a priori mapping] “This is not science ...”
This is not a real mapping, as we can only perform Putnam-style mappings a posteriori, once we know the input(s) to the robot.

“ This is not science! ”

But, and even more so than Skinner’s rat in a maze, we can always re-run experiments on virtual robots using exactly the same input and starting conditions …

As the robot’s behaviour is deterministic, each time the robot is used with the same input the phenomenal states it experiences must be the same.

But with input to the robot fixed we can collapse its contingent control-program state structure into an un-branching series of state transitions and hence perform the Putnam mapping on a counter (or any open physical system); thus the DwP reductio holds.
Peter Fletcher (University of Keele, UK)
“On not fully implementing the FSA ...”
Putnam’s mapping merely realises a desired series of state transitions and does not capture the full power of the FSA.

Fletcher: “Consider an FSA built to recognise a string in a given language.”
Just getting the answer right once is not enough to say that the system recognises the string.
What matters is the sequence of states the machine would enter if it had been presented with other strings.

But this conflates ‘recognition of a string’ with ‘experiencing phenomenal states’.
It may be the case that to say of the FSA that it correctly ‘recognises a string’ it is necessary to implement its full structure; but Fletcher’s objection does not prove that such ‘full structural implementation’ is necessary for the system to realise phenomenal experience ...
David Chalmers (Australian National University)
“On the lack of Counterfactuals ...”
“It's not at all obvious just how the gradual transition [elimination of contingent transitions] will work in a combinatorial state-automaton, but let's grant that something like it is possible.
Then this process will gradually transform a counterfactually-sensitive system into a 'wind-up' system that implements just one run.
This plausibly will affect the system's cognitive states (such as beliefs), gradually destroying them, so the fading qualia argument (which relies on preserving cognitive states) doesn't apply.

Bishop addresses a version of this point by saying that 'input sensitivity' can't be crucial, since a non-input-sensitive system (e.g. a blind system, or one with constant input signals) could be conscious.
But what really matters is counterfactually-sensitive cognitive processes, which the blind system still has (it isn't a wind-up system).

Compare an ordinary human, a blind human, and a humanoid system pre-programmed to go through a single specific series of brain states.
Here it seems most plausible to say that the first two systems are conscious but that the third is not.”
Ron Chrisley (University of Sussex, UK)
“On implementing different systems ...”
"Hard-wiring (fixing) input to the robot makes the system physically distinct (hence no longer isomorphic) to a FSA with input as it physically lacks ability to correctly implement ‘counterfactually sensitive cognitive processes’ and this distinction is critical to the computationalist’s claim."
The objection from randomness ...
The Dancing with Pixies reductio specifically targets Discrete State Machines; it has nothing to say about the conscious states of stochastic automata.

In a Stochastic FSA the future state of the machine is determined by a probability distribution which determines, given the current state (and any input), the probability of entering any future state.

If the stochastic element is generated by a genuine stochastic source – say ‘shot noise’ – then, although this in one sense concedes that computations alone are not sufficient for consciousness, the objection would seem uncontroversially correct.

But we can replicate the statistical properties of randomness algorithmically [or via look up] hence all that is left is an appeal to the random process itself ...
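The look-up reply can be sketched directly: a stochastic automaton driven by recorded noise draws replays exactly the run produced 'live', so nothing in any single execution distinguishes the genuine stochastic source from its replayed replacement. The automaton and its transition probabilities are illustrative.

```python
import random

# Illustrative stochastic automaton: from each state move to 'A' with the
# given probability, otherwise to 'B'.
p_to_A = {"A": 0.3, "B": 0.7}

def run_fsa(draws, start="A"):
    """Drive the stochastic FSA from a sequence of noise draws in [0, 1)."""
    state, states = start, [start]
    for u in draws:
        state = "A" if u < p_to_A[state] else "B"
        states.append(state)
    return states

rng = random.Random(42)                    # stands in for a genuine noise source
live = [rng.random() for _ in range(10)]
recorded = list(live)                      # the same draws, replayed from a look-up table

print(run_fsa(live) == run_fsa(recorded))  # True: the runs are indistinguishable
```

All the 'stochasticity' has been factored out into the list `draws`; what remains is a deterministic machine of exactly the kind the reductio targets.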
Seth Lloyd (MIT)
"On carbon chauvinism: could a silicon dog be conscious?"
It is conceivable that a silicon-based life form - a silicon dog, say - could have conscious states instantiated in the embodied physical properties of its structure ...

... it just would not instantiate them purely in virtue of its execution of a specific computer program.
Is DwP too powerful:
can anything be conscious?
John Searle, (Minds, Brains, Programs, 1980), "Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena.

No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism:

the mind, they suppose, is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not."
The argument from teleology
Without genuine meaning or consciousness teleology must be hard coded - engineered - into the system.
The importance of meaning and consciousness
It is conceivable that at some future point in time engineers might build computer controlled robots that interact with humans so well as to pass (Harnad's embodied) Total Turing Test.

The last century witnessed too many acts of genocide and, in the absence of genuinely instantiated meaning or consciousness, I find the prospect of a future "equal opportunities" act for robots [Kevin O'Regan] - potentially enabling a mere 'unconscious system' to rise to a position of power over humanity - truly appalling.
On robot ethics ...