Artificial Intelligence: Computational Postmodernism

by Jay Hack, 20 March 2013

Transcript of Artificial Intelligence: Computational Postmodernism

Hello, my name is Andrew Ng. I am the current director of the Stanford Artificial Intelligence Lab, though you may know me better as the co-founder of Coursera. The other day, after watching "Tree of Life", I got into a discussion concerning postmodernism with a friend of mine. It occurred to me that the field of Artificial Intelligence has undergone a transition from modern to postmodern design approaches over the past several decades. AI researchers, you see, build agents that form internal representations of their external domains. It is the isomorphism, or "mapping," from the symbols that constitute these internal representations to external entities from which an agent's "understanding" of the world arises.

Part 1: At the birth of AI, most researchers attempted to construct these representations themselves. That is, they would design a formal system to constitute the agent's model of the world. More often than not, the 'symbols' of this formal system would refer to high-level entities and actions in the real world (such as a person, an object, or a typical verb; anything you might mention in a typical conversation). My colleague Terry Winograd, for example, built a system named SHRDLU back in 1968 that would perform user-specified operations on a small domain of colored blocks. Part of his design process involved specifying that the blocks, which are certainly high-level entities, as well as their relative positions, are things worth representing. The 'symbols' in SHRDLU's internal model of its external domain would never consist of anything but the blocks and their relative positions. In this design paradigm, known as "good old-fashioned AI," the designer gives the agent his own methodology for representing and understanding the world. That is, the designer acts as an authority on the way in which the world works; the machine accepts all of its knowledge (or learning methodology) from the designer.

The representations that these "good old-fashioned" artificial intelligences form are objective and explicit. Here, for example, we have a representation that might be found in an agent that plays Tic-tac-toe. Notice that it is obvious what higher-level entities the discrete symbols refer to (it is explicit). This corresponds to a board position in the game. No matter who views this representation, as long as they know how the game of Tic-tac-toe is played, they can understand it (it is a universal, or objective, representation). A sketch of such a representation appears after the list below.

Good old-fashioned AI has had many successes over the years:

  • The General Problem Solver: a program capable of solving a range of formally stated problems, including the Towers of Hanoi. http://en.wikipedia.org/wiki/General_Problem_Solver
  • Deep Blue: the first chess-playing computer to defeat the reigning world champion (Garry Kasparov, 1997). http://www-03.ibm.com/ibm/history/ibm100/us/en/icons/deepblue/
  • SHRDLU: a program capable of taking natural-language instructions and manipulating colored blocks on a tabletop. http://hci.stanford.edu/winograd/shrdlu/
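To make this concrete, here is a minimal sketch in Python of the kind of explicit Tic-tac-toe representation described above. It is purely illustrative and is not taken from any particular system; the names and layout are my own.

    # A purely illustrative sketch of an explicit, "good old-fashioned" representation.
    # Every symbol maps directly to a high-level entity: a mark, a square, a board.
    EMPTY, X, O = ".", "X", "O"

    # An explicit 3x3 board position; anyone who knows the rules of Tic-tac-toe
    # can read this representation directly.
    board = [
        [X,     O,     EMPTY],
        [EMPTY, X,     EMPTY],
        [O,     EMPTY, X],
    ]

    def winner(board):
        """Return 'X' or 'O' if a player has three in a row, otherwise None."""
        lines = [row for row in board]                          # rows
        lines += [list(col) for col in zip(*board)]             # columns
        lines += [[board[i][i] for i in range(3)],              # main diagonal
                  [board[i][2 - i] for i in range(3)]]          # anti-diagonal
        for line in lines:
            if line[0] != EMPTY and line.count(line[0]) == 3:
                return line[0]
        return None

    print(winner(board))  # "X": the answer, like the representation, is explicit and objective

The point is that both the representation and the reasoning over it are specified in full by the designer ahead of time; nothing here is learned.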
But early in its history, researchers began to realize that relying on their own models and representations of the world to build AI was futile. For one, it takes a very long time to explicitly communicate a sufficiently comprehensive representation scheme and methodology for complicated tasks. In addition, it is often the case that we don't know how to describe the way in which we understand something!

Part 2: This caused researchers to shift their efforts towards a design approach that emphasized learning and experience over objective truth. Instead of constructing explicit representation schemes, researchers began to rely increasingly on statistical models that made minimal assumptions about the agent's domain; given enough time and experience within their domain, these statistical models could offer a new type of understanding to their bearers. This is where my research comes in! In most statistical approaches to AI, the referent of a 'symbol' in an agent's internal representation is either an atomic unit of perception (e.g. a pixel) or an abstract statistical property describing its percepts - the 'parameters' of the model in use.

These agents learn like humans. When initially introduced to their domains, agents that incorporate statistical learning act like infants - uncoordinated and ignorant. They derive their knowledge of their domain from their own subjective experience within it, as opposed to from their designer's knowledge of it. While the designer does choose the statistical model that the agent implements, as well as what percepts the symbols will refer to, he makes minimal assumptions about the nature of the data it will receive and the structure of the domain in which it will operate. All of these things are left for the agent to conclude.

In fact, some of the statistical models that we use are even designed to mimic networks of neurons. The same algorithm (artificial neural networks), when applied in countless different domains, consistently produces good results. I, personally, have successfully used artificial neural networks to fly a helicopter, among other things. (This is a task that *still* nobody has been able to perform using the "good old-fashioned" approach - it is too complex to describe, though apparently easy enough to learn!) Here's an example of an agent learning to walk using a neural network. This is purely based on trial and error, mind you - the agent initially has no knowledge of the physics governing its world; it can merely sense its orientation. It will learn how its actuators (leg "muscles") exert influence over its percepts and how to leverage this knowledge to remain upright. The same algorithm is currently in use for recognizing the postal codes written on envelopes in post offices across the US. So it hit me: clearly, these agents are deriving their understanding of the world from experience. Truth, for them, is subjective.
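To make the contrast concrete, here is a toy sketch in Python of this statistical approach: a tiny neural network that learns the XOR function purely from examples, with no explicit rules supplied by the designer. It is illustrative only - it is not the helicopter controller, the walking agent, or the postal-code reader - and every name in it is my own.

    # A toy, purely illustrative sketch of statistical learning: a small neural
    # network that learns XOR from examples alone. The designer supplies only the
    # model family and its size; the "knowledge" ends up in the parameters below.
    import numpy as np

    rng = np.random.default_rng(0)

    # Experience: four input/output examples of the XOR function.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Parameters: 2 inputs -> 8 hidden units -> 1 output, randomly initialized.
    W1 = rng.normal(size=(2, 8))
    b1 = np.zeros(8)
    W2 = rng.normal(size=(8, 1))
    b2 = np.zeros(1)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for step in range(5000):
        # Forward pass: the agent's current "understanding" of its domain.
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2)

        # Backward pass: nudge every parameter to reduce the prediction error.
        grad_p = (p - y) * p * (1 - p)
        grad_W2, grad_b2 = h.T @ grad_p, grad_p.sum(axis=0)
        grad_h = (grad_p @ W2.T) * h * (1 - h)
        grad_W1, grad_b1 = X.T @ grad_h, grad_h.sum(axis=0)
        for param, grad in ((W1, grad_W1), (b1, grad_b1), (W2, grad_W2), (b2, grad_b2)):
            param -= 0.5 * grad

    print(np.round(p, 2))   # typically close to [[0], [1], [1], [0]]: learned from experience
    print(np.round(W1, 2))  # the learned "knowledge" itself is just a grid of numbers

Notice that even once the network has learned the task, its parameters are an opaque array of numbers - which is exactly the point taken up next.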
For an artificial neural network, as for many other statistical learning models, the semantic content of the representation may not even be understandable to an outside observer. That is, all of the understanding is 'locked up' in the parameters of the statistical model. Short of reading them off in a list or implementing an identical model, it is not clear to us exactly what the agent has learned. Furthermore, the agent's understanding of the world is fragmented, distributed throughout thousands of statistical parameters. It has semantic content that can only be understood by the agent itself. Researchers seem to have given up on finding the objective truths and representations of the world - how one might best go about flying a helicopter, for instance - in favor of more fragmented and convoluted, yet effective, alternatives. This is computational postmodernism.

by Jay Hack, PWR 2, Winter 2013