
Language, mind, and brain


Kristen Greer

on 8 September 2013

Transcript of Language, mind, and brain

Wernicke's area: Controls the use of content words (N, V, Adj, Adv) and works closely with the auditory cortex. Important for semantic processing and speech perception/comprehension.
Broca's area: Controls the use of inflectional morphemes and function words; also controls the motor cortex, coordinating movement of the face, jaw, tongue, and lips. Important for syntax and speech production.
Arcuate fasciculus: Bundle of fibers connecting Broca's and Wernicke's areas, allowing the two to communicate.
Angular gyrus: Facilitates the identification of objects by matching auditory and visual information. Important for reading and writing.
Sylvian Fissure: separates the frontal and temporal lobes.
These two hemispheres are connected by a structure composed of a bundle of nerve fibers called the corpus callosum.
Frontal lobe: higher thinking
Parietal lobe: sensory information, spatial sense
Occipital lobe: visual information
Temporal lobe: auditory perception
The cortex is characterized by bumps (sg. gyrus, pl. gyri) and grooves (fissures or sulci).
Language Acquisition
Language and mind
Me is We: Mirror neurons, language, and human society
The critical period
Stages in the development of language
Delve Deeper
If you're interested in the critical period hypothesis, you may want to check out this compilation of fragments from a documentary about Genie, a rare case of a child deprived of language until she was 13 (also, click the link to the Wikipedia page about Genie in the wiki excerpt).
NOTE: Genie is a victim of severe child abuse. Some of the images in the video may be upsetting.
In the last video, you saw that all humans are wired to receive color information in the same way. However, our perception of color in our minds may be different. In this video, the YouTube personality Michael Stevens (or 'VSauce') discusses qualia--our subjective experiences of the world. As you watch the video, think about how you might reconcile the concept of qualia with what you've seen elsewhere in this Prezi on mirror neurons and social learning.
Delve Deeper
Check out this really cool video about a man, Neil Harbisson, who literally LISTENS to color. To what extent do you think Harbisson's experience of color is the same as yours or mine?
The perception of color
Is your red the same as my red?
The three main theories of the development of language in children
Language is too complex to be just a learned phenomenon. There is no way children can hear and learn all of the possible sentences of human language, because the set of such sentences is infinite. This argument is known as the poverty of the stimulus: children learn more linguistic sequences than they can possibly be exposed to in their environment. As a result, children's minds are not a 'tabula rasa' (blank slate) when they are born. Instead, they are born with a unique predisposition for learning language, sometimes called a 'Language Acquisition Device'.
The nativist theory
Key proponent:
Basic idea:
Noam Chomsky
Language learning occurs as the result of general learning processes in the human brain. Children extrapolate language data from their environment and, on the basis of the frequency of different structures, deduce patterns until they have acquired the full linguistic system.
The empiricist theory
Key proponent:
Basic idea:
B.F. Skinner
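The empiricist idea of extracting patterns from frequency can be sketched in a toy way: count which word pairs co-occur most often in a small sample of input. This is only an illustrative sketch; the miniature corpus and the bigram counting are assumptions made for illustration, not a model of actual child learning:

```python
from collections import Counter

# Toy illustration of frequency-driven pattern extraction: count which
# word pairs (bigrams) co-occur in a tiny sample of linguistic input,
# then rank them by frequency. The corpus is invented for illustration.
corpus = [
    "the dog runs",
    "the dog sleeps",
    "the cat runs",
    "a dog runs",
]

bigrams = Counter()
for sentence in corpus:
    words = sentence.split()
    bigrams.update(zip(words, words[1:]))

# A purely statistical learner would latch onto the most frequent
# patterns first ('the dog' and 'dog runs' each occur twice here).
print(bigrams.most_common(2))
```

On a Skinner-style account, the structures encountered (and reinforced) this often would be generalized first; Chomsky's poverty-of-the-stimulus argument above is precisely that such counts cannot cover the infinite set of possible sentences.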
Children learn language primarily because of their interactions with others. They learn which structures get them what they want in different scenarios and this enables them to understand the meaning behind these structures, ultimately leading to language learning. Under this theory, the way caregivers interact with children plays an especially important role in shaping their language habits.
The interactionist theory
Key proponent:
Basic idea:
Lev Vygotsky

In Unit 3, we looked at the features of Prof. Ramachandran's speech. Let's look at it again, this time paying attention to the content of his lecture (the intriguing topic of mirror neurons). Then, follow up by reading the included article on these 'Gandhi' neurons.
Mirror neurons in and of themselves are fascinating, but how are they relevant to language?
Researchers have identified mirror neurons in and around Broca's area, the region responsible for coordinating the motor movements needed for language.
Mirror Neurons
Language and mirror neurons
This finding has led to research identifying two roles these neurons play in language:
facilitating comprehension
promoting language acquisition
Click on this image to check out this excellent (and very short) video on these two roles.
The transcript is also available on the website!
Language and human society
As we have seen, mirror neurons may have significantly contributed to our ability to evolve language. But the question remains as to why we evolved language. Prof. Mark Pagel gives a compelling response to this question in this next TED talk.
This video deals specifically with Genie's language development (~2:00)
Question: Are these theories more aligned with a 'nature' view of the development of language or a 'nurture' view?
Research on language acquisition has shown that children generally progress through something like the following stages in language acquisition (note: ages are approximations, as children do vary in their language development).
Prof. Deb Roy conducted an innovative experiment to investigate language development around 18-24 months. He is interested in 'The birth of a word': the factors that influence how and when a child learns a new word. Listen to his findings in this next TED talk.
Considerable evidence in child language development suggests that there is a critical period for language learning: an age after which it becomes difficult, if not impossible, to learn language. In this next TED talk, Prof. Patricia Kuhl describes work she has done to investigate the critical period hypothesis (CPH) for the acquisition of sounds. In addition to its implications for the CPH, Kuhl's research sheds interesting light on the way babies learn language.
How do humans perceive color? This short, animated video explains.
Question: The theory of mind presented in Stevens' video and the HowStuffWorks article is based on the premise that 'mind' is something highly individual: there are individual selves and individual consciousnesses, and each of these possesses its own 'mind'. ...Is this the only way to think of 'mind'?
Stevens talks about 'theory of mind'. For more on this, take a look at this article from HowStuffWorks.com.
Theory of mind
Where is language in the brain?
Some brain basics
The brain is divided into two hemispheres...
...each of which contains four lobes.
Surrounding the brain is a thin (2-4 mm) layer of gray matter called the cortex.
The brain is lateralized, meaning that the two hemispheres are largely distinct in structure and function. For the majority of people, language is localized in the left hemisphere of the brain. Thus, the regions discussed here are located in the left hemisphere for most people.
The 'classic' view of the organization of the brain for language is known as the 'Wernicke-Geschwind' model (though note that recent advances in neuroscience have shown that this model dramatically oversimplifies things).
Language areas of the brain
Evidence for the language areas of the brain
How do we know that language is localized in these regions? The two most important sources of evidence are aphasias (impairments of speech due to injury, disease) and brain imaging techniques (fMRIs, PET, CAT, ERP). This video explains.
Going deeper into the brain
Christopher DeCharms shows how we can use MRIs to...control our brains (!!).
Take a look at some recent TED talks showing cutting-edge uses of brain imaging techniques.
Language, mind, and brain
Answer the questions in Part One (Brain basics) in the Comprehension Activity.
Answer the questions in Part Two (Aphasias) in the Comprehension Activity.
Allan Jones paints a very interesting picture of the complexity of the brain and the striking degree to which humans are actually very, very similar.
Answer the questions in Part Three (Brain imaging) in the Comprehension Activity.
Answer the questions in Part Four (Language Acquisition: Stages of development) in the Comprehension Activity.
Answer the questions in Part Five (Language Acquisition: Critical Period) in the Comprehension Activity.
Answer the questions in Part Six (Mirror neurons) in the Comprehension Activity.
Answer the questions in Part Seven (Language and human society) in the Comprehension Activity.
Answer the questions in Part Eight (Language and mind) in the Comprehension Activity.
Mirror neurons
Language is tied to two key debates that have ramifications in disciplines beyond linguistics (such as psychology, sociology, cognitive science, and philosophy): the mind-body problem and the nature/nurture debate. The mind-body question refers to whether or not we should make a distinction between the actual physical bundle of neurons and tissue in our heads and the consciousness, the thinking, and the mental states that go on in there. The nature/nurture debate is the question of how we become who we are: are we born the way we are, or are we shaped by our everyday experiences in our environments? The materials in this unit are all, at some level, about how language fits into these two major discussions.
To be able to think about the materials in this unit in the right way, it is important to first understand a little bit about these two questions. Consider first the mind-body problem. The title of this unit, Language, mind, and brain, implies that there is a distinction to be made between something that is mind and something that is brain. But is such a distinction justified? In fact, this question has been the subject of much debate in philosophy, psychology, and cognitive science. Responses to this question generally fall into one of two categories: dualist approaches and monist approaches. As your knowledge of English morphology has probably already led you to understand, dualist approaches (note the root duo) hold that there is a difference between the realms of mind and matter, while monist approaches (note the root mono) argue that there is only one kind of 'stuff', and mind and brain are both contained within that stuff.

The dualist approach to the mind-body problem pervades much of the Western academic tradition. And indeed, what will emerge from this Prezi is a general view that language has connections to a sense of mind that is distinct from the physical brain. But it is important to recognize that this is a debate, and we encourage you to keep this in mind as you work through these materials, pursuing further readings on monist approaches if you are interested. A good place to start is, of course, the Wikipedia page on the mind-body problem, which acts as an index for several different monist theories. What you'll see in the Prezi is the way language is connected to both brain and mind. First, as a cognitive function, language processing must take place in the brain, so we can ask ourselves how and where in particular language is stored in the brain. This will be addressed in the sections on brain anatomy, on the language centers of the brain, and on aphasias and neuroimaging techniques.
You’ll also see that language is connected to the brain in this section on mirror neurons and their role in language learning and comprehension.
Towards the end of the Prezi path, you'll see how language has connections to the mind domain of the mind-body dichotomy. In particular, this video from the YouTube personality known as Vsauce and this article from HowStuffWorks discuss how our everyday use of language betrays what philosophers call a 'theory of mind', namely, the understanding that we and other people have thoughts and mental states that we can think about and reflect on. In the next face-to-face meeting, we'll continue to explore this 'theory of mind' concept and talk about how there are even different ways to understand the nature of 'mind'.
Returning now to our second major question, the nature/nurture debate.
Once you've learned a little about the anatomy of the brain and where language is stored, we'll talk about how language is first acquired and stored in the child's brain. As you'll see from this section of the Prezi, the way language is acquired bears on the question of whether our make-up is mostly genetic (nature) or learned (nurture).
In addition to the mind-body problem and the nature/nurture debate, there is a third major theme in the materials in this unit. This theme, the question of whether or not language is a trait that is unique to the human species, is most prominently addressed in the lecture by Prof. Mark Pagel. After you’ve seen the information on how the human brain has evolved to be able to acquire language, Mark Pagel will address the question of why we evolved to acquire language.
So there are three major questions to keep in the back of your mind as you make your way through all of this material, looking for connections between the many resources presented here. First, what role does our biology play in developing language, and what role does our environment play? Second, how does what we know about language suggest that there is some sort of 'mind' or 'consciousness' that is independent of the physical matter that is our brain? And third, what is the role of language in human society, and to what extent is it something unique to our species?
Evidence for the language structures of the brain you’ve just seen comes from two main sources: aphasias and brain imaging techniques.
So looking first at aphasias. Aphasias are impairments in the speech of an individual who has suffered some kind of brain damage, whether by injury, stroke, or other diseases. The basic assumption with aphasic patients is that if a patient has a specific kind of language impairment and damage to a particular area of the brain, we can assume that it is that area that controls the part of language that has been impaired.
There are many different kinds of aphasias depending on the area of the brain that has been damaged and the type of language impediment this damage produces. The ones we will survey include Broca’s, Wernicke’s, and conduction aphasias.
The first aphasia I want to look at is Broca’s aphasia, which results from damage to Broca’s area, the language center located in the left frontal lobe that is responsible for syntax and speech production. Patients with this aphasia often suffer from difficulties with speech production; they have difficulty combining words, so they often produce broken speech that is lacking in inflectional morphemes and function words, as you can see in the short excerpt on this slide from an interview with a Broca’s aphasic. It is largely because of the nature of these symptoms that we believe Broca’s area to be partly responsible for coordinating the syntax of language. Here’s a video showing an example of a patient suffering from Broca’s aphasia.
A dramatically different kind of aphasia is called Wernicke’s aphasia, which sometimes is a symptom in patients who have damage to Wernicke’s area in the left temporal lobe. Unlike Broca’s aphasics, Wernicke’s patients have no problems producing speech. They struggle significantly, however, with speech comprehension. They misinterpret what others say and respond in unexpected and unusual ways. They also have difficulties retrieving words. When Wernicke’s aphasics speak, their speech is fluid, with regular intonations and words connected in seemingly grammatical ways. However, because they have difficulty retrieving words, the words they connect don’t make sense. The result is a stream of fluent but highly nonsensical speech. It is because of these kinds of symptoms that we believe this area of the brain, Wernicke’s area, to be involved in processing the semantic component of language in the brain.
Let’s take a look at some audio from interviews with patients suffering from Wernicke’s aphasia.
Another kind of aphasia is called conduction aphasia. This results from damage to the arcuate fasciculus, the bundle of nerve fibers connecting Broca’s and Wernicke’s areas. Patients suffering from this kind of aphasia sound a lot like Wernicke’s aphasics: they can produce fluent speech, but this speech is nonsensical. Unlike Wernicke’s aphasics, who also have a difficult time comprehending speech, conduction aphasics can usually comprehend speech with little difficulty. Generally, however, these patients have trouble repeating linguistic information they hear because the connection between Broca’s and Wernicke’s areas is impaired.
A side effect characterizing some of these disorders is what is known as anosognosia. Anosognosia refers to the patient's unawareness that there is anything wrong with his or her speech. Most often, this accompanies Wernicke's and conduction aphasias. Broca's aphasics, on the other hand, are often (but not necessarily always) acutely aware of their inability to speak normally.
In the past, the only way to know where damage in the brain occurred was to wait for your patient to die and then perform an autopsy. Recent inventions have allowed us to look at the brain directly to study language processes in living humans. Since neuroimaging is non-invasive, we can study the way both patients and 'normal' individuals use language. In use today are several different methods of peeking inside the brain: X-ray computed tomography (CT or CAT scans), positron emission tomography (PET), functional magnetic resonance imaging (fMRI), and near-infrared spectroscopy (NIRS). Neuroimaging is best for answering questions about where certain kinds of processing are taking place. For instance, we can see which areas of the brain react more when people listen to speech compared to random noise.
Along with these techniques that help us pinpoint where in the brain certain linguistic signals are processed, another set of neuroimaging techniques can help us figure out when certain linguistic processes take place. These include Electroencephalography (EEG) / Event-related potentials (ERPs) and Magnetoencephalography (MEG). Using these we can answer questions such as how long it takes a person to process a difficult syntactic structure, or when we think a subject has understood a word. As you’ll see later in this unit, MEGs are also extremely useful for studying how babies acquire language.
Evidence from neuroimaging paints a much more complicated picture of where and when language is processed in the brain. The Wernicke-Geschwind model that is mentioned in the Prezi is useful as a way to begin thinking about localization of language function, but it’s not a very complete picture.
Besides asking where language is stored in the brain, we can also find out how the brain organizes and stores language by examining the types of errors people make when speaking. In addition to the evidence from aphasias, evidence for how language is stored in the brain may come from observing the errors normal speakers make from time to time when they speak. Speech errors include things like slips of the tongue, what are called slips of the ear, and the 'tip of the tongue' phenomenon. Slips of the tongue, also called spoonerisms, often involve switching the sounds in different words, like tup of tea for cup of tea or pos pucket for pus pocket. Such errors indicate that words are made up of phonemes: we can anticipate later phonemes in a sequence before we speak them, causing them sometimes to be spoken out of sequence. Slips of the ear are the reverse phenomenon; they involve mishearing the sounds in a series of words. These in turn show us which phonemes are easily confused during perception. Finally, the tip of the tongue phenomenon occurs when you can't think of the word for the concept you are trying to express. This shows that we may store meanings and concepts separately from the phonetic structure of the words we use to express them. We also know that some words, often words that are frequent in everyday conversation, can be easier to recall than others, and these are less susceptible to the tip of the tongue phenomenon. This suggests that word frequency may play a role in the way we store words in our brain.
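The sound-exchange idea behind spoonerisms can be sketched in a few lines: swap the onsets (the material before the first vowel) of two words. This is a rough orthographic stand-in for real phonological onsets, written for illustration only:

```python
def spoonerize(word1, word2):
    """Swap the onsets of two words, producing a slip-of-the-tongue-style
    error. Simplification: the 'onset' is taken to be every letter before
    the first vowel letter, a rough stand-in for phonological onsets."""
    vowels = "aeiou"

    def split_onset(word):
        for i, ch in enumerate(word):
            if ch in vowels:
                return word[:i], word[i:]
        return word, ""  # no vowel found: treat the whole word as onset

    onset1, rest1 = split_onset(word1)
    onset2, rest2 = split_onset(word2)
    return onset2 + rest1, onset1 + rest2

print(spoonerize("cup", "tea"))  # ('tup', 'cea')
```

Note that vowel exchanges like pos pucket for pus pocket would require swapping nuclei rather than onsets; the anticipation mechanism is the same.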

Many thanks to Laurie Lawyer, our colleague in the department, for her help with the ideas and script for this video.
Humans have long held a fascination for the human brain. We've charted it, we've described it, we've drawn it, we've mapped it. Now, just like the physical maps of our world that have been highly influenced by technology -- think Google Maps, think GPS -- the same thing is happening for brain mapping through transformation.

So let's take a look at the brain. Most people, when they first look at a fresh human brain, say, "It doesn't look like what you're typically looking at when someone shows you a brain." Typically, what you're looking at is a fixed brain. It's gray. And this outer layer -- this is the vasculature, which is incredible, around a human brain. These are the blood vessels. 20 percent of the oxygen coming from your lungs, 20 percent of the blood pumped from your heart, is servicing this one organ. Basically, if you hold two fists together, it's just slightly larger than the two fists.

Scientists, sort of at the end of the 20th century, learned that they could track blood flow to map non-invasively where activity was going on in the human brain. So for example, they can see in the back part of the brain, which is just turning around there. There's the cerebellum; that's keeping you upright right now. It's keeping me standing. It's involved in coordinated movement. On the side here, this is temporal cortex. This is the area where primary auditory processing -- so you're hearing my words, you're sending it up into higher language processing centers. Towards the front of the brain is the place in which all of the more complex thought, decision making -- it's the last to mature in late adulthood. This is where all your decision-making processes are going on. It's the place where you're deciding right now you probably aren't going to order the steak for dinner.

So if you take a deeper look at the brain, one of the things you can see in cross-section is that you can't really see a whole lot of structure there. But there's actually a lot of structure there. It's cells and it's wires all wired together. So about a hundred years ago, some scientists invented a stain that would stain cells. And that's shown here in the very light blue. You can see areas where neuronal cell bodies are being stained. And what you can see is it's very non-uniform. You see a lot more structure there. So the outer part of that brain is the neocortex. It's one continuous processing unit, if you will. But you can also see things underneath there as well. And all of these blank areas are the areas in which the wires are running through. They're probably less cell dense. So there's about 86 billion neurons in our brain. And as you can see, they're very non-uniformly distributed. And how they're distributed really contributes to their underlying function. And of course, as I mentioned before, since we can now start to map brain function, we can start to tie these into the individual cells.

So let's take a deeper look. Let's look at neurons. As I mentioned, there are 86 billion neurons. There are also these smaller cells, as you'll see. These are support cells -- astrocytes and other glia. And the neurons themselves are the ones that are receiving input. They're storing it, they're processing it. Each neuron is connected via synapses to up to 10,000 other neurons in your brain. And each neuron itself is largely unique. The unique character of both individual neurons and neurons within a collection of the brain is driven by the fundamental properties of their underlying biochemistry. These are proteins. They're proteins that are controlling things like ion channel movement. They're controlling which nervous system cells partner up with each other. And they're controlling basically everything that the nervous system has to do.

So if we zoom in to an even deeper level, all of those proteins are encoded by our genomes. We each have 23 pairs of chromosomes. We get one from mom, one from dad. And on these chromosomes are roughly 25,000 genes. They're encoded in the DNA. And the nature of a given cell driving its underlying biochemistry is dictated by which of these 25,000 genes are turned on and at what level they're turned on.

And so our project is seeking to look at this readout, understanding which of these 25,000 genes is turned on. So in order to undertake such a project, we obviously need brains. So we sent our lab technician out. We were seeking normal human brains. What we actually start with is a medical examiner's office. This is a place where the dead are brought in. We are seeking normal human brains. There's a lot of criteria by which we're selecting these brains. We want to make sure that we have normal humans between the ages of 20 to 60, who died a somewhat natural death with no injury to the brain, no history of psychiatric disease, no drugs on board -- we do a toxicology workup. And we're very careful about the brains that we do take. We're also selecting for brains in which we can get the tissue -- we can get consent to take the tissue -- within 24 hours of time of death. Because what we're trying to measure, the RNA -- which is the readout from our genes -- is very labile, and so we have to move very quickly.

One side note on the collection of brains: because of the way that we collect, and because we require consent, we actually have a lot more male brains than female brains. Males are much more likely to die an accidental death in the prime of their life. And men are much more likely to have their significant other, spouse, give consent than the other way around.


So the first thing that we do at the site of collection is we collect what's called an MR. This is magnetic resonance imaging -- MRI. It's a standard template by which we're going to hang the rest of this data. So we collect this MR. And you can think of this as our satellite view for our map. The next thing we do is we collect what's called a diffusion tensor imaging. This maps the large cabling in the brain. And again, you can think of this as almost mapping our interstate highways, if you will. The brain is removed from the skull, and then it's sliced into one-centimeter slices. And those are frozen solid, and they're shipped to Seattle. And in Seattle, we take these -- this is a whole human hemisphere -- and we put them into what's basically a glorified meat slicer. There's a blade here that's going to cut across a section of the tissue and transfer it to a microscope slide. We're going to then apply one of those stains to it, and we scan it. And then what we get is our first mapping.

So this is where experts come in and they make basic anatomic assignments. You could consider these state boundaries, if you will, those pretty broad outlines. From this, we're able to then fragment that brain into further pieces, which we can then put on a smaller cryostat. And this is just showing that here -- this frozen tissue, and it's being cut. This is 20 microns thin, so this is about a baby hair's width. And remember, it's frozen. And so you can see here, old-fashioned technology of the paintbrush being applied. We take a microscope slide. Then we very carefully melt the section onto the slide. This will then go onto a robot that's going to apply one of those stains to it. And our anatomists are going to go in and take a deeper look at this.

So again this is what they can see under the microscope. You can see collections and configurations of large and small cells in clusters and various places. And from there it's routine. They understand where to make these assignments. And they can make basically what's a reference atlas. This is a more detailed map.

Our scientists then use this to go back to another piece of that tissue and do what's called laser scanning microdissection. So the technician takes the instructions. They scribe along a place there. And then the laser actually cuts. You can see that blue dot there cutting. And that tissue falls off. You can see on the microscope slide here, that's what's happening in real time. There's a container underneath that's collecting that tissue. We take that tissue, we purify the RNA out of it using some basic technology, and then we put a fluorescent tag on it. We take that tagged material and we put it on to something called a microarray.

Now this may look like a bunch of dots to you, but each one of these individual dots is actually a unique piece of the human genome that we spotted down on glass. This has roughly 60,000 elements on it, so we repeatedly measure the various genes of the 25,000 genes in the genome. And when we take a sample and hybridize it to it, we get a unique fingerprint, if you will, quantitatively, of what genes are turned on in that sample.
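The step from roughly 60,000 probes to a per-gene readout can be sketched as averaging the repeated measurements that target the same gene. The counts below mirror the talk's figures, but the intensities and the probe-to-gene mapping are random toy data, not Allen Institute data:

```python
import numpy as np

# Sketch of turning repeated probe measurements into a per-gene expression
# "fingerprint": ~60,000 probes repeatedly measure ~25,000 genes, so we
# average all the probes that target the same gene. Toy data throughout.
rng = np.random.default_rng(1)
n_probes, n_genes = 60_000, 25_000

probe_intensity = rng.exponential(scale=100.0, size=n_probes)  # scanner readout
probe_to_gene = rng.integers(0, n_genes, size=n_probes)        # probe -> gene map

# Sum the intensities landing on each gene, then divide by the number of
# probes per gene to get an average expression value.
sums = np.bincount(probe_to_gene, weights=probe_intensity, minlength=n_genes)
counts = np.bincount(probe_to_gene, minlength=n_genes)
expression = sums / np.maximum(counts, 1)  # guard against genes with no probe

print(expression.shape)  # (25000,)
```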

Now we do this over and over again, this process for any given brain. We're taking over a thousand samples for each brain. This area shown here is an area called the hippocampus. It's involved in learning and memory. And it contributes to about 70 samples of those thousand samples. So each sample gets us about 50,000 data points with repeat measurements, a thousand samples.

So roughly, we have 50 million data points for a given human brain. We've done right now two human brains-worth of data. We've put all of that together into one thing, and I'll show you what that synthesis looks like. It's basically a large data set of information that's all freely available to any scientist around the world. They don't even have to log in to come use this tool, mine this data, find interesting things out with this. So here's the modalities that we put together. You'll start to recognize these things from what we've collected before. Here's the MR. It provides the framework. There's an operator side on the right that allows you to turn, it allows you to zoom in, it allows you to highlight individual structures.
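The dataset-size figures quoted above multiply out as a quick back-of-the-envelope check (all numbers are the speaker's approximations):

```python
# Back-of-the-envelope check of the figures quoted in the talk.
samples_per_brain = 1_000     # "over a thousand samples for each brain"
points_per_sample = 50_000    # "about 50,000 data points with repeat measurements"

total_points = samples_per_brain * points_per_sample
print(f"{total_points:,} data points per brain")  # 50,000,000 data points per brain
```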

But most importantly, we're now mapping into this anatomic framework, which is a common framework for people to understand where genes are turned on. So the red levels are where a gene is turned on to a great degree. Green is the sort of cool areas where it's not turned on. And each gene gives us a fingerprint. And remember that we've assayed all the 25,000 genes in the genome and have all of that data available.

So what can scientists learn from this data? We're just starting to look at this data ourselves. There are some basic things that you would want to understand. Two great examples are drugs, Prozac and Wellbutrin. These are commonly prescribed antidepressants. Now remember, we're assaying genes. Genes send the instructions to make proteins. Proteins are targets for drugs. So drugs bind to proteins and either turn them off, etc. So if you want to understand the action of drugs, you want to understand how they're acting in the ways you want them to, and also in the ways you don't want them to. In the side effect profile, etc., you want to see where those genes are turned on. And for the first time, we can actually do that. We can do that in multiple individuals that we've assayed too.

So now we can look throughout the brain. We can see this unique fingerprint. And we get confirmation. We get confirmation that, indeed, the gene is turned on -- for something like Prozac, in serotonergic structures, things that are already known to be affected -- but we also get to see the whole thing. We also get to see areas that no one has ever looked at before, and we see these genes turned on there. It's as interesting a side effect as it could be. One other thing you can do with such a resource is that, because it's a pattern-matching exercise, because there's a unique fingerprint, we can actually scan through the entire genome and find other proteins that show a similar fingerprint. So if you're in drug discovery, for example, you can go through an entire listing of what the genome has on offer to find perhaps better drug targets and optimize.
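That pattern-matching idea can be sketched in a few lines. This is a toy illustration only: the gene names, expression values, and sample counts below are invented stand-ins, not the atlas's real data; ranking is by Pearson correlation between expression fingerprints.

```python
import numpy as np

# Hypothetical expression matrix: one row per gene, one column per
# brain sample (~1,000 samples per brain, as described in the talk).
rng = np.random.default_rng(0)
genes = ["HTR1A", "SLC6A4", "GENE_X", "GENE_Y"]  # invented list
expression = rng.normal(size=(len(genes), 1000))

def similar_fingerprints(target, genes, expression, top_n=3):
    """Rank other genes by Pearson correlation with the target gene's
    expression fingerprint across all samples."""
    t = expression[genes.index(target)]
    scores = {}
    for g, profile in zip(genes, expression):
        if g != target:
            scores[g] = np.corrcoef(t, profile)[0, 1]
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(similar_fingerprints("HTR1A", genes, expression))
```

A drug-discovery search over the real genome would work the same way, just with ~25,000 rows instead of four.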

Most of you are probably familiar with genome-wide association studies in the form of news coverage saying, "Scientists have recently discovered the gene or genes which affect X." And so these kinds of studies are routinely published by scientists and they're great. They analyze large populations. They look at their entire genomes, and they try to find hot spots of activity that are linked causally to genes. But what you get out of such an exercise is simply a list of genes. It tells you the what, but it doesn't tell you the where. And so it's very important for those researchers that we've created this resource. Now they can come in and they can start to get clues about activity. They can start to look at common pathways -- other things that they simply haven't been able to do before.

So I think this audience in particular can understand the importance of individuality. And I think every human, we all have different genetic backgrounds, we all have lived separate lives. But the fact is our genomes are greater than 99 percent similar. We're similar at the genetic level. And what we're finding is actually, even at the brain biochemical level, we are quite similar. And so this shows it's not 99 percent, but it's roughly 90 percent correspondence at a reasonable cutoff, so everything in the cloud is roughly correlated. And then we find some outliers, some things that lie beyond the cloud. And those genes are interesting, but they're very subtle. So I think it's an important message to take home today that even though we celebrate all of our differences, we are quite similar even at the brain level.

Now what do those differences look like? This is an example of a study that we did to follow up and see what exactly those differences were -- and they're quite subtle. These are things where genes are turned on in an individual cell type. These are two genes that we found as good examples. One is called RELN -- it's involved in early developmental cues. DISC1 is a gene that's deleted in schizophrenia. These aren't schizophrenic individuals, but they do show some population variation. And so what you're seeing here is that in donor one and donor four, which are the exceptions to the other two, these genes are being turned on in a very specific subset of cells. It's this dark purple precipitate within the cell that's telling us a gene is turned on there. Whether or not that's due to an individual's genetic background or their experiences, we don't know. Those kinds of studies require much larger populations.

So I'm going to leave you with a final note about the complexity of the brain and how much more we have to go. I think these resources are incredibly valuable. They give researchers a handle on where to go. But we only looked at a handful of individuals at this point. We're certainly going to be looking at more. I'll just close by saying that the tools are there, and this is truly an unexplored, undiscovered continent. This is the new frontier, if you will. And so for those who are undaunted, but humbled by the complexity of the brain, the future awaits.

Hi. I'm going to ask you to raise your arms and wave back, just the way I am -- kind of a royal wave. You can mimic what you can see. You can program the hundreds of muscles in your arm. Soon, you'll be able to look inside your brain and program, control the hundreds of brain areas that you see there. I'm going to tell you about that technology.

People have wanted to look inside the human mind, the human brain, for thousands of years. Well, coming out of the research labs just now, for our generation, is the possibility to do that. People envision this as being very difficult. You had to take a spaceship, shrink it down, inject it into the bloodstream. It was terribly dangerous. (Laughter) You could be attacked by white blood cells in the arteries. But now, we have a real technology to do this.

We're going to fly into my colleague Peter's brain. We're going to do it non-invasively using MRI. We don't have to inject anything. We don't need radiation. We will be able to fly into the anatomy of Peter's brain -- literally, fly into his body -- but more importantly, we can look into his mind. When Peter moves his arm, that yellow spot you see there is the interface to the functioning of Peter's mind taking place. Now you've seen before that with electrodes you can control robotic arms, that brain imaging and scanners can show you the insides of brains. What's new is that that process has typically taken days or months of analysis. We've collapsed that through technology to milliseconds, and that allows us to let Peter look at his brain in real time as he's inside the scanner. He can look at these 65,000 points of activation per second. If he can see this pattern in his own brain, he can learn how to control it.

There have been three ways to try to impact the brain: the therapist's couch, pills and the knife. This is a fourth alternative that you are soon going to have. We all know that as we form thoughts, they form deep channels in our minds and in our brains. Chronic pain is an example. If you burn yourself, you pull your hand away. But if you're still in pain in six months' or six years' time, it's because these circuits are producing pain that's no longer helping you. If we can look at the activation in the brain that's producing the pain, we can form 3D models and watch in real time the brain process information, and then we can select the areas that produce the pain. So put your arms back up and flex your bicep.

Now imagine that you will soon be able to look inside your brain and select brain areas to do that same thing. What you're seeing here is, we've selected the pathways in the brain of a chronic pain patient. This may shock you, but we're literally reading this person's brain in real time. They're watching their own brain activation, and they're controlling the pathway that produces their pain. They're learning to flex this system that releases their own endogenous opiates. As they do it, in the upper left is a display that's yoked to their brain activation of their own pain being controlled. When they control their brain, they can control their pain. This is an investigational technology, but, in clinical trials, we're seeing a 44 to 64 percent decrease in chronic pain patients.

This is not "The Matrix." You can only do this to yourself. You take control. I've seen inside my brain. You will too, soon. When you do, what do you want to control? You will be able to look at all the aspects that make you yourself, all your experiences. These are some of the areas we're working on today that I don't have time to go into in detail. But I want to leave with you the big question. We are the first generation that's going to be able to enter into, using this technology, the human mind and brain. Where will we take it?
There has been a lot of thought about the role that mirror neurons could play in language, and there are in particular two ways in which the two have been linked. One is if you imagine a young infant trying to learn to speak: say the parent is trying to talk him into saying "Papa" before "Mama", and therefore keeps telling "Papa" to the baby. What the baby needs to be able to do is match the sound with what it takes to produce a similar sound. Now we have quite a bit of evidence that when you, as an adult, hear the sounds of someone else, you actually activate the motor program it takes to reproduce that sound, making this mirror [neuron] system very useful in learning to reproduce words we hear in other people. Now in infants, we believe that babbling may actually be the way the kid finds out and trains its own mirror [neuron] system. We also know that in songbirds, we find things that are very similar to mirror neurons, and these songbirds need them to be able to learn to sing a song by listening to their parents. At this level we have good evidence for the link between the mirror [neuron] system and language.

There is another domain in which people have tried to link the two, and there the evidence is a little bit less strong: the idea that we understand language because we map it onto our own body. So if I asked you whether running was an action, part of why you would know it was an action is that you would actually activate your own motor program for running when you hear the word "running". And this kind of embodiment of the words you hear seems to work well, utilizing things that are similar to mirror neurons, for action words like "running", but it's less clear whether they are important for other aspects -- say, for words like "idea" or "generosity", which are not directly actions but concepts.
Imagine if you could record your life -- everything you said, everything you did, available in a perfect memory store at your fingertips, so you could go back and find memorable moments and relive them, or sift through traces of time and discover patterns in your own life that previously had gone undiscovered. Well that's exactly the journey that my family began five and a half years ago. This is my wife and collaborator, Rupal. And on this day, at this moment, we walked into the house with our first child, our beautiful baby boy. And we walked into a house with a very special home video recording system.

(Video) Man: Okay.

Deb Roy: This moment and thousands of other moments special for us were captured in our home because in every room in the house, if you looked up, you'd see a camera and a microphone, and if you looked down, you'd get this bird's-eye view of the room. Here's our living room, the baby bedroom, kitchen, dining room and the rest of the house. And all of these fed into a disc array that was designed for continuous capture. So here we are flying through a day in our home as we move from sunlit morning through incandescent evening and, finally, lights out for the day. Over the course of three years, we recorded eight to 10 hours a day, amassing roughly a quarter-million hours of multi-track audio and video.

So you're looking at a piece of what is by far the largest home video collection ever made. (Laughter) And what this data represents for our family at a personal level, the impact has already been immense, and we're still learning its value. Countless unsolicited, natural moments, not posed moments, are captured there, and we're starting to learn how to discover them and find them.

But there's also a scientific reason that drove this project, which was to use this natural longitudinal data to understand the process of how a child learns language -- that child being my son. And so with many privacy provisions put in place to protect everyone who was recorded in the data, we made elements of the data available to my trusted research team at MIT so we could start teasing apart patterns in this massive data set, trying to understand the influence of social environments on language acquisition. So we're looking here at one of the first things we started to do. This is my wife and I cooking breakfast in the kitchen, and as we move through space and through time, a very everyday pattern of life in the kitchen.

In order to convert this opaque, 90,000 hours of video into something that we could start to see, we use motion analysis to pull out, as we move through space and through time, what we call space-time worms. And this has become part of our toolkit for being able to look and see where the activities are in the data, and with it, trace the pattern of, in particular, where my son moved throughout the home, so that we could focus our transcription efforts on all of the speech environment around my son -- all of the words that he heard from myself, my wife, our nanny, and over time, the words he began to produce. So with that technology and that data and the ability to, with machine assistance, transcribe speech, we've now transcribed well over seven million words of our home transcripts. And with that, let me take you now for a first tour into the data.
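A "space-time worm" of the kind described above can be sketched very simply: trace the centroid of motion from frame to frame. The frame-differencing below is an assumed stand-in for the production motion analysis, and the frames are synthetic, just a bright blob moving across a dark room.

```python
import numpy as np

def space_time_worm(frames, threshold=30):
    """Trace a 'space-time worm': the centroid of moving pixels in
    each frame, found by simple frame differencing."""
    worm = []
    for t in range(1, len(frames)):
        diff = np.abs(frames[t].astype(int) - frames[t - 1].astype(int))
        ys, xs = np.nonzero(diff > threshold)
        if len(xs):
            worm.append((t, xs.mean(), ys.mean()))  # (time, x, y)
    return worm

# Toy footage: a 2x2 bright blob drifting rightward across 5 frames.
frames = np.zeros((5, 10, 10), dtype=np.uint8)
for t in range(5):
    frames[t, 4:6, t:t + 2] = 255

print(space_time_worm(frames))
```

Stacking these (time, x, y) points gives exactly the worm-like traces used to find where the activity is, and to follow one person's path through the house.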

So you've all, I'm sure, seen time-lapse videos where a flower will blossom as you accelerate time. I'd like you to now experience the blossoming of a speech form. My son, soon after his first birthday, would say "gaga" to mean water. And over the course of the next half-year, he slowly learned to approximate the proper adult form, "water." So we're going to cruise through half a year in about 40 seconds. No video here, so you can focus on the sound, the acoustics, of a new kind of trajectory: gaga to water.

(Audio) Baby: Gagagagagaga Gaga gaga gaga guga guga guga wada gaga gaga guga gaga wader guga guga water water water water water water water water water.

DR: He sure nailed it, didn't he?


So he didn't just learn water. Over the course of the 24 months, the first two years that we really focused on, this is a map of every word he learned in chronological order. And because we have full transcripts, we've identified each of the 503 words that he learned to produce by his second birthday. He was an early talker. And so we started to analyze why. Why were certain words born before others? This is one of the first results that came out of our study a little over a year ago that really surprised us. The way to interpret this apparently simple graph is, on the vertical is an indication of how complex caregiver utterances are based on the length of utterances. And the [horizontal] axis is time.

And all of the data, we aligned based on the following idea: Every time my son would learn a word, we would trace back and look at all of the language he heard that contained that word. And we would plot the relative length of the utterances. And what we found was this curious phenomenon, that caregiver speech would systematically dip to a minimum, making language as simple as possible, and then slowly ascend back up in complexity. And the amazing thing was that bounce, that dip, lined up almost precisely with when each word was born -- word after word, systematically. So it appears that all three primary caregivers -- myself, my wife and our nanny -- were systematically and, I would think, subconsciously restructuring our language to meet him at the birth of a word and bring him gently into more complex language. And the implications of this -- there are many, but one I just want to point out, is that there must be amazing feedback loops. Of course, my son is learning from his linguistic environment, but the environment is learning from him. That environment, people, are in these tight feedback loops and creating a kind of scaffolding that has not been noticed until now.
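The alignment idea itself is easy to sketch: for a given word, collect every caregiver utterance containing it, time-stamp each relative to the word's "birth", and plot utterance length. The transcript below is invented toy data; the real corpus had millions of words.

```python
# Toy transcript: (days relative to word birth, caregiver utterance)
# pairs containing the word "water". All values are invented.
utterances = [
    (-30, "do you want some of this nice water"),
    (-15, "here is your water bottle sweetie"),
    (-2, "water"),                  # near the birth, speech simplifies
    (0, "water water"),
    (14, "shall we pour the water into the big blue cup"),
]

def utterance_lengths_relative_to_birth(utterances):
    """Return (days_relative_to_birth, utterance_length_in_words) pairs."""
    return [(t, len(u.split())) for t, u in utterances]

pairs = utterance_lengths_relative_to_birth(utterances)
dip = min(pairs, key=lambda p: p[1])
print(pairs, "-> shortest utterance at day", dip[0])
```

Repeating this for every word and averaging the curves is what reveals the systematic dip-and-ascend pattern around each word's birth.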

But that's looking at the speech context. What about the visual context? We're not looking at -- think of this as a dollhouse cutaway of our house. We've taken those circular fish-eye lens cameras, and we've done some optical correction, and then we can bring it into three-dimensional life. So welcome to my home. This is a moment, one moment captured across multiple cameras. The reason we did this is to create the ultimate memory machine, where you can go back and interactively fly around and then breathe video-life into this system. What I'm going to do is give you an accelerated view of 30 minutes, again, of just life in the living room. That's me and my son on the floor. And there's video analytics that are tracking our movements. My son is leaving red ink. I am leaving green ink. We're now on the couch, looking out through the window at cars passing by. And finally, my son playing in a walking toy by himself.

Now we freeze the action, 30 minutes, we turn time into the vertical axis, and we open up for a view of these interaction traces we've just left behind. And we see these amazing structures -- these little knots of two colors of thread we call "social hot spots." The spiral thread we call a "solo hot spot." And we think that these affect the way language is learned. What we'd like to do is start understanding the interaction between these patterns and the language that my son is exposed to to see if we can predict how the structure of when words are heard affects when they're learned -- so in other words, the relationship between words and what they're about in the world.

So here's how we're approaching this. In this video, again, my son is being traced out. He's leaving red ink behind. And there's our nanny by the door.

(Video) Nanny: You want water? (Baby: Aaaa.) Nanny: All right. (Baby: Aaaa.)

DR: She offers water, and off go the two worms over to the kitchen to get water. And what we've done is use the word "water" to tag that moment, that bit of activity. And now we take the power of data and take every time my son ever heard the word water and the context he saw it in, and we use it to penetrate through the video and find every activity trace that co-occurred with an instance of water. And what this data leaves in its wake is a landscape. We call these wordscapes. This is the wordscape for the word water, and you can see most of the action is in the kitchen. That's where those big peaks are over to the left. And just for contrast, we can do this with any word. We can take the word "bye" as in "good bye." And we're now zoomed in over the entrance to the house. And we look, and we find, as you would expect, a contrast in the landscape where the word "bye" occurs much more in a structured way. So we're using these structures to start predicting the order of language acquisition, and that's ongoing work now.
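A wordscape, as described above, is essentially a 2D histogram of where a word co-occurred with activity. The floor coordinates and word tags below are invented for illustration; the real system pulled them from the tagged video traces.

```python
import numpy as np

# Hypothetical traces: (x, y) floor positions recorded whenever the
# word was heard. Coordinates are invented for illustration.
events = {
    "water": [(1, 1), (1, 1), (1, 2), (8, 8)],  # mostly the kitchen corner
    "bye":   [(8, 8), (9, 8), (8, 9)],          # mostly by the entrance
}

def wordscape(points, size=10):
    """Accumulate a 'wordscape': a 2D histogram of where a word
    co-occurred with activity. Peaks mark where the word is grounded."""
    grid = np.zeros((size, size))
    for x, y in points:
        grid[y, x] += 1
    return grid

water = wordscape(events["water"])
print("peak cell for 'water':", np.unravel_index(water.argmax(), water.shape))
```

The tall peaks in such a grid are the "skyscrapers" in the visualizations: "water" peaks in the kitchen, "bye" by the door.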

In my lab, which we're peering into now, at MIT -- this is at the media lab. This has become my favorite way of videographing just about any space. Three of the key people in this project, Philip DeCamp, Rony Kubat and Brandon Roy are pictured here. Philip has been a close collaborator on all the visualizations you're seeing. And Michael Fleischman was another Ph.D. student in my lab who worked with me on this home video analysis, and he made the following observation: that "just the way that we're analyzing how language connects to events which provide common ground for language, that same idea we can take out of your home, Deb, and we can apply it to the world of public media." And so our effort took an unexpected turn.

Think of mass media as providing common ground and you have the recipe for taking this idea to a whole new place. We've started analyzing television content using the same principles -- analyzing event structure of a TV signal -- episodes of shows, commercials, all of the components that make up the event structure. And we're now, with satellite dishes, pulling and analyzing a good part of all the TV being watched in the United States. And you don't have to now go and instrument living rooms with microphones to get people's conversations, you just tune into publicly available social media feeds.

So we're pulling in about three billion comments a month, and then the magic happens. You have the event structure, the common ground that the words are about, coming out of the television feeds; you've got the conversations that are about those topics; and through semantic analysis -- and this is actually real data you're looking at from our data processing -- each yellow line is showing a link being made between a comment in the wild and a piece of event structure coming out of the television signal. And the same idea now can be built up. And we get this wordscape, except now words are not assembled in my living room. Instead, the context, the common ground activities, are the content on television that's driving the conversations. And what we're seeing here, these skyscrapers now, are commentary that are linked to content on television. Same concept, but looking at communication dynamics in a very different sphere.
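The linking step can be illustrated with a deliberately crude semantic matcher: connect a comment to the TV event whose description shares the most words. The event descriptions and comment are invented; the production system used far richer semantic analysis.

```python
# Toy event structure extracted from the TV signal (invented examples).
events = {
    "state_of_union": "president obama state of the union address economy jobs",
    "superbowl_ad": "car commercial halftime superbowl funny",
}

def link_comment(comment, events):
    """Link a social-media comment to the event whose description
    shares the most words with it (word-overlap as a crude stand-in
    for semantic analysis)."""
    words = set(comment.lower().split())
    scores = {eid: len(words & set(desc.split()))
              for eid, desc in events.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(link_comment("loved the jobs part of obama's address", events))
```

Each successful link is one of those yellow lines: a comment in the wild attached to a piece of event structure from the broadcast.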

And so fundamentally, rather than, for example, measuring content based on how many people are watching, this gives us the basic data for looking at engagement properties of content. And just like we can look at feedback cycles and dynamics in a family, we can now open up the same concepts and look at much larger groups of people. This is a subset of data from our database -- just 50,000 out of several million -- and the social graph that connects them through publicly available sources. And if you put them on one plane, a second plane is where the content lives. So we have the programs and the sporting events and the commercials, and all of the link structures that tie them together make a content graph. And then the important third dimension. Each of the links that you're seeing rendered here is an actual connection made between something someone said and a piece of content. And there are, again, now tens of millions of these links that give us the connective tissue of social graphs and how they relate to content. And we can now start to probe the structure in interesting ways.

So if we, for example, trace the path of one piece of content that drives someone to comment on it, and then we follow where that comment goes, and then look at the entire social graph that becomes activated and then trace back to see the relationship between that social graph and content, a very interesting structure becomes visible. We call this a co-viewing clique, a virtual living room if you will. And there are fascinating dynamics at play. It's not one way. A piece of content, an event, causes someone to talk. They talk to other people. That drives tune-in behavior back into mass media, and you have these cycles that drive the overall behavior.

Another example -- very different -- another actual person in our database -- and we're finding at least hundreds, if not thousands, of these. We've given this person a name. This is a pro-amateur, or pro-am media critic who has this high fan-out rate. So a lot of people are following this person -- very influential -- and they have a propensity to talk about what's on TV. So this person is a key link in connecting mass media and social media together.

One last example from this data: Sometimes it's actually a piece of content that is special. So if we go and look at this piece of content, President Obama's State of the Union address from just a few weeks ago, and look at what we find in this same data set, at the same scale, the engagement properties of this piece of content are truly remarkable. A nation exploding in conversation in real time in response to what's on the broadcast. And of course, through all of these lines are flowing unstructured language. We can X-ray and get a real-time pulse of a nation, real-time sense of the social reactions in the different circuits in the social graph being activated by content.

So, to summarize, the idea is this: As our world becomes increasingly instrumented and we have the capabilities to collect and connect the dots between what people are saying and the context they're saying it in, what's emerging is an ability to see new social structures and dynamics that have previously not been seen. It's like building a microscope or telescope and revealing new structures about our own behavior around communication. And I think the implications here are profound, whether it's for science, for commerce, for government, or perhaps most of all, for us as individuals.

And so just to return to my son, when I was preparing this talk, he was looking over my shoulder, and I showed him the clips I was going to show to you today, and I asked him for permission -- granted. And then I went on to reflect, "Isn't it amazing, this entire database, all these recordings, I'm going to hand off to you and to your sister" -- who arrived two years later -- "and you guys are going to be able to go back and re-experience moments that you could never, with your biological memory, possibly remember the way you can now?" And he was quiet for a moment. And I thought, "What am I thinking? He's five years old. He's not going to understand this." And just as I was having that thought, he looked up at me and said, "So that when I grow up, I can show this to my kids?" And I thought, "Wow, this is powerful stuff."

So I want to leave you with one last memorable moment from our family. This is the first time our son took more than two steps at once -- captured on film. And I really want you to focus on something as I take you through. It's a cluttered environment; it's natural life. My mother's in the kitchen, cooking, and, of all places, in the hallway, I realize he's about to do it, about to take more than two steps. And so you hear me encouraging him, realizing what's happening, and then the magic happens. Listen very carefully. About three steps in, he realizes something magic is happening, and the most amazing feedback loop of all kicks in, and he takes a breath in, and he whispers "wow" and instinctively I echo back the same. And so let's fly back in time to that memorable moment.

(Video) DR: Hey. Come here. Can you do it? Oh, boy. Can you do it? Baby: Yeah. DR: Ma, he's walking.



DR: Thank you.

I want you to take a look at this baby. What you're drawn to are her eyes and the skin you love to touch. But today I'm going to talk to you about something you can't see -- what's going on up in that little brain of hers. The modern tools of neuroscience are demonstrating to us that what's going on up there is nothing short of rocket science. And what we're learning is going to shed some light on what the romantic writers and poets described as the "celestial openness" of the child's mind.

What we see here is a mother in India, and she's speaking Koro, which is a newly discovered language. And she's talking to her baby. What this mother -- and the 800 people who speak Koro in the world -- understands [is] that, to preserve this language, they need to speak it to the babies. And therein lies a critical puzzle. Why is it that you can't preserve a language by speaking to you and me, to the adults? Well, it's got to do with your brain. What we see here is that language has a critical period for learning. The way to read this slide is to look at your age on the horizontal axis. (Laughter) And you'll see on the vertical your skill at acquiring a second language. Babies and children are geniuses until they turn seven, and then there's a systematic decline. After puberty, we fall off the map. No scientists dispute this curve, but laboratories all over the world are trying to figure out why it works this way.

Work in my lab is focused on the first critical period in development -- and that is the period in which babies try to master which sounds are used in their language. We think, by studying how the sounds are learned, we'll have a model for the rest of language, and perhaps for critical periods that may exist in childhood for social, emotional and cognitive development. So we've been studying the babies using a technique that we use all over the world, with the sounds of all languages. The baby sits on a parent's lap, and we train them to turn their heads when a sound changes -- like from "ah" to "ee." If they do so at the appropriate time, the black box lights up and a panda bear pounds a drum. A six-monther adores the task.

What have we learned? Well, babies all over the world are what I like to describe as "citizens of the world." They can discriminate all the sounds of all languages, no matter what country we're testing and what language we're using, and that's remarkable because you and I can't do that. We're culture-bound listeners. We can discriminate the sounds of our own language, but not those of foreign languages. So the question arises: when do those citizens of the world turn into the language-bound listeners that we are? And the answer: before their first birthdays. What you see here is performance on that head-turn task for babies tested in Tokyo and the United States, here in Seattle, as they listened to "ra" and "la" -- sounds important to English, but not to Japanese. So at six to eight months the babies are totally equivalent. Two months later something incredible occurs. The babies in the United States are getting a lot better, babies in Japan are getting a lot worse, but both of those groups of babies are preparing for exactly the language that they are going to learn.

So the question is: what's happening during this critical two-month period? This is the critical period for sound development, but what's going on up there? So there are two things going on. The first is that the babies are listening intently to us, and they're taking statistics as they listen to us talk -- they're taking statistics. So listen to two mothers speaking motherese -- the universal language we use when we talk to kids -- first in English and then in Japanese.

(Video) English Mother: Ah, I love your big blue eyes -- so pretty and nice.

Japanese Mother: [Japanese]

Patricia Kuhl: During the production of speech, when babies listen, what they're doing is taking statistics on the language that they hear. And those distributions grow. And what we've learned is that babies are sensitive to the statistics, and the statistics of Japanese and English are very, very different. English has a lot of Rs and Ls. The distribution shows. And the distribution of Japanese is totally different, where we see a group of intermediate sounds, which is known as the Japanese "R." So babies absorb the statistics of the language and it changes their brains; it changes them from the citizens of the world to the culture-bound listeners that we are. But we as adults are no longer absorbing those statistics. We're governed by the representations in memory that were formed early in development.
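The distributional difference described here, two English sound categories where Japanese has one intermediate one, can be sketched with a toy "statistics-taking" learner. The formant values below are invented (loosely inspired by third-formant differences between /r/ and /l/); a simple two-means fit measures how separated the input looks.

```python
import random

random.seed(0)

# Invented third-formant-like values (Hz): the English input is bimodal
# (/r/ vs /l/), the Japanese input is unimodal (one intermediate sound).
english = [random.gauss(1700, 100) for _ in range(200)] + \
          [random.gauss(2900, 100) for _ in range(200)]
japanese = [random.gauss(2300, 150) for _ in range(400)]

def two_cluster_separation(samples, iters=20):
    """Fit two means with 1D k-means and report their gap in units of
    the overall standard deviation: a large value suggests the input
    statistics support two distinct categories."""
    m1, m2 = min(samples), max(samples)
    for _ in range(iters):
        a = [s for s in samples if abs(s - m1) <= abs(s - m2)]
        b = [s for s in samples if abs(s - m1) > abs(s - m2)]
        m1, m2 = sum(a) / len(a), sum(b) / len(b)
    mu = sum(samples) / len(samples)
    sd = (sum((s - mu) ** 2 for s in samples) / len(samples)) ** 0.5
    return abs(m1 - m2) / sd

sep_en = two_cluster_separation(english)
sep_jp = two_cluster_separation(japanese)
print("English separation:", round(sep_en, 2), "Japanese:", round(sep_jp, 2))
```

The English distribution yields a much larger normalized gap than the Japanese one, which is the statistical signal a baby's brain could use to settle on two categories versus one.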

So what we're seeing here is changing our models of what the critical period is about. We're arguing from a mathematical standpoint that the learning of language material may slow down when our distributions stabilize. It's raising lots of questions about bilingual people. Bilinguals must keep two sets of statistics in mind at once and flip between them, one after the other, depending on who they're speaking to.

So we asked ourselves, can the babies take statistics on a brand new language? And we tested this by exposing American babies who'd never heard a second language to Mandarin for the first time during the critical period. We knew that, when monolinguals were tested in Taipei and Seattle on the Mandarin sounds, they showed the same pattern. Six to eight months, they're totally equivalent. Two months later, something incredible happens. But the Taiwanese babies are getting better, not the American babies. What we did was expose American babies during this period to Mandarin. It was like having Mandarin relatives come and visit for a month and move into your house and talk to the babies for 12 sessions. Here's what it looked like in the laboratory.

(Video) Mandarin Speaker: [Mandarin]

PK: So what have we done to their little brains? (Laughter) We had to run a control group to make sure that just coming into the laboratory didn't improve your Mandarin skills. So a group of babies came in and listened to English. And we can see from the graph that exposure to English didn't improve their Mandarin. But look at what happened to the babies exposed to Mandarin for 12 sessions. They were as good as the babies in Taiwan who'd been listening for 10-and-a-half months. What it demonstrated is that babies take statistics on a new language. Whatever you put in front of them, they'll take statistics on.

But we wondered what role the human being played in this learning exercise. So we ran another group of babies in which the kids got the same dosage, the same 12 sessions, but over a television set and another group of babies who had just audio exposure and looked at a teddy bear on the screen. What did we do to their brains? What you see here is the audio result -- no learning whatsoever -- and the video result -- no learning whatsoever. It takes a human being for babies to take their statistics. The social brain is controlling when the babies are taking their statistics.

We want to get inside the brain and see this thing happening as babies are in front of televisions, as opposed to in front of human beings. Thankfully, we have a new machine, magnetoencephalography, that allows us to do this. It looks like a hair dryer from Mars. But it's completely safe, completely non-invasive and silent. We're looking at millimeter accuracy spatially and millisecond accuracy temporally, using 306 SQUIDs -- these are Superconducting QUantum Interference Devices -- to pick up the magnetic fields that change as we do our thinking. We're the first in the world to record babies in an MEG machine while they are learning.

So this is little Emma. She's a six-monther. And she's listening to various languages in the earphones that are in her ears. You can see, she can move around. We're tracking her head with little pellets in a cap, so she's free to move completely unconstrained. It's a technical tour de force. What are we seeing? We're seeing the baby brain. As the baby hears a word in her language the auditory areas light up, and then subsequently areas surrounding it that we think are related to coherence, getting the brain coordinated with its different areas, and causality, one brain area causing another to activate.

We are embarking on a grand and golden age of knowledge about child's brain development. We're going to be able to see a child's brain as they experience an emotion, as they learn to speak and read, as they solve a math problem, as they have an idea. And we're going to be able to invent brain-based interventions for children who have difficulty learning. Just as the poets and writers described, we're going to be able to see, I think, that wondrous openness, utter and complete openness, of the mind of a child. In investigating the child's brain, we're going to uncover deep truths about what it means to be human, and in the process, we may be able to help keep our own minds open to learning for our entire lives.

Thank you.


Hey, Vsauce, Michael here.

This appears blue, this appears yellow, and this appears green. Those of us with normal color vision can probably agree. But that doesn't change the fact that color is an illusion. Color, as we know it, does not exist in the outside world, beyond us, like gravity or protons do. Instead, color is created inside our heads. Our brains convert a certain range of the electromagnetic spectrum into color. I can measure the wavelength of radiation, but I can't measure, or observe, the experience of a color inside your mind.

So, how do I know that when you and I look at a strawberry, and, in my brain, this perception occurs, which I call "red," that, in your brain, a perception like this doesn't occur, which you have, of course, also learned to call red? We both call it red. We communicate effectively and walk away never knowing just how different each of our internal experiences really was.

Of course, we already know that not everybody sees color in exactly the same way. One example would be color blindness. But we can diagnose and discuss these differences because people with the condition fail to see things that most of us can.

Conceivably, though, there could be ways of seeing that cause colors to look different in different people's minds, without altering their performance on any tests we could come up with.

Of course, if that were the case, wouldn't some people think some colors look better than others? Or that some colors were more complementary than others? Well, yeah, but doesn't that already happen?

This matters because it shows how fundamentally, in terms of our perceptions, we are all alone in our minds.

Let's say I met an alien from a faraway solar system who, luckily enough, could speak English, but had never, and could never, feel pain. I could explain to the alien that pain is sent through A-delta and C fibers to the spinal cord. The alien could learn every single cell, pathway, process, and chemical involved in the feeling of pain. The alien could pass a biology exam about pain, and believe that pain, to us, generally is a bad thing. But, no matter how much he learned, the alien would never actually feel pain. Philosophers call these ineffable, raw feelings "qualia." And our inability to connect physical phenomena to these raw feelings, our inability to explain and share our own internal qualia, is known as the "explanatory gap." This gap is confronted when describing color to someone who's been blind their entire life.

Tommy Edison has never been able to see. He has a YouTube channel where he describes what being blind is like. It's an amazing channel. In one video he talks about colors, and how strange and foreign a concept they seem to him. Sighted people try to explain, for instance, that red is "hot" and blue is "cold." But, to someone who has never seen a single color, that just seems weird. And, as he explains, it has never caused him to finally see a color.

Some philosophers, like Daniel Dennett, argue that qualia may be private and ineffable simply because of a failure of our own language, not because they are, necessarily, always going to be impossible to share.

There may be an alien race that communicates in a language that causes colors to appear in your brain without your retina having to be involved at all. Or without you ever having actually needed to see the color yourself. Perhaps, even in English, he says, given millions and billions of words used in just the right way, it may be possible to adequately describe a color such that a blind person could see it for the first time. Or you could figure out, once and for all, whether, yes or no, you and your friend in fact see the same red.

But, for now, it remains the case that we have no way of knowing if my red is the same as your red. Maybe one day our language will allow us to share and find out, or, maybe, it never will. I know it's frustrating not to have an answer, but the mere fact that you can ask me about my internal experiences, and the mere fact that I can ask my friends and we can all collectively wonder about the concept of qualia, is quite incredible, and also quite human.

Animals can do all sorts of clever things that we do. They can use tools, problem-solve, communicate, cooperate, exhibit curiosity, and plan for the future. And, although we can't know for sure, many animals certainly act as if they feel emotions: loneliness, fear.

Apes have even been taught to use language to talk to us humans. It's a sort of sign language that they've used to do everything from answering questions, to expressing emotion, to even producing novel thoughts. Unlike any other animal, these apes are able to understand language and form responses at about the level of a 2.5-year-old human child.

But there is something that no signing ape has ever done: no ape has ever asked a question. Joseph Jordania's "Who Asked the First Question?" is a great read on this topic, and it's available for free online. For as long as we've been able to use sign language to communicate with apes, they have never wondered, out loud, about anything that we might know that they don't.

Of course, this does not mean that apes, and plenty of other animals, aren't curious. They obviously are. But what it suggests is that they lack a "theory of mind": an understanding that other people have separate minds; that they have knowledge, access to information, that you might not have. Even we humans aren't born with a theory of mind, and there's a famous experiment to test when a human child first develops one. It is called the "Sally-Anne" test.

During the test, researchers tell children a story about Sally and Anne. Sally and Anne have a box and a basket in their room. They also happen to have a delicious cookie. Now, Sally takes the cookie and puts it inside the box, and then Sally leaves the room. While Sally is gone, Anne comes over to the box, takes the cookie out, and puts the cookie inside the basket. Now, when Sally comes back, the researchers ask the children, "Where will Sally look for the cookie?" Obviously, Sally will look in the box; that's where she left it. She has no way of knowing what Anne did while she was gone. But, until the age of about four, children will insist that Sally will check the basket because, after all, that's where the cookie is. The child saw Anne move the cookie, so why wouldn't Sally also know? Young children fail to realize that Sally's mental representation of the situation, her access to information, can be different from their own.

And apes who know sign language, but never ask us questions, are doing the same thing. They're failing to recognize that other individuals have cognitive abilities like their own and can be used as sources of information.

So, we are all alone with our perceptions. We are alone in our own minds. We can both agree that chocolate tastes good. But I cannot climb into your consciousness and experience what chocolate tastes like to you. I can never know if my red looks the same as your red. But I can ask.

So, stay human, stay curious, and let the entire world know that you are. And, as always, thanks for watching.
I'd like to talk to you today about the human brain, which is what we do research on at the University of California. Just think about this problem for a second. Here is a lump of flesh, about three pounds, which you can hold in the palm of your hand. But it can contemplate the vastness of interstellar space. It can contemplate the meaning of infinity, ask questions about the meaning of its own existence, about the nature of God.

And this is truly the most amazing thing in the world. It's the greatest mystery confronting human beings: How does this all come about? Well, the brain, as you know, is made up of neurons. We're looking at neurons here. There are 100 billion neurons in the adult human brain. And each neuron makes something like 1,000 to 10,000 contacts with other neurons in the brain. And based on this, people have calculated that the number of permutations and combinations of brain activity exceeds the number of elementary particles in the universe.
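The figures quoted above can be sanity-checked with quick back-of-the-envelope arithmetic. The snippet below only multiplies out the numbers in the talk (100 billion neurons, 1,000 to 10,000 contacts each); it is an illustration of the scale, not a neuroscientific claim.

```python
# Rough scale of connectivity implied by the talk's figures.
neurons = 100e9                                # ~100 billion neurons
contacts_low, contacts_high = 1_000, 10_000    # contacts per neuron

# Counting each contact once per presynaptic neuron gives the total
# number of synaptic connections.
synapses_low = neurons * contacts_low          # 1e14
synapses_high = neurons * contacts_high        # 1e15

print(f"roughly {synapses_low:.0e} to {synapses_high:.0e} connections")
```

On the order of 10^14 to 10^15 connections, which is why the space of possible activity patterns is astronomically large.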

So, how do you go about studying the brain? One approach is to look at patients who had lesions in different parts of the brain, and study changes in their behavior. This is what I spoke about in the last TED. Today I'll talk about a different approach, which is to put electrodes in different parts of the brain, and actually record the activity of individual nerve cells in the brain. Sort of eavesdrop on the activity of nerve cells in the brain.

Now, one recent discovery that has been made by researchers in Italy, in Parma, by Giacomo Rizzolatti and his colleagues, is a group of neurons called mirror neurons, which are on the front of the brain in the frontal lobes. Now, it turns out there are neurons which are called ordinary motor command neurons in the front of the brain, which have been known for over 50 years. These neurons will fire when a person performs a specific action. For example, if I do that, and reach and grab an apple, a motor command neuron in the front of my brain will fire. If I reach out and pull an object, another neuron will fire, commanding me to pull that object. These are called motor command neurons that have been known for a long time.

But what Rizzolatti found was a subset of these neurons, maybe about 20 percent of them, will also fire when I'm looking at somebody else performing the same action. So, here is a neuron that fires when I reach and grab something, but it also fires when I watch Joe reaching and grabbing something. And this is truly astonishing. Because it's as though this neuron is adopting the other person's point of view. It's almost as though it's performing a virtual reality simulation of the other person's action.

Now, what is the significance of these mirror neurons? For one thing they must be involved in things like imitation and emulation. Because to imitate a complex act requires my brain to adopt the other person's point of view. So, this is important for imitation and emulation. Well, why is that important? Well, let's take a look at the next slide. So, how do you do imitation? Why is imitation important? Mirror neurons and imitation, emulation.

Now, let's look at culture, the phenomenon of human culture. If you go back in time about [75,000] to 100,000 years ago, let's look at human evolution, it turns out that something very important happened around 75,000 years ago. And that is, there is a sudden emergence and rapid spread of a number of skills that are unique to human beings like tool use, the use of fire, the use of shelters, and, of course, language, and the ability to read somebody else's mind and interpret that person's behavior. All of that happened relatively quickly.

Even though the human brain had achieved its present size almost three or four hundred thousand years ago, 100,000 years ago all of this happened very, very quickly. And I claim that what happened was the sudden emergence of a sophisticated mirror neuron system, which allowed you to emulate and imitate other people's actions. So that when there was a sudden accidental discovery by one member of the group, say the use of fire, or a particular type of tool, instead of dying out, this spread rapidly, horizontally across the population, or was transmitted vertically, down the generations.


So, this made evolution suddenly Lamarckian, instead of Darwinian. Darwinian evolution is slow; it takes hundreds of thousands of years. A polar bear, to evolve a coat, will take thousands of generations, maybe 100,000 years. A human being, a child, can just watch its parent kill another polar bear, and skin it and put the skin on its body, fur on the body, and learn it in one step. What the polar bear took 100,000 years to learn, it can learn in five minutes, maybe 10 minutes. And then once it's learned this it spreads in geometric proportion across a population.

This is the basis. The imitation of complex skills is what we call culture and is the basis of civilization. Now there is another kind of mirror neuron, which is involved in something quite different. And that is, just as there are mirror neurons for action, there are mirror neurons for touch. In other words, if somebody touches me, touches my hand, a neuron in the somatosensory cortex, in the sensory region of the brain, fires. But the same neuron, in some cases, will fire when I simply watch another person being touched. So, it's empathizing with the other person being touched.

So, most of them will fire when I'm touched in different locations. Different neurons for different locations. But a subset of them will fire even when I watch somebody else being touched in the same location. So, here again you have neurons which are enrolled in empathy. Now, the question then arises: If I simply watch another person being touched, why do I not get confused and literally feel that touch sensation merely by watching somebody being touched? I mean, I empathize with that person but I don't literally feel the touch. Well, that's because you've got receptors in your skin, touch and pain receptors, going back into your brain and saying "Don't worry, you're not being touched. So, empathize, by all means, with the other person, but do not actually experience the touch, otherwise you'll get confused and muddled."

Okay, so there is a feedback signal that vetoes the signal of the mirror neuron, preventing you from consciously experiencing that touch. But suppose you remove the arm from the equation: you simply anesthetize my arm, put an injection into my arm, anesthetize the brachial plexus so the arm is numb, and there are no sensations coming in. If I now watch you being touched, I literally feel it in my hand. In other words, you have dissolved the barrier between you and other human beings. So, I call them Gandhi neurons, or empathy neurons. (Laughter)

And this is not in some abstract metaphorical sense. All that's separating you from him, from the other person, is your skin. Remove the skin, you experience that person's touch in your mind. You've dissolved the barrier between you and other human beings. And this, of course, is the basis of much of Eastern philosophy, and that is there is no real independent self, aloof from other human beings, inspecting the world, inspecting other people. You are, in fact, connected not just via Facebook and Internet, you're actually quite literally connected by your neurons. And there are whole chains of neurons around this room, talking to each other. And there is no real distinctiveness of your consciousness from somebody else's consciousness.

And this is not mumbo-jumbo philosophy. It emerges from our understanding of basic neuroscience. So, you have a patient with a phantom limb. If the arm has been removed and you have a phantom, and you watch somebody else being touched, you feel it in your phantom. Now the astonishing thing is, if you have pain in your phantom limb, you squeeze the other person's hand, massage the other person's hand, that relieves the pain in your phantom hand, almost as though the neuron were obtaining relief from merely watching somebody else being massaged.

So, here you have my last slide. For the longest time people have regarded science and humanities as being distinct. C.P. Snow spoke of the two cultures: science on the one hand, humanities on the other; never the twain shall meet. So, I'm saying the mirror neuron system underlies the interface allowing you to rethink about issues like consciousness, representation of self, what separates you from other human beings, what allows you to empathize with other human beings, and also even things like the emergence of culture and civilization, which is unique to human beings. Thank you. (Applause)
Each of you possesses the most powerful, dangerous and subversive trait that natural selection has ever devised. It's a piece of neural audio technology for rewiring other people's minds. I'm talking about your language, of course, because it allows you to implant a thought from your mind directly into someone else's mind, and they can attempt to do the same to you, without either of you having to perform surgery. Instead, when you speak, you're actually using a form of telemetry not so different from the remote control device for your television. It's just that, whereas that device relies on pulses of infrared light, your language relies on pulses, discrete pulses, of sound.

And just as you use the remote control device to alter the internal settings of your television to suit your mood, you use your language to alter the settings inside someone else's brain to suit your interests. Languages are genes talking, getting things that they want. And just imagine the sense of wonder in a baby when it first discovers that, merely by uttering a sound, it can get objects to move across a room as if by magic, and maybe even into its mouth.

Now language's subversive power has been recognized throughout the ages in censorship, in books you can't read, phrases you can't use and words you can't say. In fact, the Tower of Babel story in the Bible is a fable and warning about the power of language. According to that story, early humans developed the conceit that, by using their language to work together, they could build a tower that would take them all the way to heaven. Now God, angered at this attempt to usurp his power, destroyed the tower, and then to ensure that it would never be rebuilt, he scattered the people by giving them different languages -- confused them by giving them different languages. And this leads to the wonderful irony that our languages exist to prevent us from communicating. Even today, we know that there are words we cannot use, phrases we cannot say, because if we do so, we might be accosted, jailed, or even killed. And all of this from a puff of air emanating from our mouths.

Now all this fuss about a single one of our traits tells us there's something worth explaining. And that is how and why did this remarkable trait evolve, and why did it evolve only in our species? Now it's a little bit of a surprise that to get an answer to that question, we have to go to tool use in the chimpanzees. Now these chimpanzees are using tools, and we take that as a sign of their intelligence. But if they really were intelligent, why would they use a stick to extract termites from the ground rather than a shovel? And if they really were intelligent, why would they crack open nuts with a rock? Why wouldn't they just go to a shop and buy a bag of nuts that somebody else had already cracked open for them? Why not? I mean, that's what we do.

Now the reason the chimpanzees don't do that is that they lack what psychologists and anthropologists call social learning. They seem to lack the ability to learn from others by copying or imitating or simply watching. As a result, they can't improve on others' ideas or learn from others' mistakes -- benefit from others' wisdom. And so they just do the same thing over and over and over again. In fact, we could go away for a million years and come back and these chimpanzees would be doing the same thing with the same sticks for the termites and the same rocks to crack open the nuts.

Now this may sound arrogant, or even full of hubris. How do we know this? Because this is exactly what our ancestors, the Homo erectus, did. These upright apes evolved on the African savanna about two million years ago, and they made these splendid hand axes that fit wonderfully into your hands. But if we look at the fossil record, we see that they made the same hand axe over and over and over again for one million years. You can follow it through the fossil record. Now if we make some guesses about how long Homo erectus lived, what their generation time was, that's about 40,000 generations of parents to offspring, and other individuals watching, in which that hand axe didn't change. It's not even clear that our very close genetic relatives, the Neanderthals, had social learning. Sure enough, their tools were more complicated than those of Homo erectus, but they too showed very little change over the 300,000 years or so that the Neanderthals lived in Eurasia.
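The "about 40,000 generations" figure follows from simple division, given an assumed generation time. The 25-year generation length below is an assumption chosen to reproduce the talk's number, not a value Pagel states.

```python
# One million years of the same unchanging hand axe, divided by an
# assumed generation time, yields the generations-of-watchers figure.
years_of_same_hand_axe = 1_000_000
assumed_generation_time = 25   # years; an assumption, not from the talk

generations = years_of_same_hand_axe // assumed_generation_time
print(generations)  # 40000
```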

Okay, so what this tells us is that, contrary to the old adage, "monkey see, monkey do," the surprise really is that all of the other animals really cannot do that -- at least not very much. And even this picture has the suspicious taint of being rigged about it -- something from a Barnum & Bailey circus.

But by comparison, we can learn. We can learn by watching other people and copying or imitating what they can do. We can then choose, from among a range of options, the best one. We can benefit from others' ideas. We can build on their wisdom. And as a result, our ideas do accumulate, and our technology progresses. And this cumulative cultural adaptation, as anthropologists call this accumulation of ideas, is responsible for everything around you in your bustling and teeming everyday lives. I mean the world has changed out of all proportion to what we would recognize even 1,000 or 2,000 years ago. And all of this because of cumulative cultural adaptation. The chairs you're sitting in, the lights in this auditorium, my microphone, the iPads and iPods that you carry around with you -- all are a result of cumulative cultural adaptation.

Now to many commentators, cumulative cultural adaptation, or social learning, is job done, end of story. Our species can make stuff, therefore we prospered in a way that no other species has. In fact, we can even make the "stuff of life" -- as I just said, all the stuff around us. But in fact, it turns out that some time around 200,000 years ago, when our species first arose and acquired social learning, that this was really the beginning of our story, not the end of our story. Because our acquisition of social learning would create a social and evolutionary dilemma, the resolution of which, it's fair to say, would determine not only the future course of our psychology, but the future course of the entire world. And most importantly for this, it'll tell us why we have language.

And the reason that dilemma arose is, it turns out, that social learning is visual theft. If I can learn by watching you, I can steal your best ideas, and I can benefit from your efforts, without having to put in the time and energy that you did into developing them. If I can watch which lure you use to catch a fish, or I can watch how you flake your hand axe to make it better, or if I follow you secretly to your mushroom patch, I can benefit from your knowledge and wisdom and skills, and maybe even catch that fish before you do. Social learning really is visual theft. And in any species that acquired it, it would behoove you to hide your best ideas, lest somebody steal them from you.

And so some time around 200,000 years ago, our species confronted this crisis. And we really had only two options for dealing with the conflicts that visual theft would bring. One of those options was that we could have retreated into small family groups. Because then the benefits of our ideas and knowledge would flow just to our relatives. Had we chosen this option, sometime around 200,000 years ago, we would probably still be living like the Neanderthals were when we first entered Europe 40,000 years ago. And this is because in small groups there are fewer ideas, there are fewer innovations. And small groups are more prone to accidents and bad luck. So if we'd chosen that path, our evolutionary path would have led into the forest -- and been a short one indeed.

The other option we could choose was to develop the systems of communication that would allow us to share ideas and to cooperate amongst others. Choosing this option would mean that a vastly greater fund of accumulated knowledge and wisdom would become available to any one individual than would ever arise from within an individual family or an individual person on their own. Well, we chose the second option, and language is the result.

Language evolved to solve the crisis of visual theft. Language is a piece of social technology for enhancing the benefits of cooperation -- for reaching agreements, for striking deals and for coordinating our activities. And you can see that, in a developing society that was beginning to acquire language, not having language would be like a bird without wings. Just as wings open up this sphere of air for birds to exploit, language opened up the sphere of cooperation for humans to exploit. And we take this utterly for granted, because we're a species that is so at home with language,

but you have to realize that even the simplest acts of exchange that we engage in are utterly dependent upon language. And to see why, consider two scenarios from early in our evolution. Let's imagine that you are really good at making arrowheads, but you're hopeless at making the wooden shafts with the flight feathers attached. Two other people you know are very good at making the wooden shafts, but they're hopeless at making the arrowheads. So what you do is -- one of those people has not really acquired language yet. And let's pretend the other one is good at language skills.

So what you do one day is you take a pile of arrowheads, and you walk up to the one that can't speak very well, and you put the arrowheads down in front of him, hoping that he'll get the idea that you want to trade your arrowheads for finished arrows. But he looks at the pile of arrowheads, thinks they're a gift, picks them up, smiles and walks off. Now you pursue this guy, gesticulating. A scuffle ensues and you get stabbed with one of your own arrowheads. Okay, now replay this scene now, and you're approaching the one who has language. You put down your arrowheads and say, "I'd like to trade these arrowheads for finished arrows. I'll split you 50/50." The other one says, "Fine. Looks good to me. We'll do that." Now the job is done.

Once we have language, we can put our ideas together and cooperate to have a prosperity that we couldn't have before we acquired it. And this is why our species has prospered around the world while the rest of the animals sit behind bars in zoos, languishing. That's why we build space shuttles and cathedrals while the rest of the world sticks sticks into the ground to extract termites. All right, if this view of language and its value in solving the crisis of visual theft is true, any species that acquires it should show an explosion of creativity and prosperity. And this is exactly what the archeological record shows.

If you look at our ancestors, the Neanderthals and the Homo erectus, our immediate ancestors, they're confined to small regions of the world. But when our species arose about 200,000 years ago, sometime after that we quickly walked out of Africa and spread around the entire world, occupying nearly every habitat on Earth. Now whereas other species are confined to places that their genes adapt them to, with social learning and language, we could transform the environment to suit our needs. And so we prospered in a way that no other animal has. Language really is the most potent trait that has ever evolved. It is the most valuable trait we have for converting new lands and resources into more people and their genes that natural selection has ever devised.

Language really is the voice of our genes. Now having evolved language, though, we did something peculiar, even bizarre. As we spread out around the world, we developed thousands of different languages. Currently, there are about 7,000 or 8,000 different languages spoken on Earth. Now you might say, well, this is just natural. As we diverge, our languages are naturally going to diverge. But the real puzzle and irony is that the greatest density of different languages on Earth is found where people are most tightly packed together.

If we go to the island of Papua New Guinea, we can find about 800 to 1,000 distinct human languages, different human languages, spoken on that island alone. There are places on that island where you can encounter a new language every two or three miles. Now, incredible as this sounds, I once met a Papuan man, and I asked him if this could possibly be true. And he said to me, "Oh no. They're far closer together than that." And it's true; there are places on that island where you can encounter a new language in under a mile. And this is also true of some remote oceanic islands.

And so it seems that we use our language, not just to cooperate, but to draw rings around our cooperative groups and to establish identities, and perhaps to protect our knowledge and wisdom and skills from eavesdropping from outside. And we know this because when we study different language groups and associate them with their cultures, we see that different languages slow the flow of ideas between groups. They slow the flow of technologies. And they even slow the flow of genes. Now I can't speak for you, but it seems to be the case that we don't have sex with people we can't talk to. (Laughter) Now we have to counter that, though, against the evidence we've heard that we might have had some rather distasteful genetic dalliances with the Neanderthals and the Denisovans.


Okay, this tendency we have, this seemingly natural tendency we have, towards isolation, towards keeping to ourselves, crashes head first into our modern world. This remarkable image is not a map of the world. In fact, it's a map of Facebook friendship links. And when you plot those friendship links by their latitude and longitude, it literally draws a map of the world. Our modern world is communicating with itself and with each other more than it has at any time in its past. And that communication, that connectivity around the world, that globalization now raises a burden. Because these different languages impose a barrier, as we've just seen, to the transfer of goods and ideas and technologies and wisdom. And they impose a barrier to cooperation.

And nowhere do we see that more clearly than in the European Union, whose 27 member countries speak 23 official languages. The European Union is now spending over one billion euros annually translating among their 23 official languages. That's something on the order of 1.45 billion U.S. dollars on translation costs alone. Now think of the absurdity of this situation. If 27 individuals from those 27 member states sat around a table, speaking their 23 languages, some very simple mathematics will tell you that you need an army of 253 translators to anticipate all the pairwise possibilities. The European Union employs a permanent staff of about 2,500 translators. And in 2007 alone -- and I'm sure there are more recent figures -- something on the order of 1.3 million pages were translated into English alone.
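The "very simple mathematics" behind the figure of 253 is just the number of unordered pairs you can form from 23 languages, i.e. the binomial coefficient C(23, 2). A quick sketch (this snippet is only an illustration of the count, not anything from the talk itself):

```python
from math import comb

# One translator pair is needed for each unordered pair of distinct
# languages, so the count is C(n, 2) = n * (n - 1) / 2.
languages = 23
pairs = comb(languages, 2)

print(pairs)  # 253 -- matches the "army of 253 translators" in the talk
```

With the EU's later expansion to 24 official languages, the same formula gives C(24, 2) = 276 pairs, which is why the combinatorics of direct translation scales so badly.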

And so if language really is the solution to the crisis of visual theft, if language really is the conduit of our cooperation, the technology that our species derived to promote the free flow and exchange of ideas, in our modern world, we confront a question. And that question is whether in this modern, globalized world we can really afford to have all these different languages.

To put it this way, nature knows no other circumstance in which functionally equivalent traits coexist. One of them always drives the other extinct. And we see this in the inexorable march towards standardization. There are lots and lots of ways of measuring things -- weighing them and measuring their length -- but the metric system is winning. There are lots and lots of ways of measuring time, but a really bizarre base 60 system known as hours and minutes and seconds is nearly universal around the world. There are many, many ways of imprinting CDs or DVDs, but those are all being standardized as well. And you can probably think of many, many more in your own everyday lives.

And so our modern world now is confronting us with a dilemma. And it's the dilemma that this Chinese man faces, whose language is spoken by more people in the world than any other single language, and yet he is sitting at his blackboard, converting Chinese phrases into English language phrases. And what this does is it raises the possibility to us that in a world in which we want to promote cooperation and exchange, and in a world that might be dependent more than ever before on cooperation to maintain and enhance our levels of prosperity, his actions suggest to us it might be inevitable that we have to confront the idea that our destiny is to be one world with one language.

Thank you.


Matt Ridley: Mark, one question. Svante found that the FOXP2 gene, which seems to be associated with language, was also shared in the same form in Neanderthals as us. Do we have any idea how we could have defeated Neanderthals if they also had language?

Mark Pagel: This is a very good question. So many of you will be familiar with the idea that there's this gene called FOXP2 that seems to be implicated in some ways in the fine motor control that's associated with language. The reason why I don't believe that tells us that the Neanderthals had language is -- here's a simple analogy: Ferraris are cars that have engines. My car has an engine, but it's not a Ferrari. Now the simple answer then is that genes alone don't, all by themselves, determine the outcome of very complicated things like language. What we know about this FOXP2 and Neanderthals is that they may have had fine motor control of their mouths -- who knows. But that doesn't tell us they necessarily had language.

MR: Thank you very much indeed.
