lexicon access

getting to your lexicon

James Tuttle

on 16 March 2012

Transcript of lexicon access

And Becca said "_____ _____."
_____ _____!"
She what??
TRACE Model of Activation
∑ Context + Wave Features + Previous Words = One of Many Lexical Entries
We're almost there - lexical entries have been narrowed down to one
Oh, it was a noun - a noun that means:
an intense feeling of deep affection
...from neurons?
A small part of a Lexicon
old cars
that damn dog
Q. What is Rahul saying to Anjali?
And Becca says "Olive Juice"
But James and you as a class hear "I love you"
Once we've accessed the lexical entry, we can define spoken words
Written words go in here
Phonemes pieced together here
Spoken language comes out here
1≤ 3
∑ Excitatory − ∑ Inhibitory ≥ Unit Threshold → Unit Activation

No hidden units exceed threshold, so no output units are activated
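The activation rule above can be sketched as a minimal function; the unit values and threshold below are made-up for illustration:

```python
def unit_activates(excitatory, inhibitory, threshold):
    """A unit activates when summed excitation minus summed
    inhibition meets or exceeds the unit's threshold."""
    return sum(excitatory) - sum(inhibitory) >= threshold

# Net input 0.9 - 0.3 = 0.6 meets a 0.5 threshold, so the unit fires.
print(unit_activates([0.4, 0.5], [0.3], 0.5))  # True
# Below threshold: the unit stays silent, so nothing downstream activates.
print(unit_activates([0.2], [0.1], 0.5))  # False
```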
Context Changes Activated Words in Current Trace
Q. What did Becca say to James?
To determine this, simply compare your lexical entries to the one you've just seen. What is the best match?
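A hedged sketch of that comparison: sum the evidence from context, wave features, and previous words for each candidate lexical entry, then take the best match. The candidate entries and scores here are invented for illustration:

```python
# Made-up evidence scores for two competing lexical entries.
candidates = {
    "olive juice": {"context": 0.2, "features": 0.9, "previous": 0.1},
    "i love you":  {"context": 0.9, "features": 0.8, "previous": 0.7},
}

def best_match(cands):
    """Pick the entry with the largest total evidence."""
    return max(cands, key=lambda word: sum(cands[word].values()))

# In a loving context, "i love you" accumulates more total evidence.
print(best_match(candidates))  # i love you
```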
So how do we get Lexicons...
Introducing Connectionism
The Love Unit
The Grandmother Unit
The Cat Unit
Caring About Individual Units = Wrong
Patterns of Activation
Foil = Hidden Weighting = Synaptic Connections
Your memories are the weights stored in your synaptic connections
100 Billion Neurons in the Brain
Some neurons can be innervated by 10,000 other neurons
That's 1e+10011 possible variations
A Neuron
A Universe
NETtalk could mimic the way a child learns to read
Words are provided to Input Units
Output units provide data for another program to speak
If output = desired output, the system maintains the weightings between hidden units
If output ≠ desired output, the system adjusts the weightings
Feed Forward Activation
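The learning loop described above can be sketched as a toy error-driven network: feed input forward, compare output to the desired result, keep the weights when they match, and nudge them when they don't. (The real NETtalk trained a hidden layer with backpropagation; this single weight vector and its training patterns are a simplification.)

```python
import random

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(3)]

def feed_forward(inputs):
    """Weighted sum of inputs, thresholded at zero."""
    net = sum(w * x for w, x in zip(weights, inputs))
    return 1 if net >= 0 else 0

def train(inputs, desired, rate=0.1):
    output = feed_forward(inputs)
    if output != desired:  # output ≠ desired: adjust the weightings
        for i, x in enumerate(inputs):
            weights[i] += rate * (desired - output) * x
    # output = desired: the weightings are maintained

# Repeated exposure to two toy patterns gradually fixes the errors.
for _ in range(20):
    train([1, 0, 1], desired=1)
    train([0, 1, 0], desired=0)

print(feed_forward([1, 0, 1]), feed_forward([0, 1, 0]))  # 1 0
```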
So what can a connectionist model do?
What does this have to do with Language Perception?
Instead what matters is everything in between:
The Patterns of Activation
Distributed Approach = Neuroscience
"Hearing Love"
Images of Flowers
Activation Patterns
Your Conception of "Love"
Foil, Memories, and Behavior
multiplied by x =
What They Don't Do
Saying a word is not understanding a word
Deal poorly with complex context
The TRACE model can deal with previously activated words but not abstract ideas
Each experience causes a crease
Emotions increase the size of the creases
Creases interact with each other
Pattern of Activation = Crease
Creases frequently activated together become habits
You determine the geometry of your brain
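The crease idea above resembles a Hebbian-style rule: patterns activated together strengthen their shared connection, emotion scales the strengthening, and frequently co-activated creases become habits. The pattern names and rates below are invented for illustration:

```python
from collections import defaultdict

strength = defaultdict(float)  # connection strength between two creases

def co_activate(a, b, emotion=1.0):
    """Strengthen the link between two creases activated together.
    Emotions increase the size of the crease, so they scale the step."""
    strength[frozenset((a, b))] += 0.1 * emotion

for _ in range(10):                          # a frequently repeated pairing
    co_activate("coffee", "morning")
co_activate("thunder", "fear", emotion=3.0)  # one emotional experience

print(round(strength[frozenset(("coffee", "morning"))], 2))  # 1.0 - a habit
print(round(strength[frozenset(("thunder", "fear"))], 2))    # 0.3
```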
Neural Network Learns to Read
Language Perception
By: Becca, Seth and James
In this context - what Becca Says Evokes: