AP Psychology: Sensation and Perception
Right now, as you read this, your eyes capture the light reflected off the screen in front of you. Structures in your eyes change this pattern of light into signals that are sent to your brain and interpreted as language. The sensation of these symbols as words allows you to understand what you are reading.
Body Position Senses
Our vestibular sense tells us about how our body is oriented in space.
Three semicircular canals in the inner ear give the brain feedback about the body's orientation. The canals are basically tubes partially filled with fluid. When the position of your head changes, the fluid moves in the canals, causing hair-cell sensors in the canals to move. The movement of these hair cells activates neurons, and their impulses go to the brain. You have probably experienced the nausea and dizziness caused when the fluid in these canals is agitated.
Step 4: In the Brain:
The visual cortex is located in the occipital lobe. Some researchers say it is at this point where sensation ends and perception begins. Others say some interpretation of images occurs in the layers of cells in the retina. Still others say it occurs in the LGN region of the thalamus. That debate aside, the visual cortex of the brain receives the impulses from the cells of the retina, and the impulses activate feature detectors. Perception researchers David Hubel (1926-2013) and Torsten Wiesel (b. 1924) discovered that groups of neurons in the visual cortex respond to different types of visual images. The visual cortex has feature detectors for vertical lines, curves, motion, and many other features of images. What we perceive visually is a combination of these features.
Theories of Color Vision
The trichromatic (or Young-Helmholtz) theory hypothesizes that we have three types of cones in the retina: cones that detect the different colors blue, red, and green (the primary colors of light). These cones are activated in different combinations to produce all the colors of the visible spectrum. While this theory has some research support and makes sense intuitively, it cannot explain some visual phenomena, such as afterimages and color blindness. If you stare at one color for a while and then look at a white or blank space, you will see a color afterimage (green's afterimage is red, yellow's is blue). Individuals with dichromatic color blindness cannot see either red/green shades or blue/yellow shades. Monochromatic color blindness causes people to see only in shades of gray. Another theory of vision is needed to explain these phenomena.
All our senses work in a similar way. In general, our sensory organs receive stimuli. These messages go through a process called transduction, which means the signals are transformed into neural impulses. These neural impulses travel first to the thalamus, then on to different cortices of the brain (the sense of smell is the one exception to this rule). What we sense and perceive is influenced by many factors, including how long we are exposed to stimuli.
For example, you probably felt your socks when you put them on this morning, but you stopped feeling them after a while. You probably stopped perceiving the feeling of socks on your feet because of a combination of sensory adaptation (decreasing responsiveness to stimuli due to constant stimulation) and sensory habituation (our perception of sensations is partially due to how focused we are on them). What we perceive is determined by what sensations activate our senses and by what we focus on perceiving. We can voluntarily attend to
stimuli in order to perceive them, as you are doing right now, but paying attention can also be involuntary. If you are talking with a friend and someone across the room says your name, your attention will probably involuntarily switch across the room (sometimes called the cocktail-party phenomenon).
The exact distinction between what is sensation and what is perception is debated by psychologists and philosophers. For our purposes, we can think of sensation as activation of our senses and perception as the process of understanding these sensations.
Vision is the dominant sense in human beings. Sighted people use vision to gather information about their environment more than any other sense. The process of vision involves several steps.
Step 1: Gathering Light: First, light is reflected off objects and gathered by the eye. Visible light is a small section of the electromagnetic spectrum. The color that we perceive depends on several factors. One is light intensity, which describes how much energy the light contains and determines how bright the object appears. A second factor, light wavelength, determines the particular hue we see. Wavelengths longer than visible light are infrared waves, microwaves, and radio waves. Wavelengths shorter than visible light include ultraviolet waves and X-rays. We see different wavelengths within the visible light spectrum as different colors. The colors of the visible spectrum, in order from longest to shortest wavelength, are red, orange, yellow, green, blue, indigo, and violet. When you mix all these colors of light waves together, you get white light, or sunlight. Although we think of objects as possessing colors (a red shirt, a blue car), objects appear as they do as a result of the wavelengths of light they reflect. A red shirt reflects red light and absorbs other colors. Objects appear black because they absorb all colors and white because they reflect all wavelengths of light.
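The mapping from wavelength to perceived hue can be sketched as a simple lookup. The band boundaries below (in nanometers) are approximate and vary slightly between sources; they are an illustration, not authoritative values.

```python
# Approximate visible-spectrum bands in nanometers; boundaries are rough
# and differ a little from source to source.
VISIBLE_BANDS = [
    (380, "violet"),
    (450, "blue"),
    (495, "green"),
    (570, "yellow"),
    (590, "orange"),
    (620, "red"),
    (750, None),  # upper edge of visible light
]

def hue_for_wavelength(nm: float) -> str:
    """Return the rough color name for a wavelength, or note that it is invisible."""
    if nm < 380:
        return "shorter than visible (ultraviolet, X-rays)"
    if nm >= 750:
        return "longer than visible (infrared, microwaves, radio)"
    for (start, name), (end, _) in zip(VISIBLE_BANDS, VISIBLE_BANDS[1:]):
        if start <= nm < end:
            return name
    return "unknown"

print(hue_for_wavelength(650))  # a long visible wavelength: red
print(hue_for_wavelength(470))  # a shorter visible wavelength: blue
print(hue_for_wavelength(300))  # below the visible range: ultraviolet
```

Note how the longest visible wavelengths come out red and the shortest violet, matching the spectrum order given above.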
Step 2: Within the Eye: When we look at something, we turn our eyes toward the object, and the reflected light coming from the object enters our eye. The reflected light first enters the eye through the cornea, a protective covering that also helps focus the light. Then the light goes through the pupil. The pupil is like the shutter of a camera. The muscles that control the pupil (called the iris) open it (dilate) to let more light in and make it smaller (constrict) to let less light in. Through a process called accommodation, light that enters the pupil is focused by the lens; the lens is curved and flexible in order to focus the light. As the light passes through the lens, the image is flipped upside down and reversed. The focused, inverted image projects onto the retina, which is like a screen on the back of your eye. On this screen are specialized neurons that are activated by the different wavelengths of light.
Step 3: Transduction: The term transduction refers to the translation of incoming stimuli into neural signals. This term applies not only to vision but to all our senses. In vision, transduction occurs when light activates the neurons in the retina. The retina actually contains several layers of cells. The first layer of cells is directly activated by light. These cells are cones, which are activated by high light levels and detect color, and rods, which are activated by low light levels and detect black and white. These cells are arranged in a pattern on the retina. Rods outnumber cones (the ratio is approximately 20 to 1) and are distributed throughout the retina. Cones are concentrated toward the center of the retina. At the very center of the retina is an indentation called the fovea that contains the highest concentration of cones. If you focus on something, you are focusing the light onto your fovea and see it in color. Your peripheral vision may seem to be full color, but controlled experiments prove otherwise. If enough rods and cones fire in an area of the retina, they activate the next layer of bipolar cells. If enough bipolar cells fire, the next layer of cells, ganglion cells, is activated. The axons of ganglion cells make up the optic nerve, which sends these impulses to a specific region in the thalamus called the lateral geniculate nucleus (LGN). From there, the messages are sent to the visual cortices located in the occipital lobes of the brain. The spot where the optic nerve leaves the retina has no rods or cones, so it is referred to as the blind spot. The optic nerve is divided into two parts. Impulses from the left side of each retina go to the left hemisphere of the brain, and impulses from the right side of each retina go to the right hemisphere. The spot where the nerves cross each other is called the optic chiasm.
Opponent-process theory states that the sensory receptors arranged in the retina come in pairs: red/green pairs, yellow/blue pairs, and black/white pairs. If one sensor is stimulated, its pair is inhibited from firing. This theory explains color afterimages well. If you stare at the color red for a while, you fatigue the sensors for red. Then when you switch your gaze and look at a blank page, the opposite part of the pair for red will fire and you will see a green afterimage. The opponent-process theory also explains color blindness. If color sensors do come in pairs, and an individual is missing one pair, he or she should have difficulty seeing the shades in that pair.
Our auditory sense also uses energy in the form of waves, but sound waves are vibrations in the air rather than electromagnetic waves. Sound waves are created by vibrations, which travel through the air and are collected by our ears. These vibrations finally go through the process of transduction into neural messages and are sent to the brain. Sound waves, like all waves, have amplitude and frequency. Amplitude is the height of the wave and determines the loudness of the sound, which is measured in decibels. Frequency refers to the number of waves per second and determines pitch, which is measured in hertz. High-pitched sounds have high frequencies, and the waves are densely packed together. Low-pitched sounds have low frequencies, and the waves are spaced apart.
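The link between pitch and how densely packed the waves are follows from the standard wave equation, wavelength = speed / frequency. A minimal sketch, using the approximate speed of sound in air at room temperature:

```python
SPEED_OF_SOUND = 343.0  # meters per second in air at ~20 degrees C (approximate)

def wavelength_m(frequency_hz: float) -> float:
    """Wavelength in meters of a sound wave at the given frequency."""
    return SPEED_OF_SOUND / frequency_hz

# A high-pitched tone has a high frequency, so its waves are tightly packed
# (short wavelength); a low-pitched tone's waves are widely spaced (long).
high_pitch = wavelength_m(4000.0)  # well under a meter between wave peaks
low_pitch = wavelength_m(100.0)    # several meters between wave peaks
print(high_pitch < low_pitch)
```

Doubling the frequency halves the spacing between waves, which is why high-pitched sounds are described above as densely packed.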
Sound waves are collected in your outer ear, or pinna. The waves travel down the ear canal until they reach the eardrum, or tympanic membrane. This is a thin membrane that vibrates as the sound waves hit it. This membrane is attached to the first of a series of three bones collectively known as the ossicles. The eardrum connects with the hammer (malleus), which is connected to the anvil (incus), which connects to the stirrup (stapes). The vibration of the eardrum is transmitted by these three bones to the oval window (a membrane similar to the eardrum). The oval window is attached to the cochlea, a
structure shaped like a snail shell and filled with fluid. As the oval window vibrates, the fluid moves. The floor of the cochlea is the basilar membrane, which is lined with hair cells connected to the organ of Corti, a structure of neurons activated by the movement of those hair cells. When the fluid moves, the hair cells move and transduction occurs. The organ of Corti fires, and these impulses are transmitted to the brain via the auditory nerve.
Two different theories describe the processes involved in hearing pitch: place theory and frequency theory.
Place theory holds that the hair cells in the cochlea respond to different frequencies of sound based on where they are located in the cochlea. Some bend in response to high pitches and some to low. We sense pitch because the hair cells move in different places in the cochlea.
Research indicates that place theory accurately describes how hair cells sense the upper range of pitches but not the lower tones. Frequency theory explains how lower tones are sensed: by the rate at which the cells fire. We sense pitch because the hair cells fire at different rates (frequencies) in the cochlea.
An understanding of how hearing works explains hearing problems as well. Conduction deafness occurs when something goes wrong with the system that conducts the sound to the cochlea (in the ear canal, eardrum, ossicles, or oval window). Nerve (or sensorineural) deafness occurs when the hair cells in the cochlea are damaged, usually by loud noise. If you have ever been to a loud concert, football game, or other event loud enough to leave your ears ringing, chances are you came close to doing, or did do, permanent damage to your hearing. Prolonged exposure to noise that loud can permanently damage the hair cells in your cochlea, and these hair cells do not regenerate. Nerve deafness is much more difficult to treat, since no method has been found that will encourage the hair cells to regenerate.
When our skin is indented, pierced, or experiences a change in
temperature, our sense of touch is activated by this energy. We have
many different types of nerve endings in every patch of skin, and the exact
relationship between these different types of nerve endings and the sense of
touch is not completely understood. Some nerve endings respond to pressure
while others respond to temperature (and some respond to pain). We do know that
our brain interprets the amount of indentation (or temperature change) as the intensity
of touch, from a light touch to a hard blow. We also sense the placement of a touch by the place on our body where the nerve endings fire. Nerve endings are more concentrated in some parts of our body than in others. If we want to feel something, we usually use our fingertips, an area of high nerve concentration, rather than the back of the elbow, an area of low nerve concentration. If touch or temperature receptors are stimulated sharply, a different kind of nerve ending, called a pain receptor, will also fire. Pain is a useful response because it warns us of potential dangers.
Gate-control theory helps explain how we experience pain the way we do. Gate-control theory holds that some pain messages have a higher priority than others. When a higher-priority message is sent, the gate swings open for it and swings shut for a lower-priority message, which we will not feel. Of course, this gate is not a physical gate swinging in the nerve; it is just a convenient way to understand how pain messages are sent. When you scratch an itch, the gate swings open for your high-intensity scratching and shut for the
low-intensity itching, stopping the itching for a short period of time. Endorphins,
or pain-killing chemicals in the body, also swing the gate shut. Natural
endorphins in the brain, which are chemically similar to opiates like
morphine, control pain.
The nerves involved in the chemical senses respond to chemicals rather than to energy, like light and sound waves. Chemicals from food we eat are absorbed by taste buds on our tongue. Taste buds are located on papillae, the bumps you can see on your tongue, and are found all over the tongue and on some parts of the inside of the cheeks and roof of the mouth. Humans sense five different types of tastes: sweet, salty, sour, bitter, and umami (savory). Some taste buds respond more intensely to a specific taste and more weakly to others. People differ in their ability to taste food. The more densely packed the taste buds, the more chemicals are absorbed, and the more intensely the food is tasted. What we think of as the flavor of food is actually a combination of taste and smell.
Our sense of smell also depends on chemicals emitted by substances. Molecules of substances rise into the air, and some of them are drawn into our nose. The molecules settle in a mucous membrane at the top of each nostril and are absorbed by receptor cells located there. The exact types of receptor cells are not yet known; some researchers estimate that as many as 100 different types of smell receptors may exist. These receptor cells are linked to the olfactory bulb, which gathers the messages from the olfactory receptor cells and sends this information to the brain. Interestingly, the nerve fibers from the olfactory bulb connect to the brain at the amygdala and then to the hippocampus,
which make up the limbic system (responsible for emotional impulses and
memory). The impulses from all the other senses go through the thalamus
first before being sent to the appropriate cortices. This direct connection
to the limbic system may explain why smell is such a powerful
trigger for memories.
While our vestibular sense keeps track of the overall
orientation of our body, our kinesthetic sense gives
us feedback about the position and orientation of
specific body parts. Receptors in our muscles and joints send information to our brain about our limbs. This information, combined with visual feedback, lets us keep track of our body. This sense allows us to move through space without bumping into objects.
Perception is the process of understanding and interpreting sensations. Psychophysics is the study of the interaction between the sensations we receive and our experience of them. Researchers who study psychophysics try to uncover the rules our minds use to interpret sensations. Below are some of the basic principles of psychophysics and some basic perceptual rules for vision.
Research shows that while our senses are very acute, they do have their limits. The absolute threshold is the smallest amount of stimulus we can detect. For example, the absolute threshold for vision is the smallest amount of light we can detect, which is estimated to be a single candle flame from 30 miles away on a perfectly dark night. Most of us could detect a single drop of perfume a room away. Stimuli below our absolute threshold are said to be subliminal.
The difference threshold is the smallest amount of change needed in a stimulus before we detect a change. This threshold is computed by Weber's law, which states that the change needed is proportional to the original intensity of the stimulus. The more intense a stimulus, the more it will need to change before we notice a difference. For example, a light on a dimmer switch would need to increase in intensity by about 8% before we could notice that it got brighter.
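Weber's law can be written as delta-I = k * I, where k is a constant fraction for a given sense. A minimal sketch of the computation, using the approximately 8% figure for brightness from the dimmer-switch example (the intensity units here are arbitrary, for illustration only):

```python
def just_noticeable_difference(intensity: float, weber_fraction: float) -> float:
    """Smallest detectable change per Weber's law: delta_I = k * I."""
    return weber_fraction * intensity

K_BRIGHTNESS = 0.08  # the ~8% fraction for light mentioned in the example

# A dim light at 100 units must change by 8 units before we notice;
# a bright light at 1000 units must change by 80 units for the same effect.
print(just_noticeable_difference(100, K_BRIGHTNESS))
print(just_noticeable_difference(1000, K_BRIGHTNESS))
```

This is the key point of Weber's law: the absolute change needed grows with the starting intensity, while the proportional change stays constant.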
Signal Detection Theory
Real-world examples of perception are more complicated than controlled laboratory-perception experiments. Signal detection theory investigates the effects of the distractions and interferences we experience while perceiving the world. This area of research tries to predict what we will perceive among competing stimuli. Signal detection theory takes into account how motivated we are to detect certain stimuli and what we expect to perceive. These factors together are called response criteria (or receiver operating characteristics). By using factors like response criteria, signal detection theory tries to explain and predict the different perceptual mistakes we make. A false positive is when we think we perceive a stimulus that is not there (like thinking you see your friend in a crowd and wave to a stranger). A false negative is not perceiving a stimulus that is present (like missing the directions at the top of the paper that instruct you not to write on the test). In some situations, one type of error is much more serious than the other, and this importance can alter perception (if a surgeon has a false negative they would miss a tumor that was there; a false positive would be seeing a tumor that was not actually present).
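The four possible outcomes in signal detection theory (hit, miss/false negative, false alarm/false positive, and correct rejection) can be laid out as a small decision function; a minimal sketch:

```python
def classify(signal_present: bool, observer_said_yes: bool) -> str:
    """Label a single detection trial with its signal-detection outcome."""
    if signal_present and observer_said_yes:
        return "hit"
    if signal_present and not observer_said_yes:
        return "miss (false negative)"
    if not signal_present and observer_said_yes:
        return "false alarm (false positive)"
    return "correct rejection"

# Waving at a stranger you mistook for a friend: no signal, but you said yes.
print(classify(signal_present=False, observer_said_yes=True))
# Missing the directions printed at the top of a test: signal present, you said no.
print(classify(signal_present=True, observer_said_yes=False))
```

Response criteria shift how often each outcome occurs: a surgeon scanning for tumors would set a liberal criterion (accepting more false alarms) because a miss is far more costly.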
When we use top-down processing, we perceive by filling in gaps in what we sense, using our background knowledge. Our experiences create schemata (singular: schema), mental representations of how we expect the world to be. Our schemata influence how we perceive the world. Schemata can create a perceptual set, a predisposition to perceive something in a certain way. If you have ever seen images in clouds, you have experienced top-down processing: you use your background knowledge (schemata) to perceive the random shapes of clouds as organized forms.
Bottom-up processing, also called feature analysis, is the opposite of top-down processing. Instead of using our experience to perceive an object, we use only the features of the object itself to build a complete perception. We start our perception at the bottom with the individual characteristics of the image and put all those characteristics together into our final perception. Bottom-up processing can be hard to imagine because it is such an automatic process. The feature detectors in the visual cortex allow us to perceive basic features of objects, such as horizontal and vertical lines, curves, and motion. Our mind builds the picture from the bottom up using these basic characteristics.
We are constantly using both bottom-up and top-down processing as we perceive the world. Top-down processing is faster but more prone to error, while bottom-up processing takes longer but is more accurate.
Principles of Visual Perception
The rules we use for visual perception are too numerous to cover completely in these notes; however, some of the basic rules are important to know and understand for the AP exam. One of the first perceptual decisions our mind must make is the figure-ground relationship: what part of a visual image is the figure, and what part is the ground, or background? Several optical illusions play with this rule; a famous example is the vase/two-faces illusion.
At the beginning of the twentieth century, a group of researchers called the Gestalt
psychologists described the principles that govern how we perceive groups of objects.
The Gestalt psychologists pointed out that we normally perceive images as groups, not
as isolated elements. They thought this process was innate and inevitable. Several
factors influence how we will group objects:
Proximity: objects that are close together are more likely to be perceived as belonging to the same group.
Similarity: objects that are similar in appearance are more likely to be perceived as belonging to the same group.
Continuity: objects that form a continuous form (such as a trail or a continuous figure) are more likely to be perceived as belonging to the same group.
Closure: objects that make up a recognizable image are more likely to be perceived as belonging to the same group, even if the image contains gaps that the mind fills in (similar to top-down processing).
Common fate: objects that move together are more likely to be perceived as belonging to the same group.
Every object we see changes minutely from moment to moment due to our changing angle of vision, variations in light, and so on. Our ability to maintain a constant perception of an object despite these changes is called constancy. There are several types:
size constancy: objects closer to our eyes produce bigger images on our retinas, but we take distance into account in our estimations of size. We know that people in the distance are not actually tiny compared to those close to us.
shape constancy: objects viewed from different angles will produce different shapes on our retinas, but we know the shape of an object remains constant. For example, from a certain angle the top of a mug might look elliptical, but we still know that it is circular.
brightness constancy: we perceive objects as being a constant color even as the light reflecting off the object changes. For example, we perceive a brick wall as brick red, even as daylight fades and the wall appears gray.
Another aspect of perception is our ability to gauge motion. Our brains detect how fast images move across our retinas and take into account our own movement. The stroboscopic effect (as in a strobe light or flip book) causes us to perceive motion where there is none. The phi phenomenon occurs when a series of lightbulbs is turned on and off at a particular order and rate; the bulbs appear as a single moving light. If a spot of light is projected steadily onto the wall of an otherwise dark room and people are asked to stare at it, they will report seeing it move; this is the autokinetic effect.
One of the most important and frequently investigated parts of visual perception is depth. Without depth perception, we would perceive the world as a two-dimensional, flat surface, unable to differentiate between what is near and what is far. This limitation could obviously be dangerous. Researcher Eleanor Gibson used the visual cliff experiment to determine when human infants can perceive depth. An infant is placed on one side of a glass-topped table that creates the impression of a cliff. Actually, the glass extends all the way across, so the infant cannot possibly fall. Gibson found that an infant old enough to crawl will not crawl across the visual cliff, implying the child has depth perception. Researchers divide the cues that we use to perceive depth into two categories: monocular cues (depth cues that do not depend on having two eyes) and binocular cues (cues that depend on having two eyes).
If you have taken a drawing class, you have learned monocular depth cues. Artists use these cues to imply depth in their drawings. One of the most common is linear perspective: if you wanted to draw a railroad track that runs away from the viewer off into the distance, you would most likely start with two lines that converge somewhere close to the top of your picture. A water tower blocking our view of the train would be seen as closer to us due to the interposition cue (objects that block the view of other objects must be closer to us). If the train were running through a desert landscape, you might draw the rocks closest to the viewer in detail, while the landscape off in the distance would not be as detailed; this cue is called texture gradient (we can see details in texture close to us but not far away). Finally, your art teacher might teach you to use shadowing in your picture. By shading part of your picture, you can imply where the light source is, and thus imply the depth and position of objects.
Other cues for depth result from our anatomy. We see the world with two eyes set a certain distance apart, and this feature of our anatomy gives us the ability to perceive depth. Binocular disparity (or retinal disparity) occurs because each of our eyes sees any object from a slightly different angle. The brain gets both images. It knows that if the object is far away, the images will be similar, but the closer the object is, the more disparity there will be between the images coming from each eye. The other binocular cue is convergence. As an object gets closer to our face, our eyes must move toward each other to keep focused on the object. The brain receives feedback from the muscles controlling eye movement and knows that the more the eyes converge, the closer the object must be.
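The disparity-distance relationship has a simple form that machine stereo vision borrows from binocular vision: depth = (baseline x focal length) / disparity, so larger disparity means a closer object. A sketch with hypothetical camera numbers (the baseline approximates a human interpupillary distance; the focal length in pixels is made up for illustration):

```python
def depth_from_disparity(baseline_m: float, focal_px: float, disparity_px: float) -> float:
    """Depth in meters from two-'eye' disparity: larger disparity = closer object."""
    return baseline_m * focal_px / disparity_px

BASELINE = 0.065  # ~6.5 cm, roughly the distance between human eyes
FOCAL = 800.0     # hypothetical focal length in pixels

near = depth_from_disparity(BASELINE, FOCAL, disparity_px=40.0)  # large disparity
far = depth_from_disparity(BASELINE, FOCAL, disparity_px=4.0)    # small disparity
print(near < far)  # the object with more disparity between the two images is closer
```

This mirrors the rule stated above: distant objects produce nearly identical images in the two eyes (small disparity), while nearby objects produce quite different ones (large disparity).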
Effects of Culture on Perception
One area of psychology cross-cultural researchers are investigating is the effect of culture on perception. Research indicates that some of the perceptual rules psychologists once thought were innate are actually learned. In the famous Muller-Lyer illusion, one line looks longer than the other, even though both lines are actually the same length. People from noncarpentered cultures, which do not use right angles and corners in their buildings and architecture, are not usually fooled by the illusion. Cross-cultural research demonstrates that some basic perceptual sets are learned from our culture.
Now that you've reviewed the senses and how the brain changes these sensations into perceptions, you can interpret the term extrasensory perception (ESP) in a more specific way than most people can. Someone claiming to have ESP is claiming to perceive a sensation that is outside the senses discussed in these notes. Psychologists are skeptical of ESP claims, primarily because our senses are well understood, and researchers do not find reliable evidence that we can perceive sensations other than through our sight, smell, hearing, taste, touch, and vestibular/balance systems.