Audio Transcript (Auto-generated)
- 00:01 - 00:03
This is Heather Alexander for Module Three.
- 00:04 - 00:10
The first question was about our reinforcement schedules. There are four basic
- 00:11 - 00:17
schedules: fixed interval, fixed ratio, variable interval, and variable ratio.
- 00:18 - 00:21
Your fixed interval schedule reinforces the target behavior
- 00:21 - 00:23
after a certain amount of time has passed.
- 00:23 - 00:26
It delivers a reward when a set amount of time
- 00:26 - 00:34
has elapsed since the previous reinforcement. For example, consider
- 00:35 - 00:37
exams for college students.
- 00:37 - 00:43
We know that we have final exams at the end of each class.
- 00:43 - 00:46
Therefore, if we study for those final exams,
- 00:46 - 00:49
that would be an example of a fixed interval schedule.
- 00:53 - 00:55
So studying is the target behavior,
- 00:55 - 00:58
the exam is the result, and the reinforcement is given
- 00:58 - 01:00
after the final exam at the end of the semester.
- 01:01 - 01:04
But because the exam occurs at those fixed intervals,
- 01:04 - 01:07
usually at the end of the class,
- 01:11 - 01:13
your performance at exam time is
- 01:13 - 01:15
what the grade depends on.
- 01:15 - 01:20
Your variable interval schedule delivers reinforcement after a
- 01:20 - 01:24
variable amount of time has passed
- 01:24 - 01:26
since the previous reinforcement.
- 01:27 - 01:32
A good example of this is high school students with pop quizzes.
- 01:32 - 01:36
You know your teacher is going to give pop quizzes throughout the semester,
- 01:36 - 01:39
but you don't know when they are going to occur.
- 01:40 - 01:41
Without knowing that schedule,
- 01:41 - 01:44
the student studies regularly throughout the entire
- 01:44 - 01:46
semester instead of at the last minute.
- 01:49 - 01:50
So that is our variable
- 01:51 - 01:52
interval schedule.
- 01:52 - 01:54
For the fixed ratio schedule,
- 01:54 - 01:56
a fixed ratio delivers reinforcement after a
- 01:56 - 01:59
certain number of responses are given.
- 02:01 - 02:07
For example, suppose a toymaker makes
- 02:07 - 02:08
a new doll
- 02:08 - 02:09
and
- 02:09 - 02:11
Target only buys those dolls
- 02:12 - 02:14
in batches of five.
- 02:15 - 02:17
The toymaker will now produce the dolls
- 02:17 - 02:19
at a higher rate so that he can make more money.
- 02:21 - 02:24
So the toys are only purchased when all five have been made.
- 02:25 - 02:27
The toy making is rewarded and reinforced
- 02:27 - 02:29
when five are delivered, because they get paid.
- 02:30 - 02:32
For a variable ratio schedule,
- 02:33 - 02:35
it delivers reinforcement after a variable
- 02:35 - 02:38
number of responses has been made.
- 02:38 - 02:42
The best example of this is a slot machine.
- 02:42 - 02:45
Gambling rewards unpredictably.
- 02:45 - 02:48
Every time you pull the lever at a slot machine,
- 02:48 - 02:51
you want to keep pulling it in the hope that you will win.
- 02:56 - 02:58
You don't know when you are going to win.
- 02:58 - 03:00
Today it could be six pulls
- 03:00 - 03:05
of the slot machine, and the next time it could be 35 pulls.
- 03:05 - 03:06
The variable
- 03:07 - 03:09
changes each time you play the slot machine.
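To make these four rules concrete, here is a minimal Python sketch of when each basic schedule would deliver reinforcement. The function names, the 60-second interval, the ratio of 5, and the averages are illustrative assumptions, not values specified above.

```python
import random

# Illustrative sketch of the four basic schedules; names and numbers
# here are assumptions chosen for the example, not from the lecture.

def fixed_interval_due(seconds_since_last_reward, interval=60.0):
    # Reinforce the first response after a set amount of time has passed
    # since the previous reinforcement (e.g., the end-of-semester exam).
    return seconds_since_last_reward >= interval

def variable_interval_due(seconds_since_last_reward, required_wait):
    # Same idea, but required_wait is re-drawn around an average after
    # every reinforcement, so the learner cannot predict it (pop quiz).
    return seconds_since_last_reward >= required_wait

def fixed_ratio_due(responses_since_last_reward, ratio=5):
    # Reinforce after a set number of responses (e.g., every 5 dolls made).
    return responses_since_last_reward >= ratio

def variable_ratio_due(responses_since_last_reward, required_responses):
    # Same idea, but required_responses is re-drawn after every
    # reinforcement (slot machine: 6 pulls this time, 35 the next).
    return responses_since_last_reward >= required_responses

# Re-drawing the variable requirements after each reinforcement:
next_wait = random.uniform(0, 120)    # variable interval averaging ~60 s
next_pulls = random.randint(1, 20)    # variable ratio averaging ~10 pulls
```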
- 03:12 - 03:15
Naturally occurring reinforcement is when
- 03:15 - 03:18
a person's positive behaviors are reinforced naturally.
- 03:18 - 03:19
For example,
- 03:19 - 03:24
if a toddler says "ball" for the first time and the parent gives the ball to the child,
- 03:24 - 03:27
the child is reinforced by the ball and is more likely to say "ball" again.
- 03:29 - 03:32
A puzzle is another example of natural reinforcement.
- 03:32 - 03:34
If your child has trouble doing a puzzle and
- 03:34 - 03:37
asks for help, then once the puzzle is complete,
- 03:37 - 03:40
that is natural reinforcement, and the likelihood that they are going
- 03:40 - 03:42
to ask for help in the future is higher.
- 03:42 - 03:43
So asking for help is the target behavior.
- 03:46 - 03:57
The different basic schedules also have different results.
- 04:01 - 04:05
So for a fixed interval, it tends to slow down the response
- 04:05 - 04:08
rate right after reinforcement, and the rate increases towards the end of the interval.
- 04:08 - 04:13
Think about studying for the final exam:
- 04:15 - 04:18
you slow down right after your reinforcement, and you know that towards the
- 04:18 - 04:23
end you can study more to get a better grade. The variable interval schedule
- 04:23 - 04:26
usually generates a steady rate due to the
- 04:26 - 04:29
uncertainty of when the next reward is given.
- 04:29 - 04:32
Think about that pop quiz.
- 04:33 - 04:37
You don't know when they're going to have the pop quiz. Therefore
- 04:37 - 04:41
you continue to study so that you can get a good grade.
- 04:41 - 04:44
For your fixed ratio schedule, it produces high rates of
- 04:44 - 04:47
response until the reward, and then a pause in behavior.
- 04:48 - 04:51
This was like
- 04:52 - 04:54
the toys made in batches of five.
- 04:54 - 04:57
They made more in the beginning
- 04:57 - 05:00
so that they could sell more to make more money as the reward.
- 05:00 - 05:06
For your variable ratio schedule, it usually produces high and steady response rates.
- 05:07 - 05:12
Again, when you get addicted to a slot machine, you want to win.
- 05:12 - 05:17
You are going to keep responding by pulling that lever.
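As a quick reference, the response patterns just described can be collected in a small Python dictionary. The wording in the table is a paraphrase of the points above; the variable name is my own.

```python
# Quick-reference summary of the response patterns described above.
RESPONSE_PATTERNS = {
    "fixed interval":    "slow right after reinforcement, increasing toward the end of the interval",
    "variable interval": "steady responding, driven by not knowing when the next reward comes",
    "fixed ratio":       "high rate of response until the reward, then a pause in behavior",
    "variable ratio":    "high, steady response rates (e.g., slot-machine play)",
}

for schedule, pattern in RESPONSE_PATTERNS.items():
    print(f"{schedule}: {pattern}")
```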
- 05:21 - 05:22
Next are discriminative
- 05:23 - 05:26
and non-discriminative schedules of reinforcement.
- 05:27 - 05:30
A discriminative schedule uses two or more basic schedules
- 05:30 - 05:33
of reinforcement, usually in a random sequence.
- 05:33 - 05:37
If I were using a
- 05:38 - 05:42
combination of variable ratio and
- 05:42 - 05:44
fixed ratio
- 05:44 - 05:45
schedules together,
- 05:46 - 05:51
I would randomize them, so I wouldn't use one every single time;
- 05:51 - 05:53
I would use one some of the time and the other the rest of the time.
- 05:53 - 05:56
Non-discriminative is just like multiple schedules,
- 05:56 - 05:58
except that the mixed schedule has no
- 05:58 - 06:01
discriminative stimuli related to the independent schedules.
- 06:03 - 06:05
I will go into more detail on that
- 06:05 - 06:08
here in just a few minutes when we get to mixed schedules.
- 06:14 - 06:18
A concurrent schedule occurs when two or more contingencies of
- 06:18 - 06:21
reinforcement operate independently and simultaneously
- 06:21 - 06:22
with two or more behaviors.
- 06:26 - 06:30
A good example of this is the pigeon example,
- 06:32 - 06:38
where a pigeon in a box might be faced with two pecking keys.
- 06:38 - 06:41
The pigeon can make a response on either key,
- 06:41 - 06:46
and a food reinforcement might follow a peck on either one.
- 06:47 - 06:52
But the schedules arranged for those two keys could be two completely different
- 06:54 - 06:58
schedules of reinforcement. So you've got two things going on at once,
- 06:58 - 07:01
both operating on that pecking behavior.
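A minimal code sketch of this setup may help. Putting a variable ratio on the left key and a fixed interval on the right key is my own choice for illustration; the point, as above, is simply that the two schedules run independently and at the same time.

```python
import random
import time

# Sketch of a concurrent schedule: two contingencies of reinforcement
# operate independently and simultaneously, one per behavior (key).
# The specific component schedules below are assumptions for the example.

class LeftKey:                                    # variable ratio component
    def __init__(self, average_ratio=10):
        self.average = average_ratio
        self.pecks = 0
        self.requirement = random.randint(1, 2 * average_ratio - 1)

    def peck(self):
        self.pecks += 1
        if self.pecks >= self.requirement:        # pays off on its own terms
            self.pecks = 0
            self.requirement = random.randint(1, 2 * self.average - 1)
            return "food"
        return None

class RightKey:                                   # fixed interval component
    def __init__(self, interval=30.0):
        self.interval = interval
        self.last_reward = time.monotonic()

    def peck(self):
        if time.monotonic() - self.last_reward >= self.interval:
            self.last_reward = time.monotonic()   # independent of the left key
            return "food"
        return None

# Both keys are available at the same time; the pigeon may respond on either,
# and each key's schedule operates without reference to the other.
left_key, right_key = LeftKey(), RightKey()
```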
- 07:05 - 07:09
For multiple and mixed schedules: they are the same
- 07:10 - 07:11
in that both use two
- 07:11 - 07:14
or more schedules of reinforcement in random order.
- 07:19 - 07:24
The difference is that with multiple schedules there is a discriminative stimulus.
- 07:24 - 07:27
So for multiple schedules
- 07:28 - 07:34
you could, if we take the pigeon example again, have a red light turn on
- 07:36 - 07:40
for the one key, and it
- 07:40 - 07:44
is on a fixed ratio schedule and then you have
- 07:44 - 07:49
a green light for the other key, and that is on
- 07:52 - 07:56
a fixed interval.
- 07:57 - 07:59
So they're on two different schedules,
- 07:59 - 08:02
meaning two different things, and you alternate them in random order.
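Here is a minimal sketch of that difference, reusing the red/green lights and the fixed ratio and fixed interval components from the pigeon example; the helper names are my own.

```python
import random

# Multiple vs. mixed schedules: both alternate two or more component
# schedules in random order. A multiple schedule also presents a
# discriminative stimulus signaling which component is active; a mixed
# schedule presents no such signal.

COMPONENTS = ["fixed ratio", "fixed interval"]
LIGHTS = {"fixed ratio": "red light", "fixed interval": "green light"}

def next_component():
    return random.choice(COMPONENTS)      # components alternate in random order

def multiple_schedule_trial():
    component = next_component()
    return component, LIGHTS[component]   # signaled: the bird can discriminate

def mixed_schedule_trial():
    component = next_component()
    return component, None                # unsignaled: no discriminative stimulus
```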
- 08:10 - 08:15
And last, we have alternative schedules and conjunctive schedules.
- 08:15 - 08:18
In an alternative schedule, a response is reinforced
- 08:18 - 08:21
on whichever schedule is met first,
- 08:21 - 08:24
and in a conjunctive schedule, a response is reinforced on
- 08:24 - 08:27
both (or all) schedules at the same time.
- 08:27 - 08:29
So for an alternative schedule,
- 08:30 - 08:34
you might be reinforcing a child after a certain amount of time
- 08:35 - 08:37
has passed. So, say
- 08:38 - 08:42
every time one minute passes, you reinforce,
- 08:42 - 08:45
or every 10 times the behavior happens.
- 08:45 - 08:49
Whereas with the conjunctive schedule, a response is reinforced for both.
- 08:49 - 08:52
So you would have to meet the one minute and the
- 08:52 - 08:56
10 times at the same time to reinforce the behavior.
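Using the one-minute and ten-response numbers from this example, a minimal sketch of the two combination rules might look like this (the function names are my own).

```python
# Alternative vs. conjunctive schedules, using the example numbers above:
# a one-minute time requirement and a ten-response requirement.
# Only the rule for combining the two requirements differs.

INTERVAL_SECONDS = 60   # "every time one minute passes"
RESPONSES_NEEDED = 10   # "every 10 times the behavior happens"

def alternative_met(elapsed_seconds, responses):
    # Reinforce on whichever requirement is met first (either one is enough).
    return elapsed_seconds >= INTERVAL_SECONDS or responses >= RESPONSES_NEEDED

def conjunctive_met(elapsed_seconds, responses):
    # Reinforce only when both requirements are met at the same time.
    return elapsed_seconds >= INTERVAL_SECONDS and responses >= RESPONSES_NEEDED

print(alternative_met(45, 10))   # True: the response requirement was met first
print(conjunctive_met(45, 10))   # False: the one-minute requirement is not yet met
```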