Ch 7: How do we know students have learned?
Isn't this the million dollar question?
We teach our brains out, but how do we know that the students absorbed any of it?
If you want to see what a student really knows, you have to:
1. Test them in multiple ways (MI)
2. Make sure the test has higher level questions (Bloom's)
-They should "create," "evaluate," "make judgements," etc.
Assessments are not always tests!
They can be:
Goals of Assessment:
Show what student can do
Show where student has problems
Show misconceptions student has
What clarifications does the teacher need to make?
Track progress of student (assessments over time)
Help teacher make decisions about how/what to teach
Show if new resources/teaching materials are needed
Help teacher make decisions (move to next grade level)
Good assessments give rich data that the teacher can use to improve instruction.
What if students do well?
Indicates that students understand the content; move on.
What if students don't do well?
May not have covered the information in a way that makes sense.
May not have made adequate connections.
Probably need to reteach the content in a new way.
What if some students don't do well?
Was it taught in a way that they learn?
What misconceptions did they have?
Where did they go wrong?
May need to re-teach via small groups.
But what if my assessment is designed poorly?
That means that the data you gained from your students may not be correct. That means that students may know more/less than the assessment indicates.
Be careful how you format your assessments.
Make sure they match the objective being measured.
Make sure they align with the data you want.
Make sure you can draw sound conclusions from them.
The night before the test:
You have studied and feel confident. You talk to another student about the content. You can explain all the elements. You can even give examples of each concept.
The day of the test:
You don't feel well. You miss breakfast. You get to your test and you see questions that sort of relate to what you studied but not really. You know you know the information, but the test questions are not what you expected.
You might actually know the content, but the teacher (based upon the test) won't know that.
Assess in the ways students learn best.
Make it as "real world" as you can.
Assessments are more than "let's make a test".
Assessments are designed to allow the teacher to make judgements on what a student does/doesn't know.
So, there is a lot of pressure on the teacher to make sure the assessment is:
Designed so the student can show what they know
Measures the objective it seeks to measure
Doesn't measure several objectives with convoluted results
Matches the way the information has been taught
Ex: What if I give you a scale and tell you to measure how long something is?
(The assessment doesn't match what I expect you to be able to do.)
Ex: What if I've taught you definitions of something but I want you to be able to compare/contrast?
(The assessment doesn't match the objective.)
Ex: What if you have trouble reading and I give you math word problems. You don't score well, so I assume you can't do the math.
(The assessment is measuring two different things.)
Ex: What if we talk and watch videos about a topic but then I ask you (as an assessment) to make a model?
(The assessment doesn't match the instruction.)
There are lots of types of assessments!
2 Major Types:
Standardized Assessments:
Overarching for a set of students
Designed by test-makers
Have been field tested
Probably don't have many design flaws
Teacher-Made Assessments:
Can be based upon what was specifically taught
Can be differentiated for students in the class
May have design flaws
Not field tested
After giving the assessment, we have to know how to interpret the data.
Different assessments have a specific way you are to interpret the data.
If you interpret data in the wrong way (because the test isn't designed to measure it that way), you may be drawing incorrect conclusions about the student!
Norm-Referenced Evaluations: (Standardized Test)
Student's score is compared to
reference group scores
Test is given to sample group. Sample groups scores serve as baseline.
Individual student score is judged against baseline.
Student A's score falls within the 90th percentile.
Doesn't mean he made a 90 on the test!
Means that based upon the baseline data,
90% of other students who took the test would have scored at or below Student A's score.
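The percentile-rank logic above can be sketched in code. This is a minimal illustration; the baseline scores are invented, not real norming data:

```python
# Percentile rank: the share of baseline (sample group) scores that fall
# at or below the student's score. Baseline numbers are made up for illustration.
def percentile_rank(student_score, baseline_scores):
    at_or_below = sum(1 for s in baseline_scores if s <= student_score)
    return 100 * at_or_below / len(baseline_scores)

baseline = [55, 60, 62, 65, 70, 72, 75, 78, 80, 85]  # hypothetical sample group
print(percentile_rank(78, baseline))  # ties or beats 8 of 10 scores -> 80.0
```

Note that the student's raw score (78 here) is not the percentile; the percentile only says where the score sits relative to the sample group.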
Criterion-Referenced Evaluations:
Student's score/ability is judged against a set criterion. (Compared to a set standard, NOT other people.)
Easier to say, "Yes! The student met this criterion."
Doesn't matter what others have done.
Criterion: Students must be able to read 120 words in one minute.
Student A reads 125 words per minute. He meets the criterion.
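A criterion-referenced judgment is just a comparison against the fixed standard, independent of other students. A sketch using the reading-fluency numbers from the example above:

```python
# Criterion-referenced check: compare the student to a fixed standard,
# not to other students. Threshold from the reading-fluency example above.
CRITERION_WPM = 120  # words per minute

def meets_criterion(words_per_minute):
    return words_per_minute >= CRITERION_WPM

print(meets_criterion(125))  # Student A -> True
print(meets_criterion(110))  # False, no matter what classmates scored
```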
Easier to make judgements about individual students.
Doesn't tell you what to do next
Easy to write for low level skills.
Not a foreign concept. Somewhat arbitrary.
Ex: I made an A in the class. That means I did well right?
What if everyone made an A in the class? What if I was the only A in the class?
Ex: I made a 75 on the test. What does that say about what I know? What if the test was designed well? What if it wasn't? How do I improve?
Ex: I made a 99 on the test. Does that mean I know 99 percent of the information? Where do I go from here?
As a parent: My child made an A in reading. So what exactly can he do? What can't he do? What does he need help with? Is he up to par with the others? Is he ready for the next year?
If I made a 90 on a test, does that mean I've met 90% of the criteria (objectives of the test)? Probably not; teacher-made tests aren't usually designed that well.
Does it mean I scored within the 90th percentile in comparison to my peers? Probably not; teacher-made tests don't usually compare students in that way.
Standards-Based Report Card:
More information as to what the student can/can't do
Specific to criterion (standards) on the grade level
On a 1-4 scale: 1-Not Mastered, 2-In Progress, 3-Proficient, 4-Mastery
But how are "Proficient" and "Mastery" different?
Still somewhat vague; relies on student samples/teacher feedback
Standard listed here
Teacher inputs score (1-4).
Can give score over different periods.
Grading Period 1: 2
Grading Period 2: 2
Grading Period 3: 3
Placement/Diagnostic Assessment:
Used to "place" a student in the correct class (based upon rigor)
Shows where in the subject a student is currently functioning
Given before teaching to know where students are (to know where to start teaching)
Shows what a student knows. Helps make decisions about Learning Support or what class to start with.
Formative Assessment:
Given while learning is occurring
Able to see if students are "getting it"
Adjust teaching based upon the data
If formative assessment scores are low, reteach in another way. Clear up misconceptions.
Ex: Teacher Questions, Observations, Oral Presentation, Pop Quiz, Ticket Out the Door.
Goal: To get data before lesson is finished so you can adjust the instruction
How do we determine what the misconception is?
We have given an assessment and there are poor scores.
Where did they go wrong?
Lots of worked-out problems (showing work) and student explanations
Break the process down to see where students are going astray
Summative Assessment:
Given at the end of the teaching/lesson
See if objectives are met
Communicate what the students know
Grades, Final Exam, Final Project (tying it all together)
Make sure the objectives are measured by this!
Make sure you have several types of evidence
Assessment of Cognitive Knowledge (Thinking/Process Information):
Knowledge: Recall, facts, memory
Application: Applying knowledge to solve problems
Analysis: Break complex info into parts (compare/contrast)
Synthesis: Pull information together to create something new
Evaluation: Judge existing information based upon given criteria.
Affective Domain (Feelings, Interest, Values)
Based upon attitudes:
I am taking in this information (not zoning out)
I am responding to the information
I am choosing to do this (more interest)
I am choosing to do this over other things I also like
I am so passionate about this that it is ingrained in who I am.
Fine & Gross Motor
What can you physically do without thinking?
Type on Keyboard
Validity: Does the test measure what it is supposed to measure?
Ex: I want to measure a student's ability to add. I give him all adding word problems. If he can't read, he can't do the math. The test is measuring reading & math.
Reliability: Does the test give the same results consistently?
Ex: If two different teachers gave the same test to the same group of students, would they get similar results?
If I gave an assessment based upon a rubric, could two different teachers grade by the rubric and come up with the same grade?
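One simple way to quantify that kind of rubric reliability is percent agreement between two graders. The scores below are invented for illustration; real analyses often use stronger statistics such as Cohen's kappa, which corrects for chance agreement:

```python
# Percent agreement: how often two teachers assign the same rubric score
# to the same student work. Scores are invented for illustration.
teacher_a = [3, 2, 4, 3, 1, 4, 2, 3]
teacher_b = [3, 2, 3, 3, 1, 4, 2, 4]

agree = sum(1 for a, b in zip(teacher_a, teacher_b) if a == b)
print(f"Agreement: {agree}/{len(teacher_a)} = {100 * agree / len(teacher_a):.0f}%")
```

Low agreement suggests the rubric is ambiguous, and the grades say as much about the grader as about the student.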
Performance Assessments, Authentic Assessments
Ask the student to do something in the context of how they will actually use the skills.
Driver's Test- Actually drive a car
Mechanic- Actually fix a car
Teacher-Actually teach a lesson
Nurse- Actually bandage someone
Checklist of skills. (Can/Can't do)
Rubric (At what level can he/she do it?)
Portfolio: Building a series of different types of artifacts to show mastery of a concept.
Should always have reflections
Showcases best work
Must understand how it will be evaluated
Evaluation can be subjective
Job ready (used in interviews)
Bottom Line on Assessments:
Does it measure the objectives?
Does it measure the objective appropriately?
Are there diverse assessments?
How are they judged by the teacher?
What does the teacher do with the data?