THANK YOU FOR LISTENING!
DPE 104 - Group 2
The extent to which an assessment accurately measures what it's intended to measure.
Often expressed numerically as a coefficient of correlation with another test of the same kind and of known validity.
Appropriateness of Test Items
Directions
Reading Vocabulary and Sentence Construction
Test Item Construction
Length of the Test
Arrangement of Test Items
Pattern of Answers
Difficulty of Items
The degree to which a test appears to measure what it purports to measure.
Measures knowledge of the content domain that it was designed to cover.
A concern for tests that are designed to predict someone's status on an external criterion measure.
Concurrent Validity
Measures the test against a benchmark test; a high correlation indicates that the test has strong criterion validity.
Predictive Validity
A measure of how well a test predicts abilities.
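As a rough illustration, here is a minimal Python sketch of such a criterion-related check, assuming made-up scores for eight students on a new test and on a previously validated benchmark; the correlation coefficient between the two is the validity coefficient.

```python
# Hypothetical concurrent-validity check: correlate scores on a new test
# with scores on an established benchmark taken by the same students.
import numpy as np

new_test  = np.array([78, 85, 62, 90, 71, 88, 54, 67])  # made-up scores, new test
benchmark = np.array([74, 82, 60, 93, 70, 85, 58, 65])  # same students, validated test

r = np.corrcoef(new_test, benchmark)[0, 1]  # Pearson correlation coefficient
print(f"criterion-related validity coefficient: r = {r:.2f}")
# A high r (close to 1) supports concurrent validity against the benchmark.
```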
What is a construct?
In psychology, a construct is a skill, attribute, or ability that is based on one or more established theories.
Examples of constructs are:
Intelligence
Motivation
Anxiety
Fear
Artistic ability
English Language Proficiency
Problem Solving Skills
Memory
Construct validity is used to determine how well a test measures what it is supposed to measure.
Test construct refers to the concept or characteristic that a test is designed to measure.
Convergent Validity and Discriminant Validity: establishing these two is one way of proving the construct validity of your tool of assessment.
Example: measuring a student's mathematical problem-solving skills by giving them a mathematical problem set.
Convergent validity is how well a test agrees with other previously validated tests that measure the same construct.
Compare Your Problem Set with some hypothetical problem set, which is a previously validated measure of your construct: if the results highly correlate, then you have convergent validity.
Compare Your Problem Set with a test on enumerating the different theories of mathematics (test construct: memorization of or familiarity with the theories of mathematics): if the results have low correlation, then you have established discriminant validity.
If both are established, your assessment tool has good construct validity.
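A minimal sketch of how both checks might run, with made-up scores: the problem set should correlate highly with a validated measure of the same construct (convergent) and weakly with the memorization test (discriminant).

```python
# Hypothetical construct-validity checks on the example above.
import numpy as np

your_set     = np.array([12, 18, 9, 20, 15, 11, 17, 14])   # made-up problem-set scores
validated    = np.array([11, 19, 8, 20, 14, 12, 18, 13])   # validated measure, same construct
memorization = np.array([16, 15, 11, 11, 16, 12, 11, 12])  # test of a different construct

r_convergent   = np.corrcoef(your_set, validated)[0, 1]
r_discriminant = np.corrcoef(your_set, memorization)[0, 1]
print(f"convergent r = {r_convergent:.2f}  (high supports convergent validity)")
print(f"discriminant r = {r_discriminant:.2f} (low supports discriminant validity)")
```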
TAKE NOTE:
Establishing good construct validity is a matter of experience and judgment, building up as much supporting evidence as possible
The extent to which a score on a scale or test predicts scores on some criterion measure.
The degree to which the result of a measurement, calculation, or specification can be depended on to be accurate.
It is a measure of reliability obtained by administering different versions of an assessment tool to the same group of individuals.
The scores from the two versions can then be correlated in order to evaluate the consistency of results across alternate versions.
EXAMPLE:
If you wanted to evaluate the reliability of a critical thinking assessment, you might create a large set of items that all pertain to critical thinking and then randomly split the questions up into two sets, which would then represent the parallel forms.
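A minimal sketch of the correlation step, assuming made-up total scores for the same eight students on each of the two parallel forms:

```python
# Hypothetical parallel-forms reliability: correlate each student's total
# score on Form A with their total score on Form B.
import numpy as np

form_a = np.array([40, 35, 28, 44, 31, 38, 25, 42])  # made-up totals, Form A
form_b = np.array([38, 36, 27, 45, 30, 36, 27, 41])  # same students, Form B

reliability = np.corrcoef(form_a, form_b)[0, 1]
print(f"parallel-forms reliability: r = {reliability:.2f}")
```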
It is a measure of reliability used to evaluate the degree to which different test items that probe the same construct produce similar results.
Has TWO general sub-types:
Split-half Reliability
Average inter-item correlation
Two "sets" of questions are created for every construct being tested. With the entire test administered, the correlation between both sets is computed and interpreted.
Items assessing the same construct are paired up and their respective correlation coefficients are averaged.
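A minimal sketch of both sub-types on a small, made-up item-response matrix; the split-half estimate is paired with the standard Spearman-Brown correction, r_full = 2r / (1 + r), which estimates the reliability of the full-length test.

```python
# Hypothetical internal-consistency checks
# (rows = students, columns = items scored 0/1).
import numpy as np

scores = np.array([
    [1, 1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 0, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 1],
])

# Split-half: odd items vs. even items, then Spearman-Brown correction.
half_1 = scores[:, 0::2].sum(axis=1)
half_2 = scores[:, 1::2].sum(axis=1)
r_half = np.corrcoef(half_1, half_2)[0, 1]
r_full = 2 * r_half / (1 + r_half)
print(f"split-half r = {r_half:.2f}, Spearman-Brown corrected = {r_full:.2f}")

# Average inter-item correlation: mean of the off-diagonal entries of the
# item-by-item correlation matrix.
item_corr = np.corrcoef(scores.T)
off_diag = item_corr[~np.eye(item_corr.shape[0], dtype=bool)]
print(f"average inter-item correlation = {off_diag.mean():.2f}")
```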
Student knowledge of learning targets and assessment
Opportunity to learn
Prerequisite knowledge and skills
Avoiding stereotypes
Avoiding bias in assessment tasks and procedures
1. Affects motivation and the student-teacher relationship; can foster effective study and learning habits.
2. Efficiency of teaching strategies, feedback, and development in both curriculum and practices.
Assessments should not be used to derogate students.
Teachers need to ask themselves whether it is right to assess a specific piece of knowledge or investigate a certain question.
However, there are instances where it is necessary to conceal the objective of an assessment to ensure impartiality and fairness.
Test results and assessment results are CONFIDENTIAL. Results should be communicated to students in a way that prevents other students from gaining access to such personal information.
(For measuring knowledge and reasoning)
1. Identification of instructional objectives and learning outcomes
2. Listing of topics to be covered by the test
3. Preparation of a Table of Specifications, or TOS (see the sketch after this list)
4. Selection of the appropriate types of test
5. Writing and sequencing of test items
6. Writing the directions or instructions
7. Preparing the answer sheet and scoring key
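A minimal sketch of the proportional arithmetic commonly behind a TOS, assuming made-up topics and class hours: each topic receives a share of the total items proportional to the time spent teaching it.

```python
# Hypothetical Table of Specifications: items per topic in proportion
# to instructional hours (rounded shares may need a small manual adjustment
# so the counts still sum to the intended total).
hours = {"Fractions": 6, "Decimals": 4, "Ratio and Proportion": 5, "Percentages": 5}
total_items = 40

total_hours = sum(hours.values())
for topic, h in hours.items():
    items = round(total_items * h / total_hours)
    print(f"{topic:22s} {h:2d} hrs -> {items:2d} items")
```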
On a 1/4 sheet of paper, write down the letter of the best answer for each of the questions asked.
A. Concurrent Validity
B. Content Validity
C. Criterion-related Validity
D. Face Validity
A. Concurrent Validity
B. Content Validity
C. Predictive Validity
D. Face Validity
A. Character
B. Construct
C. Talent
D. Behaviour
A. Equivalence
B. Stability
C. Internal Consistency
D. None of the Above
A. She will conduct another exam the next day.
B. She will compute the correlation coefficient between the designated item pairings.
C. She will compare the scores of her students to those in Mr. Mike's class.
D. She will measure the correlation coefficient between the two sets of questions.
A. Yes, as she has the right to express her own ideas and beliefs.
B. No, as this goes against the idea that assessments should be fair.
C. Yes, as this will allow for a more valid result.
D. No, as this will cause outrage and conflict among the excluded students.
A. Yes, as this will ease him of scoring and interpreting the results.
B. No, as this is unfair for other teachers who choose to use more traditional methods.
C. Yes, because it will create a more efficient image towards his students.
D. No, because automated checking software is far less reliable than human checkers.
A. Writing of Test Items
B. Preparation of a Table of Specifications
C. Selecting the Appropriate Types of Tests
D. Identifying the instructional objectives and learning outcomes
A. After determining the number of items per difficulty, the teacher should then proceed to write the questions.
B. A teacher has to write the directions as clearly and as simply as possible.
C. The Type of Test is dependent on what needs to be measured during an exam.
D. After identifying the instructional objectives and learning outcome, a teacher needs to outline the topics to be included in the test.
A. Asking students about their thoughts on the school's uniform policy.
B. Asking students about their summer vacation highlights.
C. Asking students to list the names of their relatives who are experiencing marital problems.
D. Asking students to evaluate a recently released K-Drama series.