
THANK YOU FOR LISTENING!

High Quality Assessment

DPE 104 - Group 2

VALIDITY

The extent to which an assessment accurately measures what it's intended to measure.

VALIDITY

Often expressed numerically as a coefficient of correlation with another test of the same kind and of known validity

FACTORS THAT INFLUENCE TEST VALIDITY

Appropriateness of Test Items

Directions

Reading Vocabulary and Sentence Construction

INFLUENTIAL FACTORS

Test Item Construction

Length of the Test

Arrangement of Test Items

Pattern of Answers

Difficulty of Items

TYPES OF VALIDITY

TYPES

FACE VALIDITY

The degree to which a test seems to measure what it purports to measure.

Face Validity

CONTENT VALIDITY

The degree to which a test measures knowledge of the content domain it was designed to cover.

Content Validity

CRITERION-RELATED VALIDITY

Applies to tests that are designed to predict someone's status on an external criterion measure.

Concurrent Validity

Predictive Validity

Criterion-Related Validity

A measure of how well a test predicts abilities

The test is measured against a benchmark test; a high correlation indicates that the test has strong criterion-related validity.
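As a minimal, hypothetical sketch of that correlation step (the score lists and the eight-student sample below are illustrative assumptions, not taken from the presentation), the coefficient can be computed directly:

```python
# Hypothetical sketch: correlating a new test with an established benchmark test.
# The score lists are invented purely for illustration.

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x * var_y) ** 0.5

new_test  = [78, 85, 62, 90, 71, 88, 55, 80]   # scores on the test being validated
benchmark = [75, 88, 60, 92, 70, 85, 58, 79]   # scores on the validated benchmark test

print(f"criterion-related validity coefficient: r = {pearson(new_test, benchmark):.2f}")
# A high positive r suggests strong criterion-related validity.
```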

CONSTRUCT VALIDITY

What is a construct?

In psychology, a construct is a skill, attribute, or ability that is based on one or more established theories.

Construct Validity

Examples of constructs are:

Intelligence

Motivation

Anxiety

Fear

Common Constructs often seen in Educational Assessment:

  • Artistic ability
  • English Language Proficiency
  • Problem Solving Skills
  • Memory

Construct validity is used to determine how well a test measures what it is supposed to measure.

Test Construct

Test construct refers to the concept or the characteristic that a test is designed to measure.

Convergent Validity

Discriminant Validity

Establishing both of these is one way of demonstrating the construct validity of your assessment tool.

Convergent and Discriminant Validity

EXAMPLE: Measuring a student's mathematical problem-solving skills by giving them a mathematical problem set.

Convergent Validity

How well a test agrees with other previously validated tests that measure the same construct. If the results of your problem set highly correlate with some hypothetical problem set that is a previously validated measure of your construct, then you have convergent validity.

Discriminant Validity

If the results of your problem set have low correlation with a test on enumerating the different theories of mathematics (test construct: memorization of and familiarity with the theories of mathematics), then you have established discriminant validity.

Then, voilà! Your assessment tool has good construct validity.

TAKE NOTE:

Establishing good construct validity is a matter of experience and judgment, building up as much supporting evidence as possible
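A minimal sketch of the convergent/discriminant check from the example above; every score list is invented for illustration and statistics.correlation (Pearson r) requires Python 3.10+:

```python
# Hypothetical sketch of the convergent/discriminant check in the example above.
# All score lists are made up; a high first value and low second value would
# support the construct validity of "your problem set".
from statistics import correlation

your_problem_set      = [12, 18, 9, 15, 20, 7, 14, 17]   # your new assessment tool
validated_problem_set = [11, 19, 10, 14, 20, 8, 13, 18]  # validated measure of the same construct
theory_recall_test    = [16, 14, 11, 9, 13, 15, 10, 16]  # different construct (memorization)

print(f"convergent   r = {correlation(your_problem_set, validated_problem_set):.2f}")  # should be high
print(f"discriminant r = {correlation(your_problem_set, theory_recall_test):.2f}")     # should be low
```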

TYPES

PREDICTIVE VALIDITY

The extent to which a score on a scale or test predicts scores on some criterion measure.

Predictive Validity

RELIABILITY

The degree to which the result of a measurement, calculation, or specification can be depended on to be accurate.

Stability

TEST OF STABILITY

1. Also known as Test-Retest Reliability, this is done by administering the same test twice over a period of time.

2. The scores from Time 1 and Time 2 can then be correlated in order to evaluate the test for stability over time.

3. For example: A test designed to assess student learning in psychology could be given to a group of students twice, with the second administration coming a week after the first. The obtained correlation coefficient would indicate the stability of the scores.
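A minimal sketch of that test-retest computation, assuming hypothetical Time 1 and Time 2 scores (statistics.correlation requires Python 3.10+):

```python
# Hypothetical sketch of the test-retest (stability) check described above.
# Scores are made up; in practice they come from two administrations of the same test.
from statistics import correlation

time_1 = [70, 82, 65, 90, 74, 88, 60, 79]   # first administration
time_2 = [72, 80, 68, 88, 75, 90, 58, 77]   # second administration, one week later

r = correlation(time_1, time_2)
print(f"test-retest (stability) coefficient: r = {r:.2f}")
# The closer r is to 1.0, the more stable the scores are over time.
```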

Equivalence

TEST OF EQUIVALENCE

It is a measure of reliability obtained by administering different versions of an assessment tool to the same group of individuals.

The scores from the two versions can then be correlated in order to evaluate the consistency of results across alternate versions.

EXAMPLE:

If you wanted to evaluate the reliability of a critical thinking assessment, you might create a large set of items that all pertain to critical thinking and then randomly split the questions up into two sets, which would then represent the parallel forms.
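A minimal sketch of the parallel-forms split described in that example; the 40-item pool and item IDs are made up, and the actual equivalence coefficient would come from correlating the two score sets after both forms are administered:

```python
# Hypothetical sketch of building two parallel forms from one item pool.
# Item IDs are invented; after administering both forms to the same group,
# the two score lists would be correlated to estimate equivalence reliability.
import random

item_pool = [f"critical-thinking-item-{i:02d}" for i in range(1, 41)]  # 40 items
random.seed(0)            # reproducible split for the example
random.shuffle(item_pool)

form_a = sorted(item_pool[:20])
form_b = sorted(item_pool[20:])
print("Form A:", form_a[:3], "...")
print("Form B:", form_b[:3], "...")
```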

Internal Consistency

TEST OF INTERNAL CONSISTENCY

It is a measure of reliability used to evaluate the degree to which different test items that probe the same construct produce similar results.

Has TWO general sub-types:

Split-half Reliability: Two "sets" of questions are created for every construct being tested. The entire test is administered, and the correlation between the two sets is computed and interpreted.

Average inter-item correlation: Items assessing the same construct are paired up, and their respective correlation coefficients are averaged.
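A minimal sketch of both sub-types on a made-up response matrix (rows are students, columns are items probing one construct); the odd/even split and the Spearman-Brown step-up are standard conventions assumed here, not details from the presentation:

```python
# Hypothetical sketch of split-half reliability and average inter-item correlation.
# The response matrix is invented; statistics.correlation needs Python 3.10+.
from itertools import combinations
from statistics import correlation

responses = [
    [4, 5, 4, 4, 5, 4],
    [2, 3, 2, 3, 2, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 2, 3],
    [1, 2, 1, 2, 1, 2],
]

# Split-half: total the odd-numbered and even-numbered items, then correlate the halves.
odd  = [sum(row[0::2]) for row in responses]
even = [sum(row[1::2]) for row in responses]
r_half = correlation(odd, even)
spearman_brown = 2 * r_half / (1 + r_half)   # step-up correction to full test length
print(f"split-half r = {r_half:.2f}, Spearman-Brown corrected = {spearman_brown:.2f}")

# Average inter-item correlation: correlate every item pair and average the coefficients.
items = list(zip(*responses))                # one tuple of scores per item
pair_rs = [correlation(a, b) for a, b in combinations(items, 2)]
print(f"average inter-item correlation = {sum(pair_rs) / len(pair_rs):.2f}")
```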

PRINCIPLES

PRINCIPLES OF HIGH QUALITY ASSESSMENT

FAIRNESS

FAIRNESS

Student knowledge of learning targets and assessment

Opportunity to learn

Prerequisite knowledge and skills

Avoiding stereotypes

Avoiding bias in assessment tasks and procedures

POSITIVE CONSEQUENCES

POSITIVE CONSEQUENCES

  • Should have a positive effect
  • Should motivate students
  • Should help improve instruction
  • Should provide effective feedback
  • Should provide students with tools for self-assessment and help them understand ways to improve

Stakeholders

1. Students: affects motivation and the student-teacher relationship, and can foster effective study and learning habits.

2. Teachers: informs the efficiency of teaching strategies, feedback, and development in both curriculum and practices.

PRACTICALITY and EFFICIENCY

PRACTICALITY AND EFFICIENCY

Assessments should:

  • Save TIME and MONEY
  • Be resourceful
  • Take into consideration:
      • FAMILIARITY WITH THE METHOD
      • TIME REQUIRED
      • COMPLEXITY OF ADMINISTRATION
      • EASE OF SCORING
      • EASE OF INTERPRETATION
      • COST

ETHICS IN ASSESSMENT

Assessments should not be used to demean or degrade students.

Teachers need to ask themselves if it is right to assess a specific kind of knowledge or investigate a certain question.

However, there are instances where it is necessary to conceal the objective of an assessment to ensure impartiality and fairness.

ETHICS

REMINDER!

Test results and assessment results are CONFIDENTIAL. Results should be communicated to students in a way that prevents other students from gaining access to such personal information.

NOTE!

DEVELOPMENT OF TOOLS

TOOLS

DEVELOPMENT

(For measuring knowledge and reasoning)

Steps in Developing Classroom Assessment Tools

STEPS

Step 1: Identification of instructional objectives and learning outcomes
Step 2: Listing of topics to be covered by the test
Step 3: Preparation of a Table of Specifications (TOS; a small allocation sketch follows this list)
Step 4: Selection of the appropriate types of test
Step 5: Writing and sequencing of test items
Step 6: Writing the directions or instructions
Step 7: Preparing the answer sheet and scoring key
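As a purely hypothetical illustration of Step 3: a Table of Specifications commonly allocates items in proportion to instruction time. The topics, hours, and 50-item total below are invented for the sketch, not part of the presentation:

```python
# Hypothetical sketch: allocating test items for a Table of Specifications (TOS)
# in proportion to hours of instruction. All figures are invented for illustration.

topics = {"Validity": 6, "Reliability": 5, "Fairness": 3, "Ethics": 2}   # hours taught
total_items = 50
total_hours = sum(topics.values())

print(f"{'Topic':<12}{'Hours':>6}{'Items':>7}")
for topic, hours in topics.items():
    items = round(total_items * hours / total_hours)   # proportional allocation
    print(f"{topic:<12}{hours:>6}{items:>7}")
```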

INSTRUCTIONS

QUIZ

On a 1/4 sheet of paper, write down the letter of the best answer for each of the questions asked.

Questions 1-2

QUESTION 1

Jenny thinks her exam accurately measured her students' ability to do long division. What kind of validity is she attributing to her exam?

A. Concurrent Validity

B. Content Validity

C. Criterion-related Validity

D. Face Validity

QUESTION 2

Louie cross-referenced the results of her Math Skills assessment with the results of the well-known CTU Math Test and found a high correlation between the two sets of results. What type of validity is Louie trying to establish?

A. Concurrent Validity

B. Content Validity

C. Predictive Validity

D. Face Validity

Questions 3-4

QUESTION 3

In psychology, this is known to be a skill, attribute or ability that is based on one or more established theories.

A. Character

B. Construct

C. Talent

D. Behaviour

QUESTION 4

Students from Monggos High School were given the same exam during their culinary training and again two months after it ended. What form of reliability are the instructors trying to prove?

A. Equivalence

B. Stability

C. Internal Consistency

D. None of the Above

Questions 5-6

QUESTION 5

Miss Mel's students were given two sets of questions aimed at testing their comprehension of World War II. What is most likely the next step Miss Mel will take upon receiving the papers?

A. She will conduct another exam the next day.

B. She will compute the correlation coefficient between the designated item pairings

C. She will compare the scores of her students to those in Mr. Mike's class.

D. She will measure the correlation coefficient between the two sets of questions.

QUESTION 6

Miss Joanna refuses to give her Muslim students a Philosophy exam as she believes their religion will hamper the validity of her exam. Is she justified in her decision?

A. Yes, as she has the right to express her own ideas and beliefs.

B. No, as this goes against the idea that assessments should be fair.

C. Yes, as this will allow for a more valid result.

D. No, as this will cause outrage and conflict among the excluded students

Questions 7-8

QUESTION 7

Teacher Josh just decided to use Zipgrade, an automated checking application, to check his class's Midterm Exam. Is this advisable for Teacher Josh?

A. Yes, as this will ease the burden of scoring and interpreting the results.

B. No, as this is unfair for other teachers who choose to use more traditional methods.

C. Yes, because it will project a more efficient image to his students.

D. No, because automated checking software is far less reliable than human checkers

QUESTION 8

This is the first step a teacher has to undertake when constructing classroom tests.

A. Writing of Test Items

B. Preparation of a Table of Specifications

C. Selecting the Appropriate Types of Tests

D. Identifying the instructional objectives and learning outcomes

Questions 9-10

QUESTION 9

Which of the following statements is FALSE?

A. After determining the number of items per difficulty, the teacher should then proceed to write the questions.

B. A teacher has to write the directions as clearly and as simply as possible.

C. The Type of Test is dependent on what needs to be measured during an exam.

D. After identifying the instructional objectives and learning outcome, a teacher needs to outline the topics to be included in the test.

QUESTION 10

Which of the following assessment tasks would be least ethical to ask students to answer?

A. Asking students about their thoughts on the school's uniform policy

B. Asking students about their summer vacation highlights.

C. Asking students to list the names of their relatives who are experiencing marital problems

D. Asking students to evaluate a recently released K-Drama Series
