Principles of High Quality Assessment

Assessment report by Vlademir Dacuyan on 20 January 2013


Transcript of Principles of High Quality Assessment

Principles of High Quality Assessment

What is a High-Quality Assessment?
It provides results that demonstrate and improve targeted student learning.

Appropriateness of Assessment Methods

Properties of Assessment Methods
Validity - refers to the extent to which the test serves its purpose or the efficiency with which it intends to measure.

What are the characteristics of High-Quality Assessment?

Characteristics of High-Quality Assessments

1. Clear and Appropriate Learning Targets
Knowledge learning target
Reasoning learning target
Skill learning target
Product learning target
Affective learning target

2. Appropriateness of Assessment Method
Written-Response Instrument
Product Rating Scales
Performance Tests
Oral Questioning
Observation and Self-reports

3. Properties of Assessment Method
Validity
Reliability
Fairness
Practicality and Efficiency
Ethics in Assessment

Learning Targets - a clear description of what students know and are able to do.

1. Knowledge learning target / Cognitive targets

- the ability of the student to master a substantive matter.

Stiggins and Conklin (1992) categorized learning targets into five categories. As early as the 1950s, Bloom (1954) proposed a hierarchy of educational objectives at the cognitive level.

LEVEL 1 - KNOWLEDGE
Knowledge refers to the acquisition of facts, concepts, and theories.

LEVEL 2 - COMPREHENSION
Comprehension refers to the same concept as "understanding". It is a step higher than the mere acquisition of facts and involves a cognition or awareness of the interrelationships of facts and concepts.

LEVEL 3 - APPLICATION
Application refers to the transfer of knowledge from one field of study to another or from one concept to another concept in the same discipline.

LEVEL 4 - ANALYSIS
Analysis refers to the breaking down of a concept or idea into its components and explaining the concept as a composition of these components.

LEVEL 5 - SYNTHESIS
Synthesis refers to the opposite of analysis and entails putting together the components in order to summarize the concept.

LEVEL 6 - EVALUATION
Evaluation refers to valuing and judgment, or putting a "worth" on a concept or principle.

Reasoning learning target

- the ability to use knowledge and solve problems

Skill learning target
- the ability to demonstrate achievement-related skills like conducting experiments, playing basketball, and operating computers

Product learning target
- the ability to create achievement-related products such as written reports, oral presentations, and art products

Affective learning target

- the attainment of affective traits such as attitudes, values, interest, and self-efficacy.

The General Categories of Assessment Methods

Written-Response Instruments
Product Rating Scales
Performance Tests
Oral Questioning

Written-response instruments include objective tests (multiple choice, true or false, matching, or short answer), essays, examinations, and checklists. Objective tests are appropriate for assessing the various levels of the hierarchy of educational objectives. Multiple choice tests in particular can be constructed in such a way as to test higher order thinking skills.

Essays can test the student's grasp of the higher level cognitive skills, particularly in the areas of application, analysis, synthesis, and judgment.

Example:
(POOR) Write an essay about the first EDSA revolution

(BETTER) Write an essay about the key figures of the first EDSA revolution and their respective roles

Product Rating Scales

Examples of products that are frequently rated in education:

The classic 'handwriting' scale used in the California Achievement Test, Form W (1957)
Book reports
Maps
Charts
Diagrams
Notebooks
Essays

Performance Tests

A performance test is used to determine whether or not an individual behaves in a certain (usually desired) way when asked to complete a particular task.

Example: Performance Checklist in Solving a Mathematics Problem

Behavior:
identifies the given information
identifies what is being asked
uses variables to replace the unknown
formulates the equation
performs algebraic operations
obtains an answer
checks if the answer makes sense

Observation and Self-Reports

Tally sheet - a device often used by teachers to record the frequency of student behaviors, activities, or remarks.

Self-checklist - a list of several characteristics or activities presented to the subjects of a study

Observation and self-reports are useful supplementary assessment methods when used in conjunction with oral questioning and performance tests.

VALIDITY
- refers to the appropriateness, correctness, meaningfulness, and usefulness of the specific conclusions that a teacher reaches regarding the teaching-learning situation.
- a characteristic that pertains to the appropriateness of the inferences, uses, and results of the test or any other method utilized to gather data.

A. How is validity determined?

Content validity - refers to the content and format of the instrument. Some criteria for judging content validity are given as follows:

1. Do students have adequate experience with the type of task posed by the item?

2. Did the teachers cover sufficient material for most students to be able to answer the item correctly?

3. Does the item reflect the degree of emphasis received during instruction?

Face validity - refers to the outward appearance of the test.

Criterion-related validity - the test item is judged against a specific criterion, e.g. relevance to a topic like the topic on conservation (see the numerical sketch after this list).

Construct validity - an item possesses construct validity if it loads highly on a given construct or factor.
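In measurement texts, criterion-related validity is often quantified as a validity coefficient: the correlation between scores on the test being validated and scores on an accepted criterion measure. The sketch below is only an illustration of that idea, not part of the original presentation; the test name, the criterion, and all scores are invented, and it assumes Python 3.10+ for statistics.correlation.

# Hypothetical illustration of a criterion-related validity coefficient:
# correlate scores on the test being validated with an accepted criterion.
# All names and numbers below are invented for the example.
from statistics import correlation  # Pearson correlation, Python 3.10+

entrance_test = [78, 85, 62, 90, 70, 88]         # scores on the new test
first_term_gpa = [2.8, 3.4, 2.1, 3.8, 2.5, 3.5]  # criterion measure

validity_coefficient = correlation(entrance_test, first_term_gpa)
print(round(validity_coefficient, 2))  # closer to 1.0 = stronger criterion-related validity

In practice the criterion scores would come from an established measure collected independently of the test being validated.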

B. Test Validity Enhancers

The following are suggestions for enhancing the validity of classroom assessment:

1. Prepare a table of specifications (TOS)

2. Construct appropriate test items

3. Formulate directions that are brief, clear, and concise

4. Consider the reading vocabulary of the examinees. The test should not be made up of jargon

5. Make the sentence structure of your test items simple

6. Never have an identifiable pattern of answers

7. Arrange the test items from easy to difficult

8. Provide adequate time for students to complete the assessment
9. Use different methods to assess the same thing

10. Use the test only for its intended purposes

Reliability

The reliability of an assessment method refers to its consistency.

It is also a term that is synonymous with dependability or stability.

Factors that affect test reliability:

1. The scorer's inconsistency because of his/her subjectivity

2. Limited sampling because of incidental inclusion and accidental exclusion of some materials in the test

3. Changes in the individual examinee himself/herself and his/her instability during the examination
4. Testing environment

A. How is reliability determined?

There are four methods of estimating the reliability of a good measuring instrument (a short computational sketch of the split-half approach follows the sources of error below):

Test-Retest Method - the same measuring instrument is administered twice to the same group of subjects.

Parallel-Forms Method (Test of Equivalence) - an equivalent form of the test is administered to the same group of subjects and the paired observations are correlated.

Split-Half Method - the test may be administered only once, but the test items are divided into two halves.

Internal-Consistency Method - used with psychological tests which are constructed as dichotomously scored items.

B. The Concept of Error in Assessment

Sources of error:

Internal error:
Health
Mood
Motivation
Test-taking skills
Anxiety
Fatigue
General ability

External error:
Directions
Luck
Item ambiguity
Heat in the room
Lighting
Sample items
Observer differences and bias
Test interpretation and scoring
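To make the split-half idea above concrete, here is a minimal computational sketch. It is not from the presentation: the item scores are invented, and it applies the standard Spearman-Brown correction (not named in the slides) to step the half-test correlation up to a full-test reliability estimate.

from math import sqrt

def pearson(x, y):
    # Pearson correlation between two equal-length lists of scores
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

def split_half_reliability(scores):
    # Split each examinee's items into odd and even halves, correlate the
    # half scores, then apply the Spearman-Brown correction 2r / (1 + r).
    odd_half = [sum(row[0::2]) for row in scores]
    even_half = [sum(row[1::2]) for row in scores]
    r_half = pearson(odd_half, even_half)
    return (2 * r_half) / (1 + r_half)

# Invented responses of five examinees to a six-item test (1 = correct, 0 = wrong)
scores = [
    [1, 1, 1, 1, 0, 1],
    [1, 0, 1, 1, 1, 0],
    [0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
]
print(round(split_half_reliability(scores), 2))

For the internal-consistency method on dichotomously scored items, the analogous statistics are the Kuder-Richardson formulas (KR-20/KR-21) or Cronbach's alpha; a test-retest or parallel-forms estimate would simply correlate the two administrations with the same pearson function.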
C. Test Reliability Enhancers

The following should be considered in enhancing the reliability of classroom assessment:

1. Use a sufficient number of items or tasks. A longer test is more reliable.

2. Use independent raters or observers who can provide similar ratings of the same performances.

3. Make sure the assessment procedures and scoring are objective.

4. Continue the assessment until the results are consistent.

5. Eliminate or reduce the influence of extraneous events or factors.

6. Assess the difficulty level of the test.

7. Use shorter assessments more frequently rather than a few long assessments.
Fairness

This pertains to the intent that each question should be made as clear as possible to the examinees and that the test is free of any bias. An assessment procedure needs to be fair. First, students need to know exactly what the learning targets are and what method of assessment will be used. Second, assessment has to be viewed as an opportunity to learn rather than an opportunity to weed out poor and slow learners. Third, fairness also implies freedom from teacher stereotyping.

Practicality and Efficiency

To determine an assessment's practicality and efficiency, assessments need to take into consideration the teacher's familiarity with the method, the time required, the complexity of administration, the ease of scoring and interpretation, and the cost.

Ethics in Assessment

The term "ethics" refers to questions
of right and wrong. Here are some situations in which
assessment may not be called for:

Requiring students to answer a checklist of their sexual fantasies;
Asking elementary pupils to answer sensitive questions without consent of their parents;
Testing the mental abilities of pupils using an instrument whose validity and reliability are unknown.

Ethical Issues

1. The fundamental responsibility of a teacher.

2. Test results and assessment results are confidential.

3. Deception

4. The temptation to assist certain individuals in class during assessment or testing is ever present.

Group 2

Vlademir
Giordan
Junmark
Ard
Demelyn
Eula
Tommy

Summary

Principles of High Quality Assessment

1. Clarity of Learning Targets

Knowledge learning target
Reasoning learning target
Skill learning target
Product learning target
Affective learning target

2. Appropriateness of Assessment Method
Written-Response Instruments
Product Rating Scales
Performance Tests
Oral Questioning
Observation and Self-reports

3. Properties of Assessment Method
Validity
Reliability
Fairness
Practicality and Efficiency
Ethics in Assessment