Principles of High Quality Assessment
•The measuring instruments are good
•The methods used are appropriate
•The assessment has fair and positive consequences
•The assessment is practical and efficient
•Reporting of data is done with ethics.

This material covers: Lesson 1 - Clarity of Learning Targets, Lesson 2 - Qualities of Good Assessment Instrument, and Lesson 3 - Appropriateness of Assessment Methods.

Lesson 1 - Clarity of Learning Targets

Identifying Learning Targets
These learning targets should be clearly defined in your instructional objectives.

Three Qualities of Good Assessment
- valid
- reliable
- accurate

Five levels of learning targets
•Knowledge level – objectives at the knowledge level require the students to remember.
•Reasoning level – objectives written at this level require the student to identify logical errors or to differentiate among facts, opinions, assumptions, hypotheses, or conclusions.
•Skills level – this includes demonstrations of skill, such as complex overt responses.
•Product level – learning targets at this level are concerned with the tangible and intangible finished products as outputs of students' achievement.
•Affective level – it concerns the values of an individual, including interests, appreciations, and attitudes.

I. Validity
According to Popham (1999), validity is the most significant concept in assessment.

Types of Validity

Face Validity
An evaluation procedure should bear a logical relationship to the decision that is to be made.

Content Validity
A content-valid test matches or fits the instructional objectives.

Construct Validity
It refers to the agreement of test results with certain characteristics which the test aims to portray.

Criterion-Related Validity
Scores from a test are correlated with an external criterion.

Two Types of Criterion-Related Validity
Concurrent Criterion-Related Validity - the degree to which the test agrees with or correlates with a criterion which is set up as an acceptable measure.
Predictive Criterion-Related Validity - refers to the degree of accuracy of how well a test predicts the level of performance in a certain activity which it intends to foretell (Calderon and Gonzales).

II. Reliability
The reliability of a test is the degree of consistency of measurement that it gives.

Three Methods of Estimating Reliability
1. Test of Stability (test-retest)
•Spearman Rank Correlation Coefficient

2. Test of Internal Consistency
•Split-half Method (Spearman-Brown Prophecy Formula)
•Kuder-Richardson Formula 21

3. Test of Equivalence (Parallel Forms)

Variety of Good Assessment Methods
•Objective Test
•Self-Report

A complete instructional objective includes an observable behavior (an action verb specifying the learning target/outcome), any special conditions under which the behavior must be displayed, and the performance level considered sufficient to demonstrate mastery. Objectives are more specific than goals. They describe learning outcomes. Taba (1962) defines objectives as behaviors that must be achieved at various levels of the curriculum. These levels include lesson, subject, unit, and program. Instructional objectives should be stated in behavioral terms. They must be specific, measurable, attainable, result-oriented, and time-bounded.

The Spearman rank correlation coefficient, or Spearman rho (rs), is used to correlate the two sets of scores.
The formula is:
rs = 1 – (6∑D²) / (N³ – N)
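As a minimal sketch, the Spearman rho computation rs = 1 – (6∑D²) / (N³ – N) can be written in Python. The rank data below are hypothetical illustration values, not from the text:

```python
def spearman_rho(ranks_x, ranks_y):
    """Spearman rho from two equal-length lists of ranks:
    rs = 1 - (6 * sum of squared rank differences) / (N^3 - N)."""
    n = len(ranks_x)
    sum_d2 = sum((x - y) ** 2 for x, y in zip(ranks_x, ranks_y))
    return 1 - (6 * sum_d2) / (n ** 3 - n)

# Hypothetical example: ranks of 5 students on a test and a retest.
test_ranks = [1, 2, 3, 4, 5]
retest_ranks = [2, 1, 4, 3, 5]
print(spearman_rho(test_ranks, retest_ranks))  # 0.8
```

A rho near 1 would indicate high stability (test-retest reliability) of the scores.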
where ∑D² is the sum of the squared differences between ranks and N is the total number of cases.

Spearman-Brown prophecy formula:
The formula is:
rw = 2rn / (1 + rn)
where rw is the correlation coefficient of the whole test, and
rn is the correlation coefficient between the two halves of the test.
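The Spearman-Brown step-up rw = 2rn / (1 + rn) can be sketched in Python as follows; the 0.60 half-test correlation is a hypothetical value chosen for illustration:

```python
def spearman_brown(half_correlation):
    """Estimate whole-test reliability (rw) from the correlation
    between the two halves of the test (rn): rw = 2rn / (1 + rn)."""
    return (2 * half_correlation) / (1 + half_correlation)

# If the two halves correlate at 0.60, the estimated reliability
# of the whole test is higher, because the full test is twice as long.
print(spearman_brown(0.60))  # 0.75
```

Note that rw is always at least as large as rn, reflecting that a longer test yields a more reliable total score.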
If rw is high, then the whole test is reliable.

The KR21 tends to produce a smaller (less accurate) coefficient than KR20 but has the advantage of being easier to calculate.

If there are two equivalent forms of a test, these forms can be used to obtain an estimate of the reliability of the test.

Objective Test – objective assessment generally calls for single words, phrases, numbers, letters, and other symbols as responses to items.

Essay method – is the traditional type of examination where the examinee is made to discuss, enumerate, compare, state, explain, analyze, or criticize.

Performance – is one in which the responses to test questions are in the form of overt manual, vocal, and other similar behavioral activities.

Oral Questioning/Oral Examinations – this method of assessment is appropriate for grading students according to the quality of their answers, and such grades form part of the overall achievement scores of the student.

Observation – is the most appropriate means of obtaining information about the measurable aspects of performance using the five senses: seeing, hearing, smelling, tasting, and touching.

Self-Report – is a technique commonly used in social-emotional assessment. Self-reports are usually part of a more comprehensive assessment plan and often involve the use of interviews to obtain data (Ysseldyke).

Avoiding teacher stereotypes
-A stereotyped teacher is one who is conventional, one who lacks variation in methods of teaching.

Lesson 4 – Fairness and Positive Consequences of Assessment
Lesson 5 – Practicality and Efficiency of Assessment

Student's knowledge of learning targets and assessment
-It is always important that the students know the learning targets for a particular lesson. Student-Focused
- Assessment appraises the curriculum, which has to be developed according to the needs of the learners. Learning Curves
-Learning from mistakes is also a lasting experience. Avoiding bias in assessment tasks and procedures
-Biases in assessment tasks and procedures must be avoided. The teacher should make every effort to make assessment results as accurate as can be. Assessment and its positive consequences on students and parents
-If teachers frequently give examinations, quizzes, assignments, and the like, students will be more likely to form a study habit. Assessment and its positive consequences on teachers
-Teachers also benefit from the results of assessments. Even if they have earned the highest degree an institution can offer, there is still room for improvement and development.

Efficiency – refers to the speed and economy of data collection.
Practicality – refers to the usefulness and utility of assessment, the assessment instrument, and the assessment procedures.

Teacher's familiarity with the method of assessment
-The teacher is a key figure in the instructional program. In assessing student learning, the teacher should be very familiar with each of the methods commonly used.

Time and quality requirement
-High quality assessment requires time, because highly accurate and specific information takes longer to accumulate. But assessment should not take too long a time to be able to assess student learning.

Complexity of administration
-The manner in which a test is administered also greatly affects the reliability of a test. Calderon states that explicit directions accompanying a test should be strictly followed.

Ease of scoring
-Ease of scoring is one factor of usability. If tests are easy to score, they are more in demand. Highly objective tests are easy to score.

Cost
-Economy is also one factor of usability. Usually, teacher-made tests make use of the test papers as the answer sheets at the same time.

Lesson 6 – Ethics in Gathering, Recording and Reporting of Assessment Data

Some General Ethical Standards

•Responsibility for the consequences of professional work – The assessment of students is a social act that has specific social and educational consequences. Those who assess students use assessment data to make decisions about the students, and these decisions can significantly affect an individual's life opportunities.

•Recognition of the boundaries of professional competencies – Those who are entrusted with the responsibility for assessing and making decisions about students have differing degrees of competence.

•Confidentiality of information – Those who assess students regularly obtain a considerable amount of very personal information about those students.

•Adherence to professional standards on assessment – Those who develop tests behave in accordance with the standards, and those who assess students use instruments and techniques that meet the standards.

•Test security – Those who assess students are expected to maintain test security. It is expected that assessors will not reveal to others the content of specific tests or test items.