Transcript of Assessment Criteria
Grand Canyon University: SPE-536
Sept. 19, 2013
Individuals with Disabilities Education Act (IDEA)
Defines 14 categories of disabilities
Other Health Impairments
Specific Learning Disability
Speech or Language Impairment
Traumatic Brain Injury
Visual Impairment, including blindness
What is descriptive statistics?
Descriptive statistics is the discipline of quantitatively describing the main features of a collection of data, or the quantitative description itself. It includes measures of central tendency and measures of variability. Central tendency includes the mean, median, and mode, while measures of variability include the standard deviation and skewness (statistics.laerd.com).
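The central tendency measures named above (mean, median, mode) and the standard deviation can be computed directly with Python's standard statistics module. A minimal sketch, using a hypothetical set of student assessment scores:

```python
import statistics

# Hypothetical set of student assessment scores
scores = [70, 75, 75, 80, 85, 90, 95]

# Measures of central tendency
mean = statistics.mean(scores)      # arithmetic average
median = statistics.median(scores)  # middle value when sorted
mode = statistics.mode(scores)      # most frequent value

# Measure of variability
stdev = statistics.stdev(scores)    # sample standard deviation

print(mean, median, mode, round(stdev, 2))
```

Here the median (80) and mode (75) differ from the mean, which is one reason an educator reports more than a single summary number.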
Purpose of the assessment
Screening & Identification
IEP Development & Placement
Educational Assessment is a process of documenting, usually in measurable terms, knowledge, skills, attitudes, and beliefs (www.jmu.edu).
When an educator has a basic understanding of the assessment tools and their outcomes, he or she can be much more effective and efficient in providing students with the needed services. The earlier an issue is identified, the sooner it can be addressed, affording students a free and appropriate public education with effective programming that meets their needs.
Validity & Reliability
According to James P. Key (1997), validity is the degree to which a test measures what it is supposed to measure.
There are three basic approaches:
content, construct, & criterion-related validity
Educational Assessment cont.
Specifically, we will discuss assessments used to identify special needs students and the impact interventions should have on the student through implementing learning objectives, understanding observable behaviors, and determining the expected outcomes from the student after receiving special needs services.
Assessments & Educators
Educators must have a firm understanding of assessments, how to implement them and understand their outcomes.
The Assessment Process
Create a multidisciplinary team(s)
Analyze the data
Evaluate the student and data
Determine the student's needs
Recommend the student for special education services
IDEA 2004 requires that multiple sources of qualitative data are used to determine a student's eligibility for special education (GCU, 2011).
With this skill set, educators can effectively meet the student needs by:
providing one-on-one attention
modifying the lesson
conveying vital information about the assessments to the collaborative team.
(Pierangelo & Giuliani, 2008)
It also requires that the aforementioned disabilities have a significant impact on the educational performance of the student.
The special education teacher not only needs to be familiar with IDEA's definitions of disabilities, but must also be familiar with basic data collection techniques, statistical analysis, and the interpretation of statistical outcomes.
content validity measures the degree to which the test items represent the domain or universe of the trait(s) being measured.
construct validity evaluates the theory underlying the construct to be measured, and the adequacy of the test in measuring the construct.
criterion-related validity is an approach concerned with detecting the presence or absence of one or more criteria considered to represent traits or constructs of interest. An easy way to test for criterion-related validity is to administer the instrument to a group that is known to exhibit the traits to be measured (Key, 1997).
Validity & Reliability cont.
Reliability of a research instrument deals with the instrument's ability to yield the same results repeatedly when tested. There will always be some degree of unreliability; however, a quality instrument should show a good deal of consistency in results gathered at different intervals.
An easy way to determine the reliability of empirical measurements is the retest method, in which the same test is given to the same people after a period of time. In special education, educators essentially want to test the student's present levels of academic achievement and functional performance (PLAAFP) to determine if the response to intervention (RTI) is effective.
Criterion & Norm-referenced assessments
CRA- in this assessment, clear learning goals are provided through explicit criteria. Students' performances are therefore judged against pre-set criteria specified in the intended learning outcomes.
NRA- compares students with other students. This method does not say anything about the standard of students' performances; it only measures which student is better than another. For instance, a teacher may grade on a curve, showing whether a student's performance is at, below, or above average.
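A minimal sketch of the norm-referenced idea, using hypothetical class scores and an assumed z-score cutoff of 0.5: converting each score to a z-score locates a student relative to the class average rather than against any fixed criterion.

```python
from statistics import mean, stdev

# Hypothetical class scores for a norm-referenced comparison
scores = [55, 65, 70, 75, 80, 90]
m, s = mean(scores), stdev(scores)

def standing(score, threshold=0.5):
    """Classify a score relative to the class average via its z-score.

    The 0.5 threshold is an illustrative choice, not a standard.
    """
    z = (score - m) / s
    if z > threshold:
        return "above average"
    if z < -threshold:
        return "below average"
    return "average"

for score in scores:
    print(score, standing(score))
```

Note that the labels say nothing about mastery: if the whole class scored poorly, a student could still rank "above average," which is exactly the limitation of norm-referenced assessment noted above.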
Pierangelo, R., & Giuliani, G. (2008). Understanding assessment in the special education process: A step-by-step guide for educators. Thousand Oaks, CA: Corwin Press. ISBN 13: 9781412917919
Descriptive and inferential statistics. (2008). Retrieved from https://statistics.laerd.com/statisticcal-guides/descriptive-inferential-statistics.php
GCU. (2011). Lecture: SPE-536. Retrieved from www.gcu.edu
Key, J. P. (1997). Reliability & validity. Research Design in Occupational Education. Oklahoma State University. Retrieved from http://www.okstate.edu/ag/agedcm4h/academic/aged5980a/5980/newpage18.htm
Assessments. (2010). The Program Assessment Support Service. Retrieved from www.jmu.edu/assessments