Assessments created and administered directly by EPPs and used as a source of evidence for CAEP standards. They may take the form of subject or pedagogical content tests, observations, projects, assignments, or surveys. EPPs take responsibility for the design, administration, and validation of these assessments.
1. Data is compiled at the conclusion of every semester and disaggregated by program by the Data Analyst.
2. Data is compiled by assessment so trends can be seen programmatically.
3. References to past semester data (up to three cycles) are provided where applicable.
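The compilation steps above can be sketched in code. This is a minimal illustration, not the EPP's actual pipeline: the record fields, program names, and scores are all hypothetical, and the real process may use different tooling entirely.

```python
from collections import defaultdict

# Hypothetical per-candidate score records; field names and values are illustrative only.
records = [
    {"semester": "Fall 2022", "program": "Elementary Ed", "assessment": "Portfolio", "score": 3.2},
    {"semester": "Fall 2022", "program": "Secondary Ed", "assessment": "Portfolio", "score": 3.5},
    {"semester": "Spring 2023", "program": "Elementary Ed", "assessment": "Portfolio", "score": 3.4},
    {"semester": "Fall 2023", "program": "Elementary Ed", "assessment": "Portfolio", "score": 3.6},
]

def disaggregate(records, keys):
    """Group records by the given keys and average their scores."""
    groups = defaultdict(list)
    for r in records:
        groups[tuple(r[k] for k in keys)].append(r["score"])
    return {k: sum(v) / len(v) for k, v in groups.items()}

# Step 1: disaggregate each semester's data by program.
by_program = disaggregate(records, ["semester", "program"])

# Steps 2-3: compile by assessment across semesters so trends over past cycles are visible.
trend = disaggregate(records, ["assessment", "semester"])
```

Grouping by (assessment, semester) is what makes cycle-over-cycle trends readable: each assessment's averages line up across the prior semesters being referenced.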
Video Reflection
3-day Unit Plan
Dispositions
5-day Unit Plan
Portfolio
10-day Unit Plan
Classroom Management Plan
Impact on Student Learning
Assignment
Inter-rater reliability is a measure of consistency used to assess the degree to which different judges (or raters) agree in their evaluation (or scoring) decisions of the same phenomenon. Inter-rater reliability is high when reviewers demonstrate that they consistently reach the same or very similar decisions.
1. Inter-rater reliability exercises are performed each month by all EPP faculty and adjuncts, and inter-rater reliability is checked for each instrument annually (QAS pp. 12-13).
2. Two sample assignments are uploaded into Taskstream, where faculty assess them and submit scores.
3. Those scores are compiled and shared at an EPP reliability meeting to discuss indicators and best practices for scoring.
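One simple way to quantify the agreement described above is percent agreement: the share of rater pairs that assign identical scores, averaged over indicators. This is only a sketch under assumed data; the raters, indicators, and scores are hypothetical, and an EPP might instead report a chance-corrected statistic such as Cohen's kappa.

```python
from itertools import combinations

# Hypothetical rubric scores: one dict per rater, keyed by rubric indicator.
ratings = {
    "rater_a": {"ind1": 3, "ind2": 2, "ind3": 4},
    "rater_b": {"ind1": 3, "ind2": 2, "ind3": 3},
    "rater_c": {"ind1": 3, "ind2": 1, "ind3": 4},
}

def percent_agreement(ratings):
    """Fraction of rater pairs giving identical scores, pooled across indicators."""
    raters = list(ratings)
    indicators = ratings[raters[0]]
    agree = total = 0
    for ind in indicators:
        for a, b in combinations(raters, 2):
            total += 1
            if ratings[a][ind] == ratings[b][ind]:
                agree += 1
    return agree / total
```

A value near 1.0 means reviewers "consistently reach the same or very similar decisions"; low values flag indicators worth discussing at the reliability meeting.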
The extent to which a set of operations, tests, or other assessment measures what it is intended to measure.
As operationalized here, content validity is a measure of agreement by our stakeholders concerning the content we offer and what our students really need to know to be successful educators.
Assignment instructions and the rubric are sent to external stakeholders with a survey (QAS pp. 12-13).
Use the Lawshe Method to determine whether the rubric's content addresses what students really need to know.
Each panel member is independently given a list of indicators for individual assessment items and asked to rate each as "Essential," "Useful but not essential," or "Not necessary."
Any item/indicator rated "Essential" by 50% or more of the panel has some degree of content validity; the greater the percentage, the greater the perceived content validity.
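Lawshe's method is usually summarized by the content validity ratio, CVR = (n_e − N/2) / (N/2), where n_e is the number of panelists rating an item "Essential" and N is the panel size. CVR is positive exactly when more than half the panel rates the item Essential, matching the 50% threshold above. The panel size and vote counts below are hypothetical.

```python
def content_validity_ratio(essential_votes, panel_size):
    """Lawshe's CVR: (n_e - N/2) / (N/2), ranging from -1 to +1."""
    return (essential_votes - panel_size / 2) / (panel_size / 2)

# Hypothetical panel of 10 external stakeholders rating three indicators.
essential_counts = {"ind1": 9, "ind2": 5, "ind3": 2}  # "Essential" votes per indicator
cvr = {ind: content_validity_ratio(n, 10) for ind, n in essential_counts.items()}
```

Here ind1 (90% Essential) yields a strongly positive CVR, ind2 sits exactly at the 50% threshold (CVR = 0), and ind3 comes out negative, suggesting it does not address what students need to know.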
Questionnaires completed by P-12 students regarding the performance of teachers and other school professionals. Student surveys are one of the measures that an EPP can use to demonstrate the teaching effectiveness of its candidates and completers.