
EPP Assessment Cycle

Key Components

Key Assessments

Assessments created and administered directly by EPPs that serve as a source of evidence for CAEP standards. They may take the form of subject or pedagogical content tests, observations, projects, assignments, or surveys. EPPs take responsibility for the design, administration, and validation of these assessments.

What do we do with the data?


1. Data are compiled at the conclusion of every semester and disaggregated by program by the Data Analyst.

2. Data are also compiled by assessment so trends can be seen programmatically.

3. References to past semester data (up to 3 cycles) are provided where applicable.
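The compile-and-disaggregate steps above can be sketched as a simple grouping routine. All names and records here are illustrative, not the EPP's actual data pipeline:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical score records: (semester, program, assessment, score)
records = [
    ("Fall 2023", "Elementary Ed", "Portfolio", 3.2),
    ("Fall 2023", "Elementary Ed", "Portfolio", 3.8),
    ("Fall 2023", "Secondary Ed", "Portfolio", 3.5),
    ("Spring 2024", "Elementary Ed", "Portfolio", 3.6),
]

def disaggregate(records):
    """Group scores by (program, assessment), keyed by semester,
    so trends are visible across cycles."""
    grouped = defaultdict(lambda: defaultdict(list))
    for semester, program, assessment, score in records:
        grouped[(program, assessment)][semester].append(score)
    # Average each cell to get one figure per program/assessment/semester
    return {
        key: {sem: round(mean(scores), 2) for sem, scores in by_sem.items()}
        for key, by_sem in grouped.items()
    }

summary = disaggregate(records)
print(summary[("Elementary Ed", "Portfolio")])
# {'Fall 2023': 3.5, 'Spring 2024': 3.6}
```

Keeping prior semesters in the same structure is what allows the "up to 3 cycles" lookback described above.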

What are our EPP's 8 Key Assessments?

Video Reflection

3-day Unit Plan


Dispositions

5-day Unit Plan

Portfolio

10-day Unit Plan

Classroom Management Plan

Impact on Student Learning Assignment

Inter-rater Reliability


Inter-rater reliability is a measure of consistency used to assess the degree to which different judges (or raters) agree in their evaluation (or scoring) decisions of the same phenomenon. Inter-rater reliability is high when reviewers demonstrate that they consistently reach the same or very similar decisions.

The goal: the same work receives consistent scores across repeated assessments, regardless of evaluator.

1. Inter-rater reliability exercises are performed each month by all EPP faculty and adjuncts, and inter-rater reliability is checked for each instrument annually (QAS pp. 12-13).

2. Two sample assignments are uploaded into Taskstream, where faculty assess the assignments and submit a score.

3. Those scores are compiled and shared at an EPP reliability meeting to discuss indicators and best practices for scoring.
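One simple way to summarize the compiled scores from step 3 is pairwise percent agreement: the fraction of rater pairs who gave the same sample assignment an identical score. This is a minimal sketch (the scores are made up, and the EPP may use a different statistic):

```python
from itertools import combinations

def percent_agreement(scores):
    """Fraction of rater pairs giving identical scores to the same artifact."""
    pairs = list(combinations(scores, 2))
    if not pairs:
        return 1.0  # a single rater trivially agrees with itself
    agree = sum(1 for a, b in pairs if a == b)
    return agree / len(pairs)

# Hypothetical faculty scores on one sample assignment (1-4 rubric scale)
sample_scores = [3, 3, 3, 2, 3]
print(percent_agreement(sample_scores))  # 0.6
```

A low agreement value flags an instrument or indicator worth discussing at the reliability meeting.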

What does this process look like?

Content Validity

The extent to which a set of operations, tests, or other assessment measures what it is intended to measure.

As operationalized here, content validity is a measure of agreement by our stakeholders concerning the content we offer and what our students really need to know to be successful educators.

What does it measure?

Assignment instructions and the rubric are sent to external stakeholders along with a survey (QAS pp. 12-13).

The Lawshe Method is used to determine whether the rubric's content addresses what students really need to know.

What is this process?

Each member is independently given a list of indicators for individual assessment items and asked to rate each as "Essential," "Useful but not essential," or "Not necessary."

Any item/indicator perceived as "essential" by 50% or more of this panel has some degree of content validity. The greater the percentage, the greater the perceived content validity.
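The 50% rule above corresponds to Lawshe's content validity ratio (CVR), which is zero when exactly half the panel rates an item essential and grows toward 1 as agreement increases. A minimal sketch (the panel numbers are illustrative):

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe (1975) CVR: (n_e - N/2) / (N/2).
    0.0 means exactly half the panel rated the item "Essential";
    positive values indicate some degree of content validity."""
    half = n_panelists / 2
    return (n_essential - half) / half

# 8 of 10 hypothetical panelists rate an indicator "Essential"
print(content_validity_ratio(8, 10))  # 0.6
```

Items at or above 0.0 meet the 50% threshold described above; the closer the CVR is to 1.0, the greater the perceived content validity.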

Surveys

Questionnaires completed by P-12 students regarding the performance of teachers and other school professionals. Student surveys are one of the measures that an EPP can use to demonstrate the teaching effectiveness of its candidates and completers.

Reviewing the Data
