

Key Math 3


Megan Merony

on 8 November 2012


Transcript of Key Math 3

What is the KeyMath 3 Diagnostic Assessment?

The KeyMath 3 Diagnostic Assessment is a comprehensive, norm-referenced measure of essential mathematical concepts and skills.

Who can administer the test?

The person who administers the test must have completed a bachelor's degree, with course work in the measurement and interpretation of tests and formal training in mathematics. Examiners may include, but are not limited to, educational diagnosticians, school psychologists, and special educators (Connolly 2007, p. ix).

Does it have any special features?

The test has several special features: progress monitoring, alignment with current math standards, broad content coverage, parallel forms, links to a curriculum program, adaptive administration, ASSIST scoring software, and diagnostic and interpretive reports (Connolly 2007, pp. 2-4).

What content does it cover?

The test covers basic concepts such as Numeration, Algebra, Geometry, Measurement, and Data Analysis and Probability (Connolly 2007, p. 5). It also covers operations such as mental computation, addition and subtraction, and multiplication and division, as well as applications, including foundations of problem solving and applied problem solving.

What does it look like when someone administers the test?

See a demonstration of how it looks to administer the test. (Press play in QuickTime Player.)

Reliability

Reliability was measured on three principles: internal consistency, test-retest, and alternate form.

Internal consistency: The test scored fairly well here. Most of the scores fall between the high 70s and the low 90s, with a few 60s mixed in. The middle grades, 6th through 10th, had the highest scores, while grades outside that range scored a little lower.

Standard error of measurement: I found a high rate of standard error across the test. Most values were 1.0 or higher; few were under 1.0.

Alternate form: The alternate-form correlations ranged from the high 70s to the low 90s for all grades.

Test-retest: The test-retest correlations also ranged from the high 70s to the low 90s.

Would I say this test is reliable? I would agree that this test is reliable!

Validity

Validity was measured by construct, content, and criterion measures, but tables and results were shown only for the construct measure, not for content or criterion.

Construct validity: The subtests were compared against one another and against other tests, and the correlations found were greater than .60.

Content and criterion validity: The author mentioned that the test was matched with state standards and was reviewed by experts in the field. The results for content validity were represented in the content blueprint created for the test. Each item was reviewed by the author, the publisher, and other consultants (Connolly 2007, pp. 85-86).

Would I say this test is valid? No; with scores ranging from the 80s to the low 60s, I would say this test is not valid.

Positives

I thought the test easel was easy to read and follow when giving the test to the students. The directions were clear, and you were able to restate the questions to students who did not understand. The pictures were clear and easy to see. The basal and ceiling are clearly defined, along with the starting points for each subtest. The test did not take very long, and it switched content with each subtest, so the student was engaged and on task the entire time. The test was laid out by section and was easy to score on the scoring sheet.

Critiques

The test had several features that I did not find favorable for an administrator. One fault was the number of questions for the lower grades.
For Pre-K through 9th grade there are only one to three questions per grade level, so if students miss one question at their grade level, they are immediately bumped back down. Some of the questions were also two-part questions; if the student answered one part right and one part wrong, the question was still counted as wrong.

Norm Group

The norm group ranged from 4 to 21 years of age, sampled across specific demographic characteristics such as sex, race/ethnicity, parents' education levels, and geographic region. All candidates were English speaking and were not hard of hearing. There were approximately 220 students per group, so the groups may not have represented each grade level appropriately because there were so few in each (Perez 1996, p. 2). The norm group is not representative of special education students; it is geared toward general education students.

Age Range and Materials

The test is designed for students from Pre-K through 12th grade. It was written by Austin J. Connolly and published in 2007. You need the testing manual, easel, and scoring sheet to administer it.

Would I give this test again?

Although this test has its limitations, I would give it again. It allowed me to figure out which skills my student was missing just from how he answered the questions. I would keep in mind, and note on the side, whether he was able to answer part of a question. I think the test is better suited for students in older grades.

Administration and Scoring

How do you administer the test? Administration is simple. You go to the first subtest, Numeration, and find your starting point based on the grade of the child you are testing. I started at question 5 because my student was in second grade. The starting points for the rest of the subtests are based on the ceiling items from the Numeration subtest.

Next, look at the basal and ceiling rules. The basal for each subtest is the first three consecutive correct responses immediately preceding the first incorrect response, or the first item in the subtest; if the student gets the first question wrong, you need to go back to the previous question. The ceiling is the first four consecutive incorrect responses, or the last item in the subtest. Once you have established the starting point, basal, and ceiling, you are ready to begin testing.

The easel should be set up for the student to see, and you should sit to the side of it so you can see what the child sees as well as the questions you are supposed to ask. Keep the scoring sheet behind the easel so the student is not distracted by what you are doing. Some of the questions require you to point to an object, so preview the questions in advance to know which ones those are. Test the child until they reach their ceiling, then move on to the next subtest.

How do you score the test? Each item receives either a 1 or a 0: if the student answers incorrectly, circle the 0 on the score sheet; if the student answers correctly, circle the 1. Each question is clearly marked and includes a description. Once the subtest is complete, calculate the raw score: take the largest ceiling item and subtract the number of errors.
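The ceiling rule and raw-score arithmetic above can be sketched in code. This is only an illustration of the rules as described, assuming a simple list of (item number, 1-or-0 score) pairs for the items actually administered; the function names and data layout are my own and are not part of the published ASSIST scoring software.

```python
def find_ceiling(items):
    """Return the ceiling item: the fourth of four consecutive
    incorrect responses, or the last item administered."""
    streak = 0
    for number, score in items:
        streak = streak + 1 if score == 0 else 0
        if streak == 4:
            return number
    return items[-1][0]


def raw_score(items):
    """Raw score = largest ceiling item minus the number of errors.

    Items below the starting point are never listed, so they are
    implicitly credited as correct, matching the rule above.
    """
    ceiling = find_ceiling(items)
    errors = sum(1 for number, score in items
                 if score == 0 and number <= ceiling)
    return ceiling - errors


# Hypothetical second grader starting at item 5: six correct,
# one miss, one correct, then four consecutive misses (ceiling).
items = [(5, 1), (6, 1), (7, 1), (8, 1), (9, 1), (10, 1),
         (11, 0), (12, 1), (13, 0), (14, 0), (15, 0), (16, 0)]
print(find_ceiling(items))  # ceiling item 16
print(raw_score(items))     # 16 - 5 errors = 11
```

Note how the four-miss streak, not the end of the item list, is what closes the subtest: in the example, testing stops at item 16 even though the subtest may contain more items.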
Once you have the raw score, you can use tables A.1-A.3 in the manual to convert it into a scaled score. To figure the confidence interval, refer to tables A.4-A.6 and find the grade level and subtest. To find the grade/age equivalent, look at tables A.7-A.8: find the raw score for each subtest and move to the far left or far right column to read off the grade/age equivalent. My student's raw scores for each subtest, along with his age equivalents, can be found on the scoring sheet located next to the computer.