Reliability and Validity


Ian Harris, 10 March 2013


Transcript of Reliability and Validity

These two concepts are often a source of confusion, and there are several reasons for this. Firstly, the terms are sometimes treated as interchangeable. Secondly, they have many possible applications. There are also quite a few ways of trying to assess or measure validity and reliability. Add to this the general tendency for people to see methodological issues as dull and secondary to the main event, and the whole thing starts to look like a bit of a fish on a bicycle, or a mule with a spinning wheel.

Yeah, yeah, very funny, but what's your point?

Both images illustrate the idea of invalidity. Something is invalid when it is not appropriate to the task: a bicycle is an invalid means of personal transport for a fish, and a mule is an invalid operator of a spinning wheel!

So, what's the task of research? Making observations, taking measurements, testing hypotheses, doing experiments, drawing conclusions. In fact, the people who study reasoning and argument, the philosophers, will tell you that validity is a property of conclusions: a valid conclusion is one that is securely supported by the evidence presented.

Internal validity: does the data collected allow us to say anything about the original question we asked? Issues in internal validity are usually a matter of poor control of extraneous variables. Think about the schoolboy's spider experiment (below). Poor control over factors influencing the dependent variable, whether the spider jumps or not, makes it invalid for the schoolboy to conclude that spiders hear with their knees.

Criterion or test validity: does the way we operationalised a variable actually measure what it's supposed to? ("Operationalise" is just a fancy word you might see in the literature; it means "to make measurable".) This is a common criticism of studies that use IQ tests to operationalise intelligence.

Sample validity: does the way we put together our sample allow us to say things about the population? Sample biases like self-selection, opportunity limitations and geographical or temporal limitations restrict how far conclusions can be extended or generalised to more diverse populations. Lots of research samples undergraduate students; how far can this sample support conclusions about other sorts of people, e.g. people working on factory production lines or parents of young children?

Ecological validity: does the way we gathered our data fairly represent the way subjects behave in everyday situations? Think about how educational psychologists or psychiatrists assessing cognitive functions like memory take the measurements that they need. Short-term memory duration can be measured using something called the Brown-Peterson technique:

  • Give the subject a list of stimulus material to review.
  • Take the stimulus material away after a fixed amount of time.
  • Ask the subject to count backwards in threes from a high number for a fixed amount of time.
  • Record how many items of stimulus material they can accurately recall.

Whoever uses their short-term memory in such a weird way???

The schoolboy's spider experiment: a schoolboy designed an experiment to show that spiders hear with their knees. First, he placed the spider on a desk and slapped a ruler on the desktop next to the spider and observed it jump. Then he took a pair of nail scissors and cut off the spider's legs. When he slapped the ruler down a second time, the spider remained still. "See," the schoolboy cried. "The spider hears with its knees!"

Ok, so you can have lots of different kinds of validity because there are so many different ways you can mess up drawing your conclusions, I get it! But what about reliability?

A good question, Mr Skinner, but perhaps it's no surprise from someone as concerned with methods as you are! If you used a number of different rulers and tape measures to measure a person's height, you would expect there to be a bit of variation in the results. These variations are down to the sorts of inaccuracies we find in any measurement instrument; we'd say the rulers and tape measures were unreliable. Experimental scientists put a lot of effort into working out how much error this kind of unreliability can cause, so that we can clearly state the limits of accuracy in our measurements. This is a bit like stating the alpha values in an inferential test: it's all about being able to quantify our confidence in our findings.

Well, I can see that this makes rulers unreliable, but social scientists don't use a lot of rulers, or at least not when they are measuring the things they think are important.

Maybe we don't use a lot of rulers, but we use lots of different instruments to make all sorts of different kinds of measurements: questionnaires to collect opinions; trait inventories to measure personalities; heart rate and blood pressure to measure key indicators of healthy function; observations, like those used in the assessment of child development, to measure things like anxiety or attachment security; fMRI and CAT scans to measure brain activity. Each of these has its own sources of unreliability:

  • Questionnaire items may not be understood in the same way by different respondents.
  • Participants rapidly gain insight into the traits being measured (ah hah! So you think I'm either an introvert or an extravert). This introduces demand characteristics which are hard to allow for and control.
  • Physical apparatus like this will always produce different levels of reaction from different participants, which is hard to allow for. So, the very act of taking a blood pressure reading will increase blood pressure in most people, but the amount of increase is difficult to predict and control for.
  • Different observers will not be completely consistent in their judgements about the behaviour being observed.
  • Any mechanical device used for taking measurements or making scans is subject to calibration issues and works at a particular level of sensitivity. These differences are usually well documented and so can be controlled; however, as a source of error and inconsistency they cannot be completely ruled out.

In short, there are no completely reliable procedures, techniques or research instruments, so we have to work with the different sources of unreliability and try to make allowances for them. Ultimately, the fact that we can never completely rule out the effect of unreliable methods means that we must always state our conclusions with caution if we are going to minimise potential criticisms of invalidity.
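The point about quantifying how much error unreliable instruments introduce can be sketched numerically. A minimal sketch, using invented height readings: report the mean of repeated measurements as the best estimate, and the sample standard deviation as the instrument-to-instrument error, which is one common way of stating the limits of accuracy.

```python
import statistics

# Hypothetical heights (cm) for one person, taken with five different
# rulers and tape measures -- invented example data.
readings = [172.8, 173.1, 172.6, 173.4, 172.9]

mean = statistics.mean(readings)     # best estimate of the true height
spread = statistics.stdev(readings)  # sample standard deviation:
                                     # the instrument-to-instrument error

print(f"height = {mean:.1f} cm +/- {spread:.1f} cm")
```

Stating the result as mean plus or minus the standard deviation is exactly the kind of explicit confidence statement the transcript compares to quoting alpha values in an inferential test.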