Evaluating Information and Analyzing Media, work in progress

Prospective course outline, based in large part on lecture slides by Josh Pasek. Still under construction, 2014/15

Stuart Soroka, 30 September 2015

Transcript of Evaluating Information and Analyzing Media, work in progress

Evaluating Information
& Analyzing Media

Data Collection
Make a prediction
Propose a test
Compare test to prediction
Interpret results
Science versus Inference
Even the scientific method does not rely on science alone - it is usually part Science (Hypothesis -> Data Collection -> Data Analysis) and part Inference (Conclusions -> Theory)
Notice something new
Induction versus Deduction
We can also think of the difference between Science and Inference as being about Deduction (Science) and Induction (Inference)
What is good scientific theory?
It should be a natural, testable, overarching explanation for scientific observations
Natural: science cannot use magic, ghosts, gods, or "because I say so" as explanations.
Testable: science must make predictions that are falsifiable (i.e., that could be wrong).
In social science, we aim to make testable hypotheses; but our theories and conclusions are constrained by the approach we take in data collection and analysis.
From Hypotheses to Measures
Turning Social & Media Processes into Quantitative Data
Reliability & Validity
Indexes and Typologies
The Notion of Sampling
Probability & Nonprobability Samples
Content Analysis
Describing Quantitative Data
Phinney & Ong, 2007
How do we turn theory into data?
Theory: a natural, testable, overarching explanation for scientific observations.
Data: some way of capturing whatever it is we are interested in.
We can think of data as having two forms:
Things we can describe
(meanings, histories, symbols, ideologies)
Things we can measure
(attitudes, behaviors, how things relate, frequency of events)
Only the latter can be analyzed scientifically
First, recall that we can only do science on things we can measure:
Gender of students in class
Age of students in class
Amount of smoking in television programs
Political orientation
There are lots of things that we can measure, however...

These are all variables.
They are variables because they (usually) vary.
Some vary over time (abstract variables), and some vary across units (concrete variables).
(And the variables that do not vary within a given sample are simply constants.)
The possible values of a given variable are called attributes:
gender: male, female, androgynous
political orientation: Democrat, Republican, Independent
happiness: very happy, somewhat happy, somewhat unhappy, unhappy
Attributes are characteristics of people or things; they describe people or things in specific ways.
Variables and attributes form the core of scientific measurement.
Variables are how we measure the concepts we care about - and they are the critical link between theory and measurement.
Finding measures for ideas: essentially, first we figure out what idea we are going to think about, and then we find a way to measure it.
Measurement is not simple...
Number of students in class
Age of students in class
Stars in the sky
Artistic merit
Measurement can be difficult because of (a) the challenge of figuring out what to measure, and (b) the challenge of conducting measurement.
The things we want to measure are concepts; the things we actually measure are variables.
In some cases the concept makes variable selection pretty clear, while in other cases the variables associated with the concept are harder to discern.
Overall, we might think of things like this: Big Ideas → Concepts → Variables → Data Collection
An hypothesis lays out the prediction you are making.
Hypotheses are typically about causes and effects.
Variables can be thought of as causes and effects.
For instance...
Watching television ads leads to depression.
"Watching television ads" is the cause; measures of the cause are independent variables (or predictors).
"Depression" is the effect; measures of the effect are dependent variables (or outcomes).
Put differently,
Depression depends on watching television ads
Watching TV ads is proposed as a cause of depression
Watching television ads is independent of depression
Depression is not a proposed cause of watching ads
This module sets out some basic elements in social science inquiry: variables, attributes, concepts, dependent and independent variables, and ways of picturing positive, negative and non-relationships.
Cause and Effect
What is the...
dependent vs independent variable?
cause vs effect?
predictor vs outcome?
Watching more television makes you like the police more.
Watching more television violence makes you mean to others.
The use of a proscenium set reduces the degree to which viewers identify with characters on television.
Picturing causal relationships: plots of a dependent variable against an independent variable, showing a positive relationship, a negative relationship, and no relationship.
Conceptualization is the process by which we get from Theory (Big Ideas) to Hypotheses.
Conceptualization leads us to clear questions.
We get to these clear questions by developing, through conceptualization, a research question.
Research questions:
How does World of Warcraft influence people's social lives?
Are viewers of Fox News less knowledgeable than viewers of CNN?
Can you be addicted to the Internet?
We can answer these questions only if we know what they mean - only if we define the relevant concepts.
What are "social lives," or "knowledge," or "addiction"?
Conceptualization involves defining the specific concepts that are important for our research question.
Will students who read the news perform better in school?
"Students," "reading the news," and "performing better in school" are the concepts important in this research question.
So we might think about...
What level of student? (elementary, middle school, high school, college?)
Do students have to be full time?
What counts as reading the news? (NYT, ABC Nightly News, Huffington Post, People magazine, blogs, Facebook?)
Who counts as a news reader? (Once a day, once a week, less, more?)
How should we measure performance in school? (grades, quality of work, behavior, participation?)
And in the end our question might become...
Will full time college students who read the newspaper daily have higher grades than full time college students who do not?
Defining the relevant concepts makes clear what, exactly, we are studying.
Then we can take a clearly defined research question and turn it into an hypothesis...
Research question: Will full time college students who read the newspaper daily have higher grades than full time college students who do not?
Hypothesis: Full time college students who read the newspaper daily will have higher grades than full time college students who do not.
This hypothesis has to be falsifiable - we have to be able to think about results that would support the hypothesis, but also results that would show it was false.
Operationalization is about turning concepts into measures.
Often, what we aim to do is to take a complex concept, identify a number of its important dimensions, and then find indicators for those dimensions.
Consider a complex concept, like sensationalism in mass media.
Sensationalism could include a number of very different dimensions: whether the story involves violence, or movie stars, or negative information, or political scandal.
Operationalization involves defining the dimensions we care about, and then finding indicators for those dimensions.
How can we measure violence in mass media?
# minutes during which people are fighting
how many guns or knives in media content
whether there is loud arguing in media content
Each of these is a way to operationalize violence in mass media.
And each comes with a set of attributes: # minutes, # guns, yes/no
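As a sketch of how these three operationalizations yield different attribute types, consider the following (the story representation and field names are illustrative assumptions, not from the lecture):

```python
# One hypothetical coded media story; field names are illustrative assumptions.
story = {"fight_minutes": 4.5, "weapons_shown": 3, "loud_arguing": True}

def violence_minutes(s):
    """Ratio-level measure: # minutes during which people are fighting."""
    return s["fight_minutes"]

def violence_weapons(s):
    """Ratio-level count: how many guns or knives appear in the content."""
    return s["weapons_shown"]

def violence_arguing(s):
    """Nominal (yes/no) measure: is there loud arguing?"""
    return "yes" if s["loud_arguing"] else "no"
```

Each function operationalizes the same concept, but the resulting attributes (# minutes, # weapons, yes/no) support different kinds of analysis.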
Concepts can be operationalized in multiple ways - what we are trying to do is to find an operational definition for a concept.
The way we measure a concept matters.

We do not quite have the same information, nor can we analyze things in the same way, if we (a) count minutes of violence, or (b) record whether or not there was violence.

Different response options (or coding options) effectively define a concept differently.
Do I think that more violence leads to Y, or that any violence leads to Y?
There are four types of variables, referred to as levels of measurement:
Concepts can be operationalized in multiple ways.
Every variable is associated with a level of measurement.
And the operationalization of variables needs to adequately capture the concept under consideration.
Whenever possible, attributes should be mutually exclusive and exhaustive.
Nominal variables
Ordinal variables
Interval variables
Ratio variables
Selected Examples / Discussion
We can only do science on things we can measure
Variables and attributes form the core of scientific measurement
To measure something, you need to figure out what it is
To get from theory to measurement, we need to conceptualize and operationalize
Variables can generally be divided into causes (independent) and effects (dependent)
There are many ways to think about a single concept
Conceptualization involves converting a big idea into a manageable hypothesis
This often involves thinking about defining the concepts that are important, so that we can then think about measuring those concepts
Defining concepts makes clear what we are studying; the need for well-defined, measurable concepts also affects how we develop hypotheses
This module describes the process of conceptualization - moving from Theory to Hypothesis.
This module focuses on operationalization - the process by which we move from Hypothesis to Data Collection.
Same Concept, Different Measures
Data Collection
The first part of the social-scientific process involves deduction (we test a theory with data).
The second part of the social-scientific process involves induction (we use data to develop/change a theory).
Deduction vs Induction
Deduction: take an idea and see if it holds up to new facts.
Induction: take a bunch of facts and decide what they suggest.
Deduction vs Induction
The scientific method includes both deduction and induction
Ideas can become scientific theories as they are refined and tested
If conceptualization and operationalization are done well, the resulting data will clarify the theory
We can operationalize in many different ways
Where does theory come from?
Theory can come from inductive or deductive thinking...
But it is deduction that tests social-scientific theory.
This module considers two important topics in the social-scientific method: inductive versus deductive reasoning, and different ways of testing.
When it's time to test an hypothesis, we can consider a range of different types of tests:
(and choosing a test is part of operationalization, since choosing the type of test happens alongside choosing our variables)
Experiments
Surveys
Observational Studies
Mixed methods designs
Manufacturing Consent
Negativity in Election Advertising
Issue Framing
Imagine that the United States is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

(Group 1:)
If Program A is adopted, 200 people will be saved.
If Program B is adopted, there is 1/3 probability that 600 people will be saved, and 2/3 probability that no people will be saved.

(Group 2:)
If Program C is adopted 400 people will die.
If Program D is adopted there is 1/3 probability that nobody will die, and 2/3 probability that 600 people will die.
In nominal variables, attributes are distinguished only by (un-ordered) unique categories.
In ordinal variables, attributes are distinguished by ordered unique categories. Attributes can be rank-ordered, but the distance between attributes is not meaningful.
In interval variables, attributes are ordered, and distances between them are meaningful.
In ratio variables, attributes are ordered, distances between them are meaningful, and so is the zero point.
(Because the zero point is meaningful, we are able to talk about one case having twice as much of X as another case.)
newspaper ownership (Rich Guy A, Rich Guy B...)
television show category (sitcom, drama...)
gender of protagonists (male, female...)
how negative do you think that political ad is (very negative, somewhat negative, not so negative...)
to what degree are audiences likely to identify with the actors (a lot, a little, not at all...)
rate how much you like that show on a scale of 1 to 100
describe the news story on a scale from 1 to 7, where 1 is boring and 7 is exciting
how many people watch that show?
how many pages are in the newspaper?
how many news anchors are female?
Often a concept can be measured in different ways (i.e., using different levels of measurement). Consider nominal, ordinal, interval and ratio-level measures for the following concepts:
negativity in media content
political bias in news reporting
Comparing U.S. & World Covers for TIME Magazine
You would like to make a movie that gets people to question their attitudes about gender
You think about the movies that made you question some important belief, and note that the use of satire was a common feature
Inductive logic
suggests that satire can be a useful tool in getting people to question their values
You have a theory about satire and gender attitudes.
You get people to watch movies, some of which involve satirical treatments of gender issues, and you ask viewers questions about gender
The people who changed their minds are the same ones who watched the satirical movies (confirming your theory)
Deductive logic
confirms that satire can be a useful tool in getting people to question their values
You want to know why people don't like welfare policy
You think about news stories about welfare policy, and note that they all portray Blacks taking advantage of the system
You have a theory about race and welfare in media content
You look at (and code) the content of news articles, and you conduct an experiment in which participants read these news stories and answer questions about welfare policy
Participants who read articles where race is a factor show lower levels of support for welfare
Example 1
Example 2
Inductive logic
suggests that biases in news coverage lead people to make racially-charged judgements about welfare policy
Deductive logic
confirms that news stories make people (majority Whites, at least) dislike welfare because of racially-charged attitudes
Experiments give participants (subjects) different treatments and consider whether the two (or more) treatment groups then differ.

Surveys ask a given group of respondents a series of questions related to the concepts (variables) important in an hypothesis, and look at relationship between individuals' responses
Observational studies record real-world observations on a set of concepts (variables), and look at the relationships between those variables.
(We might generate our own codes, or use already-coded data, as in epidemiological studies, or macroeconomic studies.)

Mixed methods designs combine several of the other approaches.

This module outlines some issues in measurement - reliability and validity - and some issues in measurement error - random error, and systematic error.
As we move from Big Ideas, to Concepts, to Variables we often have to make sacrifices.

In the end, our variables may be suboptimal - they may not perfectly match our Big Ideas.

There sometimes is error - error in our definition of variables (based on Big Ideas), and error in the measurement of the variables.
Imperfect data doesn't mean that we've failed - counting guns is not a perfect measure of violence in media, but it does the trick most of the time.

But we do need to be cautious and thoughtful about how we measure things.
Data error comes in two forms:
Systematic error
Random error
When error is 'random', we don't know when exactly measures are wrong.
When error is 'systematic', we know that error is more likely for certain groups, or in certain situations.

Measure negativity in news stories by counting the number of negatives words versus the number of positive words in text.

The context of words matters, however - so some words that seem negative may not be, and some words that seem positive may not be - our counts may be wrong, but in random ways across news stories.

Count the number of guns in films to capture violence in media.

The number of guns is especially high in police shows, even though the guns may be rarely used; and the number of guns is low (non-existent) in all shows taking place before guns were invented - so we systematically over-estimate violence in some shows, and systematically under-estimate violence in others.
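The word-counting measure of negativity described above can be sketched in a few lines of Python (the word lists here are illustrative assumptions, not a validated sentiment dictionary):

```python
# Toy word lists -- illustrative assumptions, not a validated dictionary.
NEGATIVE = {"crisis", "violence", "scandal", "death"}
POSITIVE = {"success", "victory", "peace", "growth"}

def negativity_score(text):
    """Count negative minus positive words in a news story.

    Context is ignored, so individual scores carry random error,
    as noted above.
    """
    words = text.lower().split()
    neg = sum(1 for w in words if w in NEGATIVE)
    pos = sum(1 for w in words if w in POSITIVE)
    return neg - pos
```

Because this error is random rather than systematic, it should tend to wash out across a large sample of stories.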
Reliability is mostly concerned with random error.

Validity is mostly concerned with systematic error.
Reliability indicates that something is consistent.

There are different types of reliability, for instance...
If you tried to gather the same data again, would you get the same results?

If the answer is yes, your measures exhibit
test-retest reliability
If you tried to gather the same information using a slightly different measure, would you get the same results?

If the answer is yes, your measures exhibit
inter-item reliability
In content analysis, if you had two coders code the same stories, would you get the same results?

If the answer is yes, your measures exhibit
inter-coder reliability
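A minimal sketch of checking inter-coder reliability via percent agreement (real studies usually also report chance-corrected statistics such as Cohen's kappa):

```python
def percent_agreement(coder_a, coder_b):
    """Share of items on which two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("coders must rate the same set of stories")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Two coders rating the same five stories as violent (1) or not (0).
coder_a = [1, 0, 1, 1, 0]
coder_b = [1, 0, 0, 1, 0]
```

Here the coders agree on four of five stories, for 80% agreement.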
Validity indicates that something is accurate.

There are different types of validity, for instance...
Does your measure look like what it is supposed to measure?

If the answer is yes, your measures exhibit
face validity
Does your measure relate to other variables in the way you would expect?

If the answer is yes, your measures exhibit
criterion validity
Data are never perfect.
Researchers should try to minimize both random and systematic errors.
Variables are reliable when the same data could be obtained again.
Variables are valid when the data describe the concept of interest.
Variables should be both reliable and valid.
The Content Analysis of Media Frames: Toward Improving Reliability and Validity
What's in a Frame? A Content Analysis of Media Framing Studies in the World's Leading Communication Journals, 1990-2005
Reliability and Validity:
The Implicit Association Test
Why use an index to measure a concept?
Most variables will have some error.

Often, the errors will be different from one measure to the next.

So combining several related measures can help overcome (some of the) error in individual measures.
If we can successfully minimize both random and systematic error, we will have a measure that exhibits greater reliability and validity.
Why would two (combined) measures be better than one? Consider our measures of violence in media...

Guns on their own is a flawed measure; yelling on its own is a flawed measure; but together they may provide a more reliable and valid measure of violence.
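A small simulation of why combining two flawed indicators helps: each indicator equals the true level of violence plus independent random error, so averaging them shrinks the error. This is a sketch under assumed error distributions, not a claim about real media data:

```python
import random
import statistics

random.seed(42)  # reproducible illustration

truth = [random.gauss(0, 1) for _ in range(1000)]      # true violence levels
guns = [t + random.gauss(0, 1) for t in truth]         # flawed measure 1
yelling = [t + random.gauss(0, 1) for t in truth]      # flawed measure 2
index = [(g + y) / 2 for g, y in zip(guns, yelling)]   # simple two-item index

def error_sd(measure):
    """Standard deviation of measurement error relative to the truth."""
    return statistics.stdev(m - t for m, t in zip(measure, truth))
```

Because the two errors are independent, the index's error standard deviation is roughly 1/sqrt(2) of each individual indicator's.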
There are 3 different types of composite measures:
direct measures of the concept (each variable gets at the whole idea)
different measures of parts of a complex concept (each variable gets at some of the idea)
measures of overlapping concepts (each variable captures too much)
For instance, measuring television viewing using:

a. self-reports
b. Nielsen ratings
c. reports from family members

...produces an index.
For instance, measuring television violence using:

a. punching
b. kicking
c. guns

...produces an index.
For instance, measuring kids who watch TV using:

a. age
b. hours per day in front of a television

...produces a typology.
Combining variables can lead to more accurate measurement
Indexes triangulate a core concept with multiple measures
Typologies use multiple measures to define notable categories
But how do we decide whether items fit together into an index? Often, this requires looking at the correlations between variables....
We deal with correlation in some detail in class. But also see these resources:


This module introduces the idea of building indices in order to overcome weaknesses in individual variables.
Prime Suspects: The Influence of Local Television News on the Viewing Public
Gilliam & Iyengar
Local television news is the public's primary source of public affairs information. News stories about crime dominate local news programming because they meet the demand for "action news." The prevalence of this type of reporting has led to a crime narrative or "script" that includes two core elements: crime is violent and perpetrators of crime are non-white males. We show that this script has become an ingrained heuristic for understanding crime and race. Using a multi-method design, we assess the impact of the crime script on the viewing public. Our central finding is that exposure to the racial element of the crime script increases support for punitive approaches to crime and heightens negative attitudes about African-Americans among white, but not black, viewers. In closing, we consider the implications of our results for intergroup relations, electoral politics, and the practice of journalism.
Also see:
This module covers some general issues in sampling, including units of observation, units of analysis, and sampling error.
Ideally, we would find out what everyone thinks by asking them (all of them).

This is the objective of the census - but the census spends 14 billion dollars to ask 10 questions of all Americans. There may be more efficient ways to gather evidence - not just by asking fewer people, but in the case of media, by content analyzing fewer (rather than all) stories.
Another option is to gather a sample - a group of people/things for which data will be collected.
But not all samples will produce data similar to the population. And we often want to use a sample to generate findings that we can generalize beyond that sample.
If we use an appropriate sampling method, however, we can talk not just about the people/things we sampled, but also about the kinds of people/things we sampled. (Our sample will be representative of a larger group. It will be a representative sample.)
What is the purpose of a representative sample?
If a small group of people looks and acts like a larger group, you can use them to find out about the larger group
A sample is representative if it looks and acts like the population from which it is drawn.
The idea behind a representative sample is that it will lead to the same conclusions as if you had surveyed/coded the entire population.
i.e., findings from a representative sample can be generalized to the population.
Units of Analysis
are the things we are comparing/measuring - typically people, or media stories, etc.
Units of Observation
are the things that we gather data from.
These needn't be the same...
When we are talking about comparing households, we might talk to individuals to collect data about households
Units of analysis and observation help us identify representative samples
And representative samples can always be generalized to make conclusions about the population
You only learn about the kinds of people you sample
Representative samples can always be generalized
Units of analysis identify the things we are comparing
Units of observation index the sources of our data
All samples have error
This module reviews representative sampling (randomness), sampling error, sampling frames, and types of probability and non-probability samples.
One way to generate representative samples is through random sampling.
There will be random errors, but these will cancel out as we increase our (random) sample.
Like rolling dice...
5 rolls
50 rolls
5000 rolls
(This is due to the law of large numbers.)
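The dice illustration can be run directly; with more rolls, the sample mean settles near the true expected value of 3.5 (seeded here only for reproducibility):

```python
import random

def average_roll(n, seed=0):
    """Average of n rolls of a fair six-sided die."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(n)) / n
```

With 5 rolls the average bounces around; with thousands of rolls the random errors cancel out, just as the law of large numbers predicts.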
This principle lets us use much smaller samples to generalize to a population.

If we can choose cases from a population at random, we have a good sense for how much random error we are likely to have.

Even better, with a random sample, all of our error is random error, none of it is systematic error or bias.
Sampling error is a type of random error...
This Quinnipiac poll of 1,544 registered voters was conducted Feb. 1-6, 2012 and has a margin of error of 2.5 percentage points.
The margin of error is 2.5 percentage points if 95 out of 100 random samples would produce results within 2.5 percentage points of the population value.
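The 2.5-point figure can be recovered from the standard formula for a proportion's 95% margin of error (this sketch assumes simple random sampling and the most conservative proportion, p = 0.5):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a sample proportion.

    Assumes simple random sampling; p = 0.5 gives the largest margin.
    """
    return z * math.sqrt(p * (1 - p) / n)

# For the 1,544-respondent poll above, this works out to about 0.025,
# i.e., roughly 2.5 percentage points.
```

Note that the margin shrinks with the square root of n: quadrupling the sample only halves the margin of error.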
The trouble with a random sample is that we don't usually have the entire population to sample from (that is, we don't know the entire population)

We typically have to use a sampling frame.

And the randomness of our error, and the representativeness of our sample, is a function of that sampling frame.
Random is a good thing.
The goal is to identify error, not necessarily to eliminate it.
A sample is only as representative as its sampling frame.
You can’t generalize from a non-probability sample.
The sizes and types of errors present depend on the methods used.
discussion in class
There are two major types of sampling frames:
1. non-probability
2. probability
Simple random sampling

Systematic sampling

Stratified sampling

Cluster sampling

Convenience sampling

Purposive sampling

Quota sampling

Snowball sampling

Volunteer sampling
find a group of individuals who are easy to sample, and sample them
sample based on predetermined criteria (i.e., articles with ...)
identify predetermined groups, and sample more broadly with the aim of matching quotas from each predetermined group
find members of the population, and ask them to point you to others to interview (also called network sampling)
Non-probability samples can be easy to generate, but may not be representative of the population. This is a problem if you want to be able to generalize your findings. But there are some situations in which non-probability samples can be useful. (This is especially true when, for various reasons, we simply cannot get a probability sample.)
Probability samples are intended to match the population from which they are drawn. They allow us to generalize our findings.
purely random selection of cases from the population
identify a continuous process you want to sample from, and select every Nth unit
divide cases into groups (strata), and generate a random sample within groups (where the groups are mutually exclusive)
similar to stratified sampling, but where we sample from some of the clusters/groups/strata only
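The first few probability designs above can be sketched as follows (the story list and group labels are hypothetical, for illustration only):

```python
import random

def simple_random_sample(population, k, seed=0):
    """Simple random sampling: purely random selection of k cases."""
    return random.Random(seed).sample(population, k)

def systematic_sample(population, step, start=0):
    """Systematic sampling: select every Nth unit from an ordered list."""
    return population[start::step]

def stratified_sample(strata, k_per_stratum, seed=0):
    """Stratified sampling: a random sample within each exclusive group."""
    rng = random.Random(seed)
    return {name: rng.sample(cases, k_per_stratum)
            for name, cases in strata.items()}

stories = list(range(100))  # e.g., 100 news stories in publication order
```

Each design gives every case a known chance of selection, which is what lets us generalize from sample to population.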
Content analysis involves looking at a number of variables, coded in a sample of text sources (i.e., news stories), to test an hypothesis.

- Are television programs more violent now than they used to be?
- How are women portrayed in TV sitcoms?
- What is the tone of coverage of foreign leaders?
These are all questions that can be answered with content analysis.

Content analysis uses observational data.

In content-analytic research, we often distinguish between manifest content (which we directly observe) and latent content (which we infer from content).

First steps:

- Identify research question
- Choose units of observation and analysis
- Design a codebook
- Sample media content
- Code data, with an eye on reliability and validity (i.e., inter-coder reliability)
For more information: course text, Chapter 11.
Volunteer sampling: advertise, and use whoever volunteers.
Pulling some ideas together
Gilliam & Iyengar talk about "scripts" in news content.
Those scripts make it easy for journalists to produce content; and easier for readers to understand news.
But scripts can be important in helping people understand and remember information more easily.
Scripts can also be problematic. Gilliam & Iyengar suggest one way; Chomsky suggests another.
This may be why sitcoms are an interesting signal (or driver) of social change - because the sitcom format allows writers to take on what otherwise might be more difficult-to-assimilate subjects.
It might also contribute to "bandwagon" effects in political campaigns.
(and to my ability to predict election outcomes...)
And the value of content analyses more generally, in the study of communication, politics and policy, etc.
Though not all content analyses rely on the existence of a "script" - all they really require is that we take seriously the possibility that the content of media (the words used, the pictures used, etc.) matter for the way in which media represent the world around us, and/or the way in which we understand or learn about that world.
The analysis of data starts with the kind of data we have (its level of measurement).
In class, we'll look at cross-tabulations, bar graphs and histograms, means and by-group means, and correlations.
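The in-class summaries above can be sketched with the standard library (the coded stories below are hypothetical, and the tone scores are illustrative):

```python
from collections import Counter
from statistics import mean

# Hypothetical coded data: each story has an outlet type and a tone score.
stories = [
    {"outlet": "print", "tone": -1},
    {"outlet": "print", "tone": 0},
    {"outlet": "tv", "tone": -2},
    {"outlet": "tv", "tone": -1},
    {"outlet": "tv", "tone": 1},
]

def crosstab(rows, key):
    """Frequency count of a nominal variable (a one-way tabulation)."""
    return Counter(r[key] for r in rows)

def by_group_mean(rows, group, value):
    """Mean of `value` within each level of `group`."""
    groups = {}
    for r in rows:
        groups.setdefault(r[group], []).append(r[value])
    return {g: mean(v) for g, v in groups.items()}
```

Counts suit nominal variables like outlet type; means and by-group means suit interval- and ratio-level variables like tone.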
Negativity in News Content
Reliability and Validity
Expectations (and Findings)
From our work in Excel in class, based on human-coded data in Young and Soroka...
Crime Coverage
Crime coverage tends to focus on violent rather than nonviolent crime.
Environmental Coverage
The coverage of catastrophic environmental issues tends to be linked to weather disasters.
Coverage of Foreign Affairs
The basic principles of social scientific research in communication studies
Work in Progress
Stuart Soroka, Communication Studies, University of Michigan