Evaluating Information and Analyzing Media, work in progress

Prospective course outline, based in large part on lecture slides by Josh Pasek. Still under construction, 2014/15
by Stuart Soroka, on 30 September 2015

Transcript of Evaluating Information and Analyzing Media, work in progress

Evaluating Information & Analyzing Media

[Cycle diagram: Theory -> Hypothesis (make a prediction) -> Data Collection (propose a test) -> Analysis (compare test to prediction) -> Conclusions (interpret results) -> back to Theory]
Science versus Inference
Even the scientific method does not rely on science alone - it is usually part Science (Hypothesis -> Data Collection -> Data Analysis) and part Inference (Conclusions -> Theory, where we notice something new).
Induction versus Deduction
We can also think of the difference between Science and Inference as being about Deduction (Science) and Induction (Inference)
http://www.socialresearchmethods.net/kb/dedind.php
What is good scientific theory?
It should be a natural, testable, overarching explanation for scientific observations
Natural: science cannot use magic, ghosts, gods, or "because I say so" as explanations.
Testable: science must make predictions that are falsifiable (i.e., that could be wrong).
In social science, we aim to make testable hypotheses; but our theories and conclusions are constrained by the approach we take in data collection and analysis.
Operationalization
From Hypotheses to Measures
Conceptualization
Turning Social & Media Processes into Quantitative Data
Reliability & Validity
Indexes and Typologies
The Notion of Sampling
Probability & Nonprobability Samples
Content Analysis
Describing Quantitative Data
Phinney & Ong, 2007
How do we turn theory into data?
Theory: a natural, testable, overarching explanation for scientific observations.
Data: some way of measuring whatever it is we are interested in.
We can think of data as having two forms:
Things we can describe (meanings, histories, symbols, ideologies)
Things we can measure (attitudes, behaviors, how things relate, frequency of events)
Only the latter can be analyzed scientifically.
First, recall that we can only do science on things we measure...
There are lots of things that we can measure, however:
Gender of students in class
Age of students in class
Amount of smoking in television programs
Political orientation
Happiness

These are all variables. They are variables because they (usually) vary.
Some vary over time (abstract variables), and some vary across units (concrete variables).
(And the variables that do not vary within a given sample are simply constants.)
The possible values of a given variable are called attributes:
sex: male, female, androgynous
political orientation: Democrat, Republican, Independent
happiness: very happy, somewhat happy, somewhat unhappy, unhappy
These attributes are characteristics of people or things; they describe people or things in specific ways.
Variables and attributes form the core of scientific measurement.
Variables are how we measure the concepts we care about - and concepts are the critical link between theory and measurement.
concepts = ideas; variables = measures for ideas
Essentially, first we figure out what idea we are going to think about, and then we find a way to measure it.
But measurement is not simple...
Easy: number of students in class, age of students in class
Hard: happiness, stars in the sky, artistic merit
Measurement can be difficult because of (a) the challenge of figuring out what to measure, and (b) the challenge of conducting measurement.
The things we want to measure
The things we actually measure
In some cases the concept makes variable selection pretty clear, while in other cases the variables associated with the concept are harder to discern.
Overall, we might think of things like this:
[Diagram: Theory (Big Ideas) -> conceptualization -> Hypothesis (Concepts) -> operationalization -> Data Collection (Variables, Attributes)]
The hypothesis lays out the prediction you are making.
Hypotheses are typically about causes and effects. So concepts can be thought of as causes and effects.
For instance... Watching television ads leads to depression.
Watching television ads is the cause; measures of it are independent variables (or predictors).
Depression is the effect; measures of it are dependent variables (or outcomes).
Put differently:
Depression depends on watching television ads (watching TV ads is proposed as a cause of depression), and watching television ads is independent of depression (depression is not a proposed cause of watching ads).
This module sets out some basic elements in social science inquiry: variables, attributes, concepts, dependent and independent variables, and ways of picturing positive, negative and non-relationships.
Cause and Effect
For each of the following, what is the dependent vs. independent variable? The cause vs. effect? The predictor vs. outcome?
Watching more television makes you like the police more.
Watching more television violence makes you mean to others.
The use of a proscenium set reduces the degree to which viewers identify with characters on television.
Picturing causal relationships
[Plots: the dependent variable (y-axis) against the independent variable (x-axis), showing a positive relationship, a negative relationship, and no relationship]
Conceptualization is the process by which we get from Theory (Big Ideas) to Hypothesis (Concepts).
Conceptualization leads us to clear questions. We get to these clear questions by developing, through conceptualization, a research question.
Research questions:
How does World of Warcraft influence people's social lives?
Are viewers of Fox News less knowledgeable than viewers of CNN?
Can you be addicted to the Internet?
We can answer these questions only if we know exactly what they mean - only if we define the relevant concepts.
What are "social lives," or "knowledge," or "addiction"?
So conceptualization involves defining the specific concepts that are important for our research question.
Will students who read the news perform better in school?
"Students," "reading the news," and "performing better in school" are the concepts important in this research question.
So we might think about...
What level of student? (elementary, middle school, high school, college?)
Do students have to be full time?
What counts as reading the news? (NYT, ABC Nightly News, Huffington Post, People magazine, blogs, Facebook?)
Who counts as a news reader? (Once a day, once a week, less, more?)
How should we measure performance in school? (grades, quality of work, behavior, participation?)
And in the end our question might become...
Will full-time college students who read the newspaper daily have higher grades than full-time college students who do not?
Defining the relevant concepts makes clear what, exactly, we are studying.
Then we can take a clearly defined research question and turn it into an hypothesis...
Full-time college students who read the newspaper daily will have higher grades than full-time college students who do not.
This hypothesis has to be falsifiable - we have to be able to think about results that would support the hypothesis, but also results that would show it was false.
http://isites.harvard.edu/fs/docs/icb.topic1063339.files/Phinney.Ong.2007.pdf
[Diagram: Theory (Big Ideas) -> conceptualization -> Hypothesis (Concepts) -> operationalization -> Data Collection (Variables, Attributes)]
Operationalization is about turning concepts into variables.
Often, what we aim to do is to take a complex concept, identify a number of important dimensions, and then find indicators for those dimensions.
complex concept -> dimension -> indicator
Consider a complex concept, like sensationalism in mass media.
Sensationalism could include a number of very different dimensions: whether the story involves violence, or movie stars, or negative information, or political scandal.
sensationalism -> violence -> indicator
Operationalization involves defining the dimensions we care about, and then finding indicators for those dimensions.
How can we measure violence in mass media? Possible indicators:
# minutes during which people are fighting
how many guns or knives in media content
whether there is loud arguing in media content
Each of these is a way to operationalize violence in mass media.
And each comes with a set of attributes: # minutes, # guns, yes/no.
Concepts can be operationalized in multiple ways - what we are trying to do is to find an operational definition for a concept.
The way we measure a concept matters.
We do not quite have the same information, nor can we analyze things in the same way, if we (a) count minutes of violence, or (b) record whether or not there was violence.
Different response options (or coding options) effectively define a concept differently. Do I think that more violence leads to Y, or that any violence leads to Y?
Concepts can be operationalized in multiple ways.
Every variable is associated with a level of measurement.
And the operationalization of variables needs to adequately capture the concept under consideration.
Whenever possible, attributes should be mutually exclusive and exhaustive.
There are four types of attributes, referred to as levels of measurement:
Nominal variables
Ordinal variables
Interval variables
Ratio variables
Fangate
http://thedailyshow.cc.com/videos/0x77un/democalypse-2014---the-last-perspiration-of-crist
http://faculty.wcas.northwestern.edu/~jnd260/pub/Druckman%20JOP%202003.pdf
http://www.slate.com/articles/news_and_politics/politics/2014/10/rick_scott_charlie_crist_and_a_fan_campaigns_have_fought_over_the_rules.html
http://content.time.com/time/nation/article/0,8599,2021078,00.html
Selected Examples / Discussion
Topics
We can only do science on things we can measure
Variables and attributes form the core of scientific measurement
To measure something, you need to figure out what it is
To get from theory to measurement, we need to conceptualize and operationalize
Variables can generally be divided into causes (independent) and effects (dependent)
There are many ways to think about a single concept.
Conceptualization involves converting a big idea into a manageable hypothesis.
This often involves defining the concepts that are important, so that we can then think about measuring those concepts.
Defining concepts makes clear what we are studying; the need for well-defined, measurable concepts also affects how we develop hypotheses.
This module describes the process of conceptualization - moving from Theory to Hypothesis.
This module focuses on operationalization - the process by which we move from Hypothesis to Data Collection.
Same Concept, Different Measures
[Cycle diagram: Theory -> Hypothesis -> Data Collection -> Analysis -> Conclusions -> Theory]
The first part of the social-scientific process involves deduction (we test a theory with data).
The second part of the social-scientific process involves induction (we use data to develop/change a theory).
Deduction vs Induction
Deduction: take an idea and see if it holds up to new facts.
Induction: take a bunch of facts and decide what they suggest.
Deduction vs Induction
The scientific method includes both deduction and induction
Ideas can become scientific theories as they are refined and tested
If conceptualization and operationalization are done well, the resulting data will clarify the theory
We can operationalize in many different ways
Where does theory come from?
Theory can come from inductive or deductive thinking...
But it is deduction that tests social-scientific theory.
This module considers two important topics in the social-scientific method: inductive versus deductive reasoning, and different ways of testing.
When it's time to test an hypothesis, we can consider a range of different types of tests (and choosing a test is part of operationalization, since choosing the type of test happens alongside choosing our variables):
Surveys
Experiments
Observational Studies
Mixed methods designs
Manufacturing Consent
Negativity in Election Advertising
Issue Framing
http://en.wikipedia.org/wiki/Framing_(social_sciences)
http://www.jstor.org/stable/1685855
Imagine that the United States is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:

(Group 1:)
If Program A is adopted, 200 people will be saved.
If Program B is adopted, there is 1/3 probability that 600 people will be saved, and 2/3 probability that no people will be saved.

(Group 2:)
If Program C is adopted, 400 people will die.
If Program D is adopted, there is 1/3 probability that nobody will die, and 2/3 probability that 600 people will die.
In nominal variables, attributes are distinguished only by (un-ordered) unique categories.
In ordinal variables, attributes are distinguished by ordered unique categories. Attributes can be rank-ordered, but the distance between attributes is not meaningful.
In interval variables, attributes are ordered, and distances between them are meaningful.
In ratio variables, attributes are ordered, distances between them are meaningful, and so is the zero point.
(Because the zero point is meaningful, we are able to talk about one case having twice as much of X as another case.)
Nominal examples:
newspaper ownership (Rich Guy A, Rich Guy B...)
television show category (sitcom, drama...)
gender of protagonists (male, female...)
Ordinal examples:
how negative do you think that political ad is (very negative, somewhat negative, not so negative...)
to what degree are audiences likely to identify with the actors (a lot, a little, not at all...)
Interval examples:
rate how much you like that show on a scale of 1 to 100
describe the news story on a scale from 1 to 7, where 1 is boring and 7 is exciting
Ratio examples:
how many people watch that show?
how many pages are in the newspaper?
how many news anchors are female?
Often a variable can be measured in different ways (i.e., using different levels of measurement). Consider nominal, ordinal, interval, and ratio-level variables for the following concepts:
negativity in media content
political bias in news reporting
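To make these contrasts concrete, here is a minimal sketch (in Python, with a hypothetical coding scheme) of negativity in media content operationalized at each of the four levels of measurement:

```python
# One news story's negativity, coded four different ways -- one per
# level of measurement. (Hypothetical coding scheme, for illustration.)

story = {
    # Nominal: unordered categories
    "tone_category": "attack",            # vs. "issue", "biographical"
    # Ordinal: ordered categories; distances between them are not meaningful
    "tone_rating": "somewhat negative",   # very / somewhat / not negative
    # Interval: ordered, with meaningful distances (but no true zero)
    "tone_scale": 62,                     # coder rating on a 0-100 scale
    # Ratio: meaningful distances AND a meaningful zero point
    "negative_word_count": 14,            # 0 means no negative words at all
}

# Only the ratio measure supports "twice as much" claims: 28 negative
# words is twice 14, but a scale score of 80 is not "twice" a score of 40.
```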
http://www.livingroomcandidate.org
Comparing U.S. & World Covers for TIME Magazine
http://truththeory.com/2013/09/26/stunning-comparing-u-s-world-covers-for-time-magazine/
Example 1:
You would like to make a movie that gets people to question their attitudes about gender.
You think about the movies that made you question some important belief, and note that the use of satire was a common feature.
Inductive logic suggests that satire can be a useful tool in getting people to question their values.
You have a theory about satire and gender attitudes.
You get people to watch movies, some of which involve satirical treatments of gender issues, and you ask viewers questions about gender.
The people who changed their minds are the same ones who watched the satirical movies (confirming your theory).
Deductive logic confirms that satire can be a useful tool in getting people to question their values.
Example 2:
You want to know why people don't like welfare policy.
You think about news stories about welfare policy, and note that they all portray Blacks taking advantage of the system.
Inductive logic suggests that biases in news coverage lead people to make racially-charged judgements about welfare policy.
You have a theory about race and welfare in media content.
You look at (and code) the content of news articles, and you conduct an experiment in which participants read these news stories and answer questions about welfare policy.
Participants who read articles where race is a factor show lower levels of support for welfare.
Deductive logic confirms that news stories make people (majority Whites, at least) dislike welfare because of racially-charged attitudes.
Experiments give participants (subjects) different treatments and consider whether the two (or more) treatment groups then differ.

Surveys ask a given group of respondents a series of questions related to the concepts (variables) important in an hypothesis, and look at the relationships between individuals' responses.
Observational studies record real-world observations on a set of concepts (variables), and look at the relationships between those variables.
(We might generate our own codes, or use already-coded data, as in epidemiological or macroeconomic studies.)

Mixed methods designs combine several of the other approaches.

http://annenberg.usc.edu/pages/~/media/MDSCI/Gender_Inequality_in_500_Popular_Films_-_Smith_2013.ashx
http://deepblue.lib.umich.edu/bitstream/handle/2027.42/83429/2003.Anderson_etal.InfluenceofMediaViolenceonYouth.PsychologicalScienceinthePublicInterest.pdf?sequence=1
http://www.people-press.org/2014/10/23/as-midterms-near-gop-leads-on-key-issues-democrats-have-a-more-positive-image/
http://www.jstor.org/stable/586283
This module outlines some issues in measurement - reliability and validity - and some issues in measurement error - random error, and systematic error.
As we move from Big Ideas, to Concepts, to Variables we often have to make sacrifices.

In the end, our variables may be suboptimal - they may not perfectly match our Big Ideas.

There sometimes is error - error in our definition of variables (based on Big Ideas), and error in the measurement of the variables.
1
Imperfect data doesn't mean that we've failed - counting guns is not a perfect measure of violence in media, but it does the trick most of the time.

But we do need to be cautious and thoughtful about how we measure things.
2
Data error comes in two forms: systematic error and random error.
When error is 'random', we don't know when exactly measures are wrong.
When error is 'systematic', we know that error is more likely for certain groups, or in certain situations.
Example:
Measure negativity in news stories by counting the number of negative words versus the number of positive words in the text.
The context of words matters, however - so some words that seem negative may not be, and some words that seem positive may not be. Our counts may be wrong, but in random ways across news stories.
Example:

Count the number of guns in films to capture violence in media.

The number of guns is especially high in police shows, even though the guns may be rarely used; and the number of guns is low (non-existent) in all shows taking place before guns were invented - so we systematically over-estimate violence in some shows, and systematically under-estimate violence in others.
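As a rough sketch of the word-count measure from the random-error example above (the tiny word lists are placeholders; real sentiment dictionaries contain thousands of entries):

```python
# Dictionary-based negativity: the share of sentiment-bearing words
# that are negative. Word lists here are tiny placeholders.

NEGATIVE = {"crisis", "scandal", "attack", "failure", "corrupt"}
POSITIVE = {"success", "progress", "win", "support", "strong"}

def net_negativity(text: str) -> float:
    """Negative words as a share of all sentiment-bearing words (0 to 1)."""
    words = [w.strip(".,!?;:") for w in text.lower().split()]
    neg = sum(w in NEGATIVE for w in words)
    pos = sum(w in POSITIVE for w in words)
    return neg / (neg + pos) if (neg + pos) else 0.0

# Context is ignored entirely: "the attack on poverty was a success"
# counts one negative and one positive word. That is the random error
# described above -- wrong sometimes, but not wrong in one direction.
print(net_negativity("Scandal and failure mark a corrupt campaign."))  # 1.0
```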
Reliability is mostly concerned with random error.

Validity is mostly concerned with systematic error.
Reliability indicates that something is consistent.
There are different types of reliability, for instance...
Reliability
If you tried to gather the same data again, would you get the same results? If the answer is yes, your measures exhibit test-retest reliability.
If you tried to gather the same information using a slightly different measure, would you get the same results? If the answer is yes, your measures exhibit inter-item reliability.
In content analysis, if you had two coders code the same stories, would you get the same results? If the answer is yes, your measures exhibit inter-coder reliability.
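As a rough illustration (with hypothetical data), two of these checks might be computed like this: test-retest reliability as a correlation between two waves of the same measure, and inter-coder reliability as simple percent agreement. Chance-corrected statistics such as Cohen's kappa or Krippendorff's alpha are the usual next step for coder agreement.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Test-retest: the same five respondents, measured twice (hypothetical).
wave1 = [3, 5, 2, 4, 4]
wave2 = [3, 4, 2, 5, 4]
print(f"test-retest r = {pearson_r(wave1, wave2):.2f}")   # 0.81

# Inter-coder: two coders code ten stories as violent (1) or not (0).
coder_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
coder_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"percent agreement = {agreement:.0%}")             # 90%
```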
Validity indicates that something is accurate.
There are different types of validity, for instance...
Validity
Does your measure look like what it is supposed to measure? If the answer is yes, your measures exhibit face validity.
Does your measure relate to other variables in the way you would expect? If the answer is yes, your measures exhibit criterion validity.
Data are never perfect.
Researchers should try to minimize both random and systematic errors.
Variables are reliable when the same data could be obtained again.
Variables are valid when the data describe the concept of interest.
Variables should be both reliable and valid.
http://www.socialresearchmethods.net/kb/relandval.php
Examples:
The Content Analysis of Media Frames: Toward Improving Reliability and Validity
http://onlinelibrary.wiley.com/doi/10.1111/j.1460-2466.2008.00384.x/abstract
What's in a Frame? A Content Analysis of Media Framing Studies in the World's Leading Communication Journals, 1990-2005
http://jmq.sagepub.com/content/86/2/349.short
Reliability and Validity: The Implicit Association Test
https://implicit.harvard.edu/implicit/research/
Why use an index to measure a concept?
Most variables will have some error.

Often, the errors will be different from one measure to the next.

So combining several related measures can help overcome (some of the) error in individual measures.
If we can successfully minimize both random and systematic error, we will have a measure that exhibits greater reliability and validity.
Why would two (combined) measures be better than one? Consider our measures of violence in media...
Guns on their own are a flawed measure; yelling on its own is a flawed measure; but together they may provide a more reliable and valid measure of violence.
There are 3 different types of composite measures:
direct measures of the concept (each variable gets at the whole idea)
different measures of parts of a complex concept (each variable gets at some of the idea)
measures of overlapping concepts (each variable captures too much)
For instance, measuring television viewing using:
a. self-reports
b. Nielsen ratings
c. reports from family members
...produces an index.
For instance, measuring television violence using:
a. punching
b. kicking
c. guns
...produces an index.
For instance, measuring kids who watch TV using:
a. age
b. hours per day in front of a television
...produces a typology.
Combining variables can lead to more accurate measurement
Indexes triangulate a core concept with multiple measures
Typologies use multiple measures to define notable categories
But how do we decide whether items fit together into an index? Often, this requires looking at the correlations between variables (see the sketch below)...
We deal with correlation in some detail in class. But also see these resources:

http://en.wikipedia.org/wiki/Correlation_and_dependence

http://www.mathsisfun.com/data/correlation.html
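Here is a minimal sketch of that check (with hypothetical episode-level data): compute the correlation between two flawed violence measures, and, if they fit together, average the standardized items into an index:

```python
import statistics

guns    = [0, 2, 5, 1, 7, 3, 0, 4]   # guns visible per episode (hypothetical)
yelling = [1, 3, 6, 1, 8, 2, 0, 5]   # loud arguments per episode (hypothetical)

def pearson_r(x, y):
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / (sum((a - mx) ** 2 for a in x) ** 0.5 *
                  sum((b - my) ** 2 for b in y) ** 0.5)

# A strong positive correlation suggests the two items tap the same
# underlying concept, so combining them is defensible.
print(f"r = {pearson_r(guns, yelling):.2f}")

def zscores(x):
    """Standardize so items on different scales can be averaged."""
    m, s = statistics.mean(x), statistics.pstdev(x)
    return [(v - m) / s for v in x]

# The index: the mean of the standardized items, one score per episode.
violence_index = [(g + y) / 2 for g, y in zip(zscores(guns), zscores(yelling))]
```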
This module introduces the idea of building indexes in order to overcome weaknesses in individual variables.
Prime Suspects: The Influence of Local Television News on the Viewing Public
http://www.jstor.org/stable/2669264
Gilliam & Iyengar
Local television news is the public's primary source of public affairs information. News stories about crime dominate local news programming because they meet the demand for "action news." The prevalence of this type of reporting has led to a crime narrative or "script" that includes two core elements: crime is violent and perpetrators of crime are non-white males. We show that this script has become an ingrained heuristic for understanding crime and race. Using a multi-method design, we assess the impact of the crime script on the viewing public. Our central finding is that exposure to the racial element of the crime script increases support for punitive approaches to crime and heightens negative attitudes about African-Americans among white, but not black, viewers. In closing, we consider the implications of our results for intergroup relations, electoral politics, and the practice of journalism.
Also see:
http://crx.sagepub.com/content/27/5/547.abstract
This module covers some general issues in sampling, including generalization, units of observation and units of analysis, and sampling error.
Ideally, we would find out what everyone thinks by asking them (all of them).

This is the objective of the
census
- but the census spends 14 billion dollars to ask 10 questions of all Americans. There may be more efficient ways to gather evidence - not just by asking fewer people, but in the case of media, by content analyzing fewer (rather than all) stories.
Another option is to gather a sample - a group of people/things for which data will be collected.
But not all samples will produce data similar to the population. And we often want to use a sample to generate findings that we can generalize beyond that sample.
If we use an appropriate sample, however, we can talk not just about the people/things we sampled, but also about the kinds of people/things we sampled. (Our sample will be representative of a larger group. It will be a representative sample.)
What is the purpose of a representative sample?
If a small group of people looks and acts like a larger group, you can use them to find out about the larger group
A sample is representative if it looks and acts like the population from which it is drawn.
The idea behind a representative sample is that it will lead to the same conclusions as if you had surveyed/coded the entire population.
i.e., findings from a representative sample can be generalized to the population.
Units of Analysis are the things we are comparing/measuring - typically people, or media stories, etc.
Units of Observation are the things that we gather data from.
These needn't be the same: when we are comparing households, we might talk to individuals to collect data about households.
Units of analysis and observation help us identify representative samples
And representative samples can always be generalized to make conclusions about the population
You only learn about the kinds of people you sample
Representative samples can always be generalized
Units of analysis identify the things we are comparing
Units of observation index the sources of our data
All samples have error
This module reviews representative sampling (randomness), sampling error, sampling frames, and types of probability and non-probability samples.
One way to generate representative samples is through random sampling.
There will be random errors, but these will cancel out as we increase our (random) sample.
Like rolling dice... compare the average of 5 rolls, 50 rolls, and 5000 rolls.
(This is due to the law of large numbers.)
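A quick simulation of those dice rolls (seeded so it is reproducible) shows the sample mean settling toward the true mean of 3.5 as the number of rolls grows:

```python
import random

random.seed(1)  # reproducible illustration

for n in (5, 50, 5000):
    rolls = [random.randint(1, 6) for _ in range(n)]
    print(f"{n:>5} rolls: mean = {sum(rolls) / n:.3f}")

# Small samples bounce around 3.5; the large sample sits very close to it.
```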
This principle lets us use much smaller samples to generalize to a population.

If we can choose cases from a population at random, we have a good sense for how much random error we are likely to have.

Even better, with a random sample, all of our error is random error, none of it is systematic error or bias.
Sampling error is a type of random error...
This Quinnipiac poll of 1,544 registered voters was conducted Feb. 1-6, 2012 and has a margin of error of 2.5 percentage points.
The margin of error is 2.5 percentage points if 95 out of 100 random samples would produce results within 2.5 percentage points of the population value.
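For a simple random sample, that margin of error can be approximated from the sample size alone. A minimal sketch, using the conservative assumption p = 0.5:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a sample proportion."""
    return z * math.sqrt(p * (1 - p) / n)

print(f"{margin_of_error(1544):.1%}")  # ~2.5%, matching the poll above
```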
The trouble with a random sample is that we don't usually have the entire population to sample from (that is, we don't know the entire population).
We typically have to use a sampling frame.
And the randomness of our error, and the representativeness of our sample, is a function of that sampling frame.
Random is a good thing.
The goal is to identify error, not necessarily to eliminate it.
A sample is only as representative as its sampling frame.
You can’t generalize from a non-probability sample.
The sizes and types of errors present depend on the methods used.
discussion in class
There are two major types of sampling frames: probability and non-probability.

Probability samples are intended to match the population from which they are drawn. They allow us to generalize our findings.
Simple random sampling: purely random selection of cases from the population
Systematic sampling: identify a continuous process you want to sample from, and select every Nth unit
Stratified sampling: divide cases into groups (strata), and generate a random sample within groups (where the groups are mutually exclusive)
Cluster sampling: similar to stratified sampling, but where we sample from some of the clusters/groups/strata only

Non-probability samples can be easy to generate, but may not be representative of the population. This is a problem if you want to be able to generalize your findings. But there are some situations in which non-probability samples can be useful. (This is especially true when, for various reasons, we simply cannot get a probability sample.)
Convenience sampling: find a group of individuals who are easy to sample, and sample them
Purposive sampling: sample based on predetermined criteria (i.e., articles with X)
Quota sampling: identify predetermined groups, and sample more broadly with the aim of matching quotas from each predetermined group
Snowball sampling: find members of the population, and ask them to point you to others to interview (also called network sampling)
Volunteer sampling: advertise, and use whomever volunteers
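To make the probability designs concrete, here is a minimal sketch (with a hypothetical population of 1,000 media stories) of simple random, systematic, and stratified sampling:

```python
import random

random.seed(2)
population = [{"id": i, "outlet": random.choice(["TV", "print", "web"])}
              for i in range(1000)]

# Simple random sampling: every unit has an equal chance of selection.
srs = random.sample(population, 50)

# Systematic sampling: a random start, then every 20th unit.
start = random.randrange(20)
systematic = population[start::20]          # 1000 / 20 = 50 units

# Stratified sampling: a random sample within each (mutually exclusive) stratum.
stratified = []
for outlet in ("TV", "print", "web"):
    stratum = [u for u in population if u["outlet"] == outlet]
    stratified += random.sample(stratum, 15)
```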
Content analysis involves looking at a number of variables, coded in a sample of text sources (i.e., news stories), to test an hypothesis.

- Are television programs more violent now than they used to be?
- How are women portrayed in TV sitcoms?
- What is the tone of coverage of foreign leaders?
These are all questions that can be answered with content analysis.

Content analysis uses observational data.

When conceptualizing and operationalizing content-analytic research, we often distinguish between manifest content (which we directly observe) and latent content (which we infer from content).

First steps:
- Identify research question
- Choose units of observation and analysis
- Design a codebook
- Sample media content
- Code data, with an eye on reliability and validity (i.e., inter-coder reliability)
more information:

text, Chapter 11.
http://academic.csuohio.edu/kneuendorf/content/resources/flowc.htm
http://pareonline.net/getvn.asp?v=7&n=17
http://depts.washington.edu/uwmcnair/chapter11.content.analysis.pdf
http://books.google.ca/books?hl=en&lr=&id=s_yqFXnGgjQC&oi=fnd&pg=PR1&dq=content+analysis&ots=b1YWYYppCY&sig=WpFDwTs80Li9s_36qq2A4VU3GAs#v=onepage&q=content%20analysis&f=false
Pulling some ideas together
Gilliam & Iyengar talk about "scripts" in news content.
Those scripts make it easy for journalists to produce content, and easier for readers to understand news.
Scripts can be important in helping people understand and remember information more easily.
But scripts can also be problematic: Gilliam & Iyengar suggest one way; Chomsky suggests another.
http://www.snsoroka.com/files/2014Fournier.pdf
This may be why sitcoms are an interesting signal (or driver) of social change - because the sitcom format allows writers to take on what otherwise might be more difficult-to-assimilate subjects.
It might also contribute to "bandwagon" effects in political campaigns.
(and to my ability to predict election outcomes...)
http://www.cfhi-fcass.ca/sf-docs/default-source/commissioned-research-reports/Soroka1-EN.pdf?sfvrsn=0
And the value of content analyses more generally, in the study of communication, politics and policy, etc.
Though not all content analyses rely on the existence of a "script" - all they really require is that we take seriously the possibility that the content of media (the words used, the pictures used, etc.) matters for the way in which media represent the world around us, and/or the way in which we understand or learn about that world.
http://www.snsoroka.com/files/2013SorokaRedkoAlbaugh.pdf
http://www.snsoroka.com/files/2013DaignaultSorokaGiasson.pdf
http://www.snsoroka.com/files/2013Tiffenetal.pdf
The analysis of data starts with the kind of data we have: discrete or continuous.
In class, we'll look at cross-tabulations, bar graphs and histograms, means and by-group means, and correlations.
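As a preview, here is a minimal sketch (with hypothetical coded stories) of two of those descriptions: a cross-tabulation for discrete variables, and by-group means for a continuous variable grouped by a discrete one:

```python
from collections import Counter

stories = [
    {"topic": "crime",   "network": "A", "tone": -0.4},
    {"topic": "crime",   "network": "B", "tone": -0.7},
    {"topic": "economy", "network": "A", "tone":  0.1},
    {"topic": "economy", "network": "B", "tone": -0.2},
    {"topic": "crime",   "network": "A", "tone": -0.5},
]

# Cross-tab: counts of topic by network (discrete x discrete).
crosstab = Counter((s["topic"], s["network"]) for s in stories)
print(crosstab)   # e.g., ('crime', 'A') appears twice

# By-group means: average tone by topic (continuous by discrete).
for topic in ("crime", "economy"):
    tones = [s["tone"] for s in stories if s["topic"] == topic]
    print(topic, round(sum(tones) / len(tones), 2))
```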
Negativity in News Content
Reliability and Validity
http://www.snsoroka.com/files/2012YoungSoroka(PolComm).pdf
Expectations (and Findings)
http://www.snsoroka.com/files/GatekeepingJOP.pdf
From our work in Excel in class, based on human-coded data in Young and Soroka...
these results point to some other findings in communication...
Crime Coverage: crime coverage tends to focus on violent rather than nonviolent crime.
Environmental Coverage: the coverage of catastrophic environmental issues tends to be linked to weather disasters.
Coverage of Foreign Affairs: ?
The basic principles of social scientific research in communication studies
Work in Progress
Stuart Soroka, Communication Studies, University of Michigan