Selection of the Problem
by Rhondda Waddell
26 January 2015

Transcript of Selection of the Problem

Developing the Research Proposal
Selection of the Problem
Some factors that should be considered in the selection of the problem (Bailey, 1994).
Theoretical and Practical Values
Paradigms: Positivism; Social Constructivism; Postpositivism
Qualitative Research Methodology
Types of approaches: Document Review, Observation, Focus Group, Case Study, Photography, etc.
Reactivity
A Method of Data Collection.....
Unit of Analysis
Time Frame
Budget
Components Comprising the Setting of the Problem
Limitations
Delimitations
Assumptions
Example of a List of Properly Expressed Research Ideas
1. What drugs are most frequently abused by students enrolled in junior high schools in Gainesville?

2. What factors play a role in the low compliance rate among males and females at an Eastside primary care clinic?

3. What are the health fears of residents living near a Superfund site, the Koppers Plant, in Gainesville?
Interest
Operability
Scope
The Problem
The researcher should be interested in pursuing the problem area. It should relate to the background and career interests of the student and help develop useful skills for the future.
The nature of the problem should be such that the researcher has both the resources and the time available to complete the project.
Scope. While the research problem should not attempt to solve all the social dilemmas of the world, neither should it be so small that it excludes the variables necessary for adequate results.
Theoretical and practical value. The research should contribute to the field of social work, perhaps through publication, and be of benefit to social service practitioners.
Values of the researcher. The myth of value-free research is just that, a myth. The student of research should be aware that in addition to being unstable, values may prejudice the research effort to the degree that all objectivity is lost. Note that even the selection of a problem is value laden.
Research Methodology. Every researcher has a philosophy of research that affects procedure. Herein the student must be certain that the qualitative questions are well written and that the appropriate criteria are used to interpret the data and reach conclusions.
Reactivity. The method of data collection should be scrutinized for reactivity. That is, a reactive technique brings about a reaction on the part of those being studied in a way that affects the data. The reactive effect is commonly labeled the Hawthorne effect, after the study at the Hawthorne plant of the Western Electric Company in Chicago, where it was found that worker productivity increased simply because the personnel were being observed.
In research the unit of analysis may be an individual or an entire population, such as in a study of the health habits of a single anorexic patient or a search for patterns among a hospital's anorexic population. The researcher must ascertain which is most appropriate and whether resources are available to collect the data.
Time frame. This is particularly important to the student because only a limited amount of time is usually available. In a cross-sectional study a particular population is involved at a single point in time, whereas a longitudinal time frame involves data gathered over an extended period of time such as months or years.

What do these different types of studies examine: Cross-sectional; Longitudinal; Trend; Cohort; Panel studies?
Budget. To ensure that your proposal is feasible, write up a budget for expensive items. These items may include duplicating costs, travel, and postage. Some universities or agencies provide modest financial support for research projects and you should inquire about these sources.
Limitations. Limitations are the boundaries of the problem established by factors or people other than the researcher. For example, the researcher may wish to investigate five counties, but permission may have been granted for only three, limiting the data as a result. Other limitations could be available resources, time, poor response rates, and the honesty of respondents.
Delimitations deal with boundaries as well, but they are set by the researcher. Though the problem statement indicates what the researcher will investigate, it is important to know what will not be included. In other words, the delimitations answer the inquiry: what are the precise limits of the problem?
Delimitation allows the researcher to concentrate on the central effort.
Assumptions. An assumption is a condition that is taken for granted and without which the research effort would be impossible. An assumption is believed to be a fact but cannot be verified as one. For example: in a study on wellness at Shands Hospital, the researcher may make the assumption that the practitioners will answer the questionnaire honestly and thereby submit appropriate data.
What is a paradigm?
A paradigm is a fundamental model or scheme that organizes our view of something.
Name two paradigms.
Positivist
Social Constructivist

What Is Evidence-Based Practice?
In the evidence-based practice process, practitioners make practice decisions in light of the best research evidence available.
What are the steps in the formulation of an evidence-based practice research question?
1. Formulating a question
2. Searching for evidence
3. Critically appraising the studies you find
4. Determining which evidence-based intervention is most appropriate for your particular client(s)
5. Applying the evidence-based intervention
6. Evaluating progress and providing feedback
What phases does research include?
1. Problem formulation
2. Designing the study
3. Data collection
4. Data processing
5. Data analysis
6. Interpreting the findings
7. Writing the research report
What other factors influence the research process?
1. Ethical considerations
2. Multicultural factors
3. Organizational and political concerns
PowerPoint presentation developed by:
Allen Rubin, Lin Fang & E. Roberto Orellana

Chapter 5
Conceptualization in Quantitative and Qualitative Inquiry

Illustration of Levels of Measurement

©2011, Brooks/Cole Publishing, A Division of Cengage Learning, Inc.

Interval: differences between different levels have the same meaning.
Example: IQ. The difference between an IQ score of 95 and 100 is the same in magnitude as the difference between 100 and 105.

Ratio: has the same attributes as interval measures but, in addition, has a true zero point.
Example: Number of arrests. It’s possible to have no arrests, one arrest, and so on. Because there is a true zero point, we know that the person with 4 arrests has been arrested exactly twice as many times as the person with 2 arrests.
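
A minimal sketch in Python of the distinction just described; the IQ scores and arrest counts are the illustrative values from the examples above.

```python
# Interval vs. ratio, with the example values above.
iq_a, iq_b, iq_c = 95, 100, 105
assert (iq_b - iq_a) == (iq_c - iq_b)  # interval: both gaps are 5 IQ points

arrests_more, arrests_fewer = 4, 2
print(arrests_more / arrests_fewer)    # 2.0 -- a true zero makes ratios meaningful

# By contrast, an IQ of 100 is NOT "twice" an IQ of 50:
# IQ is interval-level, with no true zero point.
```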

Interval and Ratio Levels of Measurement


Describes a variable in terms of different attributes that are categorical only, and can be described in terms of how many cases are in each category of the variable, but not the degree of the variable.

Examples
gender
ethnicity
religious affiliation


Nominal Level of Measurement


Operational Definition: Examine county records of the number of documented incidents of child abuse and neglect.
Testing the hypothesis: See if the number of documented incidents of child abuse and neglect in the counties receiving the innovative program is lower than the number in the counties receiving the traditional program.
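
A minimal sketch of that comparison; the per-county incident counts and group sizes below are invented for illustration.

```python
# Hypothetical per-county counts of documented abuse/neglect incidents.
from statistics import mean

innovative_counties = [112, 98, 105]    # counties receiving the innovative program
traditional_counties = [140, 131, 126]  # counties receiving the traditional program

print(mean(innovative_counties))   # 105.0
print(mean(traditional_counties))  # ~132.3

# The direction supports the hypothesis (fewer incidents under the innovative
# program), but a significance test would still be needed before concluding.
```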

Advantages:
Less costly and time-consuming than either direct observation or self-report.
You don’t have to assume that positive parenting knowledge and skills translate into less abuse; you measure abuse per se.

Disadvantages:
Reliance on adequacy of county records.
Won’t show whether the parents who received your intervention improved their parenting.
Possibility of biased reporting.

Influence and Categories of Operational Definitions: Positive Parenting/Available Records

Types of Relationships between Variables


Relationships between variables can be positive, negative, or curvilinear


Illustration of a Spurious Causal Relationship that Disappears When Controlling for a Third Variable


Mechanisms by which independent variables can affect dependent variables




Mediating Variables


Illustration of a Hypothesis and its Components


Relationship
Variables that change together in a consistent, predictable fashion, e.g., height and weight

Hypothesis
Tentative and testable statement about a presumed relationship between variables

Independent variable
The variable in a hypothesis that is postulated to explain or cause another variable

Dependent variable
The variable in a hypothesis that is thought to be explained or caused by the independent variable




Introduction: Definitions of Key Terms


Conceptualization in Quantitative Inquiry


Overview

We may not know in advance what all the most salient variables are

Limited understanding of the variables may keep us from anticipating the best way to operationally define those variables

Even the best operational definitions are necessarily superficial

Qualitative Perspective on Operational Definitions


Observations are not restricted to predetermined operational indicators
Researchers let meanings of little understood phenomena emerge from the observations
Deeper meanings are the purview of qualitative studies

Conceptualization in Qualitative Inquiry


Describes a variable whose categories can be rank-ordered according to how much of that variable they represent.

We know only whether one case has more or less of something than another case, but we don’t know precisely how much more.

Examples
level of client satisfaction
a brief rating scale

Ordinal Level of Measurement


Operational Definition: Ask the parents to complete an existing self-report scale that purports to measure knowledge or attitudes about parenting. Such a scale might ask parents questions about what they would do in various child-rearing situations or how they perceive various normal childhood behaviors that some parents misperceive as provocative.
Testing the hypothesis: See if the average scale scores of parents in the counties receiving the innovative program are better than the average scale scores of parents in the counties receiving the traditional program.
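
As a sketch of how such scale scores might be computed before averaging, here is a hypothetical scoring function; the three items, the 5-point format, and the reverse-coded item are all assumptions for illustration.

```python
# Scoring a hypothetical 5-point self-report scale. Item 2 (index 1) is
# worded in the opposite direction, so it is reverse-coded before summing.

def score_scale(responses, reverse_items=(), max_value=5):
    """Sum Likert responses (1..max_value), reverse-coding the given item indexes."""
    total = 0
    for i, r in enumerate(responses):
        if i in reverse_items:
            r = (max_value + 1) - r  # 5 -> 1, 4 -> 2, etc.
        total += r
    return total

# One parent's responses to a three-item scale (invented numbers).
print(score_scale([4, 2, 5], reverse_items={1}))  # 4 + (6 - 2) + 5 = 13
```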

Advantages:
Less costly and less time-consuming than direct observation.
If scales are completed anonymously, parents might be more likely to reveal undesirable attitudes.
Disadvantages:
Parents might distort their true attitudes to convey a more socially desirable impression.
The scale might not be valid.
Knowledge and attitudes may not reflect actual behaviors.

Influence and Categories of Operational Definitions: Positive Parenting/Self Report


Operational Definition: You might begin by making a list of positive parenting behaviors. Then you might directly observe the parents or foster parents in a challenging parenting situation (such as getting children to put away their toys) and count the number of times the parents show positive and negative behaviors. Perhaps you will give them +1 for every positive behavior and −1 for every negative behavior; tally up the points to get a parenting skill score.
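
A minimal sketch of that tally rule; the behavior labels are hypothetical.

```python
# The tally rule from the slide: +1 per positive behavior observed,
# -1 per negative one. Behavior labels are invented for illustration.

def parenting_skill_score(observed, positive_behaviors):
    return sum(1 if b in positive_behaviors else -1 for b in observed)

positive = {"praises child", "gives clear instruction", "stays calm"}
observed = ["praises child", "yells", "gives clear instruction"]
print(parenting_skill_score(observed, positive))  # +1 - 1 + 1 = 1
```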

Testing the hypothesis: See if the average scores of parents in the counties receiving the innovative program are higher (better) than the average scores of parents in the counties receiving the traditional program.
Advantages: Behaviors are observed first-hand.

Disadvantages:
Time-consuming.
Parents will know they are being observed and may not behave the same as when they are not being observed.
Possibility of observer bias.

Influence and Categories of Operational Definitions: Positive Parenting/Direct Observation

Illustration of a Spurious Causal Relationship that Disappears When Controlling for a Third Variable


Can influence the strength and direction of relationships between independent and dependent variables
Sometimes called control variables

When controlled for in a study can show that the relationship between the independent and dependent variables is really spurious
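
A small simulation (with an invented data-generating process) illustrates the point: when a third variable drives both the independent and dependent variables, they correlate overall, but the association vanishes within each level of the third variable.

```python
# A spurious relationship: z causes both x and y, so x and y correlate
# overall even though neither causes the other. Hypothetical simulation.
import random
from statistics import correlation  # Python 3.10+

random.seed(0)
z = [random.choice([0, 1]) for _ in range(2000)]  # the third (lurking) variable
x = [zi + random.gauss(0, 0.5) for zi in z]       # z -> x
y = [zi + random.gauss(0, 0.5) for zi in z]       # z -> y (x never affects y)

print(round(correlation(x, y), 2))                # clearly positive (~0.5)

# "Controlling for" z: examine the x-y association within each level of z.
for level in (0, 1):
    xs = [xi for xi, zi in zip(x, z) if zi == level]
    ys = [yi for yi, zi in zip(y, z) if zi == level]
    print(level, round(correlation(xs, ys), 2))   # near zero at each level
```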

Moderating Variables

Hypotheses should:
be clear and specific
have more than one possible outcome
be value free
be testable

Developing a Proper Hypothesis


Concept
A mental image that symbolizes an idea, an object, an event, a behavior, a person, etc.
Attributes
Concepts that make up a broader concept are called attributes, e.g. male/female vs. gender
Variables
Broader concepts that vary (include more than one attribute or level of a concept) and that researchers investigate, e.g. age, gender, level of self-esteem, number of abusive incidents, etc.

Existing scales spare researchers the costs in time and money
Practical concerns:
The length of the scale
Time required to complete the scale
Compensation
The level of difficulty
Sensitivity to changes
Reliability and validity

Existing Scales


In quantitative research we must first translate variables into operational definitions (observable terms) before a study is implemented.

Operational definitions differ from nominal definitions. For example, consider the concept level of social adjustment:
Nominal definition: “How well people perform their major roles in life”
Operational definition: “Score on a scale that measures social adjustment”

Operational Definitions


Agenda:
1. Review Course Calendar
2. Explore "The Research Problem"
3. Discuss Research Critique Guidelines
4. Review Questions, Chapters 1-4
5. Literature Review Videos/(Chapter 5 PPT Provided in Prezi)
6. Review Chapter 6 PPTs
7. SLU Core Values: Respect & Integrity

PowerPoint presentation developed by:
Allen Rubin, Sarah E. Bledsoe & Jennifer L. Bellamy

Chapter 4
Reviewing Literature and Developing Research Questions


Consider a problem in social welfare in which you have a special interest (such as child abuse, mental illness, the frail elderly, and so on). Formulate a research question about the problem that would be important for the field to answer.

Class Exercise


Refers to the extent to which a study may be done practically and successfully
Feasibility is not always synonymous with methodological rigor or inferential capacity
Researchers must consider:
Scope
Time
Fiscal cost
Ethical issues
Cooperation with research partners
Study participants

Research Questions Should Be Feasible to Answer


Is the review thorough and up to date?
Does it point out agreements or disagreements among previous studies?
Does it cover relevant theoretical literature?
Does it consider whether any of the previous studies are flawed?
Does it show how the current study relates to, yet goes beyond the previous studies?
Is it terse, avoiding monotonous details about many prior studies?
Is it thorough in summarizing the existing literature without becoming so lengthy and detailed that it becomes tedious?
Does it avoid going off on tangential studies?
Does it succinctly sum up groups of related studies?
Does it read as a synthesis rather than as a list of each prior study?
Does it help the reader understand why the particular line of inquiry was chosen and the rationale for how it was conceptualized?

Critically Appraising
the Quality of Literature Reviews


Bring the reader up-to-date on the previous research in the area
Point out general agreements or disagreements
What theories address your topic and what do they say?
Are there flaws in the body of existing research?
Show how your study relates to, yet goes beyond and builds on the previous studies
Do not cite monotonous, minute details
Be thorough, but terse and not tedious
If multiple studies had similar findings, sum up the general findings rather than discuss each study separately

Writing the Literature Review


Utilize reference librarians
Use the library stacks
Examine abstracts
Access library materials electronically
Access Internet Professional Databases
Examine tables of contents in relevant professional journals
Request interlibrary loans

Using the Library


Literature review is an ongoing process
Initial review of the literature aids in:
Problem selection
Understanding if the question has already been answered
Identifying conceptual and practical obstacles
Learning how to address obstacles
Building on existing research

Literature Review

Critical feedback from colleagues can:
Improve study utility
Clarify ideas
Uncover alternate approaches to the problem
Identify potential pragmatic or ethical obstacles
Involving agencies in problem formulation and research design planning helps overcome resistance to research

Involving Others in Problem Formulation


A good research question:
Is narrow and specific
Has more than one possible answer
Is posed in a way that can be answered by observable evidence
Addresses the decision-making needs of agencies or practical problems in social welfare
Has clear significance for guiding social welfare policy or social work practice

Attributes of a Good Research Question

[Diagram: a broad Research Topic is narrowed into a specific Research Question, shaped by Personal Interest, the Literature Review, Agency Information Needs, Policy or Practice Relevance, and Feasibility.]

Narrowing Topics Into Research Questions



Stanford Prison Experiment
The Tuskegee Experiment
Pavlov Experiment
Nuremberg Experiments
Research Article Links & Questions
Criterion-related Validity
Based on some external criterion
Subtypes of criterion-related validity
Predictive Validity: Measure can predict a criterion that will occur in the future

Concurrent Validity: Measure corresponds to a criterion that is known concurrently
Known Groups Validity: Measure accurately differentiates between groups known to differ with respect to the variable being measured

Validity


Internal consistency reliability
Assess whether the items of a measure are internally consistent
Methods to assess internal consistency
Split-halves method
Parallel-forms reliability
Coefficient alpha
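
As a sketch of the last method listed, coefficient alpha (Cronbach's alpha) can be computed as k/(k−1) × (1 − sum of item variances / variance of total scores); the item responses below are hypothetical.

```python
# Coefficient (Cronbach's) alpha from hypothetical item-level data.
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list per scale item, each holding all respondents' scores."""
    k = len(items)
    item_variance_sum = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return k / (k - 1) * (1 - item_variance_sum / pvariance(totals))

# Five respondents answering a three-item scale (invented numbers).
items = [[4, 3, 5, 2, 4],
         [5, 3, 4, 2, 4],
         [4, 2, 5, 3, 5]]
print(round(cronbach_alpha(items), 2))  # 0.89 -- internally consistent
```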

Types of Reliability


A particular measurement technique, when applied repeatedly to the same object, would yield the same result each time
The more reliable the measure, the less random error

Reliability


Random errors have no consistent pattern of effects. They do not bias the measures.
Examples:
Cumbersome, complex, boring measurement procedures
Measure uses professional jargon with which respondents are not familiar

Random Error


When the information we collect consistently reflects a false picture
Biases: The most common way our measures systematically measure something other than what we think they do is when biases are involved, e.g.:
Acquiescent response set
Social desirability bias

Systematic Error


Sources of Measurement Error
Reliability
Validity
Relationship between Reliability and Validity
Reliability and Validity in Qualitative Research

Overview


PowerPoint presentation developed by:
Allen Rubin, Lin Fang & Sarah E. Bledsoe

Chapter 6
Measurement in Quantitative and Qualitative Inquiry

Kronick (1989) proposed four criteria:
“Internally consistent” arguments
Complete interpretation
Conviction: the interpretation is compelling given the evidence within the text
Meaningful

Evaluating Validity in Qualitative Research


Triangulation
Assess whether two independent raters arrive at the same interpretation
Asking the research participants to confirm the accuracy of observations

Examples of Reliability Check in Qualitative Research


Qualitative researchers study and describe things from multiple perspectives and meanings.
By describing things in great depth and detail and from multiple perspectives and meanings, there is less concern about whether one particular measure is really measuring what it is intended to measure.

Reliability and Validity in Qualitative Research


Construct Validity
Assess whether a measure fits theoretical expectations
Convergent validity
Discriminant validity


Validity


Interobserver and interrater reliability
The degree of agreement or consistency between/among observers
Test-retest reliability
Assessing a measure’s stability over time
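
A minimal sketch of a test-retest check: administer the same measure twice and correlate the two sets of scores. The scores below are hypothetical, and statistics.correlation requires Python 3.10+.

```python
# Test-retest reliability: same respondents, same measure, two occasions.
from statistics import correlation  # Python 3.10+

time1 = [22, 30, 25, 28, 35, 27]  # hypothetical scores at first administration
time2 = [24, 29, 26, 27, 36, 28]  # the same respondents a few weeks later
print(round(correlation(time1, time2), 2))  # close to 1.0 -> a stable measure
```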

Types of Reliability


Measurement Error: Data do not accurately portray the concept we attempt to measure

Systematic error
Random error

Sources of Measurement Error


[Figure: the classic bull's-eye targets showing measures that are valid and reliable, neither reliable nor valid, and reliable but not valid. Reliability does not ensure validity.]

Relationship between Reliability and Validity


Face Validity
A crude and subjective judgment by the researcher that a measure merely appears to measure what it is supposed to measure
Content Validity
The degree to which a measure covers the range of meanings included within the concept
Established based on judgments as well

Validity


Direct Behavioral Observation
Social desirability bias
Observers might be biased

Examining Available Records
Practitioners might exaggerate their records
Improper documenting

Written Self-reports
Item wording
Words vs. deeds



Interviews
Social desirability bias
Different interviewers

Errors in Alternative Forms of Measurement
