true "evaluation" can be an expensive and extensive process:
- it almost always needs to be designed from the get-go, not added as an afterthought
- even under the best conditions, results are rarely generalizable beyond the specific circumstances of the population examined
What is realistic?

$ = GRANT SIZE (each additional $ marks a larger grant and a costlier evaluation):

- $$$$  scientific research (peer reviewed)
- $$$   outcomes measured and analyzed using the appropriate statistical technique
- $$    level of data REALLY needed
- $     anecdotal observations made by grantee or funder

note: there is always a cost!

source: Ridzi, F. (2012). Managing expectations when measuring impact: A framework based on experience. The Foundation Review, 4(4), 98-109.
funders usually like to work at this level
ask yourself: what is the purpose of this grant? (to help individuals, organizations, or the community?)
statistical significance
- a statistic is significant if the observed result is unlikely to have occurred by chance alone
- statistical significance does not necessarily translate to "big" or "important"
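A minimal sketch of that last point, using hypothetical numbers (not from the source): with a large enough sample, even a trivially small difference between two groups can clear the conventional p < 0.05 bar. The group means, standard deviation, and sample size below are invented for illustration.

```python
import math

def two_sample_z(mean_a, mean_b, sd, n):
    """Two-sided z-test for two equal-size groups sharing one SD."""
    se = sd * math.sqrt(2.0 / n)          # standard error of the difference
    z = (mean_b - mean_a) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value under the normal curve
    return z, p

# Hypothetical outcome scores: 50.0 vs 50.2 (SD 5) across two groups
# of 10,000 participants each.
z, p = two_sample_z(50.0, 50.2, sd=5.0, n=10_000)
cohens_d = (50.2 - 50.0) / 5.0  # standardized effect size

print(f"z = {z:.2f}, p = {p:.4f}")    # p is well under 0.05: "significant"
print(f"Cohen's d = {cohens_d:.2f}")  # 0.04: a negligible effect in practice
```

Here the difference is "statistically significant" (p ≈ 0.005), yet the effect size (Cohen's d ≈ 0.04) is far below even the usual "small effect" threshold of 0.2, which is exactly why a significant result need not be a big or important one.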
"There is incredible 'silver bulletism' around in the donor (and perhaps foundation) worlds - seeking that 'one special number' that will tell us if we are succeeding or failing. This is driven by bureaucratic fantasy, not reality."
first, a few basics...
- don't ask for data that you don't plan to use
- set expectations up-front; no one likes last-minute surprises
- typically the grantee is the subject matter expert; be willing to listen to their input about the evaluation process
most importantly
Demonstrating Impact
Randy K. Macon, PhD