

Evaluation Practice and Theory: Up and Down the Ladder of Abstraction

Laura C. Leviton1

Abstract

Debra Rog presented the 2014 Eleanor Chelimsky Forum address, offering ways to integrate evaluation theory and practice through abundant use of practice examples. These examples illustrate the effective use of the Ladder of Abstraction, a concept from semantics, working from the concrete to the abstract and back again.

Keywords: evaluation policy, evaluation practice, evaluation theory, theory–practice relationship

American Journal of Evaluation © The Author(s) 2015

Reprints and permission: sagepub.com/journalsPermissions.nav

DOI: 10.1177/1098214015573070

aje.sagepub.com

1 The Robert Wood Johnson Foundation, Princeton, NJ, USA

Corresponding Author:

Laura C. Leviton, The Robert Wood Johnson Foundation, Route 1 and College Road East, Box 2316, Princeton, NJ 08543, USA.

Email: llevito@rwjf.org

The Eastern Evaluation Research Society founded the Eleanor Chelimsky Forum specifically to improve the dialogue between evaluation practice and theory. For her presentation to the Forum and her article for the American Journal of Evaluation, Debra Rog explored three themes: infusing evaluation theory into practice, infusing practice into theory, and the steps the evaluation profession is taking to integrate the two. Integrating them is a vital function, important for any field of inquiry.

Aristotle first made the formal distinction between epistêmê (theory) and technê (practice), and the Stanford Encyclopedia of Philosophy notes that they have "an intimate positive relationship" (Parry, 2014). As Pressman and Wildavsky (1984) put it in their seminal book on implementation, there is an important difference between "knowing that" certain features are desirable and "knowing how" to do anything about them. Knowing both theory and practice denotes excellence in evaluation. Debra's article exemplifies this kind of excellence.

Infusing Theory Into Practice

Debra outlined how program theory, social science theory, and evaluation theory help us make sense of the evaluation context and offer guidance about how to create more successful evaluations. Her examples infused theory into practice very well indeed. Debra's applications of social science theory in the areas of small-group participation and psychological ownership represent important tools for the evaluator's toolbox: they directly help evaluators get high-quality data from stakeholders, settle on an evaluation question (where possible), and prevent derailing of the process. These observations represent generalized knowledge for evaluation practice! Debra's "cupcake intervention" (to make sure that program staff feel appreciated) had an undeniable impact on data quality, although by itself it would be deemed too concrete for theory. Yet, as one of a set of activities under the more abstract rubric of "program staff engagement," it was a brilliant addition. Her illustration of evaluation theory was even more compelling. Evaluation theory recognizes the dimensions of knowledge and utility. By engaging homeless shelters directly, rather than sitting on an evaluator's "throne" to accept whatever participants were offered, Debra worked to secure a high-quality sample of participants, so that better knowledge and utility were assured. This example drives home not only her identification as an evaluator but also the way that concrete practice impinges on our general understanding of what constitutes good evaluation.

Infusing Practice Into Theory

One of the five dimensions of evaluation theory that Shadish, Cook, and Leviton (1991) discussed was practice, so it sounds circular to say one should infuse something into itself. Yet this aspect of evaluation theory needs much more discussion and amplification. As Debra pointed out, expertise in evaluation is an individual professional's generalized knowledge about practice, that is, the ability to recognize contexts and draw upon a wide-ranging repertoire of responses. It is not too far a step from individual expertise to more general, shared knowledge about evaluation. Theorists from Weiss to Rossi to others in the current day derived many of their ideas from their extensive experience of evaluation practice.

Debra distinguished general evaluation expertise from content expertise. We need a mix of both, just as an evaluation team can use collective expertise in sampling, management of data collection, report writing, and stakeholder engagement; evaluators have different strengths, but together they can offer better evaluation capacity (Leviton, 2001). Content expertise is often an indispensable part of the mix, however. Having overseen more than 100 evaluations in my current role, I can state that it is sometimes disastrous to give evaluation contractors "on-the-job training" in a content area with which they are unfamiliar. Debra described her ability to recognize patterns in the area of homelessness that a generalist simply could not. Sometimes it is helpful for evaluators to