Demystifying Program Evaluation in Criminal Justice: A Guide for Practitioners
What is Program Evaluation?
Program evaluation that is rooted in science is critical for criminal justice.[1] Criminal justice programs should engage in evaluation to demonstrate effectiveness and legitimacy and to justify taxpayer support.[2] According to Weisburd (2003), criminal justice researchers have an ethical and professional “obligation to provide valid answers to questions about the effectiveness of treatments, practices, and programs.”[3] In recent years, policymakers and funders have focused on investing in evidence-based criminal justice programs.[4] In addition, a movement toward evidence-based policymaking has created increased demand for quality evaluation research.[5] An evaluation can allow a program to make improvements, better secure funding, and potentially expand.
Evaluation research is often conducted by researchers at universities, think tanks, government agencies, and nonprofits. However, external research can be time-consuming and costly, and evaluation research remains relatively rare: in a review of the top five academic criminal justice journals over five years, only 5.2 percent of articles were evaluation research.[6]
Program Evaluation and Funding
Criminal justice programs are often costly, and practitioners compete to secure funds from county boards, city councils, federal and state granting agencies, and private foundations. Rigorously evaluated programs can distinguish themselves from the pack by showing evidence of program effectiveness.
When Should I Start my Program Evaluation?
Evaluation planning ideally starts during the initial stages of program development.[7] This encourages program staff to clearly define goals and objectives and gather data to document program implementation and outcomes.
Practitioners should hold off on conducting outcome evaluations until their programs have operated long enough to address initial implementation issues and to produce enough data to rigorously analyze outcomes.[8] Some researchers advocate for an “evaluability” assessment, which indicates whether a program has reached the appropriate stage for systematic evaluation and can prevent the premature undertaking of an impact evaluation.[9]
It is worth noting that as a program embarks on an evaluation, staff and stakeholders may be apprehensive about negative findings. Researchers conducting the evaluation may encounter their own challenges, such as isolating the effects of the program, competing stakeholder interests, attrition, and resource constraints. However, these challenges should not impede evaluations, which should be viewed as opportunities for programs to learn, improve, and grow.[10]
Where Should I Start in Conducting a Program Evaluation?
Develop a logic model
Logic models provide a graphical depiction of a program that outlines and links relationships among program activities, outputs, and outcomes.[11] Figure 1 depicts a logic model and explains each component. A logic model can be a first step in the development and management of a program and can guide evaluation activities.
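To make the idea concrete, the sketch below lays out the linked components of a logic model for a hypothetical reentry program. The program, its activities, and its outcomes are invented for illustration only and are not drawn from this guide or any specific evaluated program.

```python
# A hypothetical reentry program, expressed as the linked components of a logic model.
logic_model = {
    "inputs":             ["case managers", "grant funding", "partner treatment providers"],
    "activities":         ["needs assessments", "cognitive-behavioral groups", "job-readiness training"],
    "outputs":            ["participants assessed", "group sessions delivered", "trainings completed"],
    "short_term_outcomes": ["improved problem-solving skills", "employment upon release"],
    "long_term_outcomes":  ["reduced rearrest and reincarceration"],
}

# Print each component and its elements in the order a logic model typically reads.
for component, items in logic_model.items():
    print(f"{component}: {', '.join(items)}")
```

Writing the components out this way can help program staff check that every activity produces a measurable output and that each output plausibly links to an intended outcome.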
Develop research questions
Research questions guide the program evaluation and help outline its goals. Research questions should align with the program’s logic model and be measurable.[13] The questions also guide the methods used to collect data, which may include surveys, qualitative interviews, field observations, review of data collected by the program, and analysis of administrative records. Finally, questions should be tailored to how the resulting information will be used (e.g., internal practitioners looking to incrementally improve the program or external funders determining program continuation).[14]
What Kind of Evaluations Can be Done?
An evaluation can focus on particular aspects of a program and the extent to which they affect outcomes. The two main evaluation types, process and outcome evaluations, focus on specific program areas and are equally important.[15] When considering the kind of evaluation needed, program administrators and evaluators should take an inventory of available resources, including staff time, to conduct or assist in an evaluation.
Sources: Miller, J. M., & Miller, H. V. (2015). Rethinking program fidelity for criminal justice. Criminology & Public Policy, 14(2), 339-349; Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach. (7th ed.). Thousand Oaks, CA: Sage Publications, Inc.
Research design options for outcome evaluations
The value of an outcome evaluation is directly related to what can and cannot be concluded from it, so the most rigorous evaluation option should be employed.[16] In research, outcome evaluations that incorporate randomized control trials, in which participants are randomly assigned to an experimental group or a control group, are the gold standard.[17] The groups are differentiated by the program or practice being studied (the treatment); the control group may receive alternative programming or no programming. This method allows evaluators to isolate the impact of the program on participants.
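As a minimal sketch of that logic, the example below randomly assigns a pool of hypothetical participants to treatment and control groups and compares a placeholder outcome (rearrest during follow-up). All identifiers and outcome values are simulated for illustration; a real evaluation would draw outcomes from program and justice records and add a significance test.

```python
import random
from statistics import mean

random.seed(42)  # fixed seed so the illustration is reproducible

# Hypothetical participant IDs for illustration only.
participants = [f"P{i:03d}" for i in range(1, 101)]

# Randomly assign half to the program (treatment) and half to the control group.
random.shuffle(participants)
treatment, control = participants[:50], participants[50:]

# Placeholder outcomes (1 = rearrested during follow-up, 0 = not). In a real
# evaluation these would come from agency records, not simulation.
outcomes = {pid: int(random.random() < 0.30) for pid in treatment}
outcomes.update({pid: int(random.random() < 0.45) for pid in control})

treatment_rate = mean(outcomes[pid] for pid in treatment)
control_rate = mean(outcomes[pid] for pid in control)

# Because assignment was random, the groups should be comparable at baseline,
# so the difference in rates estimates the program's effect (a significance
# test would quantify the uncertainty around that estimate).
print(f"Treatment rearrest rate:  {treatment_rate:.2f}")
print(f"Control rearrest rate:    {control_rate:.2f}")
print(f"Estimated program effect: {treatment_rate - control_rate:+.2f}")
```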
An alternative is a quasi-experimental design, which compares outcomes of program participants (the treatment group) with those of a similar group of individuals who did not receive the program (the comparison group). The individuals are not randomly assigned, but the design still allows for valid causal inferences, despite some drawbacks.[18] The lack of random assignment can produce non-equivalent groups, which can limit the generalizability of the results, reduce internal validity, and restrict definitive conclusions about causality.[19] Biased findings from individual studies may under- or overstate the effects of the program.[20]
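The sketch below illustrates one common way to build a comparison group without random assignment: matching each participant to the most similar non-participant on an observed characteristic (here, a hypothetical risk score) and then comparing outcomes. The records, scores, and outcomes are invented for demonstration, and real quasi-experimental designs typically match on many characteristics at once.

```python
# Each record: (id, risk_score, rearrested). All values are hypothetical.
participants = [("T1", 0.62, 0), ("T2", 0.35, 1), ("T3", 0.80, 0)]
candidate_pool = [("C1", 0.30, 1), ("C2", 0.65, 1), ("C3", 0.78, 1),
                  ("C4", 0.40, 0), ("C5", 0.60, 0)]

matched_comparison = []
available = list(candidate_pool)
for _, p_risk, _ in participants:
    # Pick the still-available non-participant whose risk score is closest.
    best = min(available, key=lambda c: abs(c[1] - p_risk))
    matched_comparison.append(best)
    available.remove(best)  # match without replacement

treatment_rate = sum(r for _, _, r in participants) / len(participants)
comparison_rate = sum(r for _, _, r in matched_comparison) / len(matched_comparison)
print(f"Treatment rearrest rate:  {treatment_rate:.2f}")
print(f"Comparison rearrest rate: {comparison_rate:.2f}")
```

Because matching can only account for characteristics that are observed and measured, differences on unmeasured factors remain a threat to the validity of conclusions from this design.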
Ethical Considerations in Evaluation
In addition to using sound methods, evaluators must carefully consider how criminal justice research is conducted in light of research ethics and standards. The evaluator has a responsibility to maintain integrity, balance, and fairness in study design and data collection methods and a duty to respect the security and dignity of all parties involved in the evaluation project. If the evaluation findings on an agency’s work are to be published and the agency receives federal funding, the research must follow the Code of Federal Regulations for the protection of human subjects. The regulations outline procedures including approval of the research study by an institutional review board.[21]
Putting It All Together: Using What is Learned from a Program Evaluation
As Escamilla and colleagues (2018) noted,
To seasoned practitioners with years of field experience, a formal evaluation guided by researchers may seem irrelevant. After all, most programs make adjustments over the course of development and changes on the ground are easily recognized. However, even the sharpest practitioners may not recognize that complex external factors unrelated to programmatic decisions may be driving observable changes.[22]
An evaluation can provide insight and offer recommendations to improve programs. Rigorous evaluation research can assess not only whether the target problem was reduced, but also whether the program caused the reduction.[23] Evaluation findings can be used in many ways, including:
- Demonstrating the effectiveness of your program.
- Identifying ways to improve your program.
- Modifying program implementation and operations.
- Demonstrating accountability to program stakeholders and funders.
- Justifying funding and potentially aiding in securing additional funding.[24]
Findings should be disseminated in documents or reports that can help other jurisdictions and programs learn from the evaluation. To maximize the use of evaluation findings, the evaluator may tailor the report style to the intended audience (e.g., academic journal article, technical report, executive summary).[25] The evaluation also may include information for the user about how the findings can best be employed and disseminated to additional constituencies.[26]
Figure 2 depicts the steps of an evaluation, from logic model creation through sharing findings and making programmatic adjustments.
Building Evidence of Program Effectiveness
Evaluation findings from similar programs can build a body of knowledge, eventually leading to a program model being deemed evidence-based. An evidence-based program has strong evidence that it is effective based on reliable and replicated research, while an evidence-informed practice has less evidence of its efficacy (Figure 3).[28] Numerous program models have proliferated at a rate far beyond that supported by accompanying evaluation research.[29] Syntheses of multiple evaluations of similar program concepts can be employed to guide program and policy planning efforts (see the sketch following Figure 3).[30] However, evaluation research is a continuous process that will always be needed. Many programs adapt evidence-based programs to meet the needs of their unique communities, participant populations, program policies, and organizational climates. Many programs may produce an impact in the short term; however, relatively few evaluations examine outcomes beyond a year following a program’s initiation. Therefore, continued research is needed on long-term outcomes to identify the sustained impact of programs and policies.
Figure 3
Building Evidence of Effectiveness[31]
Source: Corporation for National and Community Service
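As a simple illustration of how findings from multiple evaluations can be synthesized, the sketch below pools effect sizes from several sites using fixed-effect, inverse-variance weighting, one common meta-analytic approach in which more precise studies count more toward the pooled estimate. The site names, effect sizes, and standard errors are invented for demonstration.

```python
from math import sqrt

# Hypothetical effect sizes (standardized mean differences) and standard errors
# from several evaluations of similar programs.
studies = [
    {"name": "Site A", "effect": 0.25, "se": 0.10},
    {"name": "Site B", "effect": 0.10, "se": 0.08},
    {"name": "Site C", "effect": 0.40, "se": 0.15},
]

# Inverse-variance weights: studies with smaller standard errors get more weight.
weights = [1 / s["se"] ** 2 for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled_effect:.2f} (SE {pooled_se:.2f})")
```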
Conclusion
The field of criminal justice has fallen behind other fields, such as medicine, marketing, and business, in its use of evaluation research to inform and improve programming.[32] As a result, perception, anecdotal evidence, and “business as usual,” rather than rigorous, empirical testing, become influential in program development.[33] Rigorous evaluation is necessary to make informed decisions on program improvement and bring accountability to the utilization of limited resources.
Evaluation research can provide important information to influence policymaking in the criminal justice field. Conflicting stakeholder interests over policy or ideology should be superseded by rigorous research.[34]
Evaluation also contributes to improved treatment and services provided to individuals. Without rigorous evaluation, it is unknown whether a program provides any benefit to its participants. Ineffective programs may even cause unintended harm to those who participate. There is increasing agreement that criminal justice programming and practices should be grounded in scientific research to the greatest extent possible.
Resources
- American Society of Evidence-Based Policing (ASEBP)
- Betagov
- Crimesolutions.gov
- Campbell Collaboration
- Centers for Disease Control and Prevention, Program Performance and Evaluation Office
- Coalition for Evidence-based Policy
- Cochrane Collaboration
- Council of State Governments: What Works in Reentry Clearinghouse
- An Introduction to Evidence-Based Practices (JRSA)
- OJJDP Model Programs Guide
- SAMHSA’s National Registry of Evidence-based Programs and Practices
- UC-Boulder’s Center for the Study of Prevention of Violence: Blueprints
- Washington State Institute for Public Policy
Braga, A. A., & Weisburd, D. L. (2013). Editor’s introduction: Advancing program evaluation methods in Criminology and Criminal Justice. Evaluation Review, 37(3-4), 163-169. ↩︎
Janeksela, G. M. (1977). An evaluation model for criminal justice. Criminal Justice Review, 2(2), 1-11. ↩︎
Weisburd, D. (2003). Ethical practice and evaluation of interventions in crime and justice: The moral imperative for randomized trials. Evaluation Review, 27(3), 336-354. ↩︎
Fagan, A. A., & Buchanan, M. (2016). What works in crime prevention? Comparison and critical review of three crime prevention registries. Criminology and Public Policy, 15(3), 617-649. ↩︎
Leeuw, F. (2005). Trends and developments in program evaluation in general and criminal justice programs in particular. European Journal on Criminal Policy and Research, 11, 233-258. ↩︎
Tewksbury, R., DeMichele, M. T., & Miller, J. M. (2005). Methodological orientations of articles appearing in criminal justice’s top journals: Who publishes what and where. Journal of Criminal Justice Education, 16(2), 265-279. ↩︎
Tilley, N. (2004). Applying theory-driven evaluation to the British Crime Reduction Programme: The theories of the programme and of its evaluations. Criminal Justice, 4(3), 255-276. ↩︎
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage Publications, Inc. ↩︎
Wholey, J. S. (1987). Evaluability assessment: Developing program theory. New Directions for Program Evaluation, 33, 77-92. https://doi.org/10.1002/ev.1447 ↩︎
Weisburd, D., Lum, C. M., & Petrosino, A. (2001). Does research design affect study outcomes in criminal justice?. The Annals of the American Academy of Political and Social Science, 578(1), 50-70. ↩︎
Bureau of Justice Assistance. (n.d.) Center for research partnerships and program evaluation. Retrieved from http://bit.ly/2Xeg1EU ↩︎
National Institute of Corrections. (n.d.) A framework for evidence-based decision making in local criminal justice systems. Retrieved from http://bit.ly/2XnoQfX ↩︎
Corporation for National and Community Service. (n.d.) How to develop the right research questions for program evaluation. ↩︎
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage Publications, Inc. ↩︎
Harrell, A., Burt, M., Hatry, H., Rossman, S., Roth, J., & Sabol, W. (1996). Evaluation strategies for human services programs: A guide for policymakers and providers. Washington, DC: The Urban Institute. ↩︎
Braga, A. A., & Weisburd, D. L. (2013). Editor’s introduction: Advancing program evaluation methods in Criminology and Criminal Justice. Evaluation Review, 37(3-4), 163-169. ↩︎
Braga, A. A., & Weisburd, D. L. (2013). Editors’ introduction: Advancing program evaluation methods in criminology and criminal justice. Evaluation Review, 37(3-4), 163-169.; Lum, C., & Yang, S. (2005). Why do evaluation researchers in crime and justice choose non-experimental methods? Journal of Experimental Criminology, 1, 191-213.; Nagin, D. S., & Weisburd, D. (2013). Evidence and public policy: The example of evaluation research in policing. Criminology and Public Policy, 12(4), 651-679.; Weisburd, D. (2003). Ethical practice and evaluation of interventions in crime and justice: The moral imperative for randomized trials. Evaluation Review, 27(3), 336-354.; Weisburd, D., Lum, C., & Petrosino, A. (2001). Does research design affect study outcomes in criminal justice? Annals of the American Academy of Political and Social Science, 578, 50-70.; Weisburd, D., Petrosino, A., & Fronius, T. (2014). Randomized experiments in criminology and criminal justice. In David Weisburd and Gerben Bruinsma (Eds.), Encyclopedia of criminology and criminal justice, p. 4283-4291. New York: Springer-Verlag. ↩︎
Campbell, D. T., & Stanley, J. C. (2015). Experimental and quasi-experimental designs for research. Boston, MA: Houghton Mifflin Company. ↩︎
Escamilla, J., Reichert, J., Hillhouse, M., & Hawken, A. (2018). BetaGov supports practitioners and evaluators in conducting randomized control trials to test criminal justice programs. Translational Criminology, 15, 29-31. ↩︎
Weisburd, D., Lum, C. M., & Petrosino, A. (2001). Does research design affect study outcomes in criminal justice?. The Annals of the American Academy of Political and Social Science, 578(1), 50-70. ↩︎
American Evaluation Association. (2018). American Evaluation Association guiding principles for evaluators. Retrieved from http://bit.ly/2XiK2ng; Konrad, E. L. (2000). Commentary: Alleviating the fears of the anxious administrator. American Journal of Evaluation, 21, 264–268.; See 45 CFR 46 and U.S. Department of Health and Human Services, Office for Human Research Protections http://bit.ly/2Xg0Qex ↩︎
Escamilla, J., Reichert, J., Hillhouse, M., & Hawken, A. (2018). BetaGov supports practitioners and evaluators in conducting randomized control trials to test criminal justice programs. Translational Criminology, 15, 29-31. ↩︎
Eck, J. (2003). Police problems: The complexity of problem theory, research and evaluation. Crime Prevention Studies, 15, 79-114. ↩︎
Centers for Disease Control and Prevention. (n.d.) Introduction to program evaluation for public health programs: A self-study guide. Retrieved from http://bit.ly/2Xg0Z1z ↩︎
Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). Thousand Oaks, CA: Sage Publications, Inc. ↩︎
Solomon, M. A., & Shortell, S. M. (1981). Designing health policy research for utilization. Health Policy Quarterly, 1(3), 216-237. ↩︎
Centers for Disease Control and Prevention. (1999). Framework for program evaluation in public health. MMWR 48 (No. RR-11). ↩︎
Gleicher, L. (2018). Reducing substance use disorders and related offending: A continuum of evidence-informed practices in the criminal justice system. Chicago, IL: Illinois Criminal Justice Information Authority. ↩︎
Cross, A. B., Mulvey, E. P., Schubert, C. A., Griffin, P. A., Filone, S., Winckworth-Prejsnar, K., … & Heilbrun, K. (2014). An agenda for advancing research on crisis intervention teams for mental health emergencies. Psychiatric Services, 65(4), 530-536.; Roesch, R. (1978). Does adult diversion work? The failure of research in criminal justice. Crime & Delinquency, 24(1), 72-80. ↩︎
Eck, J. (2003). Police problems: The complexity of problem theory, research and evaluation. Crime Prevention Studies, 15, 79-114. ↩︎
Corporation for National and Community Service. (n.d.) How to develop the right research questions for program evaluation. ↩︎
Escamilla, J., Reichert, J., Hillhouse, M., & Hawken, A. (2018). BetaGov supports practitioners and evaluators in conducting randomized control trials to test criminal justice programs. Translational Criminology, 15, 29-31. ↩︎
Escamilla, J., Reichert, J., Hillhouse, M., & Hawken, A. (2018). BetaGov supports practitioners and evaluators in conducting randomized control trials to test criminal justice programs. Translational Criminology, 15, 29-31. ↩︎
Short, J. F., Jr., Zahn, M. A., & Farrington, D. P. (2000). Experimental research in criminal justice settings: Is there a role for scholarly societies? Crime & Delinquency, 46(3), 295-298. ↩︎
Jessica Reichert manages ICJIA’s Center for Justice Research and Evaluation. Her research focus includes violence prevention, corrections and reentry, women inmates, and human trafficking.
Alysson Gatens is a Research Analyst in ICJIA's Center for Justice Research and Evaluation.