Evaluation of DMC Efforts in the Target Areas

An integral element of Pennsylvania's approach is the incorporation of evaluation from the outset. The Center for Juvenile Justice Training and Research (the Center) at Shippensburg University was contracted to evaluate the Harrisburg programs in their first year of implementation. The Center found that of the 200 adolescent clients referred to the coalition during its first year of operation, 169 met a minimum attendance criterion. Although 50 percent of the coalition's clients had prior involvement with the juvenile justice system, just 20 percent were referred to juvenile probation after their involvement with the coalition. Further, participation in coalition programs was associated with significantly lower levels of truancy and suspension and with slight improvements in academic performance.

As the model expanded to Philadelphia in the second year, a more comprehensive study was commissioned in 1993 through Temple University to evaluate nine funded programs in the two minority overrepresentation initiatives (five in Harrisburg and four in Philadelphia).¹ The evaluation consisted of three parts: a community assessment of the target areas, covering social, economic, and crime indicators; an evaluability assessment (i.e., a determination of whether the programs could be evaluated) and process evaluation, which clarified program goals, activities, and objectives and examined service delivery; and an outcome evaluation of client performance during and after participation in the programs, based on a review of police, juvenile justice, and school records from 1992 to 1995.

The evaluation strategy is interactive: the programs and the coalition receive ongoing feedback from the evaluators about areas of success and difficulty, which they use to improve the programs. For example, certain program implementation issues were found to impede both successful intervention and valid evaluation (Welsh, Harris, and Jenkins, 1995), and the evaluators suggested ways to rectify these issues, rendering the programs more effective and more evaluable (see table). In terms of outcomes, the most positive results were reported for the 1992-93 Harrisburg target site. Over a 3-year period, the recidivism rate for the high-attendance group was 25.8 percent, an impressive result given that nearly half of the clients had arrests prior to their referral. By contrast, the low-attendance control group had a recidivism rate of 53 percent over the same period.

Program Implementation Issues That Affect Program Effectiveness and Evaluability*
(Each program implementation issue is listed with the evaluators' suggested actions.)

Target selection procedure
- Clearly define the characteristics of intended clients and regularly monitor the client population.

Client participation and completion of program
- Develop incentives for participation.
- Provide outreach to clients.
- Provide an interesting and challenging array of services.

Staffing levels and staff turnover
- Increase program resources.
- Provide ongoing staff training and development.
- Realistically address staff qualifications.

Information and recordkeeping
- Provide program resources.
- Emphasize to staff the importance of accurate, complete data.

Family component
- Provide tangible incentives for family involvement.
- Conduct parent support groups.

Educational component
- Provide tutoring and learning opportunities on a daily basis.
- Use volunteers.
- Provide positive feedback to students and volunteers.
- Work with the neighborhood school system.

Volunteers/mentors
- Recruit, screen, train, monitor, and support volunteers and mentors.

Program structure
- Engage in goal-oriented activities, implemented consistently and at a regular time.

Adequacy of physical facilities
- Send positive messages through pleasant, clean, and well-maintained physical space.

Monitoring by program director/executive director
- Employ hands-on directors.
- Engage in a continuous process of growth and self-evaluation.
- Welcome criticism in addition to positive feedback.

* Based on information found in Welsh, W.N., Harris, P.W., and Jenkins, P. 1995. Evaluation of Minority Overrepresentation Programs in Pennsylvania: Evaluability Assessment and Process Evaluation. Report #2. Philadelphia, PA: Department of Criminal Justice, Temple University.


  1. SAVE and the Truancy and Dropout Prevention project, the other two Philadelphia programs, started more recently and were not included in the Temple evaluation.

