The need to provide comprehensive child care services to more and more children and families, combined with pressure and competition for scarce funding, has placed a new challenge upon the field of out-of-home child care: to provide effective and efficient services to children and families that funding agencies are willing to buy. One response is comprehensive program evaluation. Such evaluation provides possible answers to several very important questions: e.g., Whom do we serve? How effective are our services? How can we improve what we do? This article provides an overview of the philosophy and purpose of program evaluation, a discussion of two basic kinds of evaluation, and techniques for measuring program outcomes. Suggestions are made as to how to gather and use information to answer these three questions about program outcomes and improvement. Examples are drawn from programs at The Starr Commonwealth Schools.
Philosophy and purpose of program evaluation
Comprehensive program evaluation provides information that
allows us to answer a number of questions about the clients we serve and
the impact our program has on those clients. Such information allows us
to address the following areas:
The type of client served
The needs of those clients
The impact of the program on the clients
How to improve the program
Of these four areas, program improvement is, perhaps, the most compelling reason for engaging in program evaluation. As in any organization, agencies providing services to children and families need to find more effective and efficient ways of addressing the needs and problems prevalent in the field. In this context, evaluation is an integral part of the organization and of the programs that organization provides.
Types of evaluation
Program evaluation has been
defined as “the process of delineating, obtaining, and providing useful
information for judging decision alternatives” (Stufflebeam et al., 1971). Very
clearly, this definition implies that the focus of evaluation is
decision-making. While there are a number of models for doing program
evaluation and a variety of names given to the kinds of evaluation, the
field of evaluation, in general, recognizes two broad types of
evaluation. Process evaluation is the mechanism that provides
information for deciding how well a program has been implemented. It
answers the basic question: was the program implemented in the manner in
which it was designed? In other words, did we do what we said we were
going to do? Product evaluation, as a complement to that, is the
mechanism for determining how effective the program is, in terms of
outcomes. This type of evaluation attempts to answer the question: did
the program have the expected positive impact?
Process evaluation
The
focus of process evaluation is the refinement of how a program is implemented.
Information is gathered that allows us to compare what was
planned, in terms of implementation, to what was actually done. In
residential care, the plan for any program is to provide a positive
climate in which quality treatment will occur. So one might ask the
question: is the treatment climate a positive one? Process evaluation
provides the techniques for getting the answer. There are a number of
measures which can tell us something about treatment climate. Those are
depicted in Figure 1.
Figure 1. Process evaluation measures of treatment climate
Monitoring such things as truancy and unusual incident reports can provide information about clients’ perceptions of the atmosphere in which they live. For example, a high truancy rate may indicate that clients feel threatened by staff and their group and are leaving because of personal safety issues. A high rate of physical restraint may indicate that the staff and group response to problems and issues is physical and negative rather than nurturing and positive. Sharing this information with administrators and staff offers a way of raising questions about treatment and how to interact with children and families.
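To make this kind of monitoring concrete, here is a minimal sketch, in Python, of aggregating incident reports into per-client monthly rates. The record fields, unit names, and census figures are hypothetical illustrations, not the instruments or record systems described in this article.

```python
from collections import Counter

# Hypothetical incident records: (year-month, living unit, incident type).
# In practice these would come from the agency's incident-report log.
incidents = [
    ("1986-03", "Unit A", "truancy"),
    ("1986-03", "Unit A", "physical restraint"),
    ("1986-03", "Unit B", "truancy"),
    ("1986-04", "Unit A", "truancy"),
]

# Hypothetical average census (number of residents) per living unit.
census = {"Unit A": 10, "Unit B": 12}

def monthly_rates(records, census):
    """Count incidents per (month, unit, type), then divide by the
    unit's census to express each count as a rate per client."""
    counts = Counter(records)
    return {key: count / census[key[1]] for key, count in counts.items()}

for (month, unit, kind), rate in sorted(monthly_rates(incidents, census).items()):
    print(f"{month}  {unit}  {kind}: {rate:.2f} per client")
```

A truancy or restraint rate that rises in one unit relative to the others is the kind of signal that would prompt the questions raised above.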
Another way to get a sense of the climate in treatment is fairly simple and direct, i.e., ask the clients themselves. Through the use of surveys, it is possible to get client perceptions on a number of issues related to treatment. Such issues may include the following:
Relationships within the treatment group
Threats of physical intimidation
The development of negative countercultures
The atmosphere created by staff
Feelings of coercion by staff
Quality of communication with staff
Perceived impact of treatment on client and family.
By using highly reliable and valid questions, information can be generated which provides a “picture” or an assessment of the treatment climate from the clients’ point of view.
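As a sketch of how survey responses might be turned into such a picture, the Python fragment below averages Likert-style items (rated 1 to 5) into scale scores for each climate issue. The item names and scale groupings are hypothetical, not the instrument used at Starr Commonwealth.

```python
# Hypothetical climate survey: each client rates items from
# 1 (strongly disagree) to 5 (strongly agree); items are grouped
# into scales matching the issue areas listed above.
SCALES = {
    "physical safety": ["feel_safe_in_group", "no_threats_from_peers"],
    "staff atmosphere": ["staff_listen", "staff_fair"],
}

responses = [  # one dict of item ratings per client
    {"feel_safe_in_group": 4, "no_threats_from_peers": 5,
     "staff_listen": 3, "staff_fair": 4},
    {"feel_safe_in_group": 2, "no_threats_from_peers": 3,
     "staff_listen": 4, "staff_fair": 4},
]

def scale_means(responses, scales):
    """Average each scale's items across all respondents."""
    means = {}
    for scale, items in scales.items():
        ratings = [r[item] for r in responses for item in items]
        means[scale] = sum(ratings) / len(ratings)
    return means

for scale, mean in scale_means(responses, SCALES).items():
    print(f"{scale}: mean rating {mean:.1f} / 5")
```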
Parallel to client perceptions, staff perceptions can also be used to assess climate. Climate, as it relates to administrative support, can be assessed by asking questions about how administrators manage the following areas:
Understanding of organizational goals
Recognition of performance
Conflict resolution
Personnel development
Communication
Administrative competency
Basis of influence
Administrative style
Staff job satisfaction.
If staff work together on treatment teams and in work groups, it’s also important to assess what kind of climate that team creates for itself. This can be done by asking staff questions about their teams in relation to the following:
Communication
Team support
Conflict resolution
Decision-making
Teamwork.
Finally, staff may be asked questions about how effective they think they are with the children and families they serve.
The information gathered from truancy and incident rates, along with the assessments made from the climate surveys, provides a way of looking at the process of treatment. For example, a positive treatment climate would be one in which kids feel physically safe. If truancy and restraint rates are high, in conjunction with survey results indicating that kids feel physically intimidated by other kids or staff, the issue of safety needs to be addressed and the process of treatment adjusted to address those perceptions. In the same way, if staff indicate, via their climate survey, that communication with their administrators has become disrupted or ineffective, those administrators and staff need to work together to solve the problem.
A positive treatment climate is only one of the components that determine the quality of treatment to children and families. There are others. For example, one major focus of treatment may be interaction with families, or therapy with families. Here again we ask ourselves the question: are we doing everything with families that we said we would do? To answer that question, we may monitor the amount of contact staff have with families, how often families come to campus to visit their children, and how many families engage in family therapy. All of this information helps us to do a “reality check” on what we intended our work with families to be.
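Monitoring of this kind reduces to simple counting. A minimal sketch, with a hypothetical contact log and illustrative event categories, is shown below.

```python
from collections import Counter

# Hypothetical log of family-involvement events for one quarter.
contact_log = [
    {"family": "A", "event": "staff phone contact"},
    {"family": "A", "event": "campus visit"},
    {"family": "B", "event": "family therapy session"},
    {"family": "B", "event": "family therapy session"},
]

# Tallies per family and per kind of event: the "reality check"
# against what the family-work plan called for.
per_family = Counter(entry["family"] for entry in contact_log)
per_event = Counter(entry["event"] for entry in contact_log)

print("Contacts per family:", dict(per_family))
print("Contacts by type:   ", dict(per_event))
```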
Product evaluation
Process evaluation helps to confirm that what we plan
to do is, in actuality, being done. Once that’s done, product evaluation
can be implemented. At this point, the question becomes: is what we did
working? In other words, product evaluation gives us a way of learning
what the impact of our treatment process is. As with process evaluation,
it is a good idea to take a multi-faceted approach to measurement and
find several ways to define program outcomes. Some suggestions are shown
in Figure 2.
Figure 2. Product evaluation measures for residential care programs
When we look at the concept of a successful program, one of the first questions that comes to mind is: how many clients completed their treatment programs? In the above figure, this is the variable referred to as the completion rate, calculated as the percentage of all clients leaving in a specific time period who completed their treatment programs or treatment goals. A second question that often arises concerns the length of time clients are in treatment. This is referred to as the average length-of-stay, usually calculated in average number of months for residential care. The truancy dismissal rate indicates the percentage of cases dismissed due to excessive truancy. Educational gain reflects how much educational progress children make while in treatment. Self-esteem gain indicates how much children's attitudes about themselves change as a result of being in treatment.
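To make the arithmetic concrete, here is a minimal Python sketch computing three of these measures from a list of discharge records. The field names and sample values are hypothetical.

```python
# Hypothetical discharge records for one reporting period.
discharges = [
    {"outcome": "completed", "months_in_care": 11.0},
    {"outcome": "completed", "months_in_care": 13.5},
    {"outcome": "truancy dismissal", "months_in_care": 6.0},
    {"outcome": "other dismissal", "months_in_care": 9.0},
]

n = len(discharges)

# Completion rate: percentage of all clients leaving in the period
# who completed their treatment programs or goals.
completion_rate = sum(d["outcome"] == "completed" for d in discharges) / n

# Average length-of-stay, in months, over the same discharges.
avg_los = sum(d["months_in_care"] for d in discharges) / n

# Truancy dismissal rate: percentage of cases dismissed for excessive truancy.
truancy_rate = sum(d["outcome"] == "truancy dismissal" for d in discharges) / n

print(f"Completion rate:        {completion_rate:.0%}")
print(f"Average length-of-stay: {avg_los:.1f} months")
print(f"Truancy dismissal rate: {truancy_rate:.0%}")
```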
The variables discussed so far reflect the impact of the program while the child is in placement. Another outcome for residential care is the level of adjustment the child shows after placement. This adjustment can be assessed at any time, although three-month and twelve-month follow-ups help give information about immediate post-treatment adjustment and more long-term adjustment. During these follow-ups, the types of placement clients are in are monitored to provide a way of looking at recidivism back into institutional care. The productivity level is also assessed in terms of whether or not kids are in school, employed, or both. Arrest rates and police contacts can also be monitored. These kinds of data provide feedback about how well clients are doing once returned to their home communities.
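Follow-up status can be summarized the same way. The sketch below, with hypothetical follow-up records, computes a placement rate (the complement of recidivism back into institutional care) and a productivity rate (in school, employed, or both).

```python
# Hypothetical three-month follow-up records.
followups = [
    {"placement": "home", "in_school": True, "employed": False},
    {"placement": "home", "in_school": False, "employed": True},
    {"placement": "institution", "in_school": False, "employed": False},
]

n = len(followups)

# Placement status: share of followed-up clients not re-placed
# in institutional care.
placement_rate = sum(f["placement"] != "institution" for f in followups) / n

# Productivity: share in school, employed, or both.
productivity_rate = sum(f["in_school"] or f["employed"] for f in followups) / n

print(f"Placement status: {placement_rate:.0%}")
print(f"Productivity:     {productivity_rate:.0%}")
```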
Yet another outcome variable is the feedback that other consumers of our services have. Useful information can be obtained by asking referral agencies what they think about our program, our communication with them, and the impact of our services. Those same questions can be asked of families and school systems.
As has been shown, there are a number of ways of measuring the outcomes of residential care. However, there needs to be a way of judging the value of those outcomes as well. What is a “good” completion rate? What is a “successful” length-of-stay? As organizations begin to conduct program evaluations, standards can be set to help interpret what the outcome data tell us. Referring agencies may have performance indicators as part of their contracts with organizations. Administrative and treatment team staff will have some ideas of what those standards ought to be. Other organizations may also have standards for review. The performance standards set for residential care at The Starr Commonwealth Schools are shown in Figure 3, as an example:
Figure 3. Performance standards for residential care

Measure | Statistic
Completion rate | 75%
Average length-of-stay | 12 months
Truancy dismissal rate | 10%
Educational gain | 1.5 years
Self-esteem gain | +10 pts.
Three month follow-up: Placement status | 90%
Three month follow-up: Productivity | 90%
Twelve month follow-up: Placement status | 70%
Twelve month follow-up: Productivity | 70%
Standards such as those above set the stage for expectations for program performance. Over time, performance on the standards, along with process evaluation information, will point to programs that can be labeled as successful. A program that provides a positive treatment climate and quality family interventions will result in achievement of its performance standards. The performance of The Starr Commonwealth Schools over the last four years is shown in Figure 4, as an example.
Figure 4. Performance of The Starr Commonwealth Schools

Measure | Standard | 1983 | 1984 | 1985 | 1986
Completion rate | 75% | 71% | 68% | 76% | 78%
Average length-of-stay | 12 months | 13.9 | 14.1 | 12.9 | 12.1
Truancy dismissal | 10% | 12% | 10% | 5% | 5%
Educational gain: Math | 1.5 | NA | NA | NA | 1.7
Educational gain: Reading | 1.5 | NA | NA | NA | 1.7
Self-esteem gain | +10 pts. | NA | NA | NA | NA
3 month follow-up: Placement | 90% | NA | 92% | 92% | 94%
3 month follow-up: Productivity | 90% | NA | 93% | 92% | 85%
12 month follow-up: Placement | 70% | 73% | 71% | To be determined | To be determined
12 month follow-up: Productivity | 70% | 69% | 61% | To be determined | To be determined
These data suggest that the performance standards were not met until the recent years of 1985 and 1986. As the standards have been monitored over the past four years, however, there has been steady improvement in almost all measures. This information actually reflects the aggregated performance of all five residential programs at Starr. Over the past four years, the individual program performance has seen steady improvement as well.
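One simple way to monitor such results is to flag, for each measure and year, whether the standard was met. The Python sketch below does this for two of the Figure 4 measures; note that some standards are floors (meet or exceed) while others, like the truancy dismissal rate, are ceilings.

```python
# A few measures from Figure 4: (name, standard, direction, {year: value}).
# "ge" means the value must meet or exceed the standard;
# "le" means it must not exceed it.
measures = [
    ("Completion rate", 0.75, "ge",
     {1983: 0.71, 1984: 0.68, 1985: 0.76, 1986: 0.78}),
    ("Truancy dismissal", 0.10, "le",
     {1983: 0.12, 1984: 0.10, 1985: 0.05, 1986: 0.05}),
]

for name, standard, direction, by_year in measures:
    for year, value in sorted(by_year.items()):
        met = value >= standard if direction == "ge" else value <= standard
        status = "met" if met else "not met"
        print(f"{year}  {name}: {value:.0%} (standard {standard:.0%}) {status}")
```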
The need for client description
Findings such as a 78 percent completion rate and an
average length-of-stay of 12.1 months need to be put into the context of
the kind of child served in residential care. Perhaps it is the case
that younger children will be in treatment longer, and that emotionally disturbed
children may exhibit higher truancy dismissal rates. One reaction to the
data just presented might be that the completion rate standard is too
low. One might question why “only” 75 percent of the clients complete
treatment. Client descriptors provide a way of responding to these
issues. An example of the descriptors characteristic of the residential
care population at Starr Commonwealth is shown in Figure 5.
Figure 5. Client descriptors for residential care population at Starr Commonwealth
Characteristic | Finding
Ethnic distribution | 40% minority
Average age at admission | 14.5 years
Family setting | 50% single parent; 15% both parents
Documented abuse rate | 31%
Habitual drug use rate | 32% marijuana; 14% alcohol
Truancy risk in previous school setting | 62%
Disciplinary rate in previous school setting | 66%
Average number of charges | 4.7
Average number of adjudications | 4.0
Average number of previous placements | 3.2
Predominant criminal offenses | 40% larceny/theft; 34% burglary; 25% assault
These data
suggest that the type of youth served by The Starr Commonwealth Schools
comes from disrupted family situations where physical and sexual abuse
often occur. Their experiences in school have been, for the most part,
negative. They have been arrested and convicted for serious crimes and
have been in out-of-home placements frequently, very rarely for purposes
of treatment. In general, these youth are highly delinquent
adolescents living in chaotic family settings, alienated from their
schools and communities, and disheartened by out-of-home placements that
do not address their needs. In this context, the performance standards
set for the residential programs that serve these youth are optimistic.
References
Stufflebeam, Daniel L., Foley, Walter J., Gephart, William J., Guba, Egon G., Hammond, Robert L., Merriman, Howard O., and Provus, Malcolm M. (1971). Educational Evaluation and Decision-Making. Itasca, Illinois: F.E. Peacock Publishers.
The Joint Committee on Standards for Educational Evaluation (1981). Standards for Evaluations of Educational Programs, Projects, and Materials. New York: McGraw-Hill Book Company.
This feature: Ameen, C.A. and Mitchell, M.L. (1988). How do you know when your programs are working? Journal of Child Care, Special Issue: “Whole days, whole lives: Building competence in the child care environment,” Spring 1988, pp. 59-67.