Grant Charles raised a nagging issue in his column over the last two months (August, September) when he reminded us that unless we had clear criteria for defining success or failure in our work, we would remain in doubt as to our own usefulness and about our job satisfaction. He referred to a colleague who had been crushed because a young sex offender she had worked with had reoffended. Grant no doubt got many readers thinking ...
He introduces what could be a most fruitful debate. One of the features of the past two decades in our field has been the expectation of state departments and other funders that we should be able to demonstrate by figures the effectiveness of our programs. "What do we get for our money?" has been the question, with the required answer being "562 fewer instances of ... and 23,367 more cases where ..." The numbers game. This has always been open to abuse. For one, we can reduce this youth's violence by locking him up and throwing away the key -- that cuts the problem he presents to society by 100%. Alternatively, we could refuse admission to the really resistant kids (or transfer them to programs which "have better resources" or are "more appropriate") and this would greatly enhance our agency's performance figures. In any case, the whole "behavioural accounting" exercise was always open to "cooked books" reporting.
Not that the laying down of outcome criteria was in itself unrealistic. The results of child and youth care programs in the past had been characteristically fuzzy: high proportions of youth left us seemingly unimproved; strict behavioural programs did produce some impressive statistics on a case-by-case basis; there were few, if any, criteria for "success" and many of us went by subjective, "gut feeling" or anecdotal yardsticks. Perhaps the single enduring lesson was from studies twenty years ago which showed that the best outcomes were related to programs which kept youngsters in closer touch with (rather than removing them from) their families and communities of origin.
So are we nearer to accurate measures of success today? Can we expect any measures to apply across the whole clientele of any one agency -- or are we still limited to case-by-case results? Do we even want the kind of results which funders are asking for?
I, for one, resist the unitary diagnosis basis for measuring success. This seems to be at odds with our developmental approach. If I can reduce the mere number of one young person's "offences", what have I really done for him as a person? Do we not want, rather, to widen his repertoire of experience, thinking, self-understanding, relatedness, skills, achievement, possibilities, etc., so that this single diagnosed problem becomes less significant and less likely within a growing, maturing and more differentiated person? Oh that we had a clearer curriculum for achieving these things, and that we could measure them as easily as we can measure his reading or mathematical progress. But we certainly want to move the youth beyond being "John the sexual offender" or "John the substance user".
Is it not true, also, that the best measure of success is the youth's ability to function, just adequately, back home in his family and neighbourhood? So that much of our effort is directed not at the kid but at the people and circumstances he lives with. This widens our focus -- and vastly complicates our search for measurement criteria for our intervention across all of the role-players -- but at least keeps him in the game, not on the sidelines. Hobbs1 says this better than I can:
"We assume that life is more healing than we are, and that our intervention is an emergency measure, that our goal is not the complete remaking of a child. What we try to do is to get the child, the family, the school, and the community just enough above the threshold of the requirements of each from the other, so that the whole system has a just-significant margin of probable success over probable failure ... It is possible for a system to work without the necessity of any intrapsychic change in the child at all."
To add a little more confusion to this debate, before we flagellate ourselves over our successes and failures, might we not also be asking "So how successful is this community being with this kid, or how successful is our city, our country, our world, being with this kid?" We might prove to have been the only people who didn't reject him, who took the time to be with him, who inculcated some self-esteem, who built better survival and effectiveness skills, who tried to influence his family and school and community to offer more hope and more opportunity ... and how are we going to measure all that in the context of his whole life -- even if he does offend again?
1. Nicholas Hobbs, in a paper entitled "The process of re-education", delivered at the first annual workshop for the staff of Project Re-Ed, Gatlinburg, Tennessee, 1964.
See also Generosity Without Measurement: It Can't Hurt?
Welcome to new members
Well over two hundred new members joined our daily discussion group list during September. Many of these were students starting the new term at universities and colleges; many others were child and youth care professionals at all levels who visited our web site and signed up. You are all warmly welcomed.
It is good for a group like this to experience an inflow of newcomers, for this changes us all. CYC-NET is a unique "cybercommunity" of people with a common interest in work with children, youth and families. We are people ... long-standing and new members, experienced practitioners and interns, younger and older, teachers and students, policy-makers and administrators, writers and readers ... to whom any of us can turn at any time with an idea to try out, a problem to raise, a request to make or information to share.
This is an extraordinary meeting place for an important profession, and we thank our experienced members for their willingness to be of help when questions are asked.