Teaching How to Monitor for Quality
Monitoring is not a word always embraced in the quality field. To some, it evokes images of Big Brother in the workplace. To others, the word feels paternalistic: Dad is keeping a watchful eye on things. To still others, especially those in the health services realm, it seems old-fashioned. The quality world in health has moved beyond quality assurance to the glitzier world of quality improvement.
But ask social service administrators or board members these questions, as I have.
“Would you like this agency to become more data-driven?”
“Would you like to be able to see, in metrics, how your agency programs are performing?”
“Would you like to be able to spot trends over time, in how they perform?”
The answers are almost always, "Yes, please." They are tired of describing their work in terms of numbers served. I tentatively started down this road professionally a few years ago, when a public agency administrator I respect told me, "Curtis, I think my employees would respond very well to data. I'd love for us to be a data-driven agency, but I have no idea where to start." At that time, I myself had only vague ideas about how to help. As a social service researcher, I felt this was something I should know.
This year, I am teaching, as a pilot effort, a course in a school of social work called "Quality Monitoring and Improvement in the Social Services." The students are second-year master's students, evenly split between those who want careers in direct practice and those who want careers in social service administration. We have completed the quality monitoring portion of the class, and I thought I'd report on how it went, the challenges I faced, and how the students fared on their major assignment related to monitoring. I finished this teaching module more convinced than ever that quality monitoring is a vital agency function and that the skills associated with it are important ones for agency-based quality professionals to possess.