Who Would Have Guessed That Research Methods Would Get Me All Excited?

Weist, M. D., Youngstrom, E. A., Stephan, S., Lever, N., Fowler, J., Taylor, L., … Hoagwood, K. (2014). Challenges and ideas from a research program on high-quality, evidence-based practice in school mental health. Journal of Clinical Child and Adolescent Psychology, 43(2), 244–255. doi:10.1080/15374416.2013.833097


Two weeks ago I wrote about a study on school mental health (SMH) conducted in two schools in Baltimore, MD. That study was led by Mark D. Weist, who I’m learning is THE man for SMH theory and practice. He has his hands in just about everything I come across. This week I took a look at another study he recently published, “Challenges and Ideas From a Research Program on High-Quality, Evidence-Based Practice in School Mental Health” (Weist et al., 2014).

In this study, Weist et al. (2014) discuss their findings from two separate research projects funded by the National Institutes of Health. Well… they sort of discuss their results. They give some information about what the studies found, but the article is really more about the process of collecting and analyzing the data. (I am already seeing my brain change… If you had told me six weeks ago that I would get so excited when I came across clarifying research methods, I would have thought you were crazy!)

Ultimately, they determined that the same things tend to be a problem for program pilots: practitioners trying to learn too much in too short a time, and not enough follow-up coaching to carry out the skills with fidelity. In the past, they relied on manuals provided to each clinician. These manuals were nice because they covered many areas, but there were “concerns about their perceived ‘one size fits all’ approach, and associated concerns about the rigid need for adherence in spite of changing presentations in students and their circumstances” (Weist et al., 2014). This time, they attempted modular evidence-based practices, which allow for more flexible training opportunities in the specified areas. These were somewhat successful, but there were SO. MANY. to get through that the participants often felt overwhelmed by the requirements. The researchers also ran into some of the same concerns as last time, such as “competing responsibilities [of clinicians], lack of support from school administration and teachers, lack of family engagement, [and] student absenteeism” (Weist et al., 2014, p. 253).

They also ran into difficulty with their statistical models. I’m in the middle of my statistics class, so I wasn’t able to follow every problem they mentioned, but I definitely understood why the things they described were problematic. Any time a practitioner changed or a family left the study, it messed things up statistically (as well as in the children’s lives, I imagine). They ran into difficulties with statistical power (because the sample size was inherently small), reliability, missing data, and even deciding what type of analysis to use.
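To convince myself that small samples really do cause power problems, I tried a toy simulation of my own. This is just my sketch, not anything from the article, and the effect size and group sizes are made-up numbers for illustration:

```python
# Toy Monte Carlo power check: how often does a simple two-group
# comparison detect a real (simulated) effect at each sample size?
# All numbers here are invented for illustration.
import random
import statistics

def simulate_power(n_per_group, effect=0.5, trials=2000, alpha_cutoff=1.96, seed=42):
    """Fraction of simulated studies whose t statistic clears a rough
    large-sample cutoff (approx. alpha = .05, two-sided)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        control = [rng.gauss(0, 1) for _ in range(n_per_group)]
        treated = [rng.gauss(effect, 1) for _ in range(n_per_group)]
        # Welch-style t statistic for two independent groups
        mean_diff = statistics.mean(treated) - statistics.mean(control)
        se = (statistics.variance(control) / n_per_group
              + statistics.variance(treated) / n_per_group) ** 0.5
        if abs(mean_diff / se) > alpha_cutoff:
            hits += 1
    return hits / trials

small = simulate_power(10)    # e.g. only 10 clinicians per arm
large = simulate_power(100)   # same effect, much bigger sample
# power climbs toward 1 as the sample grows; with n = 10 the same
# real effect is missed most of the time
```

The same simulated effect gets detected far more often with the larger groups, which is exactly the bind a small pilot study is in.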

At this point I have read a lot of articles with Dr. Weist’s involvement. I don’t know if it’s really his doing or not, but I am consistently pleased with the way these articles are set up. He uses a lot of headings and subheadings that make it easy to follow and to find information I read earlier. I have also really appreciated the apparently high level of transparency in his writing. He is explicitly up-front about the funding sources for the projects; they aren’t just hidden in the fine print or buried on the title page. That information is in the body of the introduction, and he explains his own ties to SMH. Given that he is so closely involved with so many SMH projects, I am really impressed with his transparency about what has gone well and what has not. I’m sure there are ethical rules about these sorts of things, but I periodically come across studies in which I don’t really trust the findings because I don’t think the authors are giving all the information. For these SMH articles, though, I do. They are very honest about the things that haven’t worked, and their problems tend to match the problems I have encountered implementing other programs.

One last thing I enjoyed about this particular article was that the two studies mentioned were both randomized controlled trials (RCTs). I have read a few other articles that use this method, but it’s hard to carry out in a school setting with real students. How do you say to one family, “Sorry, you are still in the study, but you don’t actually get help for your kid”? Who would sign up for that? Also, would it actually pass an IRB committee? (An IRB is an Institutional Review Board, which acts as an ethics committee for biomedical and behavioral research involving humans.) In this case, though, they were able to give Personal Wellness training to the control group. So even though they weren’t specifically addressing mental health, they were not just leaving the kids to fend for themselves. The other problem with randomization they addressed was how to set up the randomized parts. Rather than randomize the students (you can’t make a kid go to a different school just because they are in the control group), they randomized the clinicians. Brilliant!
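Just to see what clinician-level randomization looks like mechanically, here’s a toy sketch I wrote for myself. The clinician names and the even two-arm split are my own invented illustration, not the article’s actual procedure:

```python
# Sketch of clinician-level (cluster) randomization: each CLINICIAN,
# not each student, is assigned to a condition, so all of a
# clinician's students stay together in the same arm.
import random

def randomize_clinicians(clinicians, seed=0):
    """Split a list of clinicians evenly between two arms."""
    rng = random.Random(seed)          # fixed seed so the split is reproducible
    shuffled = list(clinicians)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {
        "intervention": shuffled[:half],   # e.g. gets the new training/support
        "comparison": shuffled[half:],     # e.g. gets Personal Wellness training
    }

# Invented example roster of six clinicians
arms = randomize_clinicians(["Avery", "Blake", "Casey", "Drew", "Emery", "Finn"])
```

Because the unit of randomization is the clinician, no student has to change schools or clinicians, which is exactly why the design struck me as so clever.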

The only mildly annoying thing was the number of acronyms. We have a lot in special education, so I recognize how helpful they are to people who use them often. But at first it was a little tedious to keep checking what various ones stood for. I ended up making a list on a note I kept to the side while reading. This helped, and by the end I was hardly looking at it. So it was a little annoying at first, but not overbearing, and I’m not sure I would recommend spelling the words out, since they were used A LOT (and that would be annoying, too!).

This article was a little different from the other ones I’ve posted about because it wasn’t as focused on the findings as it was on the process. It got me thinking more about my action research and how I will actually be able to put something into place. Obviously it won’t be on this scale. But I am considering using some of the methods discussed in this article. Will a randomized controlled trial be an option? I’m not sure, but before reading this I hadn’t even considered one. The authors reference a tension I am just starting to think about myself:

Is the primary “participant” the clinician or the students? From a policy and public health standpoint, student-level outcomes are imperative. However, the intervention of interest manipulates the training and support for clinicians, and our hypotheses emphasize effects on clinicians’ attitudes, knowledge, and behavior. (Weist et al., 2014, p. 249)

Finally, they referenced a questionnaire they used to help measure participants’ attitudes toward, understanding of, and implementation of SMH. Specifically, they used the School Mental Health Quality Assessment Questionnaire (SMHQAQ; Weist, Stephan, Lever, Moore, & Lewis, 2006). While I don’t think this particular questionnaire will be helpful yet (there needs to be at least a semblance of SMH in place for answers to be meaningful), it did give me some ideas for other types of questionnaires I can look for, or create myself if necessary.



Weist, M. D., Stephan, S., Lever, N., Moore, E., & Lewis, K. (2006). School Mental Health Quality Assessment Questionnaire. Baltimore: Center for School Mental Health Analysis and Action.

Weist, M. D., Youngstrom, E. A., Stephan, S., Lever, N., Fowler, J., Taylor, L., … Hoagwood, K. (2014). Challenges and ideas from a research program on high-quality, evidence-based practice in school mental health. Journal of Clinical Child and Adolescent Psychology, 43(2), 244–255. doi:10.1080/15374416.2013.833097