More questions than answers…

Heller, J. I., Daehler, K. R., Wong, N., Shinohara, M., & Miratrix, L. W. (2012). Differential effects of three professional development models on teacher knowledge and student achievement in elementary science. Journal of Research in Science Teaching, 49(3), 333-362.

I’ll admit that over the past few years, after receiving my master’s in Educational Administration and Supervision, I’ve been plagued by the question, “What are the elements that make professional development effective, and how do we know that?” Over the years I’ve gleaned bits and pieces of knowledge and experience from here and there: sustained and supported professional development, tailored learning, collaborative learning communities, and reflection cycles. As I move forward in the inquiry process of this doctoral program, I look forward to delving deeper into this quagmire of a question and finding out more about the “how do we know that?” portion.

So that brings us to my highlighted journal article of the week. This week I read “Differential effects of three professional development models on teacher knowledge and student achievement in elementary science” (Heller et al., 2012). What I immediately appreciated about this research study is that it took three unique approaches to professional development for elementary science teachers and compared teacher and student assessment results to gauge the effectiveness of the different programs. The study took place across a range of states and utilized a train-the-trainer model. It encompassed 253 teachers after all data were collected and exclusions were made. These teachers were divided into four groups: 1) a control group that got the same science content knowledge as the other teachers, 2) a group that studied the content and analyzed student responses and teacher instructional choices via a set of case studies, 3) a group that received content instruction and analyzed their own students’ work throughout the unit, and 4) a group that utilized metacognition and analysis to develop their professional practices. The content all of the groups were learning focused on electrical circuits, which is often covered in fourth grade.

After the initial study year, some teacher participants and their new students were asked to take the electricity test again to see how much knowledge the teachers had retained over time and whether they were still positively impacting their students. The results were that all three professional development courses produced large gains in the initial study year and continued to have positive effects on teacher knowledge and student achievement in the second year. All three courses also showed significant increases in content knowledge in both the initial year and the follow-up year. One of the more surprising results was the effect the professional development had on ELL students’ content scores. Another element being tracked was how well teachers and students could explain or justify the answers they provided on a selected-response assessment. In this area, one of the professional development courses really stood out: the Teaching Cases program.

I think in general this study had some really important strengths. The first I recognized was that the study utilized selected-response assessments that had undergone extensive reliability and validity testing. The manner in which the teachers were taught the electrical circuit content was also intentionally built around exploration and collaboration. Much care was taken, with procedures in place, to make sure that assessment scoring was reliable and that individual trainer skill was not affecting teachers’, and therefore students’, scores. Another strength of the study was how thoughtful the designers were not to create professional development that could only work in a perfect, almost impossible to achieve environment. The study included hundreds of teachers and thousands of students and employed a very difficult train-the-trainer setup that covered a large geographical range. There was also diversity in the student and teacher populations.

I think that some of the critiques I have for the article concern its literature review and its organization. Design decisions were made for the study, but some of those decisions weren’t explained until the results section of the article. The literature and framework explaining why the three courses were designed the way they were seemed to be lacking. This is especially true when I consider that it was unclear why some smaller components were repeated among the three course types.

Another critique I had was the execution of one of the chosen methods. The control group and two of the other courses were completed before the actual student unit was taught, with testing done as a pretest and posttest. However, due to the nature of the Looking at Student Work course, that professional development happened concurrently with the unit. I think this distinction is essential when you take into account that it was the only course that significantly improved students’ abilities to justify their answers.

One other critique I had for the article, and the inspiration for the title of today’s blog, is that though the article reported some great results, its design and execution left many more questions at the end than at the beginning. For example, there was no real intentionality around impacting ELL students through these professional development courses, yet there was a significant impact. Unfortunately, it is unclear to what or to whom we owe that credit.

One last critique I had for the methods in this study was that teacher participants themselves administered the student assessment. It seems that in research like this you would utilize a proctor, or have a partner proctor present, to help validate the execution of the protocol.

I found this article particularly interesting as we just finished reading one of Dr. Amrein-Beardsley’s recent articles on value-added models in teacher evaluation (Paufler & Amrein-Beardsley, 2013). That article highlights the importance of random student placement to avoid bias and data influenced by “stacked” classes. Since that concept was not addressed in this article, I assume it was not factored into the analysis or design of the project.

I think one great way this study could be built upon is to take a deeper look at each of the professional development courses, really unpack the literature and theoretical frameworks behind them, make an intentional case for why these courses will impact ELL students, and then measure that element.

Overall, this article was illuminating, but I really found myself asking more questions about the research design rather than, “How can I take some of these elements and put them into practice?”

Paufler, N. A., & Amrein-Beardsley, A. (2013). The random assignment of students into elementary classrooms: Implications for value-added analyses and interpretations. American Educational Research Journal, 0002831213508299.

