PBL + SL = A Successful Developmental Learning Community

Butler, A., & Christofili, M. (2014). Project-based learning communities in developmental education: A case study of lessons learned. Community College Journal of Research and Practice, 38(7), 638-650. doi:10.1080/10668926.2012.710125

For this week’s readings, we were assigned Michelle E. Jordan’s and Reuben R. McDaniels, Jr.’s article focusing on managing uncertainty during a collaborative activity.  The paper documented students’ attitudes and perceptions toward this style of teaching.  This article reminded me of my own experiences with project-based learning (PBL), as well as much of the literature I have read over the years about PBL.  Consequently, I decided to focus my research review this week on the efficacy of project-based learning in a community college developmental classroom.  Below is a summary of a recent article in the Community College Journal of Research and Practice.
 
Article Summary

The purpose of Butler and Christofili’s study was to further examine the relationship between project-based learning and service learning within the context of a developmental education learning community.  The goal of the study was to “help instructors avoid some of the pitfalls that arise when forming and implementing PBL and to help instructors implement successful PBL” (Butler & Christofili, 2014) by strategically designing the project.

The study was conducted at a large urban community college in the Pacific Northwest: Portland Community College.  It focused on four learning communities built around developmental courses (math, reading, English, and college success), examined over four consecutive terms.  The first two terms involved developmental education students, a learning community, and project-based learning; the final two terms added service-learning to the learning community (Butler & Christofili, 2014).

The researchers documented each term’s learning community in the following manner: project question, project implementation, competency assessment, and lessons learned.  Overall, the researchers provided conclusions about designing PBL with a service-learning component integrated into a developmental learning community.  Specifically, projects must be of proper scope, instructors need to be flexible given all the potential moving parts, projects must be relevant to learning in the respective courses, and managing student group dynamics must be purposeful and strategic (Butler & Christofili, 2014).

Strengths and Critiques

The strength of the article is its practical application to designing learning communities within a community college environment.  The authors provide tangible recommendations on design elements and strategies for integrating service-learning into a learning community.  They also provide a solid literature review that references many studies and articles illustrating the efficacy of learning communities and service-learning for community college students.

The research design described in the study, however, is lacking.  The researchers reviewed student feedback and their own experiences as both researchers and instructors.  I expected greater emphasis on student voice in the research, but this was not evident: I found no evidence that students were interviewed to determine their attitudes and perceptions.  Nor did I find evidence that all the instructors across the four disciplines were interviewed.

The authors make many claims regarding the successes or failures of the respective learning communities but do not clearly describe the evidence on which those claims are based.  For example, the authors state that the story theme of the third-term project generated “overwhelming student buy-in” (Butler & Christofili, 2014), but I did not find evidence as to how the researchers came to this conclusion.  The majority of the authors’ conclusions are based on their observations of the students and review of students’ projects.  However, I question the strength and objectivity of this case study analysis because both authors were also the instructors of the program.  I appreciate the perspectives of the instructors; however, I believe the research could have been enhanced by a third-party observer/researcher interviewing students, observing classes, and reviewing final projects.

I was very disappointed that this article did not include persistence data (students enrolling in the next semester and remaining at the college) for the students participating in the learning community.  The article would have been strengthened with more quantitative data.  The only statistic provided was that the retention rate for the third term was higher than in previous terms (Butler & Christofili, 2014).  Statistics, as we have discussed, do not tell the whole story.  But in this case, evidence that a learning community designed in this manner could lead to a) increased retention, b) increased persistence, and/or c) a higher rate of course success is vital if other instructors or community colleges are to adopt this type of instructional model.

Consequently, I offer the following suggestions to improve this study:

  1. Utilize an observer who is not an instructor;
  2. Provide data on the success, retention, and persistence rates of the respective cohorts;
  3. Provide evidence for the conclusions and assertions that are made; and
  4. Focus more on student learning outcomes and impact on the community organizations involved in the service-learning component of the instruction.

My Take

Despite the reservations I have regarding this case study analysis, I am excited about how this article relates to my current role at GCC.  I have been charged with launching our service-learning effort at the college.  We have had pockets of service-learning offered by faculty in various disciplines; however, we do not have a coordinated effort that supports faculty in these endeavors.  Furthermore, I do not believe we have an understanding across our college that service-learning is, and can be, a meaningful instructional strategy that promotes learning.  Most individuals, when talking about service-learning, tend to focus on the service: the benefits to the community organization and how participation in service-learning improves students’ feelings and perceptions toward school.  This article, though, reminded me of the need to emphasize that service-learning can and does improve student learning.  And, the article sparked in me an interest to learn more about the impact of service-learning on the developmental student population.  I would venture a guess that the majority of service-learning programs in community colleges across the US focus more on high-achieving students (possibly a research question to explore…).  But this case study suggests the strategy can also have a positive impact on developmental students.  Ultimately, I am now rethinking how we roll out our service-learning initiative.  Perhaps we target a range of interested faculty across multiple disciplines, with developmental education students as a focus.  This may prove to be a strategy that positively impacts our success rates while also emphasizing the role we play as a college in our community.

Another take-away from the article is that the instructors struggled to build accountability into their group projects.  I am continually surprised at how frequently this comes up as a challenge for instructors.  Designing effective collaborative learning experiences is challenging; instructors need to plan extensively to build individual and group accountability into the course for all students involved.  Repeatedly, the instructors indicated that students were upset that some of their classmates did the majority of the work while other students apparently did less.  This has always been a challenge of collaborative learning, and there are many articles and guides developed to assist faculty with strategies to ensure students are accountable both for the work of the group and for their individual role within it.  This article serves as a reminder that additional professional development is probably needed locally at GCC to provide faculty with the skills and strategies to design meaningful and effective collaborative learning experiences.

Finally, I have a renewed sense of excitement around the benefits of learning communities and service-learning in developmental education.  And, this renewed excitement may inspire me to focus my research efforts in this direction.

Menu: Accelerated Learning – Best with Sides

Hodara, M., & Jaggars, S. S. (2014). An Examination of the Impact of Accelerating Community College Students’ Progression Through Developmental Education. The Journal of Higher Education, 85(2), 246–276. doi:10.1353/jhe.2014.0006

Most community colleges are feverishly trying to meet President Obama’s College Completion Challenge of increasing the number of students who complete a degree or other credential by 50% by the year 2020.  The task is large.  Fewer than 30% of community college students graduate within six years, and even fewer of those who test into below-college-level (aka developmental) courses graduate within that time (College Completion Challenge Fact Sheet).  Whether you are a neo-liberal who wants students to contribute to the economy or an old-fashioned liberal who wants equality for all people, especially those who have been traditionally under-served, you may find this article examining accelerated learning in English and math classes at the City University of New York (CUNY) community colleges a worthy read.

There are several items on the menu of strategies for helping the under-prepared learner progress toward graduation, e.g., accelerated learning, contextualized learning, and problem-based learning.  This article focuses on accelerated learning, an approach in which the developmental sequence of courses an under-prepared student must take is shortened or sometimes offered concurrently with college-level courses.  The authors examined data from the six CUNY community colleges.  Students in the CUNY system are diverse:  “15% of students are Asian, 29% are black, 37% are Latino, and 19% are White; …48% are first-generation college students; and 46% have household incomes under $20,000” (City University of New York, 2011, as cited in Hodara & Jaggars, 2014).  Overseeing all this diversity is a centralized developmental education testing policy with firm cutoff scores that determine whether students are placed in developmental education classes.  Each of the six colleges, though, was more or less free to design its own “menu,” or developmental course sequence.

The authors found that the English and math departments did not tend to consult with their sister departments in the district, resulting in developmental sequences that varied from college to college.  Though to me that seems an oversight of administration, it provided the researchers with a ripe opportunity to compare the length of developmental course sequences across the district through analysis of existing data, without having to design an experiment.  In English, the researchers designated as the treatment group the two colleges that had a single course of either six or seven credits and compared those to the four colleges with two classes in their sequence.  In math, the control group was the five colleges with three developmental math courses, compared to the one treatment college with only two.  Data were made available to the researchers over a 10-year period, which allowed for some longitudinal tracking of students out of the community colleges and into the CUNY universities.

The methodology is where things get complicated (and honestly over my head at this very early point in my doctoral program).  The researchers were concerned that simply comparing outcomes of students in the short (treatment) versus long (control) sequences would not account for confounding variables.  They noticed gender, race, and financial aid differences right away and also wanted to account for high school performance and students’ academic and professional goals.  By using a couple of logistic regression models, the researchers were able to compare like students to each other, e.g., students with similar high school backgrounds, region of birth, citizenship status, and college major.  This section has much more detail, and I encourage readers with the appropriate expertise to explore it further; those without can trust in the prestige of The Journal of Higher Education and believe the researchers did it right!
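To make the idea of “comparing like students” more concrete for myself, here is a minimal sketch in Python of a logistic regression with covariates. The data, variable names, and model specification are invented for illustration; this is not the authors’ actual analysis.

```python
# Hypothetical sketch: estimate the association between being in an accelerated
# (short) developmental sequence and passing college-level English, while
# holding a few student characteristics constant. All data are simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "accelerated":  rng.integers(0, 2, n),          # 1 = short (treatment) sequence
    "female":       rng.integers(0, 2, n),
    "received_aid": rng.integers(0, 2, n),
    "hs_gpa":       rng.normal(2.6, 0.5, n).clip(0, 4),
})

# Simulated outcome, loosely tied to the covariates above.
logit_p = -2.0 + 0.5 * df["accelerated"] + 0.8 * df["hs_gpa"] + 0.2 * df["received_aid"]
df["passed_college_english"] = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

# The coefficient on `accelerated` is the quantity of interest: it estimates
# the association with passing while the other covariates are held constant.
model = smf.logit(
    "passed_college_english ~ accelerated + female + received_aid + hs_gpa",
    data=df,
).fit()
print(model.summary())
```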

In general, students in the accelerated courses had better outcomes in their courses and in subsequent college courses than students in the control groups.  The results were not robust, though, and there was some difference between the English and the math sequences.  Students in the accelerated English sequences were more likely to get to college-level English, to accumulate credits, and to graduate.  Students in the accelerated math sequence, however, though they passed college-level math, did not demonstrate long-term college success.  Academic policies within the institutions may contribute to that finding: the authors report that passing college-level English is required for many other courses, allowing those students to continue making progress toward degree completion, while passing college-level math does not necessarily lead to progress in other courses for non-STEM majors.

One aspect of this article’s contribution to the field is its interesting perspective on the role of community colleges in higher education.  The authors suggest that community colleges may actually create barriers to achieving a college education, the opposite of their mission of increasing access to higher education for those who might not have the option to attend a university.  Since more first-generation students and students of color start at community colleges, this may inadvertently create a class system that stratifies middle- and upper-class white students into the universities and students of color and low-income students into the community colleges.

Another contribution is the authors’ acknowledgement in the discussion that the generally modest gains seen for the students in the accelerated classes can likely be improved by “more thoughtfully designed reforms incorporating stronger student supports,” leading to “substantial increases in developmental students’ college-level credit accrual and graduation rates” (Hodara & Jaggars, 2014).  Also, as colleges continue working to meet the completion challenge with improved graduation rates, collaborative conversations around developmental education are more likely to happen, building relationships and infusing diverse perspectives to provide a more nutritious meal that includes healthy sides in addition to the main dish of accelerated learning.

The influence this article has on me and my evolving line of inquiry is this: before taking this class, I was considering pursuing an interest in data analysis, partly because I was getting tired of feeling as though I wasn’t making the difference I had hoped to in the classroom.  Though I believe familiarity with data and careful analysis is crucial to effective teaching and effective programs, I find that the analysis is not quite as tasty to me as the prospect of creating a colorful and nutritious “meal” with a variety of sides that complement the main dish.

 

References

Hodara, M., & Jaggars, S. S. (2014). An Examination of the Impact of Accelerating Community College Students’ Progression Through Developmental Education. The Journal of Higher Education, 85(2), 246–276. doi:10.1353/jhe.2014.0006

American Association of Community Colleges. (n.d.). College Completion Challenge fact sheet. Retrieved from http://www.aacc.nche.edu/About/completionchallenge/Documents/Completion_Toolkit_Complete.pdf

 

Self-reflection – Try it, you’ll be more successful!

Hudesman, J., Crosby, S., Flugman, B., Issac, S., Everson, H., & Clay, D. (2013). Using formative assessment and metacognition to improve student achievement. Journal of Developmental Education, 13(2), 2-13.

Teachers, students, researchers do it…. You’ll be more successful if you do it…. Try it, I bet you’ll like it!

The above article from the Fall 2013 issue of the Journal of Developmental Education reports that students who engage in regular, ongoing reflection about their learning process show improved results in developmental math courses.  This metacognitive process is essentially the same process that we as nascent researchers are being asked to apply to ourselves and that, as teachers, most, if not all, of us follow by training or temperament: thinking about what we do, doing it, examining and reflecting on our results, and making adjustments to improve.

Data from four studies of students in developmental math courses at an urban college of technology were gathered.  More than a thousand students were included over the course of three summer sessions and four academic years in the mid-2000s.  In each study, an experimental group engaged in a special program (embedded in the class) that included regular self-reflection and a continuous feedback loop in which both students and instructors adjusted their behavior.  This process was called EFAP-SRL (Enhanced Formative Assessment Program with features of Self-Regulated Learning).  For each experimental group, there was a control group of students taking the same-level math course without the added formative assessment, reflection, and feedback.  Whether the classes were short-term summer classes or full-year classes, the students in the experimental groups earned higher pass rates in the course as well as higher pass rates on the math portion of the ACT.  Some data suggest that students also did better in subsequent math courses.

To facilitate the metacognitive process and “teach the students how to better plan, practice, and evaluate their ‘learning how to learn’ strategies” (Hudesman, Crosby, Flugman, Issac, Everson, & Clay, 2013), instructors gave students regular quizzes (weekly during the school year, more often during the summer sessions).  Students first had to predict their score on the quiz and write down how much time they had spent preparing.  Before answering each of the five quiz questions, students rated their confidence in getting it correct, and after completing each problem they rated their expectation of having solved it correctly.  After the corrected quizzes were returned, students completed reflections comparing their predictions to the actual results.

The metacognitive process continued and was enhanced with instructor-facilitated class discussions about the reflective process and learning opportunities for the students using personalized data.  For example, students created graphs comparing their predictions of success with their actual quiz scores and then had to generate explanations for the results.  Students also came up with a plan for improvement that could include strategies discussed previously in class.
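To picture what these personalized graphs might look like, here is a small sketch in Python with invented quiz scores. It simply plots a student’s predicted scores against the graded results across the term, which is the kind of comparison the article describes students making; the numbers and labels are hypothetical.

```python
# Hypothetical sketch: a student's predicted quiz scores vs. actual graded
# scores over eight weekly quizzes. All values are invented for illustration.
import matplotlib.pyplot as plt

quiz_weeks       = [1, 2, 3, 4, 5, 6, 7, 8]
predicted_scores = [5, 4, 5, 3, 4, 4, 5, 4]   # prediction made before each quiz (out of 5)
actual_scores    = [3, 3, 4, 3, 3, 4, 4, 5]   # graded result

plt.plot(quiz_weeks, predicted_scores, marker="o", label="Predicted")
plt.plot(quiz_weeks, actual_scores, marker="s", label="Actual")
plt.xlabel("Quiz (week)")
plt.ylabel("Score out of 5")
plt.title("Prediction vs. actual quiz performance")
plt.legend()
plt.show()
```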

Instructors received training on the theory and practice of the EFAP-SRL process prior to teaching in the experimental groups and were observed throughout the term to see how often they were using the EFAP-SRL strategies.

This research, though it examined success in courses that I don’t teach, is still very exciting to me because it supports the value of ongoing reflection and two-way feedback in the classroom setting.  I found the literature review rich in its explanation of formative assessment and self-regulated learning.  The Methods section is comprehensive; it took me several readings to understand, but I attribute that to my lack of familiarity with research methods.  The results and data tables are clear, simply presented, and easy to read.  The theoretical framework is strong and carried throughout the article.  The contribution to the field is significant because this study supports the efficacy of metacognitive learning, which can be applied to all subjects, and gives examples of instructor strategies that can be adapted for other subjects as well.  The Appendices contain examples of a quiz and a post-quiz reflection sheet.

The authors acknowledge that engaging in the EFAP-SRL process created more work for instructors and students.  Some instructors gave the researchers feedback that they were uncomfortable with the role of “educational psychologist” in the classroom.  A possible collaboration I could see, even before it was mentioned by the authors, would be to link the math course with either a college success course or a counselor who could more comfortably handle the self-reflection piece.  I saw no mention of how the instructors were selected to participate, which may have some influence on the results.  The authors acknowledge that several interventions were included in this collection of studies and that further research would benefit from separating the quizzes from the self-reflections to compare the impact of the different interventions.

I am interested to know more about whether students’ level of engagement with the reflections had any impact.  If a student reflected only cursorily, was their pass rate still as high?  Yet rating a student’s depth of reflection seems subjective.  Also, if a student started in the program and then dropped out, was there any measurable difference when they attempted math again?

One small critique/confusion I have is that the abstract mentions that students’ pass rates on the ACT were higher after students completed the courses with the metacognitive activities, whereas the results section reports COMPASS results.  From what I could tell from a Google search, ACT publishes the COMPASS test, but that connection could be stated more clearly in the article.

This research is important because, as the authors point out, a third of students who enter college need developmental classes to prepare for their college-level classes.  Colleges need to be better prepared to help those students achieve their goals.  With President Obama’s completion agenda, community college funding will be tied to students’ graduation rates, which provides another incentive for colleges to help students move through required coursework in order to graduate.

All in all, I found these results highly encouraging.  The EFAP-SRL process seems replicable – think about what you are doing, do it, examine the results and reflect, adjust.  I am anxious to be more intentional with my students about using metacognitive strategies.  I am also beginning to think this may be a more viable line of inquiry I could tackle for my research.


Placement Tests Matter – High Stakes for Community College Students

Fay, M. P., Bickerstaff, S., & Hodara, M. (2013). Why students do not prepare for math placement exams: Student perspectives (CCRC Working Paper No. 57). New York, NY: Columbia University, Teachers College, Community College Research Center.

As I continue to explore developmental education, I keep wanting to learn more about the placement process.  Higher education is struggling to place students accurately, and this placement is a critical factor in a student’s likelihood of degree and/or certificate attainment.  Many factors contribute to inaccurate placement, and this article tackles one of the issues: lack of student preparation.

Article Summary

The purpose of this article was to explore community college students’ experiences with and attitudes toward placement tests.  The study includes survey responses from 122 students at four community colleges, as well as 34 students who participated in four focus groups at those same colleges.  The community colleges are part of a system on the East Coast that, at the time of the study, had decided to adopt new placement testing procedures and instruments.  The students who participated in the study all tested into, and were enrolled in, developmental math in the fall 2012 semester.

The study concluded that there are four related reasons why students do not prepare for the math placement test: students’ misperceptions about the stakes of the placement exam, a lack of knowledge about test preparation materials, an unawareness of why and how to prepare for the placement exam, and very low confidence in their math abilities (Fay, Bickerstaff, & Hodara, 2013).

The study provided recommendations for colleges regarding placement testing: create more student awareness about the importance of the exam, increase awareness of test preparation materials, and design materials that teach both what and how to study (Fay et al., 2013).

Strengths and Critiques

Overall, the article is well organized and written at an accessible level, as it is trying to reach a wide audience of readers.  The authors provide a coherent study, as the research consistently focuses on the premise that a lack of student preparation contributes to low placement test scores.  But the findings do not necessarily contribute anything original to the field of developmental education.  Furthermore, the study is limited in that it only addresses math placement testing, not the reading and English placement tests, which are also used nationwide.  I was also disappointed that the article did not include a literature review.

Regarding data collection, the researchers used student surveys and student focus groups.  One way to improve and enhance this research would be to consider the timing of the surveys and focus groups.  Students were surveyed after taking the placement test and receiving their scores; results might differ if students were surveyed before taking the exam.  I would also be interested in responses from students who did not test into developmental coursework.  Did those students prepare?  Did those students receive the same information and have the same level of awareness as students who tested into developmental education?  Finally, greater attention could be paid to students who tested right out of high school versus those who had been away from school for two or more years.

The findings are logical, but not necessarily significant.  The finding regarding the relationship between self-confidence and placement test performance was one I had not read before.  I am not sure, though, whether this finding emerged from the survey itself or from the focus group responses, and I am unable to determine how many students indicated a lack of self-confidence with math prior to testing.  The conclusion is logical, but I would recommend including more specificity as to how it was reached.

My Take

I found this research provided me with greater insight into the placement test process.  I am aware from my experiences at Glendale Community College that students do not prepare for the placement exam.  GCC, and many of the Maricopa colleges, are similar to the colleges discussed in the study in that materials are prepared and offered to students, but test preparation materials are not necessarily promoted.  They are available on our website, but a student would have to seek them out to prepare in advance for the exam.  This research confirms that it is imperative that community colleges put forth much more effort to create and disseminate test preparation materials for incoming students.

This article also reinforced for me the need to communicate the ‘high stakes’ of the placement exam to incoming students.  The authors state that “staff members’ attempts to allay students’ anxiety about placement testing (i.e., by telling students not to worry about the exam) contributed to students’ tendency not to prepare and may have served to understate the stakes of the exam” (Fay et al., 2013).  My experiences also confirm this finding: neither GCC staff members nor our print materials communicate the importance of the placement exam.  Students need to have a greater understanding of the consequences, a strong understanding that their placement matters.  Most students surveyed indicated they did not even realize there was a concept of developmental courses, and they were not aware the results of this exam could place them into below-college-level coursework.

The survey did reveal a finding I had not considered.  Most articles and research I have read focus on the anxiety students have regarding math and how that anxiety impacts performance in college coursework.  However, I had not considered the effect of low self-confidence in math skills on the placement exam.  The study found that students “were worried about placing into a course that would be too difficult” (Fay et al., 2013), and students reported being satisfied with their placement in below-100-level courses.  The implications of this finding are two-fold, in my opinion.  First, much of the communication leading up to the placement exam does not address student fears, anxiety, or low self-confidence.  Furthermore, the test preparation materials are primarily delivered online and, again, do not address a student’s lack of confidence.  As community colleges develop test preparation materials, and as some embark on test preparation workshops, it is critical that these materials and courses in some manner address students’ lack of confidence.

To build on this research, I believe the study could be expanded to include students’ attitudes and perceptions regarding the English and reading placement exams.  Are student perceptions about those tests the same?  Do students show the same lack of confidence as in the math results?  Another way to build on the research is to explore this lack of confidence further.  Specifically, students completed this survey after having taken the placement exam and after having already enrolled in and started their developmental coursework.  I would want to explore whether students lacked confidence prior to the placement exam, or whether this lack of confidence was fueled or reinforced by a poor placement test score and subsequent placement into a developmental course.