Learning Outcomes and Engagement

Strayhorn, T. L. (2008). How College Students’ Engagement Affects Personal and Social Learning Outcomes. Journal of College and Character, X(2), 1–16.

Summary

This article presents possible interventions to influence student engagement, which in turn results in student learning. A widely accepted model for identifying change frames the analysis: the I-E-O model, developed by Astin in 1991, in which I represents “inputs,” E represents “environment,” and O represents “outcome.” Astin’s model is considered foundational for evaluating the impact of planned interventions (or activities) with students. Using data collected through the College Student Experiences Questionnaire (CSEQ), the researcher conducted quantitative analysis to identify activities (inputs) that would yield a measurable increase in student learning (outcome). The possible outcomes originated from the Council for the Advancement of Standards in Higher Education (CAS). The research was therefore attempting to determine which inputs correlate with the desired CAS outcomes.

Literature Review

The literature review focused mainly on the frameworks for analysis: the I-E-O model and the CAS standards. In accordance with the I-E-O model, student learning is the result of inputs and environment. The specific desired learning outcomes were identified from the CAS standards; Table 1 in the article categorizes the standards by desired outcome.
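Read as a formula, the I-E-O logic treats the outcome as a function of what students bring with them and what they encounter on campus. A minimal regression-style sketch of that relationship (my own notation, not Astin’s) is:

```latex
% Hedged sketch of the I-E-O relationship in regression form.
% O = a measured outcome (e.g., a CAS personal/social gain),
% I = student input/background variables,
% E = environment/engagement variables.
O = \beta_0 + \beta_1 I + \beta_2 E + \varepsilon
```

Here \beta_2 captures what the environment contributes to the outcome beyond the inputs students arrive with.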

According to the researcher, the CAS standards are a commonly agreed-upon set of outcomes we hope students achieve, including categories related to developing effective communication practices, accepting diversity in thought and experience, forming meaningful relationships, and acquiring the ability to think critically. The researcher also defined student engagement as “‘the time and energy that students devote to educationally purposeful activities and the extent to which the institution gets students to participate in activities that lead to student success’ (Kezar & Kinzie, 2006, p. 150)” (Strayhorn, 2008, p. 6).

Quantitative Research

The study seeks to answer two research questions: “(a) Is there a statistically significant relationship between students’ engagement in college experiences and personal/social learning gains and (b) What is the relationship between students’ engagement in college experiences and their self-reported personal/social learning gains, controlling for background differences” (Strayhorn, 2008, p. 2). The researcher aims to add to the body of work by addressing a possible gap in the research in this field.

The CSEQ is administered by Indiana University Bloomington and is typically used for assessment. It comprises 191 items “designed to measure the quality and quantity of students’ involvement in college activities and their use of college facilities” (Strayhorn, 2008, p. 4). It was administered to 8,000 undergraduates attending 4-year institutions. The researcher used the survey data and identified certain questions thought to correlate with specific learning outcomes from CAS. Component factor analysis was used for the initial round of quantitative analysis. The next step was hierarchical linear regression, in which variables are entered into the model in an order determined by the researcher.
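To make the blockwise procedure concrete, here is a minimal sketch in Python using pandas and statsmodels. The file name and column names are hypothetical placeholders, not actual CSEQ items; the point is only the order of entry and the R² comparison.

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical extract of survey responses; all columns assumed numerically coded.
df = pd.read_csv("cseq_sample.csv")

outcome = df["personal_social_gains"]                     # O: self-reported gains
background = df[["sex", "race", "parent_educ"]]           # I: inputs, entered first
engagement = df[["peer_interaction", "active_learning"]]  # E: entered second

# Block 1: background characteristics only.
model1 = sm.OLS(outcome, sm.add_constant(background)).fit()

# Block 2: background plus engagement measures.
both = pd.concat([background, engagement], axis=1)
model2 = sm.OLS(outcome, sm.add_constant(both)).fit()

# The change in R-squared is the variance in gains explained by engagement
# over and above background differences.
print(f"R2, background only:      {model1.rsquared:.3f}")
print(f"R2, plus engagement:      {model2.rsquared:.3f}")
print(f"Delta R2 from engagement: {model2.rsquared - model1.rsquared:.3f}")
```

The order of entry mirrors the I-E-O logic: inputs first, environment second, so the engagement block is credited only with what it adds beyond background.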

Limitations of this research include a lack of detail about how survey participants were selected. Also, only 4-year institutions were included; community college students might have been represented if they had transferred, but that information was not provided. The initial review of the data can be replicated, since the data are available. However, the researcher relied on assumptions, first to match what he perceived to be relevant data points to the CAS standards, and second to organize the analysis based upon possible impact.

Implications and Future Research

Based upon the analysis, the researcher concluded that peer interaction and active learning were the most impactful influences on student engagement. Therefore, institutions should consider programs that bring students together and support learning, such as peer study groups, peer mentoring, and social outreach. Since faculty provide the opportunities for active learning, the article further discussed research opportunities that faculty could offer to students. Strayhorn (2008) specifically suggests “programs should be (re-) designed for faculty and students to collaborate on research projects, co-teaching experiences, and service learning activities…” (p. 11). Future research might be beneficial in showing how peer and faculty engagement opportunities correlate with successful student outcomes. Strayhorn (2008) further clarifies this by stating “future research might attempt to measure the impact of engagement on other learning outcomes such as those identified by CAS including effective communication, appreciating diversity, and leadership development…” (p. 12).

Another possible extension of this research is to incorporate the I-E-O model alongside student development theories. Student development theories are theories advisors can use to understand how a student is maturing and growing (Williams, 2007). I mention this to suggest that a student’s phase of development could be an influential factor in how the student responds to inputs and environments. This possible extension relates to my research field, since I am beginning to explore outcomes related to advising interventions. It could pair qualitative research with the quantitative analysis; for example, interviews could be conducted to get a sense of whether the inputs suggested by this research lead to different levels of outcomes depending on the phase of the student’s development.

References

Williams, S. (2007). From Theory to Practice: The Application of Theories of Development to Academic Advising Philosophy and Practice. Retrieved from NACADA Clearinghouse of Academic Advising Resources Web site:
http://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Applying-Theory-to-Advising-Practice.aspx

 

Who is responsible for my graduation?

Christian, T. Y., & Sprinkle, J. (2013). College Student Perceptions and Ideals of Advising: An Exploratory Analysis. College Student Journal, 47(2), 271–292. Retrieved from http://essential.metapress.com/content/7781552X33313470

“Advising is viewed as a teaching function based on a negotiated agreement between the student and the teacher in which varying degrees of learning by both parties to the transaction are the product” (Crookston, 1972).

This article sought to assess advisor performance as a function of students’ expectations of the advising experience and the students’ own sense of responsibility. The researchers developed a questionnaire intended to identify the appropriate type of advising based upon students’ needs, perceptions, and locus of control. They deployed the questionnaire, evaluated the responses, and presented and tested multiple hypotheses. The article concluded with recommendations for future study.

Literature Review
The researchers described two execution styles for advising: prescriptive and developmental (or collaborative). The developmental approach is a collaborative relationship in which the responsibilities of advisor and student are clear. A person with an internal locus of control will “take responsibility for their actions, achievements, and consequences,” while those with an external locus will not take that internal responsibility, instead attributing success to external factors. The researchers suggest a correlation exists between locus of control and preferred advising approach: “Viewed through this lens, college students with an internal locus of control are likely to prefer collaborative advising while those with an external locus will prefer prescriptive advising as it places the burden of responsibility firmly on the advisor.” The researchers further suggested that as we age, our internal locus of control becomes more developed, and our preference shifts toward collaborative advising.

Minimal literature was provided. Student development theory, as introduced by Crookston (1972), is presented but not fully explained. For example, student development theory uses alternative terms (not just responsibility) to describe this process of internalizing responsibility, and it is often viewed as a continuum. While Crookston (1972) does support the integration of a sense of self-responsibility and ownership, the advising relationship is much more complex: “Historically, the primary focus of both the academic advisor and the vocational counselor has been concerned with helping the student choose a major or an occupation as a central decision around which to begin organizing his life. The emergence of the student development philosophy in recent years necessitates a critical reexamination of this traditional helping function as well as the assumptions which undergird it” (Crookston, 1972). Essentially, only one factor of developmental advising was considered, yet the students’ responses could have been much more a function of the other factors introduced by Crookston (1972).

The rationale for assessment in advising is lacking in the literature review; only one factor, time to graduation, is presented. Current research suggests drivers for assessment also include student retention, academic performance, and advising performance management (Teasley & Buchanan, 2013).

Research
The researchers developed and deployed a questionnaire to investigate which type of advising was being used (prescriptive or developmental) and to capture students’ ideals of advising. The analysis was intended to examine the relationship, if any, between those two factors. The researchers developed several hypotheses: that students are currently receiving developmental advising and that advising becomes more prescriptive as students age; that gender and ethnicity do not influence student perceptions of advising interactions; that GPA correlates with a tendency toward prescriptive advising; and that students will use their ideals of advising as a foundation for their own assessments.

A 50-question questionnaire was administered to students in multiple courses within the same department, and 125 completed questionnaires were returned to the researchers. Class time was provided to complete the questionnaire, but participation was not mandatory.

Two subscales were considered in the analysis: student perceptions and student ideals. Factor analysis was used to evaluate the data, and the researchers explained how they employed it: factors with loadings above their preferred threshold were retained, and any questions with loadings below those minimums were removed. After analyzing the results, each hypothesis was discussed in terms of the findings.
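As a rough illustration of that loading-cutoff step, the sketch below uses scikit-learn’s FactorAnalysis in Python and drops items whose strongest loading falls under a chosen threshold. The 0.40 cutoff, file name, and column labels are my assumptions; the article’s exact rotation and thresholds are not reproduced here.

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical item-level responses; column names are invented for illustration.
responses = pd.read_csv("advising_survey.csv")

# Two factors, matching the two subscales (perceptions and ideals).
fa = FactorAnalysis(n_components=2, random_state=0)
fa.fit(responses)

# components_ is (n_factors, n_items); transpose so rows are items.
loadings = pd.DataFrame(
    fa.components_.T,
    index=responses.columns,
    columns=["perceptions", "ideals"],
)

CUTOFF = 0.40  # assumed minimum acceptable loading, not the article's figure
max_loading = loadings.abs().max(axis=1)

retained = loadings[max_loading >= CUTOFF]
dropped = loadings[max_loading < CUTOFF]

print("Retained items:", list(retained.index))
print("Dropped items: ", list(dropped.index))
```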

Limitations. The researchers proposed two limitations: a relatively small sample size was used, and all students came from the same academic department. However, other limitations could be considered, such as the fact that no distinction was made between faculty advisors and professional advisors. An extension of that limitation is advisor training and development. Finally, multiple factors, as described by Crookston (1972), influence the advising relationship, not just a student’s locus of control.

The source of the questions was not included. The article by Teasley and Buchanan (2013) included that detail and also highlighted the refinements made to the survey with each round of analysis. This research presented conclusions and a tool that has been tested far less than the instrument introduced by Teasley and Buchanan; replicating these findings or enhancing the assessment questionnaire is therefore much more challenging.

The questionnaire was administered to only 125 students across three courses within the same department, and the researchers drew conclusions about their hypotheses from that sample. A larger sample size and additional classes could further support the application of the findings; the researchers themselves identified these factors as limitations.

Future research opportunities. An advising syllabus has become a key piece of advising execution (Trabant, 2006). An advising syllabus is a tool in which advisors convey their responsibilities and highlight the students’ individual responsibilities. I appreciated what the researchers were trying to capture in terms of students’ perceptions of their own level of responsibility. The syllabus is intended to convey responsibility and establish a foundation for the advising relationship. A new idea to consider is how an advising team establishes the syllabus and what is intended by it. From there, is it appropriate to also consider assessing the correlation between what is written in the syllabus and what the advisor actually executes?

It was interesting to see a psychological perspective included in the assessment of advising. Typical advising literature bases assessment on generally accepted advising development theories (Williams, 2007), and this study could be an expansion of that approach. McClellan (2011) suggested the use of widely accepted business assessment models. These different approaches add new lenses through which to evaluate student expectations and development, along with advisor performance.

References

Crookston, B. B. (1972). A Developmental View of Academic Advising as Teaching. Journal of College Student Personnel, 13(1), 12–17.

McClellan, J. L. (2011). Beyond Student Learning Outcomes: Developing Comprehensive, Strategic Assessment Plans for Advising Programmes. Journal of Higher Education Policy and Management, 33(6), 641–652. doi:10.1080/1360080X.2011.621190

Teasley, M. L., & Buchanan, E. M. (2013). Capturing the Student Perspective: A New Instrument for Measuring Advising Satisfaction. NACADA Journal, 33(2), 4–15. doi:10.12930/NACADA-12-132

Trabant, T.D. (2006). Advising Syllabus 101. Retrieved from NACADA Clearinghouse of Academic Advising Resources Web site:
http://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Creating-an-Advising-Syllabus.aspx

Williams, S. (2007). From Theory to Practice: The Application of Theories of Development to Academic Advising Philosophy and Practice. Retrieved from NACADA Clearinghouse of Academic Advising Resources Web site:
http://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Applying-Theory-to-Advising-Practice.aspx

 

Research Topic Post – Online Learning Readiness Assessments

Dray, B. J., Lowenthal, P. K., Miskiewicz, M. J., Ruiz-Primo, M., & Marczynski, K. (2011). Developing an instrument to assess student readiness for online learning: A validation study. Distance Education, 32(1), 29–47.

With the dramatic increase in online learning offerings within the K-20 environment, researchers have begun to examine not only the validity of this mode of education but also students’ preparedness for web-based learning and their success in the online learning environment. Using Kerr, Rynearson, and Kerr’s (2006) framework for assessing online learning readiness, the authors found that students felt they were indeed ready for online learning, but that their self-assessment was based on their own experiences with technology, leaving room for additional variables to be examined.

The authors of this article were familiar with previously conducted surveys presenting information on student readiness for web-based learning; however, as the researchers note, the results of those surveys provided limited information, and translating that information into tangible data was challenging. Therefore, the authors conducted a study to develop a more detailed tool for determining student readiness for online learning through a three-phase study: a survey development phase in which faculty and experts reviewed questions for clarity, an item analysis phase in which the content of the tool and the research questions were refined through focus groups and interviews, and finally a survey validation phase in which questions from previous surveys were combined with new questions covering student demographics, learner characteristics, and technological ability (Dray, Lowenthal, Miskiewicz, Ruiz-Primo, & Marczynski, 2011).

The participants in their study comprised 26 graduate students pursuing a degree in educational computing. The results showed that many of the students scored as “ready” for online learning, yet the implications of the study were severely limited by the lack of sub-groups based on age, sex, or socioeconomic status. This matters because results may show that certain ages, genders, or socioeconomic factors play a role in determining whether a student is prepared for web-based learning. For example, a student under the age of thirty may rank as prepared for online learning because it is reasonable to assume that students in this age group use online communication daily through social networking. The researchers also determined that the term “readiness” needed further clarification for the study’s purposes, as ambiguities arose over whether readiness was determined by one’s technical ability or by one’s use of and engagement with web-based tools, equipment, and material (Dray, Lowenthal, Miskiewicz, Ruiz-Primo, & Marczynski, 2011).

This article proves beneficial to my study in that the literature review coherently presented previous research on the topic along with critiques of the strengths and weaknesses of each article reviewed. The literature the authors reviewed provided information on general learner characteristics, interpersonal communication abilities, and technological skills (word processing, using spreadsheets, use of search engines, etc.). Noticeably absent, however, was information on students’ work schedules, access to technology, and the expectations for being successful in an online course. I found it interesting that the authors identified an unexplored angle of questioning based on self-concept, esteem, and efficacy, which could lead to quite different survey results and prove an excellent contribution to the field. The authors likened their study to that of Kerr, Rynearson, and Kerr (2006), whose article “Student Characteristics for Online Learning Success” also discussed student esteem, efficacy, and self-confidence as a means of determining success in online learning.

The authors make a stimulating argument, showing that readiness is a complex term and must be defined as more than general characteristics. I found the first phase of their survey to have the strongest argument: skills regarded as part of traditional learning, such as writing and expression, time management, and responsibility, can easily carry over into online learning. Additionally, the authors were diligent enough to alter the questionnaire where inconsistencies were present. For example, during the first phase of the study, it was found that students were answering questions based on their personal experience with web-based tools rather than within the educational context as expected; the authors therefore revised the prompts to require students to answer from their educational experience.

The article presented the questionnaire through its various stages of reconstruction, showing how questions were revised during each phase of the study. However, it lacked a clear description of how the surveys were administered. Was the survey an in-person sheet on which students entered answers longhand? Was it administered online through a course management system, or was it collected in a focus-group setting in a qualitative manner? These questions present areas of concern, as the setting in which the survey was administered could possibly produce differing results. While the authors presented information on the ages, ethnicities, and majors of the participants, the study failed to report whether participants were taking an online course for the first time and which distance learning model was used for the specific course in which the survey was given (completely online, hybrid, etc.). Additional areas of study could compare undergraduate and graduate student preparedness for online learning environments to see if the results vary between the two populations. Another area of exploration could center on how participants’ level of social media experience impacts online learning success. Finally, the study could be extended to present data on minority student success in online learning environments, including whether one’s socioeconomic status has an impact on online learning. Such further study would provide an effective analysis for researchers and teacher-educators examining underrepresented populations.

This article can be compared to Lau and Shaikh’s (2012) article, “The Impacts of Personal Qualities on Online Learning Readiness at Curtin Sarawak Malaysia,” in which the authors developed a questionnaire to gather information on students’ personality characteristics as a diagnostic tool for both faculty and instructional designers (Lau & Shaikh, 2012). Lau and Shaikh’s study offers a higher level of evidence in that they surveyed over 350 participants, compared to the 26 graduate students surveyed by Dray, Lowenthal, Miskiewicz, Ruiz-Primo, and Marczynski. Lau and Shaikh (2012) found that students were less satisfied with online learning than with traditional learning environments and felt less prepared for the objectives of the course. Both articles support my research in different but equally necessary ways: Lau and Shaikh’s (2012) article presents compelling statistical data on online learning readiness, while Dray et al.’s (2011) article provides information on how to compose efficient survey questionnaires.

References

Dray, B. J., Lowenthal, P. K., Miskiewicz, M. J., Ruiz-Primo, M., & Marczynski, K. (2011). Developing an instrument to assess student readiness for online learning: A validation study. Distance Education, 32(1), 29–47.

Kerr, M. S., Rynearson, K., & Kerr, M. C. (2006). Student characteristics for online learning success. The Internet and Higher Education, 9(2), 91–105.

Lau, C. Y., & Shaikh, J. M. (2012, July). The impacts of personal qualities on online learning readiness at Curtin Sarawak Malaysia. Educational Research and Reviews, 7(20), 430-444.