Training Advisors to Conduct Research

Hurt, R. L., & State, C. (2012). An Applied Introduction to Qualitative Research Methods in Academic Advising. NACADA Journal, 32(1).

This article presents approaches for conducting academic advising research. It covers both qualitative and quantitative approaches but emphasizes qualitative methods, on the premise that advising, as a form of teaching, can be evaluated. The writers attribute advisors’ reluctance to conduct research to a lack of training in statistical analysis, and they present frameworks for establishing the validity of qualitative research.

Qualitative Research

The article includes a description of qualitative research. It begins by outlining four characteristics of valid qualitative research: it is targeted to address a specific question, it has qualities that can be measured, it seeks to understand the factors influencing behavior, and it requires that the researcher be genuinely interested in the people being studied so that a deep connection can be made. The authors also identify four areas to be mindful of when conducting qualitative research: the study may span a long period of time, the researcher must ensure the story of the data is told while reducing bias in interpretation, possible journals that publish qualitative (as opposed to quantitative) studies should be reviewed, and quantitative research should be incorporated where appropriate.

Three Approaches to Qualitative Research

The writers presented three types of qualitative research approaches and provided examples of how each can be used. Ethnography is “rooted in cultural anthropology and sociology” (pg. 65). In academic advising, a study describing the student experience would be an example of an ethnographic study. In my research, I am interested in describing the learning students may do as a result of advising interactions, and this type of framework would support that work. Appreciative Inquiry is the second type presented. It is described as a form of small-group discussion that leads to the production of the most effective form of “x”. Discussion is guided and attempts to cover four phases: Discovery, Dream, Design, and Destiny. These are described further as identifying the current status of “x”, considering what could be, designing what could be, and finally, creating what could be for “x”. The third type presented is case studies. In this type, varying groups are selected by the researcher to discuss a topic; for example, students at different stages within a major (i.e., freshmen, sophomores, juniors, or seniors) might be guided with open-ended questions to discuss a specific topic. The responses are collected, and common themes or further research topics emerge.

Article Critique

This article presented very little literature review. It is based upon a supposition that academic advisors are reluctant to do quantitative research due to a lack of understanding of statistics, yet that idea lacks any reference. The authors suggest identifying journals that will publish qualitative research, but did not recommend any; that would have been helpful, too. In addition, none of the three types of qualitative research presented included significant literature support for the explanation or definition. The article could be considered an introductory piece for an advisor beginning to consider research questions. However, further inquiry would need to be done before crafting a research methodology.

Application to My Research

Well-developed research could build more validity into the field of advising. As I have been doing my literature review, I would agree that much of what I am finding is qualitative research. However, I have seen a transition within the day-to-day execution of advising as we have been integrating technology-based tools with our practice. We track and record interactions, use data sets to identify which students to reach out to, and have fully online tools available. This is generating data sets that will be available (with the appropriate approvals) for quantitative research. As this is evolving, I am hopeful I can access this information for my research.

I’ve administered surveys and led focus groups in the past, with varying degrees of success. The results from the surveys were presented with very basic statistical analysis along with the focus group comments, which helped put the survey data into context. In my current role, I’ve been working towards the implementation of quantitative survey assessments. The first two I am considering include post-appointment surveys and pre- and post-surveys to test knowledge of a specific academic tool. I hope to publish or present findings in the future. I’m hoping that by collecting data from these surveys, I can open a dialogue with students and advisors to find out more about the factors that might be contributing to what is being measured. This article was helpful in that it presents a starting point for the qualitative methodologies I need to consider. However, I’ll need to conduct a more in-depth literature review before moving forward.

What drew me to the article was that it helped me understand the context in which my research will be presented.  If the authors are correct in their assumption about advisors’ capacities for statistical analysis and they are the primary audience for my research, I need to understand how to convey my findings in a way that is accepted and understood.

Do Only “Good” Peers Matter?

In this article, Pivovarova (2013) discussed and evaluated the school tracking system. Tracks are classrooms or programs targeted toward homogeneous groups. Her perspective as an economist is interesting in that she considers the financial cost of the trade-off between providing an educational experience that is equitable and one that is efficient. Pivovarova (2013) states, “school tracking is defined as ability grouping with or without design of the specific curriculum for different ability groups. The opponents of tracking argue that channeling students into different tracks increases the inequality of opportunity and aggravates future economic inequality. Proponents of tracking usually cite the increased efficiency when students are grouped by abilities in schools or classes” (pg. 4).

The article includes a thorough literature review and presentation of current models. I found it interesting that a mathematical model was used to evaluate the data and assess the influence of students’ academic performance on one another. The linear-in-means model of classroom interaction seems to be the most common model used to assess peer learning; Pivovarova (2013) applied it to data available through existing educational assessments to determine its applicability. In the literature review, Pivovarova (2013) shared multiple research views and stated, “the overall consensus in recent literature on peer effects in education is clear – the data do not support the simple linear-in-means model. The evidence suggests that the structure and nature of peer effects in elementary and middle school are more complicated than the standard linear-in-means model implies” (pg. 7).
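For reference, a standard statement of the linear-in-means model (this is the textbook formulation in my own notation; Pivovarova’s exact specification may differ) is:

$$y_{ig} = \alpha + \beta\,\bar{y}_{-i,g} + \gamma\,x_{ig} + \delta\,\bar{x}_{-i,g} + \varepsilon_{ig}$$

where $y_{ig}$ is the achievement of student $i$ in classroom $g$, $\bar{y}_{-i,g}$ is the mean achievement of that student’s classmates, $x_{ig}$ holds the student’s own characteristics, and $\bar{x}_{-i,g}$ the classmates’ mean characteristics. The critique quoted above amounts to saying that a single coefficient $\beta$ on the peer mean forces every classmate to matter equally, which the recent evidence contradicts.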

The researcher was questioning how peers affect classroom learning and how students complement and influence the learning environment. I had always thought of success in school as a function of ability and attitude. I have certainly always loved learning. However, my experience included tracks. I was tracked in gifted programs and had some fantastic educational experiences. I have always wondered what my other classmates did when we all left the room to participate in “gifted” activities. These were my peers, my community. The cost of delivering this type of unique experience for gifted students was the additional instructor and the expense of the additional programming. School systems need to weigh the cost of delivering a program that targets only one group; it would not be in line with an approach that tries to provide a high-quality experience to all students. There is a cost, according to Pivovarova’s (2013) findings, and the costs are financial and beyond, in the sense that students who are low achievers might not be receiving the best education if grouped in a certain way. Ultimately, the findings were that high-achieving students added to the environment advanced the group even further.

I work with a group of high-achieving students. I define them that way based upon a couple of measures. The first is the College Index (CI) score, a measure determined from a variety of typical high school measures (SAT score, ACT score, class ranking, GPA). The average CI score of the students I work with is 126; the highest possible score is 144. (Interestingly, there are specific programs at ASU for students who have a score less than 100, and those who score at a high level are invited to other specific programs.) The other measure I use is that our major has more students than any other major in the Barrett Honors College (BHC). We have two groups of students in the major: those in BHC and those not in BHC. Pivovarova (2013) found that high-ability students do well when grouped with other high-ability students. However, the impact of lower-ability students was not as clear.

As I read the article, I reflected on the students in this major and what the experience must be like as a “lower” achieving student. I’m not even certain how I would define that, except that I do perceive an imbalance between the two groups based on how they refer to one another. It definitely made me reflect on the programming we deliver and how we frame advising conversations with students.

References

Pivovarova, M. (2013). Should we track them or should we mix them? Tempe, AZ: Arizona State University.


Learning Outcomes and Engagement

Strayhorn, T. (2008). How College Students’ Engagement Affects Personal and Social Learning Outcomes. Journal of College and Character, X(2), 1–16.

Summary

This article presents possible interventions to influence student engagement, which in turn results in student learning. A widely accepted model for identifying change is presented: the I-E-O model, developed by Astin in 1991, in which I represents “inputs,” E represents “environment,” and O represents “outcome.” Astin’s model is considered foundational in evaluating the impact of planned interventions (or activities) with students. Using data collected through the College Student Experiences Questionnaire (CSEQ), the researcher conducted quantitative analysis to identify potential activities (inputs) that would yield a measurable increase in student learning (outcome). The possible outcomes originated from the Council for the Advancement of Standards in Higher Education (CAS). Therefore, the research was attempting to determine appropriate inputs that would correlate to the desired CAS outcomes.
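To make the model concrete, one common way to operationalize I-E-O (this illustration is mine, not the article’s) is a regression that predicts the outcome from both blocks:

$$O_i = \beta_0 + \beta_I I_i + \beta_E E_i + \varepsilon_i$$

where $O_i$ is a measured outcome for student $i$, $I_i$ captures input (background) measures, and $E_i$ captures environment (engagement) measures; controlling for $I_i$ is what allows the contribution of the environment to be isolated.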

Literature Review

The literature review focused mainly on the frameworks for analysis: the I-E-O model and the CAS standards. In accordance with the I-E-O model, student learning is the result of inputs and environment. The specific desired learning outcomes were identified from the CAS standards; the researcher’s Table 1 categorizes the standards based upon desired outcomes.

According to the researcher, the CAS standards are a commonly agreed-upon set of outcomes we hope for students, including categories related to developing effective communication practices, accepting diversity in thought and experience, forming meaningful relationships, and acquiring the ability to think critically. The researcher also defined student engagement as “‘the time and energy that students devote to educationally purposeful activities and the extent to which the institution gets students to participate in activities that lead to student success’ (Kezar & Kinzie, 2006, p. 150)” (Strayhorn, 2008, pg. 6).

Quantitative Research

The research study seeks to answer two research questions: “(a) Is there a statistically significant relationship between students’ engagement in college experiences and personal/social learning gains and (b) What is the relationship between students’ engagement in college experiences and their self-reported personal/social learning gains, controlling for background differences” (Strayhorn, 2008, pg. 2). The researcher frames the study as addressing a possible gap in the research in this field.

The CSEQ is administered by Indiana University Bloomington and is typically used for assessment. It comprises 191 items “designed to measure the quality and quantity of students’ involvement in college activities and their use of college facilities” (Strayhorn, 2008, pg. 4). It was administered to 8,000 undergraduates attending 4-year institutions. The researcher used the survey data and identified certain questions thought to correlate to specific learning outcomes from CAS. Component factor analysis was used for the initial round of quantitative analysis. The next step was hierarchical linear regression, in which variables are entered into the model in an order determined by the researcher.
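To illustrate the block-entry logic for myself, here is a minimal sketch of a hierarchical regression in Python with pandas and statsmodels. Every column and file name is hypothetical (the actual CSEQ items and coding are not reproduced here), and background variables are assumed to be already dummy-coded:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data standing in for CSEQ responses.
df = pd.read_csv("cseq_sample.csv")

background = ["gender", "first_gen", "parent_education"]                 # block 1: inputs
engagement = ["faculty_contact", "peer_interaction", "active_learning"]  # block 2: environment
outcome = df["personal_social_gains"]

# Block 1: background characteristics only.
m1 = sm.OLS(outcome, sm.add_constant(df[background])).fit()

# Block 2: add the engagement measures in the researcher-determined order.
m2 = sm.OLS(outcome, sm.add_constant(df[background + engagement])).fit()

print(f"R^2, background only:         {m1.rsquared:.3f}")
print(f"R^2, background + engagement: {m2.rsquared:.3f}")
print(f"Delta R^2 from engagement:    {m2.rsquared - m1.rsquared:.3f}")
```

The increase in explained variance between the two blocks is what lets the researcher attribute learning gains to engagement over and above background differences.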

Limitations of this research include a lack of detail about how participants in the survey were selected. Also, only 4-year institutions were selected; community college students might have been included if they had transferred, but that information was not provided. The initial review of the data can be replicated, since the data are available. However, the researcher relied on assumptions, first to map what he perceived to be relevant data points to the CAS standards, and second to organize the analysis based upon possible impact.

Implications and Future Research

Based upon the analysis, the researcher concluded that peers and active learning were most impactful on student engagement. Therefore, institutions should consider programs that bring students together and support learning, such as peer study groups, peer mentors, and social outreach. Since faculty provide the opportunities for active learning, the discussion extended to possible research opportunities that faculty could provide to students. Strayhorn (2008) specifically suggests “programs should be (re-) designed for faculty and students to collaborate on research projects, co-teaching experiences, and service learning activities…” (pg. 11). Future research might show how peer and faculty engagement opportunities correlate to successful student outcomes. Strayhorn (2008) further clarifies this by stating “future research might attempt to measure the impact of engagement on other learning outcomes such as those identified by CAS including effective communication, appreciating diversity, and leadership development…” (pg. 12).

Another possible extension of this research is to incorporate the I-E-O model along with student development theories, which advisors can use to understand how a student is maturing and growing (Williams, 2007). I mention this to suggest that a student’s phase of development could potentially be an influential factor in how the student responds to inputs and environments. This extension relates to my research field, since I am beginning to explore outcomes related to advising interventions. It could include qualitative research alongside the quantitative analysis; an example would be conducting interviews to get a sense of whether the inputs suggested by this research lead to different levels of outcomes based upon the phase of the students’ development.

References

Williams, S. (2007). From Theory to Practice: The Application of Theories of Development to Academic Advising Philosophy and Practice. Retrieved from NACADA Clearinghouse of Academic Advising Resources Web site:
http://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Applying-Theory-to-Advising-Practice.aspx


Leadership and Uncertainty

I read the article by Jordan and McDaniel (in press) in terms of not just how elementary school students deal with uncertainty, but also how adults manage it. While their article focused on elementary students, I kept wondering whether that type of uncertainty and learning through peer interactions occurs in adult learning communities as well. As an individual moves into a new community of practice, he/she will experience uncertainty (Wenger, 2000). I thought about the fellow classmates of the elementary students as peers in a learning community and drew a parallel: perhaps joining a new community begins with uncertainty, and a leader has a responsibility to understand that uncertainty.

Communities of practice help people thrive and manage uncertainty. Those who have established the norms and culture for a group set the stage for how someone can be successful within that group. Collaboration is a strategy that can enable learning about a culture: an individual learns who is in charge, how decisions are made, and what outcomes are expected (Wenger, 2000). These peer interactions are very influential, as discovered by Jordan and McDaniel (in press) in their study of elementary students. Learning can occur as a result of this imbalance of power.

Social supportiveness was closely evaluated in the study by Jordan and McDaniel (in press). Social supportiveness helped the students deal with uncertainty while completing the project task. One factor that influenced whether a peer responded in a socially supportive manner was prior experience with the individual expressing uncertainty. The support also varied based upon whether a student wanted something from the peer expressing uncertainty: if not, the uncertainty was dismissed; if so, the need was addressed. Socially supportive responses were more likely to occur when one’s peers were also uncertain or believed the uncertainty was appropriate to the situation at hand.

In terms of leadership, the authors found that framing the uncertainty helped the students move through it. Awareness about the community of practice can then help a leader understand how to introduce someone into the community. The other readings this week, though, highlighted the lack of awareness that people outside of marginalized groups may have of what members of those groups experience while trying to exist within a predominantly white community.

I believe a leader should ensure all members of the community are thriving, engaging, collaborating, and supporting one another. What do you do, though, if you don’t have the opportunity to relate to people within the community or even to understand that social support is being offered? Is leadership then a function of realizing whose knowledge you are including or not including? And is leadership ensuring the social support needed for community members to engage and succeed? These were some questions that came to mind as I reviewed the articles this week. As we begin to learn about the communities we plan to study, perhaps action research, as outlined by Bautista et al. (2013), can suggest a model by which leaders learn more about the communities they lead and determine methods to provide the social supportiveness that enables learning and success by community members.

References

Bautista, M., Bertrand, M., Morrell, E., Scorza, D., & Matthews, C. (2013). Participatory Action Research and City Youth: Methodological Insights From the Council of Youth Research. Teachers College Record, 115(100303), 1–23.

Jordan, M. E. & McDaniel, R. (in press). Managing uncertainty during collaborative problem solving in elementary school teams: The role of peer influence in robotics engineering activity. Journal of the Learning Sciences. doi: 10.1080/10508406.2014.896254

Wenger, E. (2000). Communities of Practice and Social Learning Systems. Organization, 7(2), 225–246.


Who is responsible for my graduation?

Christian, T. Y., & Sprinkle, J. (2013). College Student Perceptions and Ideals of Advising: An Exploratory Analysis. College Student Journal, 47(2), 271–292. Retrieved from http://essential.metapress.com/content/7781552X33313470

“Advising is viewed as a teaching function based on a negotiated agreement between the student and the teacher in which varying degrees of learning by both parties to the transaction are the product” (Crookston, 1972).

This article sought to assess advisor performance as a function of students’ expectations of the advising experience and the students’ own sense of responsibility. The researchers developed and deployed a questionnaire intended to identify the appropriate type of advising based upon students’ needs, perceptions, and locus of control, and then evaluated the responses. Multiple hypotheses were presented and evaluated. The article concluded with recommendations for future study.

Literature Review
The researchers described two execution styles for advising: prescriptive and developmental (or collaborative). The developmental approach is a collaborative relationship in which the responsibilities of advisor and student are clear. Those with an internal locus of control will “take responsibility for their actions, achievements, and consequences”; those with an external locus will not take that internal responsibility, instead focusing on external factors influencing success. The researchers suggest a correlation exists between locus of control and preferred advising approach: “Viewed through this lens, college students with an internal locus of control are likely to prefer collaborative advising while those with an external locus will prefer prescriptive advising as it places the burden of responsibility firmly on the advisor.” The researchers further suggested that as we age, our internal locus of control becomes more developed and our preference shifts toward collaborative advising.

Minimal literature was provided. Student development theory, as introduced by Crookston (1972), is presented but not fully explained. For example, student development theory uses alternative terms (not just responsibility) to describe this process of internalizing responsibility, and it is often viewed as a continuum. While Crookston (1972) does support the integration of a sense of self-responsibility and ownership, the advising relationship is much more complex: “Historically, the primary focus of both the academic advisor and the vocational counselor has been concerned with helping the student choose a major or an occupation as a central decision around which to begin organizing his life. The emergence of the student development philosophy in recent years necessitates a critical reexamination of this traditional helping function as well as the assumptions which undergird it” (Crookston, 1972). Essentially, only one factor of developmental advising was considered; the students’ responses could have been much more a function of the other factors introduced by Crookston (1972).

The rationale for assessment in advising is also lacking in the literature review. Only one factor, time to graduation, is presented, while current research suggests drivers for assessment also include student retention, academic performance, and advising performance management (Teasley & Buchanan, 2013).

Research
The researchers developed and deployed a questionnaire to investigate which type of advising was being utilized (prescriptive or developmental) and to capture the students’ ideals of advising. The analysis was intended to discuss the relationship, if any, between those two factors. The researchers developed several hypotheses: (a) students are currently receiving developmental advising, and as they age it becomes more prescriptive; (b) gender and ethnicity do not influence student perception of advising interactions; (c) GPA correlates to a tendency towards prescriptive advising; and (d) students will use their ideals of advising as a foundation for their own assessment.

A 50-question questionnaire was administered to students in multiple courses within the same department; 125 completed questionnaires were returned to the researchers. Class time was provided to complete the questionnaire, but participation was not mandatory.

Two sub-scales were considered in the analysis: student perceptions and student ideals. Factor analysis was used to evaluate the data, and the researchers explained how they employed it: they retained factors with loadings higher than their preferred level, and any questions with loadings lower than those minimums were removed. After analyzing the results, each hypothesis was discussed in terms of the findings.
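As a sketch of what that filtering step might look like (the item names, factor labels, and 0.40 cutoff are all my assumptions, not values reported in the article), here is a two-factor analysis in Python with scikit-learn:

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical 50-item questionnaire data, columns q1 ... q50.
df = pd.read_csv("advising_survey.csv")

# Extract two factors, mirroring the perceptions/ideals sub-scales.
fa = FactorAnalysis(n_components=2, random_state=0).fit(df.values)

# Rows = items, columns = factors.
loadings = pd.DataFrame(fa.components_.T, index=df.columns,
                        columns=["perceptions", "ideals"])

# Keep items whose strongest absolute loading clears the assumed cutoff.
CUTOFF = 0.40
kept = loadings[loadings.abs().max(axis=1) >= CUTOFF]
dropped = sorted(set(loadings.index) - set(kept.index))
print("items removed for weak loadings:", dropped)
```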

Limitations. The researchers proposed two limitations: a relatively small sample size was used, and all students were from the same academic department. However, other limitations could be considered; for example, no distinction was made between faculty advisors and professional advisors, and an extension of that limitation is advisor training and development. Finally, multiple factors, as described by Crookston (1972), influence the advising relationship, not just a student’s locus of control.

The source of the questions was not included. The article by Teasley & Buchanan (2013) included that detail and also highlighted the refinements made to their survey with each round of analysis. This research presented conclusions and a tool that has been tested much less than the tool introduced by Teasley & Buchanan; replication of these findings or enhancement of the assessment questionnaire is therefore much more challenging.

The questionnaire was administered to only 125 students across three courses within the same department, yet the researchers drew conclusions based upon the hypotheses. A larger sample size and additional classes could further support the application of the findings. The researchers themselves identified these factors as limitations.

Future research opportunities. An advising syllabus has become a key piece of advising execution (Trabant, 2006). An advising syllabus is a tool with which advisors convey their responsibilities and highlight the students’ individual responsibilities. I appreciated what the researchers were trying to capture in terms of the students’ perceptions of their own level of responsibility. The syllabus is intended to convey responsibility and establish a foundation for the advising relationship. A new idea to consider is how an advising team establishes the syllabus and what is intended by it. From there, is it appropriate to also consider assessing the correlation between what is written in the syllabus and what is executed by the advisor?

It was interesting to see a psychological perspective included in the assessment of advising. Typical advising literature considers assessment based upon generally accepted advising development theories (Williams, 2007). This could be an expansion of that consideration. McClellan (2011) suggested the use of widely accepted business assessment models. These different approaches add a new lens through which to evaluate student expectations and development, along with advisor performance.

References

Crookston, B. B. (1972). A Developmental View of Academic Advising as Teaching. Journal of College Student Personnel, 13(1), 12–17.

McClellan, J. L. (2011). Beyond Student Learning Outcomes: Developing Comprehensive, Strategic Assessment Plans for Advising Programmes. Journal of Higher Education Policy and Management, 33(6), 641–652. doi:10.1080/1360080X.2011.621190

Teasley, M. L., & Buchanan, E. M. (2013). Capturing the Student Perspective: A New Instrument for Measuring Advising Satisfaction. NACADA Journal, 33(2), 4–15. doi:10.12930/NACADA-12-132

Trabant, T.D. (2006). Advising Syllabus 101. Retrieved from NACADA Clearinghouse of Academic Advising Resources Web site:
http://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Creating-an-Advising-Syllabus.aspx

Williams, S. (2007). From Theory to Practice: The Application of Theories of Development to Academic Advising Philosophy and Practice. Retrieved from NACADA Clearinghouse of Academic Advising Resources Web site:
http://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Applying-Theory-to-Advising-Practice.aspx


Joining a new community

We’ve all had the experience of walking into a new environment and wondering how best to fit in and succeed. We are experiencing it now as we start this program. In “Communities of Practice and Social Learning Systems,” Wenger (2000) presents the complexity around learning and outlines the scope and purpose of learning communities. The article offers definitions and explanations of how communities are formed and knowledge is built, and it discusses how learning is not just about displaying competence.

As I read the article, it put an experience of my own into context. Two years ago I was laid off from a job. I had worked in that school for 12 years. I had been actively involved in creating many of the tools, processes, and resources we used in student support. I was extremely active in a professional network and had a reputation for success throughout our industry. Then I moved to a completely different school within ASU. I realized an entirely different community existed within undergraduate advising. Although I had worked at ASU for 12 years, it was as if I had come into a brand new organization. Our communities of practice shared some basic technology resources and facilities but were otherwise extremely different. It was shocking. It required that I modify my own definition of success based upon new criteria. As the article stated, “we define ourselves by what we are not as well as by what we are, by the communities we do not belong to as well as by the ones we do” (Wenger, 2000). And my community was entirely different. As I read the article, I pinpointed much of what I had experienced: the boundaries of moving within communities, the jargon, the internal tools and resources, etc. On the other hand, I’ve also been able to bring a new perspective to my new community. It was a growth experience, but I have definitely broadened my own ‘knowledge’ and can now exist within these two communities.

As I prepare to conduct my own research in a community, I recognized a broader application for understanding communities of practice. It took some time to learn what I did about the new community, but I learned it. In the article “Participatory Action Research and City Youth,” the authors established the rationale for engaging in Participatory Action Research (PAR). After reading the article by Wenger (2000), I found significant value in considering PAR in relation to communities of practice.

In regards to PAR, the authors discussed the rules, norms, leaders, and boundaries of the youth action research, and I found great guidance for outlining my own approach to my research area (Bautista et al., 2013). I’m interested in measuring the effectiveness of advising practices as well as advisor performance. While their case study discussed the engagement of youth, the application I identified for my research was the need to ensure advisors and students (my communities of interest) are actively engaged. The article even made the clear point that these communities should be engaged in the creation/identification of the problem itself.

With the influence of these two articles, I reflected on various questions. How can I contribute to the action research of my own advising community? What are the parameters (boundaries) by which I can answer questions about my own proficiency? Isn’t that understanding really critical before I start questioning others’ proficiency? And, shouldn’t I be sure to involve them when I start asking?

“Identity is crucial to social learning systems for three reasons. First our identities combine competence and experience into a way of knowing…Second, our ability to deal productively with boundaries depends on our ability to engage and suspend our identities…third, our identities are the living vessels in which communities and boundaries become realized as an experience of the world” (Wenger, 2000, pg. 239).

References:

Bautista, M., Bertrand, M., Morrell, E., Scorza, D., & Matthews, C. (2013). Participatory Action Research and City Youth: Methodological Insights From the Council of Youth Research. Teachers College Record, 115(100303), 1–23.

Wenger, E. (2000). Communities of Practice and Social Learning Systems. Organization, 7(2), 225–246.


Assessing Advisor Practices with the Student Perspective

Teasley, M. L., & Buchanan, E. M. (2013). Capturing the Student Perspective: A New Instrument for Measuring Advising Satisfaction. NACADA Journal, 33(2), 4–15. doi:10.12930/NACADA-12-132

Summary: This article discusses the rationale for assessing student perceptions of advisor interactions in the context of applying advising practice theory. It proposes a survey instrument that institutions can use to assess these interactions. The article seeks to identify any connection between certain student development theories and students’ perceptions of the experience (i.e., whether a certain type of advising theory more significantly influences the experience). The purpose of this research is to more effectively quantify how advisor interactions influence student retention (Kuh, 2008). A survey instrument was developed and deployed three times; each time it was administered, statistical analysis was conducted and modifications were made for the next iteration.

O’Banion (1971, 1994, 2009) identified the five major dimensions of the advising experience, which are meant to outline the types of discussions advisors lead with students. This type of advising is called developmental advising; the theory suggests that advisors support the development of the understanding and skills that enable a successful college experience. Another type is prescriptive advising, which includes course scheduling, discussing graduation requirements, and similar tasks, and is meant to outline next steps for students. Each type of advising has a place; however, advisors are encouraged to leverage developmental advising practices, as they are most beneficial to a student’s growth.

Teasley and Buchanan (2013) developed a survey “originally designed to measure satisfaction with prescriptive functions…developmental functions…and overall advisor traits” (pg. 6). Three factors were initially considered in the survey: prescriptive, developmental, and advisor functions, which correlate to student developmental advising theories. These factors were considered in the first two versions of the survey; the third version eliminated the factor related to advisor functions. The article concluded with support for the validity of the survey and encouragement of its adoption within institutions.

Organization: The article was organized well. It began with a review of key literature and effectively demonstrated the need for assessment of advising. The literature review included key pieces in student development theory and general assessment for the purposes of increasing student retention. It included a discussion of the limitations of the research. The research data and collection methodology were conveyed. It concluded with appropriate recommendations and possible future steps.

Methodology: Exploratory Factor Analysis was used with the first two iterations. As previously discussed, the designers were interested in learning about three factors thought to influence the students’ perceptions. The first and second surveys were administered to the undergraduate research pool, and based upon the analysis, modifications were made to the questions between survey 1 and survey 2.

The analysis suggested that only two factors were contributing to the students’ perceptions: general advising concerns and outreach functions. Students did not distinguish between the use of developmental or prescriptive advising.

As a result, Confirmatory Factor Analysis was employed with the third survey.  The participants involved in the third survey came from the university research pool.

(Since I am not as familiar with these two statistical analysis options, I researched them on Wikipedia; based upon that information, it appeared the data analysis techniques used were appropriate and the guidelines for reliability and validity were followed.)
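For my own understanding, here is a minimal sketch of how a confirmatory analysis of a two-factor instrument could be run in Python, assuming the semopy package; the factor names, item assignments, and file are all invented, since the real ones would come from the exploratory results on surveys 1 and 2:

```python
import pandas as pd
from semopy import Model, calc_stats

# Hypothetical measurement model: two factors carried over from the EFA stage.
spec = """
general_advising =~ q1 + q2 + q3 + q4
outreach =~ q5 + q6 + q7
"""

df = pd.read_csv("survey3_responses.csv")  # hypothetical third-survey data

model = Model(spec)
model.fit(df)

print(model.inspect())    # estimated loadings and variances
print(calc_stats(model))  # fit indices such as CFI and RMSEA
```

Unlike the exploratory step, the factor structure is fixed in advance here, and the question is only how well the data fit it.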

Limitations: The source of the survey questions was not initially discussed. The origin of those questions would have been beneficial in terms of replication.

The undergraduate research pool was utilized, so students could have participated in both the first and second surveys. It would have been helpful to understand how many students participated more than once and whether they made an impact on the findings.

The third iteration of the survey targeted only two factors, but very little information was shared or discussed regarding the second factor related to advisor outreach. It would have been helpful to understand or convey next steps with those findings. The third survey was intended to further examine the validity of the instrument.

Reflections: My research interests are directed towards the integration of student learning outcomes and advisor performance. White and Schulenberg (2012) highlight and define the value of student learning outcomes:

“The challenge of coming to grips with the questions about learning outcomes is twofold: (1) each institution needs to accept advising as an educational endeavor and identify the relevant learning outcomes and (2) reliable and valid methods to determine if these outcomes have been met need to be developed” (pg. 14).

I need to be mindful that assessment of advising is not the same as assessment of student learning outcomes. There is a lot of literature in the field about assessing advising; however, student learning outcomes aim to measure the student’s learning as a result of advising interactions. Advisor performance, knowledge, and training are certainly components of that, but assessment of learning outcomes looks at whether specific interactions induce learning.

The article references other sources for possible assessment tools, and I certainly want to research and learn more about those. Those other tools were described as instruments created within the institutions themselves and perhaps not statistically tested for validity and reliability. An annotated bibliography on the NACADA website includes valuable resources for learning more about assessment.

An additional aspect to consider in my research area is the inclusion of advisor training and the application of student development theory. I had been considering the application of assessment to influence training, development, and performance management.

References:

Kuh, G. D. (2008). Advising for student success. In V. N. Gordon, W. R. Habley, & T. J. Grites (Eds.), Academic advising: A comprehensive handbook (2nd ed.) (pp. 68–84). San Francisco, CA: Jossey-Bass.

King, M. C. (2005). Developmental academic advising. Retrieved from NACADA Clearinghouse of Academic Advising Resources Web site: http://www.nacada.ksu.edu/Resources/Clearinghouse/View-Articles/Developmental-Academic-Advising.aspx

White, E., & Schulenberg, J. (2012). Academic advising: A focus on learning. About Campus, 16(6), 11–17.


Doctoral Degree Discussion

The authors of the article “Reclaiming Education’s Doctorates: A Critique and a Proposal” are participants in the Carnegie Initiative on the Doctorate (CID), which is sponsored by the Carnegie Foundation. The critical analysis of the CID process is meant to crystallize the purpose of the doctoral degree. The authors participated in discussions with various institutions to help clarify the purpose and objectives of their degrees. This article specifically presents the rationale for introducing a new type of degree for education practitioners and administrators (specifically, those individuals not planning on a career in teaching or research).

In my opinion, the authors are clouding the discussion with the suggestion of an additional program. They seem to be questioning whether the Ed.D. appropriately prepares one for a career of scholarly research and/or whether that degree is even needed for a career in administration. The organization of the article would suggest that a re-evaluation and re-deployment of the Ed.D. could be successful.

This was demonstrated with the example of USC’s success: “The decision-making and implementation processes, though sometimes rocky, resulted in two programs with clearly different goals, requirements, and student populations.” However, when the article later suggests that a professional practice degree would enable career progression, as a reader I’m left wondering, “why shouldn’t it be the Ed.D.?”

According to the authors, Ed.D. students will not experience the depth of scholarly inquiry found in a research doctorate; they cite the lack of full-time study and of participation in learning communities as examples. On the other hand, is it more likely that we are preparing them to be better consumers of academic research, in that they can then interpret and act on research? The authors wrote, “we believe it must be a requirement for the P.P.D., to enable practitioners to make practice and policy decisions—not to add new knowledge per se to the field”. As the article progresses, the authors instead suggest that a lack of clarity in the purpose of the degree (Ph.D. vs. Ed.D.) leads to confusion about what graduates can accomplish and are prepared for in their careers. According to the authors, a doctoral degree should teach you to collect information, research a hypothesis, and present findings, all of which are part of critical analysis. Good decision makers need to be able to balance the need for an effective decision with the time available to make it.

Perhaps the article should instead be advocating for a thorough review following the CID method. The authors write, “the process of reflection, implementation of program changes, and assessment that these departments and programs engaged in is leading to stronger doctoral programs and changed habits of mind in participating faculty and students”.

Questions raised by such a review might include: What is the value of a dissertation? Is it just to deeply research an issue and recommend a course of action? Is it to advance scholarly research?

The authors demonstrated a need for more clarity within the field; however, they ultimately lacked a rationale for the creation of a new degree. Alternatively, suggesting further review of existing degrees might invite more dialogue and commitment from those involved with existing programs.

Shulman, L., Golde, C., Bueschel, A., & Garabedian, K. (2006). Reclaiming Education’s Doctorates: A Critique and a Proposal. Educational Researcher, 35(3), 25–32. Retrieved from the ASU Blackboard database.