Online Learning as Professional Development?

Holmes, A., Singer, B., & MacLeod, A. (2011). Professional development at a distance: A mixed-method study exploring inservice teachers’ views on presence online. Journal of Digital Learning in Teacher Education, 76-85. Retrieved June 19, 2014, from http://files.eric.ed.gov/fulltext/EJ907004.pdf

 

Professional Development, I’m finding, is viewed as more and more essential to preparing teachers to work with diverse student populations and to respond to an educational landscape that shifts its priorities remarkably often. This is a good thing, as it ensures that I’ll have a job for a long time to come. All joking aside, though, Professional Development, when implemented and facilitated effectively, can have significantly positive impacts on students’ achievement and outcomes (Holmes, Singer, & MacLeod, 2011). Yet it also raises questions of access, impact, and excellence.

In their article, Holmes, Singer, and MacLeod (2011) examine the role of online learning in Professional Development and, in doing so, address two of the three challenges above. I will discuss each in turn, along with a missed opportunity to reflect on the impact of their study, which I will also address in this post. Using a mixed-methods approach, the authors looked at the outcomes of five different online Professional Development courses, as measured by participant course evaluations consisting of 24 Likert-scale questions and two long-answer responses. The teachers who participated in this study taught exclusively at private schools, with the majority working with students in grades 3-8 (Holmes et al., 2011).

Upon analyzing the data, they found several connections between teacher demographic information and satisfaction with online Professional Development; most notably, there was a strong positive correlation between the number of online Professional Development modules a teacher had previously taken and their overall satisfaction with the course they were currently enrolled in. This suggests that teachers who have enjoyed Professional Development online in the past are, by and large, the ones who come back for further development in this medium, which makes sense when one thinks about it. If I have found value in something in the past, given its convenience and my ease and comfort with the medium, I will likely engage with it again.
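
As a side note for my own planning, the kind of relationship the authors describe here is straightforward to check once evaluation data are in hand. The sketch below is purely illustrative (the column names and numbers are invented, not the authors’ data) and uses a rank correlation, which is a reasonable choice for ordinal Likert-style composites.

```python
# Hypothetical illustration of relating prior online PD experience to
# overall course satisfaction; data and column names are invented.
import pandas as pd
from scipy.stats import spearmanr

evaluations = pd.DataFrame({
    "prior_online_pd_modules": [0, 1, 1, 2, 3, 4, 5, 6],
    "overall_satisfaction":    [2, 3, 2, 3, 4, 4, 5, 5],  # 1-5 Likert composite
})

# Spearman's rho handles ordinal data without assuming a linear relationship.
rho, p_value = spearmanr(evaluations["prior_online_pd_modules"],
                         evaluations["overall_satisfaction"])
print(f"Spearman's rho = {rho:.2f}, p = {p_value:.3f}")
```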

Traditional teacher Professional Development, which occurs in person through face-to-face interactions and facilitation, can be stymied when schools and/or educational agencies are concerned about cost effectiveness, something I can personally understand, given that I work in the field. This issue of access to content and facilitation is meant to be mitigated by cheaper online modules, as suggested in Holmes, Singer, and MacLeod’s (2011) discussion of the background of Professional Development and online learning. However, the idea of access also presents an additional challenge when it comes to teachers who are not technologically proficient. Holmes, Singer, and MacLeod (2011) suggested that teachers who self-assessed as weak or uncomfortable with technology, or who had only ever participated in in-person Professional Development, were unlikely to rate the course highly and responded that they were also unlikely to take such courses again. If facilitators and providers of Professional Development seek to use this medium for large swaths of the teaching population, then they will also need to find ways to support those who lack the technological proficiency to be successful in such a program.

The idea of supporting educators who struggle to use technology has implications for me and for my community of practice as I begin to think about my innovation. Participants almost universally see the role of the facilitator as crucial to the success or failure of a Professional Development session or module (Holmes et al., 2011). For online learning and Professional Development to succeed, then, the person or persons facilitating the modules must ensure that participants are comfortable with the medium before engaging with the content, or that support systems are in place so those educators know where to turn when they have questions, which they ultimately will.

The second issue raised by this research study is one of excellence, which I am using operationally to mean high quality for the context of this post. Previous research has suggested that certain criteria must be met in order to reach a threshold of quality: purposeful design, skillful facilitator(s), rich conversations and reflections centered on classroom instruction, and integration with powerful teaching methods (Holmes et al., 2011). If online learning is to be used to engage teachers and other educators in Professional Development, then the sessions, courses, or modules must meet the above requirements for quality Professional Development. If participants do not see connections to their daily teaching lives and do not have meaningful opportunities to engage with their colleagues, then the online learning and Professional Development will not meet the requirements of excellence and will be a waste of teachers’ time.

This, to me, is one of the most important considerations for any innovation I seek to implement in my community of practice; if I cannot implement my innovation well, then it is not worth implementing at all. This underscores the importance of being very purposeful and thoughtful in the design of any innovation, so as to make it an effective and useful experience for everyone who participates in it.

The last issue raised by this article is a missed opportunity on the part of the researchers: studying the impact that their Professional Development courses had on the outcomes of the students in those teachers’ classrooms. The authors, by their own admission, suggest that effective Professional Development should better prepare teachers to work with their students in some capacity, for example classroom management, differentiation, or instructional strategies, among others (Holmes et al., 2011). The researchers did ask participants whether they had implemented any changes in their classrooms based on the online Professional Development, and, while 74.8% of them said that they had, there was no measure of the outcomes for students or of whether those changes led to an improvement in student achievement (Holmes et al., 2011). This missed opportunity serves as a good example to learn from: I should always try, whenever possible, to measure the impact that my innovation has on students and their achievement, as that is what really matters.

Do Educators Utilize PD?

Doherty, I. (2011). Evaluating the impact of professional development on teaching practice: Research findings and future research directions. US-China Education Review A, 703-714. Retrieved June 12, 2014, from http://eric.ed.gov/?id=ED527691

 

Professional Development (PD) centered on meaningful learning activities is generally considered to be highly effective. In his article, Iain Doherty (2011) sought to determine whether a satisfactory participant experience following a professional development session corresponds with educators actually implementing changes and utilizing the skills and strategies learned in their teaching practice. Previous research suggests that meaningful professional development rests on several principles: 1) contextual realism that intimately connects with teaching practice, meaning PD modules are linked to challenges and practical teaching situations; 2) content that allows learners to connect new information with preexisting schematic frameworks; 3) the utilization of authentic activities that mimic how new information can and will be used in future activities; 4) multiple and diverse perspectives; and 5) collaborative reflection that promotes articulation of new knowledge (Doherty, 2011).

Doherty (2011) utilized PD modules that were built with the aforementioned considerations firmly in mind. They gave university-level educators in New Zealand information on implementing various technological tools that could enhance their students’ learning, including blogs, social networking sites, and wikis, among others (Doherty, 2011). Further, the PD sessions were not sage-on-the-stage, sit-and-get style sessions; each educator in attendance had the opportunity to create accounts and begin using the tools during the session. To measure his results, Doherty (2011) gave the educators a “pretest” that asked them to assess their own familiarity with the various web-based tools they were about to learn about. Following the session, participants were again asked to self-assess their knowledge, awareness, familiarity, and ability to implement the tools in instruction. Doherty (2011) found that participants were significantly more knowledgeable about the resources following the training than they were before it.

To truly assess the training’s effectiveness, Doherty returned to the educators three months after the PD modules had been completed and gave a survey designed to assess whether the participants had, in any fashion, begun to implement or use the knowledge gained in the PD sessions in their instruction. Doherty (2011) found that the vast majority (91-96%, depending on the technology) of participants had not utilized any of the technology showcased, despite very strong reviews immediately following the sessions. Doherty then sought to supplement his quantitative results with qualitative information ascertained through interviews with willing participants. Doherty’s (2011) sample had diminished from an initial 27 participants to only seven who agreed to be interviewed; only one of the seven had made use of one of the multiple technology tools in their instruction, and the others were unable to articulate why they had not begun to implement the learned strategies.
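
The pre/post/follow-up structure here is one I may borrow, so a small sketch helps me think it through. To be clear, this is not Doherty’s analysis; the scores, variable names, and the choice of a Wilcoxon signed-rank test for paired, ordinal self-ratings are my own assumptions.

```python
# Hedged sketch of a pre/post self-rating comparison plus a three-month
# follow-up implementation rate; all values are hypothetical.
from scipy.stats import wilcoxon

pre  = [1, 2, 1, 2, 2, 1, 3, 2]   # self-rated familiarity before the session (1-5)
post = [4, 4, 3, 5, 4, 3, 5, 4]   # self-rated familiarity immediately after
used_at_3_months = [False, False, True, False, False, False, False, False]

# Paired, ordinal ratings: test whether post-session ratings are systematically higher.
stat, p_value = wilcoxon(pre, post)
implementation_rate = sum(used_at_3_months) / len(used_at_3_months)

print(f"Wilcoxon statistic = {stat:.1f}, p = {p_value:.3f}")
print(f"Reported use at three months: {implementation_rate:.0%}")
```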

Upon reflecting on Doherty’s (2011) methods and results, there are both connections to my own work and areas of strength and weakness, each of which I want to take a moment to address in turn. There are a number of connections between this research and my own community of practice. One of the things we emphasize in my role is the follow-up, to ensure that educators 1) feel supported as they begin to utilize the methods discussed during the actual PD session and 2) actually implement the strategies and tools into their professional work. I think that had Doherty offered ongoing implementation support to the educators, he may have seen significantly higher rates of tool utilization. I know that when I have been a participant in professional development sessions, I’ve left feeling very motivated by all that I am able to do with the new tools and strategies, but if I don’t begin to utilize them almost immediately, I begin to lose my understanding of their capabilities and of how to integrate them into instruction.

One of the strengths of Doherty’s methods was the manner in which he assessed his participants’ knowledge before and after the session, and then gauged their implementation by following up with attendees three months after the modules had been completed. This gives a good understanding of 1) the comfort and familiarity levels with which attendees entered the session, 2) the effectiveness of the facilitator(s) in communicating the desired knowledge to the participants, and 3) how valuable the content was to the educators, as assessed by the rates at which they actually utilized the information conveyed. This approach was a strong one, as it assessed the participants at various, predetermined intervals, providing information that a short-term data collection period wouldn’t even come close to measuring.

Another particular strength of Doherty’s procedure is the use of interviews and qualitative methods to supplement the quantitative information. Doherty (2011) chose to interview participants to pinpoint why and how participants chose to, or in this case, chose not to, utilize the information conveyed through the professional development sessions. Though they couldn’t self-identify the root causes of their inaction, the process of interviewing participants, in addition to a simple post-assessment, offers invaluable insight that might not otherwise be communicated to the researcher. This research model definitely provides a framework that I can utilize as I begin to plan for my own innovation in the area of professional development, combining both short- and long-term quantitative data with qualitative data to provide further information.

The last area I want to address regarding Doherty’s (2011) methods is a lens he lacked, one through which he ought to have collected and analyzed data to gain an even more meaningful perspective on the role of professional development in education. In the introductory paragraph, he writes, “[professional development] is important to improve and enhance student learning” (Doherty, 2011, p. 703). If educators are tasked with improving outcomes for students, and professional development is meant to play a role in that charge, then improvement in student performance, whether academic, social, behavioral, or otherwise, should be an essential consideration when measuring or assessing the effectiveness of any session, content, or implementation. Given that Doherty (2011) states this purpose of professional development, I thought that looking at the change in student outcomes would have been a valuable lens through which he could have collected and analyzed data.

 

Inconclusive research = lame duck

Eynon, R., & Helsper, E. (2011). Adults learning online: Digital choice and/or digital exclusion? New Media & Society, 13(4), 534-551.

As I continue down the breadcrumb path of research in the general area of my area of inquiry, I’ve decided to open up my research to encompass articles that touch on a range of topics:
adult online learning,
effective practices for online learning,
assessing online learning,
unfacilitated online learning,
facilitated online learning,
adult communities engaged/disengaged in online learning,
various theories and frameworks that relate to online learning,
and, what I’m calling, the catch-all connection to online learning.

If I cast a wide enough net, I’m sure to catch a guiding research question that makes the most sense for my work, and in the meantime I can build up a great arsenal of general knowledge about what research has been done in online learning and how it was orchestrated.

This particular article falls under the umbrella of “adult communities engaged/disengaged in online learning”. The study took place in England, Scotland, and Wales and aimed to investigate which adults were engaging in online learning activities. These activities were organized into twelve general areas, such as fact checking, travel, shopping, entertainment, social networking, e-government, civic participation, informal learning, and formal learning/training. The researchers wanted to delve further into the characteristics that these adults had in common and the factors that influenced them to engage or disengage with different types of online learning. They focused on a central divide in the population and wanted to distinguish between voluntary, or choice, reasons and involuntary, or “digital exclusion,” reasons (p. 536).

The researchers’ methodology used the 2007 Oxford Internet Surveys (OxIS), with a sample of 2,350 people. These individuals were selected by first randomly choosing 175 regions in Britain and then randomly selecting ten addresses from those regions. Participants were 14 years of age and older, and the OxIS was administered to all of them face-to-face.

As the researchers consolidated the results and captured their findings, they divided the disengaged respondents into two groups: ex-users and non-users. There were four key reasons why these individuals did not use the internet: costs, interest in using the internet, skills, and access. It appeared that ex-users had made the choice to disengage because they were no longer interested in the internet, while the non-users highlighted markers of digital exclusion like access and costs.

When it came to investigating why users were disengaging from the internet as a venue for learning opportunities, the characteristics of the users did tend to trend together, but the picture was complex, as it often depended on the type of online learning activity. Users who are highly educated, have children over the age of 10, and have high levels of internet self-efficacy were found to be more likely to engage in formal learning activities via the internet. An underlying element that was important for informal learning was having access to the internet at home. Also, upon analyzing the data set and its trends, the researchers began to see pockets of individuals who were “unexpectedly included” or “unexpectedly excluded” (p. 542).
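
To make this concrete for my own notes, the breakdown the authors describe amounts to comparing participation rates across sub-groups. The snippet below is only an illustration of that idea; the categories and numbers are made up and are not the OxIS data.

```python
# Hypothetical illustration of comparing formal online learning rates across
# education level and internet self-efficacy; not the OxIS dataset.
import pandas as pd

respondents = pd.DataFrame({
    "education":              ["higher", "higher", "basic", "basic", "higher", "basic"],
    "internet_self_efficacy": ["high", "high", "low", "high", "low", "low"],
    "formal_online_learning": [1, 1, 0, 1, 0, 0],   # 1 = engaged in the activity
})

# The mean of a 0/1 indicator within each sub-group is that group's participation rate.
rates = respondents.groupby(["education", "internet_self_efficacy"])["formal_online_learning"].mean()
print(rates)
```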

They conclude the research by stating that this investigation into users’ engagement and disengagement with the internet and online learning is important because it demonstrates that the more information organizations or educational institutions have about a user, the more likely they are to be able to provide tailored, differentiated user support that increases the amount of learning activity that takes place.

If I were to turn my critical eye on this article, I unfortunately find more ways to improve it than strengths. I think one of the greatest strengths of the article is that the content is organized clearly, and it did a thorough job of contextualizing the importance of finding answers to the research questions.

An immediate area of improvement is the literature review and the theoretical framework supporting the research. It truly appears that more time was spent explaining the need for the research than on the content and theory that act as the foundation of the work.

The data collection process in general appeared to be well thought out, with good randomization of quite a large sample. Unfortunately, a few elements of the methodology are missing. Why was the OxIS survey tool in particular used, and how was it the right tool for this research? What questions are on the OxIS? Also, there was no explanation of how the surveys were conducted beyond “face-to-face” (p. 537). Were answers recorded by hand by the participant or the researcher? How many researchers were involved, and was there any training needed to maintain consistency amongst the team? And what protocol was used during the survey?

The analysis of the data and the findings were good and very detailed, but it almost seemed like the research really didn’t find much, and what was found supports what would already seem very sensible. As for the discussion and conclusion, it just seems like the conclusion was that this survey could be done better in the future and that next time around they’ll also gather qualitative data, etc. If that’s your conclusion, then did we find out anything important in this study at all? And if the conclusion is that there should be more studies in the future, then that’s not much of a conclusion. Since the study lacked conclusiveness, it makes sense that the authors weren’t able to offer suggestions about what educational organizations could do to provide tailored or individualized supports. It also seemed like there was no clear ability to distinguish between factors of choice and those of digital exclusion.

Personally, I think the researchers have a lot of room for development when it comes to building on this research. I agree with them that a strong next step would be to combine the survey with an interview to capture some qualitative data. I think it would also be worthwhile to partner with one of the informal or formal institutions they identified, survey and interview learners before and after their online learning experience, and capture the actions the organization took to onboard learners, orient them to the technology and learning scope, and support their ongoing learning. This data, in concert with the other data collected, could help paint a clearer picture of who these online learners are, what needs they have, and whether the organization is fulfilling those needs.

I think this second round of research could have great potential for providing more access to equitable educational opportunities. If these researchers could really home in on the factors that exclude learners from online opportunities, or even on the actions that unexpectedly help to include individuals, then that information could be used directly by organizations to help these learners access learning opportunities that otherwise would not “exist” for them.

This article continues to help paint the picture of how much support, thought and detail should go into your writing and research. Learned something important tonight—if you don’t have anything conclusive in your conclusions, then something went wrong!

Facebook as Professional Development?

Rutherford, C. (2010). Facebook as a source of informal teacher professional development. In Education. Retrieved from http://ineducation.ca/index.php/ineducation/article/view/76/512.

 

For professional development (PD) to be considered effective, it must meet four criteria. These criteria characterize PD as: 1) sustained, on-going, and intensive; 2) practical and directly related to local classroom practice and student learning; 3) collaborative and involving the sharing of knowledge; and 4) participant driven and constructivist in nature (Rutherford, 2010, p. 62). In the journal article, Facebook as a Source of Informal Teacher Professional Development, author Camille Rutherford seeks to ascertain whether discussions between teachers and other educational professionals on social media can be considered professional development and whether such informal conversations meet the above four criteria for effective PD.

Rutherford (2010) begins her article by giving historical context for the seven categories that form the knowledge base for teaching; such categorization serves to “simplify the otherwise outrageously complex activity of teaching” (Rutherford, 2010, p. 61). These seven categories are not meant to reduce the teaching profession to a list of criteria, but rather form contextual categories that help synthesize the diverse areas in which professional development can be offered. The seven categories, as first defined by Shulman (1987), are: 1) general pedagogical knowledge; 2) curriculum knowledge; 3) pedagogical content knowledge; 4) knowledge of learners and their characteristics; 5) knowledge of educational contexts [e.g. different styles of education]; 6) content knowledge; and 7) knowledge of educational ends, purposes, and values [e.g. historical perspectives] (Shulman, 1987, p. 7).

In order to determine whether teachers’ conversations on social media met the criteria to be considered effective PD, Rutherford monitored the postings in a Facebook group for teachers in Ontario, Canada. She notes that Facebook has the perception of being an “adolescent playground ripe with juvenile gossip and social bullying”; despite that perception, however, it has become a space where professionals gather to network and exchange ideas and resources (Rutherford, 2010). In her monitoring of the Ontario Teachers – Resource and Idea Sharing group, which at the time (2010) had more than 8,000 members, she used both qualitative and quantitative examinations of the discussion topics.

Over the course of the 2007-08 school year, she found that 278 new and unique topics of discussion were created, generating 1,867 posts from 384 different Facebook users (Rutherford, 2010). Any post that didn’t garner more than two responses was excluded from the study, since, without another’s input, it cannot be considered a discussion. Any post deemed too sales-oriented, or geared toward promoting an item, product, or service for a fee, was also excluded (Rutherford, 2010). Two independent “coders” then went through the remaining posts and categorized them into one (or more) of the seven categories of teacher knowledge (see above).
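
Rutherford does not report it this way, but whenever two independent coders categorize the same posts, an agreement check is a natural companion step. The sketch below is my own hypothetical illustration of that idea, using Cohen’s kappa; the labels are invented.

```python
# Hypothetical inter-coder agreement check; labels are invented, not Rutherford's data.
from sklearn.metrics import cohen_kappa_score

coder_a = ["pedagogy", "employment", "curriculum", "pedagogy", "learners", "employment"]
coder_b = ["pedagogy", "employment", "curriculum", "contexts", "learners", "employment"]

kappa = cohen_kappa_score(coder_a, coder_b)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0 = chance-level
```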

The study found that the majority of the posts were related to Pedagogical Content Knowledge (strategies, tips, and tricks to help out in the classroom), representing just over a quarter of all posts (Rutherford, 2010). The next category was a surprising one, as it didn’t fit into any of the categories in Shulman’s conceptual framework for teacher knowledge, so Rutherford created a new category: Employment (opportunities and/or related questions). Posts in this area made up 22.5% of all posts analyzed (Rutherford, 2010). The final category representing more than 10% of total posts was Curriculum Knowledge, at 19.8%. All other categories each comprised less than 10% of total posts (Rutherford, 2010).

One of the essential features of effective professional development is that it be collaborative, on-going, practical, and participant driven. Rutherford (2010) found that the average number of months that users were actively engaged in discussion was less than 2 months (1.79 months) and the average user made only 4.2 posts during that span. These data suggest that discussions happening on Facebook, while certainly constructivist, collaborative, and participant-driven in nature, were lacking the essential “on-going” feature necessary for effective professional development.

In my situational context, as a professional development provider to schools across the state, we’ve tried to integrate more online components into our professional development offerings, only to find that teachers generally have not utilized them to the extent we were hoping. I see this evidenced in my own practice as well. When I reflect on my own professional development, both as a teacher and in my current role as a trainer, I’ve been asked to “continue the conversation” on Edmodo, a social media site similar in platform to Facebook, but dedicated to educators and education. I found the steps of creating a username and password, confirming my email, setting up a profile, requesting access to the page, and waiting to be granted access to be cumbersome steps that did little to streamline the continuation of learning. In writing this blog post, I went back to those pages, only to find that there had been only one post in the eight months the group had been around.

In my own learning experiences, like my Master’s degree, for example, I found online modules, classes, and activities to be an ineffective medium for facilitating true learning, as the “flow” of a conversation was very unnatural and not conducive to insightful reflections and discussions on practice and pedagogy. While I’m sure that some people enjoy and find value in the convenience of the online style for meeting their varying schedules and time constraints, there is something incredibly valuable for me about in-person, face-to-face interaction when learning from and with other people. It becomes much easier, in person, to hear the other person’s tone, read their body language, and ask follow-up questions in a meaningful and timely manner, things that are lost through virtual communication. Because of these sentiments, I generally agree with Rutherford’s assessment when she says, “Facebook teacher groups and similar forms of social media should be seen as an effective supplement [emphasis added] to traditional teacher professional development” (Rutherford, 2010, p. 69). The idea that online modules could ever replace in-person professional development is not one I could support, but they certainly have a role to play as a free, low-risk, and convenient medium for teachers to collaborate and learn from one another.

 

Additional works cited:

Shulman, L. S. (1987). Knowledge and teaching: Foundations of the new reform. Harvard Educational Review, 57(1), 1-22.

Promoting success in online education… but, what is success?

Harrell II, I. L. (2008). Increasing the success of online students. Inquiry, 13(1), 36–44. Retrieved from http://www.vccaedu.org/inquiry/inquiry-spring-2008/1-13-Harrell.html

A concise, if somewhat simplistic, piece, “Increasing the success of online students” highlights three components that impact student retention in online or distance education programs (Harrell, 2008). These are student readiness, orientation, and support. Harrell notes that online and distance education research also demonstrates the importance of “instructor preparation and support” and “course structure” for online student success, but the author sets these aside for this discussion. In part because online education programs suffer from very high attrition rates, the author focuses on retention as the primary indicator of online student success.

 

Whereas other studies of online learner success, particularly prior to the extensive penetration of the internet into the distance education domain (Roblyer, Davis, Mills, Marshall, & Pape, 2008), focus on either learner characteristics or the learning environment, Harrell does not make this distinction. Corroborating this approach, through an extensive research effort culminating in a readiness instrument for [prospective] online learners (the Educational Success Prediction Instrument [V2]), Roblyer, Davis, Mills, Marshall, and Pape (2008) state that their “findings indicate that a combination of student factors and learning conditions can predict success” of online learners, “though predicting success is much easier than predicting failure” (p. 99). The orientation of the piece is higher education – the author is an assistant professor and the coordinator for student affairs at J. Sargeant Reynolds Community College, presumably writing from his own context; however, the references used and the message are more broadly applicable. While Harrell’s piece is not revelatory, it reinforces certain best practices, espoused by related studies, relevant for online learning program development.

 

“Positive impact on online student success”

When an individual embarks on anything new, preparation for the new environment, expectations, relationships, and skills required is integral to his/her capacity to endure what’s ahead positively and productively. Harrell recommends assessing student readiness for online learning before a student begins coursework, then using this information either to counsel students against the online option or to build an individualized support strategy for each student, based upon their apparent strengths and weaknesses. An orientation should follow, possibly in the form of an entire course (as exemplified by Beyrer (2010) and the Online Student Success online education orientation course). The author favors online (vs. face-to-face) orientations, to get students navigating the technologies and program expectations immediately, in ways that “mimic” their educational program, before coursework is disrupted by the student’s [inevitable] technical struggles. Student technical support that is as accessible and available as the “anytime, anywhere” coursework is absolutely necessary. The useful suggestion is made to leverage the skills of student workers and others within and beyond the school community to optimize support in this way (without requiring financial and human resources to which many schools lack access).

 

Enabling students to feel and cultivate their own sense of community and belonging is critically important – to students’ individual achievement and to the success of the program. The author cites studies that have recorded students’ reasons for withdrawal as very often being a sense of isolation, or not feeling a part of something (bigger than themselves). A community among online students is relevant for facilitating a peer culture with mutual engagement, contributing to each student’s school support system, and creating opportunities for interdisciplinary collaboration and shared “real world” experiences. Tools for communicating regularly and without pretense, e.g. instant messaging and social networking, and online spaces, e.g. “virtual lounges,” for students to connect on academic topics or personal interests can support the development of communities. “The more students integrate into the formal and informal social and academic culture of the institution, the more successful they will be” (Harrell, 2008). In addition to these important features of an online program that Harrell focuses on, Roblyer et al. (2008) emphasize that “initial active involvement in online courses predicts success. That is, students who are active in the first few weeks of the class are more likely to be successful in the course; dropout behavior is most likely to occur in the early weeks of the course” (p. 106).

 

The development of a “sense of community” is different from developing a community of practice.  “Communities of practice [as defined by Etienne Wenger-Trayner] are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly” (http://wenger-trayner.com/theory/). Perhaps the more inclusive (for both the participants and the institution) and ultimately impactful approach is to develop a community of practice among online learners.

 

Peers – in multiage groups spanning grade levels – might organize an action research agenda around a theme or specific research question, as one example of constructing empowered communities of practice among online student populations. They could do this on a semester, annual, or episodic basis, but continually throughout their postsecondary career. Each student would have a position in the community, defined in part by their experience and budding expertise (or competences, as Wenger [2000] discusses this). Through the shared research agenda, with each individual engaged in and accountable for some aspect of the process, as well as coordinated action steps to maintain the group’s “alignment” to the co-constructed vision and mission, the students would gain invaluable experience navigating the worlds in their research purview, collaborating with each other, and working toward a common purpose (Bautista, Bertrand, Morrell, Scorza, & Matthews, 2013; Wenger, 2000). The community of practice would serve students’ development in ways applicable to and that transcend academia – arguably supporting their “success”. Moreover, the likelihood of their retention would be significantly improved.

 

“Success” = Retention?

Harrell uses “success” and “retention” nearly interchangeably.  Is student success no more than an enrollment number?  Many days, given considerable budget constraints and the overly convoluted ADM calculations process (average daily minimum [ADM], which refers to the compensation charter schools receive per pupil) for online schools in the state of Arizona, retention feels so crucial to institutional “success” (read: viability and sustainability) that it doesn’t seem a stretch to conceptualize student success in the stark terms of attendance vs. withdrawal.  However, the effort and heart involved in establishing a new school is likely not just for the warm bodies and smiling faces (hidden behind various screens).  The purpose is more plausibly to provide a better, alternative, or altogether unique educational opportunity to some subset of students.  Defining success in this narrow way unquestionably narrows the exploratory purview: if the investigator is interested only in conditions and learner characteristics that lend themselves to a student’s staying in or leaving a school, will the data capture include relevant life circumstances (e.g. having a baby, needing to care for an ailing family member, having to prioritize income generation, or an onset of a mental disability)?  In other words, will this highly limited conceptualization of success skew the perspective on online educational program quality?

 

On a personal note, I had a meeting this week with a student who “dropped out” of our brick-and-mortar school in her eleventh grade year, due to a sudden emergence of debilitating expressions of a mental condition.  This would be a “failure” – on the student’s part and on our part – with respect to Harrell’s use of “success”.  However, she returned.  Several months later, she feels, once again, capable of course work.  Success!  (For now.)  A more comprehensive investigation would seek an understanding of: what kept the family connected to our school; why they felt they could trust us during her leave and now upon her return to care for her appropriately; and, what sorts of support they have received from us that kept their family loyal.

Roblyer et al. (2008) suggest that “virtual schools … must come to gauge their success not only in terms of numbers of students served and courses offered but also in terms of how much they provide access and support to students most in need of an educational edge” (p. 107). The intent of this post is not to interrogate the author’s use of “success,” but perhaps that inquiry will emerge in the future. What is most interesting about this examination is what it signifies for program development: the benchmarks for programmatic evaluation and the metrics of success are, by necessity, predicated upon the institutional imagining of Success – at the student level and at the organizational level. When we speak of “excellence” in our contexts and consider an action research program to improve upon some aspect of excellence or, more generally, to strive toward it, it is unlikely that retention emerges as the lead indicator.

 

Bautista, M. A., Bertrand, M., Morrell, E., Scorza, D., & Matthews, C. (2013). Participatory action research and city youth: Methodological insights from the Council of Youth Research. Teachers College Record. Retrieved May 30, 2014, from http://www.tcrecord.org.ezproxy1.lib.asu.edu/library/content.asp?contentid=17142

Beyrer, G. M. D. (2010). Online student success: Making a difference. MERLOT Journal of Online Learning and Teaching, 6(1). Retrieved from http://jolt.merlot.org/vol6no1/beyrer_0310.htm

Roblyer, M. D., Davis, L., Mills, S. C., Marshall, J., & Pape, L. (2008). Toward practical procedures for predicting and promoting success in virtual school students. American Journal of Distance Education, 22(2), 90–109. doi:10.1080/08923640802039040

Wenger, E. (2000). Communities of practice and social learning systems. Organization, 7(2), 225–246. doi:10.1177/135050840072002

Wenger-Trayner, E. (n.d.). Communities of practice: a brief introduction. Retrieved June 05, 2014, from http://wenger-trayner.com/theory/

Predicting online learner success to prepare for success

Roblyer, M. D., Davis, L., Mills, S. C., Marshall, J., & Pape, L. (2008). Toward practical procedures for predicting and promoting success in virtual school students. American Journal of Distance Education, 22(2), 90–109. doi:10.1080/08923640802039040

 

Until relatively recently, distance education research and discussion of student success largely ignored the influence of the learning environment. As the internet increasingly came into the purview of distance education, the dominant focus on individual learner characteristics as the source of insight into student success came into question. Roblyer, Davis, Mills, Marshall, and Pape ask the question: “Can any measured student cognitive and background characteristics be combined with learning environment characteristics to predict the success or failure of high school students in online courses” (2008, p. 96). This piece presents the second iteration of a model for predicting the success of students in a virtual high school, originally developed by Roblyer and Marshall in 2002.

 

The impetus for the work seems to be, first, that the drop-out rate in online education settings is significantly higher than in conventional learning environments (typical brick-and-mortar high schools), and, second, that there were no effective models (in terms of the percentage accuracy with which they identify a student’s likelihood to remain in school and achieve academically) to help assess a student’s potential success in a virtual school. This study presents a revision of the Educational Success Prediction Instrument (ESPRI), referred to as ESPRI-V2. It has been trimmed to a 60-item Likert scale measuring “technology use/self-efficacy (self-assessment of one’s ability with technology), achievement beliefs (confidence in one’s ability to learn, an aspect of locus of control), instructional risk-taking (willingness to try new things and risk failure in instructional situations, related to locus of control), and organization strategies (ways to organize for more efficient learning)” (p. 102). These four factors were originally derived from an extensive literature review of previous educational psychology and distance education studies on learner success in asynchronous and/or geographically distributed education situations, and they represent what emerged as most influential from a direct logistic regression analysis. This output was combined with “two student background variables (age and self-reported GPA), and two environmental variables (home computer availability and school period for working on the virtual course)” to yield a highly reliable model (p. 99). (Both sets of descriptive variables were shown in the review of the literature to be incontestably significant for predicting online learner success.)
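
Since the article describes combining survey factors with background and environment variables in a logistic regression, a bare-bones sketch of that modeling idea is useful for my own planning. To be clear, this is not the ESPRI-V2 instrument or its dataset; the feature names, values, and outcome labels below are invented.

```python
# Minimal sketch of a logistic regression combining survey-factor scores with
# background and environmental variables to predict online-course success.
# All feature names and data are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: tech self-efficacy, achievement beliefs, instructional risk-taking,
# organization strategies, age, self-reported GPA, home computer (0/1),
# designated school period for the virtual course (0/1)
X = np.array([
    [4.2, 4.0, 3.5, 4.1, 16, 3.6, 1, 1],
    [2.1, 2.5, 2.0, 2.2, 17, 2.1, 0, 0],
    [3.8, 3.9, 3.1, 3.6, 15, 3.2, 1, 1],
    [2.5, 2.2, 2.4, 2.0, 18, 2.4, 1, 0],
    [4.5, 4.4, 4.0, 4.3, 16, 3.9, 1, 1],
    [2.0, 2.8, 2.2, 2.5, 17, 2.6, 0, 1],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = passed the online course

model = LogisticRegression().fit(X, y)
print("Predicted probability of success:", model.predict_proba(X)[:, 1].round(2))
```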

 

The study was conducted using a very large and seemingly variegated sample, comprising over 2,000 students of the Virtual High School Global Consortium (VHS).  Students were from different parts of the country and attended schools of different sizes, socioeconomic status, and settings (urban, suburban, and rural schools were represented).  The main factor missing from the sample is a diversity of access to internet at home – all students in the sample had the internet (and related devices) at home.  It would be interesting to apply ESPRI-V2 to learners whose access to the internet is more difficult and/or less reliable; these individuals would likely display less comfort overall with the platform and navigating the online world generally, which, one could reasonably assume, would impact findings.  Additionally, 80% of study participants had a period during the school day designated for their online course work; it is logical to assume that less supervision and a schedule structure requiring more student self-management would impact learner success.  This is the case in many online learning settings, particularly for remediation and credit recovery, where, often, students need credits to graduate and are not duly motivated to pace their work for quick completion (rather, if permitted, students may wait until the month of graduation to worry about their missing Geometry credit, for example).

 

ESPRI and the approach of Roblyer et al. are rooted in the sociocultural tradition, which values understanding “people’s everyday activities rather than focusing exclusively on formal educational contexts and academic subjects. The emphasis is on the ways psychological processes emerge through practical activities that are mediated by culture and are part of longer histories” (Ito, Gutiérrez, et al., 2013, pp. 42-43). They assessed the existing literature and discourse and observed an important under-appreciation of factors associated with the student’s learning experience and environment, including technology access and life circumstances. Their “results indicate that environmental variables can play as important a role in a students’ success as the characteristics and background students bring to the course” (p. 105). As a predictive tool, ESPRI-V2 is valid and robust, having predicted success (for its sample of VHS students) with 93% accuracy.

 

The important pursuit that follows from these findings is how to construct an academic program that is supportive of those predicted to succeed as well as of those who are sure to struggle (if they choose to pursue the online option after the reflection opportunity ESPRI affords program advisors to facilitate during the pre-enrollment process). One approach for an online high school is sketched briefly in the following. Upon enrollment, a student will not only take a placement assessment, to help determine specific academic strengths and weaknesses, but will also take a questionnaire including the measures comprising the Educational Success Prediction Instrument. The questionnaire will be augmented with questions eliciting a range of personal details, such as hobbies, siblings, passions, aspirations for the future, concerns, and expectations for the program. Armed with a wealth of data about each individual, including past academic performance (which schools receive for any incoming student in the form of report cards or transcripts), technology skills and access (self-reported), fears and hopes, as well as the precise output from the ESPRI-V2 element, a personal learning plan can be crafted with the student. The plan and associated correspondence with the student’s instructors will be driven not merely by academic requirements and school budgetary concerns (online charter schools in Arizona receive funding based upon each student’s average daily minimum number of instructional minutes, relative to the average number of required minutes annually). This working document will be unique to the individual, tailored to help the student connect with his/her interests throughout his/her high school career, as well as to his/her potential strengths and weaknesses as an online student, as indicated by this onboarding questionnaire.

 

Prior to beginning any coursework articulated in the plan, the student will engage in a program and online learning orientation. Structured like a course itself, the orientation is an opportunity for students to get familiar with and connected to the program, peers, staff, and the technological tools the student will be expected to use. This has been recommended by several scholars thinking about student engagement and online learner success, e.g. Beyrer (2010), who developed and studied the utility of an online orientation course for online students at a small college, and Jagannathan and Blair (2013), who echo that orienting students should be integral to the all-important efforts of a school endeavoring to engage students from “day 1” for retention and achievement. Important aspects of the orientation process would include: requiring students to communicate in multiple modalities; completing tasks by set due dates; collaborating with peers on small projects designed to build relationships and help students familiarize themselves with how asynchronous and synchronous activities may work and the challenges therein; practicing using web-based learning resources in a safe and effective way; and engaging in lessons on online learning “tips” and digital citizenship.

 

The intent of ESPRI-V2, according to Roblyer et al., is explicitly not for schools to use it to deter or exclude students from an online educational setting. One of the benefits of online education is that it has the potential to enhance access to high-quality education widely, given institutional capacity to support technological device and skill development needs. However, there is value in using a tool such as ESPRI to help counsel families on the options their student is better suited for. Online learning is not for everyone. For those who choose to proceed, ESPRI furnishes schools with valuable data on each student’s potential for success. Schools may implement highly personalized engagement and support plans for each student, toward retention and achievement for the student body.

 

 

Beyrer, G. M. D. (2010). Online student success: Making a difference. MERLOT Journal of Online Learning and Teaching, 6(1). Retrieved from http://jolt.merlot.org/vol6no1/beyrer_0310.htm

Ito, M., Gutiérrez, K., Livingstone, S., Penuel, B., Rhodes, J., Salen, K., Schor, J., Sefton-Green, J., & Watkins, S. C. (2013). Connected learning: An agenda for research and design. Irvine, CA, USA: Digital Media and Learning Research Hub. Retrieved from http://dmlhub.net/

Jagannathan, U., & Blair, R. (2013). Engage the disengaged: Strategies for addressing the expectations of today’s online millennials. Distance Learning, 10(4), 1–7. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=eft&AN=93996527&site=ehost-live

Roblyer, M. D., Davis, L., Mills, S. C., Marshall, J., & Pape, L. (2008). Toward practical procedures for predicting and promoting success in virtual school students. American Journal of Distance Education, 22(2), 90–109. doi:10.1080/08923640802039040

Roblyer, M. D., & Marshall, J. C. (2002). Predicting success of virtual high school students: Preliminary results from an Educational Success Prediction Instrument. Journal of Research on Technology in Education, 35(2), 241. Retrieved from http://search.ebscohost.com/login.aspx?direct=true&db=aph&AN=8948095&site=ehost-live

Research Topic Post – Online Learning Readiness Assessments

Dray, B. J., Lowenthal, P. K., Miskiewicz, M. J., Ruiz-Primo, M., & Marczynski, K. (2011). Developing an instrument to assess student readiness for online learning: A validation study. Distance Education, 32(1), 29-47.

With the dramatic increase in online learning offerings seen within the K-20 environment, researchers have begun to examine not only the validity of this mode of education, but also students’ preparedness for web-based learning and their prospects for success in the online learning environment. Using Kerr, Kerr, and Rynearson’s (2006) framework for assessing online learning readiness, the authors found that students felt they were indeed ready for online learning, but that their self-assessment was based on their own experiences with technology, leaving room for additional variables to be examined.

The authors of this article were familiar with previously conducted surveys presenting information on student readiness for web-based learning; however, as the researchers state, the results of those surveys provided limited information, and translating that information into tangible data was challenging. The authors therefore conducted a three-phase study to develop a more detailed tool for determining student readiness for online learning: a survey development phase, in which faculty and experts reviewed questions for clarity; an item analysis phase, in which the content of the tool and the research questions were refined through focus groups and interviews; and, finally, a survey validation phase, in which questions from previous surveys were combined with new questions to cover topics relating to student demographics, learner characteristics, and technological ability (Dray, Lowenthal, Miskiewicz, Ruiz-Primo, & Marczynski, 2011).
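
Because this is a validation study, one routine step in the survey validation phase would be checking the internal consistency of the resulting scales. The article’s actual statistics aren’t reproduced here; the snippet below is only a self-contained sketch of Cronbach’s alpha on invented Likert responses.

```python
# Sketch of Cronbach's alpha for a small set of invented Likert items
# (rows = respondents, columns = items on a 1-5 scale); illustrative only.
import pandas as pd

items = pd.DataFrame({
    "item1": [4, 3, 5, 2, 4, 5],
    "item2": [4, 2, 5, 3, 4, 4],
    "item3": [3, 3, 4, 2, 5, 5],
    "item4": [4, 2, 5, 2, 4, 5],
})

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha = {alpha:.2f}")
```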

The participants in their study were 26 graduate students pursuing a degree in educational computing. The results of the study showed that many of the students scored as “ready” for online learning, yet the implications of the study were limited by the lack of sub-groups based on age, sex, or socioeconomic status. This matters because results may show that certain ages, genders, or socioeconomic factors play a role in determining whether a student is prepared for web-based learning. For example, if a student is under the age of thirty, they may rank as prepared for online learning simply because it is reasonably assumed that students in this age group use online communication daily through social networking. The researchers also determined that the term “readiness” needed further clarification for the study’s purposes, as ambiguities arose as to whether readiness was determined by one’s technical ability or by one’s use of and engagement with web-based tools, equipment, and material (Dray, Lowenthal, Miskiewicz, Ruiz-Primo, & Marczynski, 2011).

This article proves beneficial to my study in that the literature review coherently presented previous research done on the topic, along with critiques of the strengths and weaknesses of each article reviewed. The authors found that the literature gave information on general learner characteristics, interpersonal communication abilities, and technological skills (word processing, using spreadsheets, use of search engines, etc.). Noticeably absent, however, was information on students’ work schedules, access to technology, and the expectations for being successful in an online course. I found it interesting that the authors identified an unexplored angle of questioning based on self-concept, self-esteem, and self-efficacy, which could lead to quite different survey results and prove to be an excellent contribution to the field. The authors of this article likened their study to that of Kerr, Kerr, and Rynearson (2006), whose article, “Student Characteristics for Online Learning Success,” also discussed student esteem, efficacy, and self-confidence as a means of determining success in online learning.

The authors create a stimulating argument, showing that readiness is a complex term and must be defined as more than general characteristics. I found the first phase of their survey to have the strongest argument, namely that skills regarded as being part of traditional learning, such as writing and expression, time management, and responsibility, can be carried over into online learning. Additionally, the authors were diligent enough to alter the questionnaires where inconsistencies were present. For example, during the first phase of the study, it was found that students were answering questions based on their personal experience with web-based tools, rather than within the educational context as expected. Therefore, the authors revised the prompts to require students to answer the questions from their educational experience.

The article presented the questionnaire through its various stages of reconstruction, showing how questions were revised during each phase of the study. However, the article lacked a clear description of how the surveys were administered. Was the survey an in-person sheet where students entered answers long-hand? Was the survey administered online through a course management software program, or was the survey collected in a focus group setting in a qualitative manner? These questions present areas of concern, as the setting in which the survey was administered could possibly produce differing results. While the authors presented information on the ages, ethnicities, and majors of study of the participants, the study failed to present information on whether participants were taking an online course for the first time and what distance learning model was used for the specific course in which the survey was given (completely online, hybrid, etc.). Additional areas of study could examine the comparison between undergraduate and graduate students’ online learning preparedness, to see if the results vary between the two populations. Another area of exploration could center on how participants’ level of social media experience impacts online learning success. Finally, the study could be extended to present data on minority student success in online learning environments, including information on whether one’s socioeconomic status has an impact on online learning. These further studies would provide an effective analysis for researchers and teacher-educators examining underrepresented populations.

This article can be compared to Lau and Shaikh’s (2012) article, “The Impacts of Personal Qualities on Online Learning Readiness at Curtin Sarawak Malaysia,” in which the authors developed a questionnaire to gather information on students’ personality characteristics as a diagnostic tool for both faculty and instructional designers (Lau & Shaikh, 2012). Where Lau and Shaikh’s study shows a higher level of evidence is in its sample: they surveyed over 350 participants, compared to the 26 graduate students surveyed by Dray, Lowenthal, Miskiewicz, Ruiz-Primo, and Marczynski. The findings of Lau and Shaikh’s (2012) study were that students were less satisfied with online learning than with traditional learning environments and felt less prepared for the objectives of the course. Both articles support my research in different but equally necessary ways: Lau and Shaikh’s (2012) article presents compelling statistical data on online learning readiness, while the Dray et al. (2011) article provides information on how to compose efficient survey questionnaires.

References

Dray, B. J., Lowenthal, P. K., Miskiewicz, M. J., Ruiz-Primo, M., & Marczynski, K. (2011). Developing an instrument to assess student readiness for online learning: A validation study. Distance Education, 32(1), 29-47. Retrieved from http://web.b.ebscohost.com.ezproxy1.lib.asu.edu/ehost/pdfviewer/pdfviewer?sid=2ae58e61-960f-4907-be25-93dcd5ba5c38%40sessionmgr114&vid=2&hid=120

Kerr, M., Rynearson, K., & Kerr, M. (2006). Student characteristics for online learning success. The Internet and Higher Education, 9(2), 91-105.

Lau, C. Y., & Shaikh, J. M. (2012, July). The impacts of personal qualities on online learning readiness at Curtin Sarawak Malaysia. Educational Research and Reviews, 7(20), 430-444.