Time to Survey

Lavin, A. M., Korte, L., & Davies, T. L. (2011). The impact of classroom technology on student behavior. Journal of Technology Research, 2, 1–13. Retrieved from https://search.ebscohost.com/login.aspx?direct=true&db=a9h&AN=57522954&site=eds-live

Technology plays a big role in our lives today. However, the amount of technology we use in our everyday lives does not always translate into the amount used in education. At any level, one can find classes that use technology effectively and extensively and classes that do not utilize it at all. The more technology becomes a tool in education, the more important it is to understand all the aspects it impacts.

Student behaviors can change depending on what resources they are using. “The Impact of Classroom Technology on Student Behavior” tries to understand the impact technology has on specific student behaviors. A couple of examples are:

  • The level of preparedness for each class.
  • The quality of notes taken.
  • The level of participation in class discussions.

The authors distributed 700 surveys, of which 557 were returned and usable. The survey was given to students in all levels of business classes at a Midwestern university. “Both versions of the survey used the following five point scale to collect student opinions: ‘1’ was significantly positive, ‘2’ was somewhat positive, ‘3’ was no difference, ‘4’ was somewhat negative, and ‘5’ was significantly negative” (Lavin, Korte, & Davies, 2011, p. 4). To determine whether there were significant effects, the answers were compared to the neutral response of 3: means above 3 showed a negative impact, and means below 3 showed a positive impact. The same behaviors were examined for two groups of students, but the questions were approached differently. The first group of students came from classes where the professors identified a moderate to extensive use of technology, while the second group had professors who indicated that no technology was used. The group in classes with technology was asked how an absence of technology would affect each behavior; the group in classes without technology was asked how the addition of technology would impact each behavior. Overall, the results of the survey showed technology had a positive impact on student behaviors (Lavin et al., 2011).
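To make that midpoint comparison concrete, the sketch below shows one way such a test could be run. The article does not name a significance test, so the use of a one-sample t-test against the neutral value of 3, and the response data, are my own assumptions for illustration.

```python
# Sketch: checking whether mean Likert responses differ from the neutral "3",
# in the spirit of Lavin et al.'s comparison. All data here are invented.
from scipy import stats

# Hypothetical responses to one behavior item
# (1 = significantly positive ... 5 = significantly negative)
responses = [2, 1, 3, 2, 4, 2, 3, 1, 2, 3, 2, 2, 3, 4, 2]

mean = sum(responses) / len(responses)
t_stat, p_value = stats.ttest_1samp(responses, popmean=3)

# A mean below 3 with a small p-value would suggest a significantly positive impact.
print(f"mean = {mean:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```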

The most important impact this article has on the field is that it gives researchers a starting point. As with most of the research I have read, it tends to develop more questions than answers; as the researchers pointed out, this is just a first step. There need to be further studies that include a larger base of students, and studies that focus on which technology has the greatest impact on learning. To make the article a little easier to read, especially in the results section, I would have broken it down into sections based on each group rather than trying to address both at the same time; as written, the conclusion was a little difficult to track. The authors discussed a few theories and nicely connected them to other literature, but until I got to the “Current Study” section, I wasn’t sure what the focus of the research was, and I didn’t feel the discussion of other theories was necessary to discussing the current study. The authors included their data in table format, which I found helpful since the discussion of the results was difficult to track. Additionally, since I plan on including surveys in my work, this gives me a good idea of how to approach them. I was surprised to find a few grammatical errors, but they were minor, and the meaning of the sections was not lost.

I was expecting the results to be a little different than they were. There were a few areas the group with technology said would improve without technology. This wasn’t a result I would have guessed; however, it also brought additional questions to mind. For instance, the group with technology indicated that the absence of technology “would have a positive impact…on the amount of time they study for class each day” (Lavin et al., 2011, p. 5). Had I been the one doing the survey, I would have done a follow-up survey to clarify their reasons. Would they study less because of the amount of notes they would have to take in class, or would they study less because without technology they have no distractions? Or would they study less because they have fewer materials to look at? In my own research, I could do follow-up surveys as needed, or I could provide a comments section under each question in which respondents would be able to clarify their answers. This would complicate the mathematics, though, as it is difficult to put a value on answers that could vary from person to person. One insight I had while reading through the data tables is that I need to build my mathematical knowledge. There were some techniques I vaguely recognized but am unsure how to use now, and there were some that were totally foreign to me. So, either I need to increase my math skills or find a partner who already possesses such skills and will help me.

A survey I would like to conduct within my community would address how technology affects student achievement from multiple views: the teachers’, the administrators’, the parents’, and the students’. I would like to address whether they feel technology affects achievement and how; that is, does it help students learn quicker and deeper, or does it make learning more difficult? In their study, Lavin, Korte, and Davies asked specifically about how technology affects students’ behaviors (Lavin et al., 2011). I would like to address that, but in addition look at whether respondents feel there is a specific technology that allows them a deeper understanding of the content. One thing I will have to consider is the number of surveys (one large survey or multiple smaller surveys) and which will be more reliable.

This study brings focus onto the community and gives it a voice. If students feel their opinion matters, they are likely to be more motivated. I think that the surveys in my research will hold valuable information and will play a big role in my work. Teachers do not need to add technology for the sake of technology; teachers need to add technology because it gives the students a deeper understanding of the concept. I suspect that student achievement is impacted by technology when it requires the student to take charge of their learning.


Online Learning as Professional Development?

Holmes, A., Singer, B., & MacLeod, A. (2011). Professional development at a distance: A mixed-method study exploring inservice teachers’ views on presence online. Journal of Digital Learning in Teacher Education, 76–85. Retrieved June 19, 2014, from http://files.eric.ed.gov/fulltext/EJ907004.pdf


Professional Development, I’m finding, is increasingly viewed as essential to teachers’ preparedness to deal with diverse student populations and their ability to respond to an ever-changing educational landscape that seems to shift its priorities quite often. This is a good thing, as it ensures that I’ll have a job for a long time to come. However, all joking aside, Professional Development, when implemented and facilitated correctly and effectively, can have significantly positive impacts on students’ achievement and outcomes (Holmes, Singer, & MacLeod, 2011). Yet, the challenges presented by this raise questions of access, impact, and excellence.

In their article, Holmes, Singer, and MacLeod (2011) examine the role of online learning in Professional Development, seeking to address two of the three aforementioned challenges, each of which I will discuss in turn, as well as a missed opportunity to reflect upon the impact of their study, which I will also address in this post. Using a mixed methods approach, the authors looked at the outcomes of five different online Professional Development courses, as measured by participant course evaluations, which utilized 24 Likert-scale questions as well as two long-answer responses. The teachers who participated in this study taught exclusively at private schools, with the majority working with students in grades 3-8 (Holmes, et al., 2011).

Upon an analysis of the data, they found several connections between teacher demographic information and satisfaction with online Professional Development; most notably, there was a strongly positive correlation between the number of online Professional Development modules a teacher had previously taken and their overall satisfaction with the course they were currently enrolled in. This suggests that teachers who have enjoyed Professional Development online in the past are, by and large, the ones who come back for further development in this medium, which, when one thinks about it, makes sense. If I’ve found value in something in the past, given its convenience and my ease and comfort in the medium, I will likely engage with it again.

Traditional teacher Professional Development, which occurs in person through face-to-face interactions and facilitation, can be stymied when schools and/or educational agencies are concerned about cost effectiveness, something I can personally understand, given that I work in the field. This issue of access to content and facilitation is meant to be mitigated by the cheaper online modules, as suggested in Holmes, Singer, and MacLeod’s (2011) discussion of the background of Professional Development and online learning. However, the idea of access also presents an additional challenge when it comes to teachers who are not technologically proficient. Holmes, Singer, and MacLeod (2011) suggested that teachers who self-assessed as being weak or uncomfortable with technology, or had only ever participated in in-person Professional Development, were unlikely to rate the course highly, and responded that they were also unlikely to take such courses again. If facilitators and providers of Professional Development seek to use this medium for large swaths of the teaching population, then they will also need to find ways to support those who lack the technological proficiency to be successful in such a program.

The idea of supporting educators who struggle to use technology has implications for me and for my community of practice, as I begin to think about my innovation. Participants almost universally see the role of the facilitator as crucial to the success or failure of a Professional Development session or module (Holmes, et al., 2011). For successful online learning and Professional Development, then, the person or persons in charge of facilitating the modules must ensure that the participants are comfortable with the medium prior to engaging with the content, or that support systems are in place so those educators know where to turn when they have questions, which they ultimately will.

The second issue raised by this research study is one of excellence, which I am operationally using to mean high quality, for the context of this post. Previous research has suggested that certain criteria must be met in order to meet a threshold of quality: purposeful design, skillful facilitator(s), rich conversations and reflections centered on classroom instruction, and integration with powerful teaching methods (Holmes, et al., 2011). If online learning is to be used to engage teachers and other educators in Professional Development, then the sessions, courses, or modules must meet the above requirements for quality Professional Development. If participants do not see connections to their daily teaching lives, and do not have meaningful opportunities to engage with their fellow colleagues, then the online learning and Professional Development will not meet the requirements of excellence, and will be a waste of teachers’ time.

This, to me, is one of the most important considerations for any innovation I seek to implement into my community of practice; if I cannot implement my innovation well, then it is not an innovation that is worth being implemented at all. This underscores the importance of being very purposeful and thoughtful in the design of any innovation, so as to make it an effective and useful experience for anyone who participates in it.

The last issue raised by this article is a missed opportunity on the part of the researchers: studying the impact that their Professional Development courses had on the outcomes of students in the participating teachers’ classrooms. The authors, by their own admission, suggest that effective Professional Development should better prepare teachers to work with their students in some capacity, for example, classroom management, differentiation, or instructional strategies, among others (Holmes, et al., 2011). The researchers did ask participants if they had implemented any changes in their classroom based on the online Professional Development, and, while 74.8% of them said that they had, there was no measure of the outcomes for students and whether those changes led to an improvement in student achievement (Holmes, et al., 2011). Seeing this missed opportunity serves as a good example to learn from, in that I should always try, whenever possible, to measure the impact that my innovation has on students and their achievement, as that is what really matters.

Online Annotations are the new Sticky Notes

Lu, J., & Deng, L. (2012). Reading actively online: An exploratory investigation of online annotation tools for inquiry learning. Canadian Journal of Learning and Technology, 38(3), 1-16.


Summary

Critical thinking is a difficult concept, and students need to learn the skills necessary to engage in it.  My research hopes to incorporate technology, critical thinking, and sound pedagogy in order to help students achieve the maximum benefits when learning.  This research study looks at how a specific piece of technology can be used to help students engage critical thinking skills during reading.  The study was conducted in Hong Kong with students who were the equivalent of tenth graders in the United States.  The review of prior work fell into two categories: a review of the pedagogy behind annotation and a review of currently available annotation web programs.  The literature shows that the more annotations readers take, the more they increase their comprehension; this is true for both the frequency and the quality of those notes.  The annotation process is helpful whether done individually or collaboratively, and this information was factored into the study.  Five online annotation tools were available.  The literature review detailed the differences between them and then explained the rationale for choosing Diigo (Digest of Internet Information, Groups and Other stuff).  Diigo provided a few features that allowed students to interact with the text in ways that some of the others did not, and the authors believed it was best suited to support the critical thinking process.


The study consisted of two classes of students.  One class was an advanced-level class; it began with 44 students, but the study only assessed 42 after accounting for factors such as excessive absences.  The other was a class of regular education students; it began with 37 students but only accounted for 27 once absences were also weighed.  The difference between the two types of classes was purposeful.  One question the researchers examined was how the two groups’ behaviors and observations related to Diigo differed.  Other goals of the study were to find out how all of the students used the technology, how they perceived it, and how their actual use of it compared to their reported responses.  The research was conducted in four sessions.  First, the teacher gave the students material to read.  The next session required the use of Diigo.  In the third session, students worked independently and could take notes or interact with the text however they chose.  The fourth session was for students to share their notes with each other and synthesize information.


The researchers individually observed and assessed each note card entered into the Diigo system, and calculations were based on those results.  The notes were categorized into four sections: define, tag, record, and discuss.  A Likert-scale assessment was also used to measure the students’ opinions, and MANOVA tests were performed.  Scores were adjusted to account for the differences in the number of students in the two classes.  The results showed that Class A used, and reported liking, the sticky notes more; they also used the define category the most.  Both groups reported enjoying the notes according to the survey results; however, Class B had so few notes that some categories could not be fully assessed.
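Since the authors report MANOVA tests across the note categories, a hedged sketch of what such a test could look like follows. The counts, group labels, and use of statsmodels are my own inventions for illustration, not the authors’ actual analysis.

```python
# Sketch: one-way MANOVA asking whether the vector of note-category means
# differs between two classes. All counts below are invented.
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

df = pd.DataFrame({
    'group':   ['A'] * 6 + ['B'] * 6,  # Class A vs. Class B
    'define':  [8, 6, 7, 9, 5, 8, 3, 2, 4, 1, 2, 3],
    'tag':     [4, 3, 5, 4, 2, 5, 1, 2, 1, 0, 2, 1],
    'record':  [5, 6, 4, 7, 5, 6, 2, 3, 2, 1, 3, 2],
    'discuss': [3, 2, 4, 3, 1, 4, 1, 0, 2, 1, 0, 1],
})

# The four annotation categories are the dependent variables; class is the factor.
fit = MANOVA.from_formula('define + tag + record + discuss ~ group', data=df)
print(fit.mv_test())  # Wilks' lambda, Pillai's trace, etc.
```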


Strengths and Critiques

This research compares two classes, one of which had high-achieving students.  One of the big problems is that the tool being assessed involves students’ ability to read critically.  The reason that Class A may have had more notes in Diigo may have had nothing to do with the product or its effectiveness and everything to do with the skill of the students in Class B to complete the assignment.  The research did not detail the reading level of the material presented to the classes, nor did it specify whether it was the same passage for both groups.  There were a few other problems as well.  Class A and Class B were very small, making the results non-generalizable even if both classes had been the same type of learners.  Another problem is that one class lost two students while the other lost ten.  There might have been some dynamic or secondary issue going on (behavior, illness, etc.) that had an effect on the remaining students, which could, in turn, affect the results; the authors didn’t address that issue.  Finally, the design called for students to be grouped by their teachers.  Since both classes started out larger and ended smaller, the research did not explain how groupings were changed during the project as absences occurred.  Since Group B lost ten students and Group A only lost two, that might have been another factor.


The researchers evaluated each note card themselves.  That left room for interpretation of the cards.  There was no independent party also evaluating the messages on the sticky notes, so the breakdown of data could have been construed differently had someone without a bias been the arbiter.
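A standard remedy for this critique is to have a second, independent rater code the same cards and report an agreement statistic. A minimal sketch, assuming Cohen’s kappa as the agreement measure and invented category labels:

```python
# Sketch: quantifying agreement between two independent raters with Cohen's kappa.
# The per-card category labels are invented for illustration.
from sklearn.metrics import cohen_kappa_score

rater_1 = ['define', 'tag', 'record', 'define', 'discuss', 'tag', 'record', 'define']
rater_2 = ['define', 'tag', 'record', 'discuss', 'discuss', 'tag', 'define', 'define']

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # near 1 = strong agreement beyond chance
```

Reporting a value like this would have let readers judge how much the categorization depended on the researchers’ own interpretation.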


The literature review was broken into two sections.  The first section evaluated the pedagogical value behind annotations.  The second section was not a review of literature at all; rather, it was a review of the annotation products currently available on the internet, and it explained to the readers the rationale behind the choice of Diigo as the tool for this research.


The layout of the paper was fine; however, the one typographical error was very noticeable and did create difficulty when reading.  At the end of the Research Question section, the paper stated that there were three questions the authors would be focusing on but then proceeded to list four.  I reread that several times, as I was initially unsure whether the mistake was that the three should have been a four (which is what I concluded) or whether one of the questions on the list was the error.  That mistake was very confusing and distracting.


Further Study

I think that the researchers had a very valuable idea in choosing to research how a specific piece of technology can assist in building students’ reading skills.  In order to extend beyond this study, the research needs to be repeated with several changes.  First, more students need to be involved.  Second, consistent academic levels need to be considered.  Once this study has been repeated with those corrections, other studies can be done to compare the other technology options that were presented at the beginning of the literature review section.  One more direction this study could take is to compare the use of online annotations to traditional annotations with paper and pencil or sticky notes.  This study compared higher-level students with average students, but on a very small scale.  The next study could be done on a larger scale with groups of students at both high and average academic levels, which would make the results generalizable.  The reading passages each group gets could match their abilities, and the results could be compared to each other after the fact, thereby eliminating that variable as a factor.


Relate to Another Reading

The literature review and discussions in Effects of Technology on Critical Thinking and Essay Writing Among Gifted Adolescents (Dixon, Cassady, Cross, & Williams, 2005) argued that little research has been conducted specifically regarding how gifted students learn.  Both studies look at how technology is used by high-achieving students.  Neither study used large enough groups to make the results generalizable, so neither was able to contribute much to the overall development of the literature on the way gifted students acquire knowledge.


Brainstorms for My Area of Interest

This study had two of the three components that I am looking to use in my study: the technology and the critical reading development.  The pedagogy base was discussed in the literature review but not analyzed in the research, so I do not consider it fully part of the research.  I was completely unaware that this type of annotating technology existed until I read this research.  Diigo seems like a simple, free, easily accessible piece of software, and the literature section described several of the options available and provided the sites to access them.  What it made me realize is just how much may be obtainable for my research that I have not even thought about.  I recognize now that I may have been limiting my options.  I am going to begin combing through many places to explore what else may be possible before I narrow my research decisions.


Dixon, F., Cassady, J., Cross, T., & Williams, D. (2005). Effects of technology on critical thinking and essay writing among gifted adolescents. The Journal of Secondary Gifted Education, 16(4), 180-189.


Subject Selection

Guzey, S. S., & Roehrig, G. H. (2009). Teaching science with technology: case studies of science teachers’ development of technology, pedagogy, and content knowledge. Contemporary Issues in Technology and Teacher Education, 9, 25–45. doi:10.1007/s10956-008-9140-4

This week I looked at an article called “Teaching Science with Technology: Case Studies of Science Teachers’ Development of Technology, Pedagogy, and Content Knowledge” (Guzey & Roehrig, 2009). The study looks at how a professional development program called Technology Enhanced Communities (TEC) enhanced science teachers’ TPACK.

TPACK is a theoretical framework derived from Shulman’s idea of Pedagogical Content Knowledge. TPACK is made up of three forms of knowledge: content, pedagogy, and technology. The argument is that a teacher must integrate all three knowledge areas in order to be effective. TEC is described in the article as “a yearlong, intensive program, which included a 2-week-long summer introductory course about inquiry teaching and technology tools.” In addition, there were group meetings throughout the year, which were associated with an online teacher action research course. During the two-week summer course, the participating teachers learned about inquiry-based activities while learning several instructional technologies.

Guzey and Roehrig did qualitative research and collected data through observation, interviews, and surveys. In this study, they chose four teachers new to the field, all of whom had fewer than three years of experience.

The article’s organization makes it very easy to follow and read. However, the order of the sections didn’t make sense to me: Guzey and Roehrig put the profiles of the teachers between the results and the discussion, which made the article feel disjointed, as it didn’t flow properly. The authors clearly explain the theories and give examples of the research they came from. However, the research is supposed to be looking at the impact of TEC, yet I felt there was a lot of focus on inquiry, which is only a component of TEC; nonetheless, too much focus on it. Additionally, the authors went to great lengths to explain what TPACK is, but it wasn’t necessary to understand the theory at the depth provided in order to comprehend the research.

Guzey and Roehrig chose beginning teachers because they felt this would provide more commonalities: the teachers had graduated from the same program, they were all going to be teaching their specialty, etc. However, I totally disagree with this approach to selection. Had veteran teachers been selected, there would have been more focus on the authors’ guiding question rather than on common rookie issues (e.g., classroom management, flexibility, lesson planning). Much of the article discusses these issues, which, while they play a role in being an effective teacher, don’t necessarily impact whether or not the TEC program is working. By selecting veteran teachers, much of this would have been avoided.

The analysis gives a pretty clear picture of their work and, if the resources were available, could be reproduced. In the results section, Guzey and Roehrig stated, “Teachers were each found to integrate technology into their teaching to various degrees.” However, their guiding question was how TEC enhances TPACK; how can the depth of technology integration be their result? In the discussion section of the article they state that TEC was found to have a “varying impact on teacher development of TPACK.” That should have been in their results. Unfortunately, since new teachers are learning so much more at one time than veteran teachers, I don’t know how reliable these results are. It is doubtful that this research had a big impact within the field, as the findings were not significant.

The impact that this research had on my area of inquiry is a different story. I have been solely focused on how integrating technology will impact student achievement, and it never occurred to me to consider the teachers’ experience or effectiveness. If a teacher has poor classroom management, adding technology to the mix is not going to increase student achievement; in fact, it is likely to do the opposite. Managing technology in a classroom adds a degree of chaos. Most veteran teachers are adept at establishing new procedures and have enough forethought to know what those procedures should be. One has to be able to understand what problems may arise with students in order to establish procedures that would circumvent said problems, and it is unlikely that most beginning teachers have this depth of knowledge. Additionally, veteran teachers have the ability to adjust at a moment’s notice when technology fails, which it does and will. This, again, goes back to experience. It would be like giving a two-handed piano piece to a beginning piano student who is only ready to play with one hand. Reading two lines of music at the same time, maintaining a steady tempo, and including dynamics and phrasing is more than one can expect from a beginning musician, but after a few weeks or months of one-handed pieces, that student will be ready to add a level of difficulty. This is not to say that beginning teachers shouldn’t be using technology; the opposite is true. But to utilize beginning teachers as research participants in a study of how effective technology is may not be the wisest decision.

Professional development is not a point I had considered as a piece of my research. Often, professional development is a hit-and-run experience: we receive an hour or two of training, then we, the teachers, are expected to have it completely integrated the following day, and we never speak of it again. This could be why so many teachers are so cynical about new programs. As a music teacher, very few of the professional development sessions I have attended have been catered to me specifically. Because of this, I have spent much of the last twelve years essentially providing my own professional development. On one hand, I have become quite proficient at innovating within my classroom, but had I received more guidance from a veteran teacher, it would have taken me less time to achieve what I have. Technology is a tricky area, in that some people are very comfortable with daily technology interactions and some people struggle with turning on electronics. It may be necessary to include a professional development component within my action research in order to create support for the teachers I work with. It would need to be implemented in such a way that the teachers are able to reflect on and discuss their experiences and brainstorm new ideas. This will create lessons that utilize technology to deepen understanding of the concept, not just add technology for the sake of technology. Overall, I enjoyed reading this article; it really got me reflecting on the presentation of my own work and the components I should or shouldn’t include.

Boys vs Girls vs Computers

Dixon, F., Cassady, J., Cross, T., & Williams, D. (2005). Effects of technology on critical thinking and essay writing among gifted adolescents. The Journal of Secondary Gifted Education, 16(4), 180-189.


Article Summary

My area of interest is critical thinking, technology, and pedagogy.  The pedagogy I hope to focus on is Bloom’s Taxonomy, and I want to create a study where all three meet in my fifth-grade classroom.  This study interested me because it is about critical thinking and technology, and part of the assessment used to measure critical thinking focuses on analysis, synthesis, and evaluation, which are the higher levels of Bloom’s order, so this research fits well with my classroom goal.

The researchers set out to determine whether technology would have an impact on the writing and critical thinking of gifted high school students, and whether the students’ gender makes any difference in the outcome.  The research was conducted at a residential, gifted high school.  Ninety-nine incoming juniors wrote an essay; 39 of those students were male and 60 were female, and all were sixteen years old.  One year later, the students, now incoming seniors, wrote a second essay.  The prompts for both essays were based on the same poem, and the directions given were identical.  The second time the students wrote their essay, they were randomly assigned to one of two groups: one group handwrote the essay as they had done the first time, and the second group used a computer to compose their thoughts.

The essays measured critical thinking using a five-point scale that assessed the skills of analysis, synthesis, and evaluation and did not focus on the mechanics of writing.  To score the essays, two people were brought in and trained until interrater reliability was established; the same two people scored the second essays using the same rubric they had used the year prior.  A second critical thinking assessment, the Watson-Glaser Critical Thinking Appraisal, was also used; that was an 80-question test that measured critical thinking in a different way than the essays, and the results compared the two scores.

There were two sets of results that this study examined.  One was the comparison of the critical thinking scores and the basic writing indicators; here, the study concluded that the two were significantly related.  The other evaluated whether gender and the computers were interconnected, which was the primary focus of the study.  “A…2 (male, female) by 2 (word process, handwrite) repeated measures multivariate analysis of variance was employed, examining four dependent variables at two points in time (WS-1, WS-2). That revealed statistically significant main effects for gender, method of writing at WS-2, and the repeated factor (time)” (Dixon, Cassady, Cross, & Williams, 2005, p. 185).  The study found that boys did better using the computer; it found no difference for girls.
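The quoted design is a full 2 × 2 repeated-measures MANOVA, which is more than a short example can reproduce. The sketch below is a deliberately simplified analogue under stated assumptions: a single invented critical-thinking score instead of four dependent variables, writing method as the only between-subjects factor, and the two writing samples as the within-subjects factor, using pingouin’s mixed ANOVA.

```python
# Sketch: simplified mixed ANOVA analogue of the authors' repeated-measures
# design. Scores, subjects, and the collapse to one dependent variable are
# invented for illustration.
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    'subject': list(range(8)) * 2,
    'time':    ['WS-1'] * 8 + ['WS-2'] * 8,                     # the two writing samples
    'method':  (['handwrite'] * 4 + ['word_process'] * 4) * 2,  # between-subjects factor
    'score':   [3.1, 2.8, 3.0, 2.9, 3.2, 2.7, 3.1, 3.0,         # WS-1 scores
                3.2, 3.0, 3.1, 2.8, 3.9, 3.6, 3.8, 3.7],        # WS-2 scores
})

# The method-by-time interaction is the effect of interest: did scores change
# differently between essays for the word-processing group?
aov = pg.mixed_anova(data=df, dv='score', within='time',
                     between='method', subject='subject')
print(aov[['Source', 'F', 'p-unc']])
```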


Strengths and Critiques

This study had several limitations that were not addressed.  First, the sample size was very small, which makes the results non-generalizable.  Not only does the study begin with 99 students, but it looks at the results based on gender, and the genders are not split evenly to start with, so only 39 boys are part of the study out of the 99 total; that is before dividing the students for the second half of the study.  The researchers also said that the students were randomly assigned to write either by hand or on the computer.  What they did not specify is whether the assignments were deliberately balanced, or stratified by gender (see the sketch below).  If they were not, then there is no way of knowing how many male students used the computer and how many completed their second essay by hand, in which case the results could be very skewed.  Even if the students were evenly divided by gender, that would have left only 19 boys in one group and 20 in the other, which is a very small group.  Another issue to examine is that this research specifically states that it was done with gifted students.  The authors cite the lack of research in the field of gifted education, and doing research specifically for the gifted community is a valuable contribution to the field.  However, in addition to repeating this study within the gifted community, because of the small sample size, it might also be a good idea to run another study with the non-gifted population to see what those results show.  An additional study with special education students might also prove worthwhile.
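If the assignments were stratified by gender, the imbalance problem disappears by construction. A minimal sketch of stratified random assignment follows; the roster mirrors the study’s 39 males and 60 females, but the procedure and group names are my own, since the article does not say whether stratification was used.

```python
# Sketch: stratified random assignment, which would guarantee the gender
# balance the authors leave unspecified. Shuffle within each gender stratum,
# then split each stratum across the two conditions.
import random

random.seed(42)  # reproducible illustration

def stratified_assign(students, key):
    """Randomly split students into two conditions, balanced within each stratum."""
    groups = {'handwrite': [], 'word_process': []}
    strata = {}
    for s in students:
        strata.setdefault(s[key], []).append(s)
    for members in strata.values():
        random.shuffle(members)
        half = len(members) // 2
        groups['handwrite'] += members[:half]
        groups['word_process'] += members[half:]
    return groups

# Invented roster mirroring the study's 39 males and 60 females
students = ([{'id': i, 'gender': 'M'} for i in range(39)]
            + [{'id': 39 + i, 'gender': 'F'} for i in range(60)])

for name, members in stratified_assign(students, 'gender').items():
    males = sum(1 for s in members if s['gender'] == 'M')
    print(f"{name}: {len(members)} students ({males} male)")
```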


One more issue that was not addressed was the amount of word-processing skill or familiarity with computers that any of the students had.  We don’t know how often these students had access to the computers they used for the second essay and whether that could account for any of the disparity.  If, by chance, the males had access to the computers more often, that might be a contributing factor to their increased scores.  We also don’t know whether they were ever given word-processing classes, how often, whether they had the same access as the females, etc.  Other unknown factors that could have impacted the study were the students’ connection to, or interest in, either of the prompts or the poem.


The overall organization of the article was good, and the literature review was detailed.  Several articles were cited to support the connection between critical thinking and writing, as well as articles supporting the use of computers to assist in writing.  There were no editorial errors.


Connecting to Past Research

As a classroom teacher who teaches writing, I think that this study has some interesting promise.  Writing fluency is an important component of being a competent writer, and more research needs to be done to explore the effects of using the tools that are available.  Not only should larger sample sizes be used, but other types of research could be explored.  For example, what types of hardware (tablets, personal computers, etc.) help versus hinder the writing process?  Will word-processing programs get in the way of students’ writing because they become too encumbered with the minutiae of spell-checking and editing rather than focusing on the bigger picture of concepts?  If technology is available and can help students with the writing process, then it is definitely something that should be explored.  If it is something that is prone to help one gender more than another, that is worth examining as well.  If the finding from this research that males are able to write significantly more fluently by using computers is proven valid, then the ramifications could have an enormous impact on the way we help students learn.  The technology is readily available in many classrooms, and if the expectation becomes that we allow boys access to do their writing on technology and, as a result, they become better able to communicate their thoughts, it could have a great impact on their ability to reach academic excellence.


Furthering My Area of Interest

This connects to my research in that it opens my mind to the type of technology I might use with my students.  This study made me realize that I don’t have to use something extravagant in order to be effective and have an impact on student learning.  I had been searching for a specific technology “tool” to use, but that is now something I am starting to rethink.  In one of the research articles I read, the authors named a specific Microsoft program they used (Hubbard & Price, 2013).  After reading this research, I am reconsidering the direction to take.  My goal is to use technology to help students become better critical thinkers based on sound pedagogy.  The background piece for this research began by explaining that laptops are in students’ hands every day, which is why the authors chose to do this study.  The impact of their study alone, even with its issues, has made me consider letting some of the students in my classroom use computers to see if it will help them become more fluent writers.  It has also made me contemplate that when I choose my study, it would be more helpful to my students, and to anyone who reads my study, to use technology that is a part of their daily lives rather than an isolated piece.


Hubbard, J. D., & Price, G. (2013). Cross-culture and technology integration. Journal of the Research Center for Educational Technology (RCET), 9(1).

Music and Technology

Carruthers, G. (2009). Engaging music and media: Technology as a universal language. Research & Issues in Music Education, 7(1), 1–9. Retrieved from http://www.stthomas.edu/rimeonline/vol7/carruthers.htm


This week I read “Engaging Music and Media: Technology as a Universal Language” (Carruthers, 2009). The article is about the roles of music and technology in education and how they might play a role together. The article doesn’t offer new research, but it does synthesize others’ research.

The first discussion is about the roles of music within education and how they might affect each other. Carruthers states that music often plays a secondary role in education: we don’t teach music as part of our curriculum because music is good in and of itself; we have music within our curriculum because it supports something else. As a music teacher, I often find myself saying “This directly supports you” to other content teachers. You don’t often hear a math teacher justifying why the kids need to learn math. There is an array of reasons why music is valuable in its own right; it doesn’t need to be supporting anything else.

After reading the article, I recognized that I had used the same type of reasoning as the supporters of Flores v. Arizona, as discussed in “Keeping up the Good Fight: The Said and Unsaid in Flores v. Arizona” (Thomas, Risri Aletheiani, Carlson, & Ewbank, 2014). The supporters had many reasons why the ELL funding in Arizona should be awarded to the schools; the findings, however, showed the reasons from the supporting side fell under a ‘you should support this because you’ll get this out of it’ mentality. With that being said, great teachers integrate all areas into their content. Students need to see how everything is interrelated. Oftentimes children are taught in compartments: math in math class, science in science class, etc., but our lives do not work this way.

Music has what Carruthers calls a division of labor: the composer, the performer, and the listener each have their separate job, and people rarely cross over. With the addition of technology, this isn’t necessarily the case. My own children compose music with special applications that do not require them to read music; anyone with the right software can do all three jobs. I see this as one of the biggest impacts technology has had on music. In the past, if one didn’t read music, composing something to share with others was rather difficult. Now, with software and media sharing, it becomes relatively easy.

In order to look at the various ways technology impacts us, Carruthers defines technology as anything “from the wheel” to “a personal computer.” This immediately caught me off guard. Defining technology had never occurred to me; I simply thought of technology as laptops, computers, and electronic devices, along with any software to go with them. After reading how Carruthers approaches technology, I may have to be more specific about what I view as technology within my research. The ways technology can have an impact, according to Carruthers, can be broken into four parts: technology that (1) makes things easier to do than before, (2) does things better than before, (3) allows us to do things we couldn’t do before, and (4) makes us think differently. Again, I had to consider the future of my research: at what level of impact am I going to be assessing? For instance, making things easier to do than before, such as multiplication practice, may not have as big an impact on student achievement as something that makes the student think in a different way.

The article was more thought provoking than I expected it to be.  Carruthers was clear from the beginning that he was reviewing previous research and that the paper would not answer many of the questions. The purpose of the paper is to create discussion, and it proved to do just that. It caused me to look at the research I’m heading into and the basics of how I will approach it; I am dealing with so many more layers than I had previously thought. Carruthers poses, “It is incumbent upon us as educators not only to evaluate the uses of technology – to extol its virtues and denounce its failings – but also to explore deeply how it encourages or causes us to think differently about the world around us.” In my research, I will have to decide whether I’m going to look at the level of technology that creates the deepest learning or not take it into consideration at all. Do I continue looking at the impact of music with technology on achievement, or solely at the impact of technology? If I research the impact of music and technology together, does the depth of learning within the music matter in the research? For instance, composing requires a deeper depth of knowledge than identifying notes; how does one take this into consideration? If my research does show an impact on student achievement, is it necessary or valuable to determine whether the act of utilizing technology is creating more engagement or whether the technology is deepening the students’ understanding? Either one could impact student achievement; is there a way to tell which it is? How do I approach the research in a manner that will include my community and their views? In fact, can I even account for the effects technology, and especially music, has on the community?

Carruthers said it well: “Many of the benefits of music study, some of which are imbedded in the art form itself, are intended by teachers and curriculum planners while others are not.” I suspect that this is the case with technology as well. Unfortunately, it adds another question for me: how do I consider this in my research?

Overall, the article was well written and professional. It was organized in a logical way, and Carruthers was very clear that he was presenting theories and that, as a literature review, the paper would create more questions than could be answered in one piece. His ideas are insightful and have definitely given me pause. I have a lot to consider as I dive deeper into my research.


Thomas, M. H., Risri Aletheiani, D., Carlson, D. L., & Ewbank, A. D. (2014). “Keeping up the good fight”: The said and unsaid in Flores v. Arizona. Policy Futures in Education, 12(2), 242–261. doi:10.2304/pfie.2014.12.2.242

Barriers to Introducing System Dynamics in K-12 STEM Curriculum

Skaza, H., Crippen, K. J., & Carroll, K. R. (2013). Teachers’ barriers to introducing system dynamics in K-12 STEM curriculum. System Dynamics Review, 29(3), 157-169.

Science, technology, engineering, and math (STEM) education is required to prepare students for fast-paced 21st-century careers, but best STEM teaching practices have yet to be fully developed. One technique currently being studied is system dynamics modeling, which “provides a valuable means for helping students think about complex problems” (Skaza, Crippen, & Carroll, 2013, p. 158). System dynamics offers a means of thinking and modeling that allows students to begin making connections between variables. If system dynamics modeling gives students greater access to STEM curriculum, I believe we need to discover the barriers to program implementation and actively begin breaking them down.

Skaza, Crippen, and Carroll analyzed current barriers to introducing system dynamics into K-12 STEM curriculum in their 2013 article. The authors address three research questions by means of a mixed-method approach. The questions are as follows:

  1. How are teachers currently using system dynamics simulations and stock and flow models that were already a part of their adopted curriculum?
  2. For teachers who are not using the simulations, what barriers persist to their classroom implementation?
  3. What is the level of teachers’ understanding of the system dynamics stock and flow modeling language and how might that be influencing the classroom use of system dynamics tools? (p. 158)

The organization of the article is clear, allowing the reader to easily progress through the study of system dynamics. Structurally, the article begins with an introduction, which includes the main research questions addressed in the remaining sections. After the introduction there is a review of related literature, allowing the reader to get a better view of previous findings by other scholars; the literature review contains relevant topics that allow for a broader examination of the research topic. Next, the authors thoroughly cover the context for the investigation, the methods used, the results, the discussion section, and final remarks and future research. As a whole, the organization of the article is all-inclusive and very coherent.

Skaza et al. (2013) addressed a concept that has previously been studied by other educational researchers. According to the authors, a “larger base of empirical research is needed” (p. 159) in regard to system dynamics in order to begin fully utilizing it in most K-12 classrooms. Overall, the study found that only 2.8% of the educators completed the curriculum, which is equivalent to two participants. After this discovery, the researchers analyzed the major barriers, such as lack of access to the technology, low teacher efficacy, and insufficient professional development support. The outcomes of the study will allow future research to address the major barriers discussed.

Within the article, Skaza et al. (2013) analyze systems thinking and system dynamics modeling as means of giving improved access to STEM curriculum, particularly to minority students. Systems thinking and system dynamics modeling “is consistent with recent calls for educational reform that focuses on active learning strategies, teaching for transfer to new problems, as well as intending for creativity and innovation as key outcomes” (Skaza et al., 2013, p. 157). Thus, this study is relevant to the United States’ overall push towards effective STEM education.

In regard to theoretical frameworks, “the theoretical framework for this revision includes system and system models as crosscutting concepts and as a component of Scientific and Engineering Practice” (Skaza et al., 2013, p. 157). As a whole, the authors stay true to this framework, making the article cohesive and appropriate.

Within the methods section, the authors discuss the mixed-method approach to data collection used in the quest to answer the three research questions. The “research method involved a single-group, mixed-method (quantitative-qualitative) design consisting of two phases: a survey followed by a focus group” (Skaza et al., 2013, p. 160). Participants were selected from 40 high schools and consisted of 160 teachers, while the focus group was made up of four participants. The survey consisted of 17 questions containing both qualitative and quantitative measures. The focus group contributed valuable support for the survey findings, which could have been made stronger by increasing the number of focus group participants.

The researchers analyzed the surveys by looking at both qualitative and quantitative data, while using the focus group information to add depth to the survey findings. If another researcher wanted to replicate the analysis piece of this research, there is adequate information to do so: the analysis section fully describes the steps taken by the researchers and allows for replication due to the specifics of how data were analyzed in both the surveys and the focus group. Overall, the researchers determined the number of participants who actually implemented the system dynamics concept in their classrooms, and when teachers failed to implement it, the researchers worked to uncover the barriers to implementation.

As far as the findings are concerned, they are based on a thorough understanding of the data. By this I mean that the researchers analyzed the survey information, gained knowledge, and then used the focus group to either confirm or deny these findings. Also, there were multiple questions within each category on the survey, helping to gain more accurate information. For example, the survey asked teachers to provide proof of understanding the concepts by means of essay answers; so, if a teacher said that unavailable technology was their barrier yet was unable to describe a science concept, the researchers could conclude that teacher efficacy is also an issue. The researchers discovered that the major barrier to using system modeling in the classroom is technology, yet the focus group and survey essay answers told a different story of potential teacher-efficacy problems. Thus, I believe that the barriers are accurately captured, which can in turn lead to potential new research or action.

As an educator, I have experienced the push towards technology use in the classroom. I believe that this push is necessary and important for the growth of our students and for bringing them into the 21st century. Our goal is to help students use technology to problem solve and work towards higher understandings, but what happens when teachers don’t fully understand how to integrate technology into the classroom? Many educators I have encountered feel uneasy about technology and thus do not make an effort to use it to enhance the learning environment. With this being said, our first move towards incorporating system dynamics modeling into the classroom, in order to enhance STEM understandings, is ensuring that all of our educators and future educators are technologically competent.


Inequality in Education

Doyle, J. L. (2014). Cultural relevance in urban music education: a synthesis of the literature. Update: Applications of Research in Music Education, 32(2), 44–51. doi:10.1177/8755123314521037

During an observation I did last week, I was struck by the reality that, in comparison, I am a teacher at a privileged school. When I examined the differences between the campus I was visiting and the campus I work at, I started shifting the direction of the research I wanted to look into. Originally, what caught me off guard was the realization that the students in the classroom I was visiting had zero technology to use; nothing, unless we are going to consider a mechanical pencil technology. However, the more I learned about the school I was in, the more dismayed I became. (Truth be told, I was all sorts of fired up!)

The school I was observing has made some cuts to its staff, which is no different from any other school in Arizona, but this district made major cuts to its special areas team. The students never have art, and they had a whopping two hours of music the entire year; I don’t imagine they made a year’s growth in those two hours. They did have, in alternating weeks, PE and technology class. Keep in mind, they have no technology in their classroom. There is a band program, but the students are only pulled out once a week.

In comparison, the school I work at has five Chromebooks for each grade level, in addition to the laptops we have in every classroom and an iPad cart that is shared. That sounds more amazing than it is; for example, I had eight laptops in my classroom, but only three of them worked. The major difference, technology-wise, is that our students are not only allowed but encouraged to bring their electronic devices to school. In addition, we have a full special area team, which includes art, music, college and career readiness, and two PE teachers. The students also have the option of joining band and/or choir.

The ramifications of not having music, art, or technology at a school are mind-boggling. Think about the impact art has on the engineering and design of items like cell phones and tablets; I guarantee a lot of thought is put into the visual effect of everyday items. The higher-level thinking involved in each of these areas allows our students to problem solve and be creative in a way that is only possible in the arts and technology. The students I had the opportunity to visit are going through their education without the same resources other students have.

I started researching anything that had to do with music, technology, and achievement. I came across an article by Jennifer Lee Doyle entitled “Cultural Relevance in Urban Music Education: A Synthesis of the Literature.” Basically, the article looks at students of low socioeconomic status (SES) and how their social and academic outcomes are affected by the arts. I have failed to mention thus far that the school I visited is 94% free and reduced lunch, meaning the students who attend come from low socioeconomic status. The school I work at would be considered mid-level socioeconomic status; we have very few students on free and reduced lunch. The article says that “students who participate in the arts tend to have better academic and social outcomes than do students who do not participate in the arts.” Doyle goes on to say that low-SES students in the arts have increased civic engagement, better achievement test scores, school grades, graduation rates, and college enrollment rates when compared to low-SES students who are not in the arts. One point I found interesting was that she specifically pointed out that students “with a history of intensive arts experiences” score closer to, and sometimes exceed, the level shown by the general population. This would definitely support my theory that a school without music or art will impact its students in a negative way.

In the next section of her article, she says there are indications that students of color, students of low SES, and students with low academic achievement are underrepresented in secondary music programs in the United States. From experience, I can tell you that there are many reasons for this, and from what I have witnessed, it is totally accurate. In high schools across Arizona, students who participate in band typically pay $100 or more for supplies, uniforms, etc.; I have heard numbers as high as $500. This does not include the cost of any trips the group takes or their instrument. The nonexistent funding for music programs in high school means that those costs fall to the families, and students from a low SES struggle with this. Fundraising helps, but there are only so many scented candles one can sell. As for the students who struggle academically, from what I have witnessed, they are underrepresented because there is no more room in their schedule due to the remedial courses they are required to take; the same holds true for ELL students. A study that Doyle looked at said that 65.7% of music students in secondary school are Caucasian and 90.4% of them are native English speakers, yet in the school the study looked at, only 50% of the students were Caucasian. Obviously, this does not match the composition of the music program. A second study she looked at found a strong association between SES and music participation: “Only 17% of music students were from the lowest SES quartile.” It is baffling that areas of education today are still, essentially, segregating our students.

Doyle suggests that in order to raise participation in music programs, specifically in junior high and high school, teachers need to create more culturally relevant courses. She gives several examples of how to implement this: integrating multicultural music styles, offering nontraditional ensembles, teaching courses that relate directly to local student interests, and being more present in the lower-level schools. Making classes more culturally relevant just means being a good teacher. It is all about making connections, and in middle school and high school, if a student does not feel connected, they will not stay in the program. However, music programs are unique in this way: because of the amount of time the ensembles spend together, the community they create is tighter than in a regular classroom. My theory is that students in general have a higher rate of success in school when they are in such a community.

Overall, the article was easy to read and thought provoking for me. It definitely has me looking at different ways to research. However, because it was a literature review and not quantitative research, I felt it didn't dive very deep into each issue discussed, especially the data. Therefore, I don't feel it is, or is going to be, very impactful. If anything, I would have liked to see more information and statistics. However, she cited forty-two different references, which absolutely gives me a place to dive deeper.

Even though Doyle's article discusses secondary schools, there is still a connection to elementary: elementary school prepares students for secondary school. If students receive an education that forces them to work at a higher, more rigorous level and requires them to be creative, imagine how much farther they could go in middle school and high school. In the meantime, I will be looking at how the arts and technology impact student achievement and how that relates to students of color, low SES students, and English language learners.

Technology… It's All About the Teacher

Howley, A., Wood, L., & Hough, B. (2011). Rural elementary school teachers' technology integration. Journal of Research in Rural Education, 26, 1–13. Retrieved from http://www.mendeley.com/catalog/rural-elementary-school-teachers-technology-integration-3/

Howley, Wood, and Hough (2011) surveyed the technology habits of teachers in the state of Ohio. They wanted to address technology integration in rural areas and were specifically looking to evaluate three categories. First, they wanted to learn whether teacher attitude had an impact on technology integration. Second, they looked at whether students' ability to use technology made a difference. Finally, they wanted to determine how teacher preparedness factored into the equation.

The authors examined literature from all three areas they were evaluating. Their findings concluded that most schools do have access to basic technology, although the broadband connections are often unreliable. Previous research showed that teacher attitude often drove the use of whatever technology was available. They also found that in some rural schools, culture played a role because some adults felt that technology use interferes with rural values and ways of life (Howley, Wood, & Hough, 2011, p. 4). They also provided examples of rural schools that felt the opposite and did want their students using technology; when that was the case, the issue tended to center on either obtaining technology or making use of the technology they had. The article also had a section dedicated to related areas where the literature is lacking. Based on their research, little has been done in regard to elementary schools; they found more research from middle school upward, hence their desire to focus on third-grade teachers.

For this study, the Ohio Department of Education was contacted for a list of third-grade teachers. Additional details regarding the responding teachers (average age, gender, etc.) were provided in the article. Specials teachers, such as art, music, and physical education, were eliminated from the list. A 56-question, largely closed-ended survey was developed specifically for this research and mailed to these teachers. Ground mail was deliberately chosen over email in order to remove a technological component from the survey itself. Letters ensuring teachers' anonymity and stamped, self-addressed envelopes were included to increase the chances of participation. One thousand teacher names were randomly generated, and 514 usable responses were returned; of those, 157 came from rural teachers and 357 from non-rural teachers. The mailings occurred during a three-week period, and care was taken to ensure that the survey was not sent during a time such as a high-stakes testing week.

The goal of the study was to tease out any potential differences, if they existed, between rural and non-rural teachers with regard to their use of, and attitudes toward, technology. Since the survey could yield many different sources of variance, the research team used one-way analysis of variance (ANOVA) as well as analysis of covariance (ANCOVA) to compare mean responses to scales constructed from clusters of related items (Howley, Wood, & Hough, 2011, p. 6). These results showed one significant difference, namely in the attitudes of teachers toward technology integration (p. 6). Based on the results of this survey, teacher attitude toward technology appears to be the most influential determiner of usage. The next two factors the data showed to have some impact (in decreasing order) were the amount of time teachers needed in order to prepare to use the technology and the teachers' sense that they lacked technology in their schools.
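To make the comparison concrete, here is a minimal sketch of the kind of one-way ANOVA the authors describe. The scale scores below are entirely hypothetical, not the study's data; the sketch only shows how mean responses from two groups can be tested for a significant difference.

```python
# Minimal one-way ANOVA sketch with hypothetical data (not the study's).
from scipy import stats

# Hypothetical attitude-scale scores (e.g., averages of related Likert items)
# for rural and non-rural teachers.
rural = [3.2, 2.8, 4.0, 3.5, 2.9, 3.8, 3.1]
non_rural = [2.1, 2.5, 1.9, 2.8, 2.2, 2.6, 2.4]

# One-way ANOVA tests whether the group means differ more than chance allows.
f_stat, p_value = stats.f_oneway(rural, non_rural)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value (conventionally below .05) would indicate a significant
# difference between the groups, analogous to the one significant effect
# the authors report for teacher attitude.
```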

The conclusion the research leads to is that teacher attitude toward the use of technology within the classroom is the driving force.

This article was very comprehensive and definitely appeared replicable. The literature review went to great lengths to provide thorough examples to back up its findings. Only one cited article was from 1999; the rest were from 2000 or newer, with many citations coming from within the past five years. The authors explained in detail how they obtained their research sample and provided an appendix with the technology questions they asked.

One critique of the article is that it would have been helpful to have a breakdown of the results for the specific questions. Notably, the last question asks teachers to detail how they use technology within the classroom. It would be interesting to view those results; I believe they would give insight into each teacher's practice and comfort level.

The article was largely well written, with only one grammatical or punctuation error that I noticed (a missing period). However, the headings and subheadings were in the same font. Although the headings were centered and the subheadings were left-justified, the subheadings in some sections were so long that they created some confusion for the reader. It would have been helpful if the headings were a little larger or bolder, making the article easier to read and follow.

After reading this research, I would be very interested in seeing the responses to the question that asked teachers to delineate how they use technology within their classrooms. If teacher attitude has been determined to be such a deciding factor, I would like to know how the teachers who are using the technology they have are putting it to use. I would then like to create a professional development study to determine the best way to move those teachers forward and maximize the benefits.

I chose this article to read because I want to explore integrating technology alongside critical thinking and solid pedagogy in order to create an optimal learning experience for my students. From this article I was looking for insight on technology and what I might learn with regard to classroom implementation. I did come up with a few ideas, but after reflecting on the results of the survey sent to 1,000 third-grade classroom teachers in Ohio, it all comes down to the attitude of the classroom teacher. At the end of the day, it all seems to boil down to that. And what a valuable lesson that is. In the classes I'm taking, the value of teacher self-reflection has been discussed, and this is a perfect example: the teachers in this study might be shocked to learn that THEY are the ones standing in the way of their students having access to technology, not the other way around. The power teachers have is enormous; imagine how much could be accomplished within classrooms if they harnessed that power every day and for every student.

Research Topic Post – Online Learning Readiness Assessments

Dray, B. J., Lowenthal, P. K., Miskiewicz, M. J., Ruiz-Primo, M., & Marczynski, K. (2011). Developing an instrument to assess student readiness for online learning: A validation study. Distance Education, 32(1), 29–47.

With the dramatic increase in online learning offerings within the K-20 environment, researchers have begun to examine not only the validity of this mode of education but also students' preparedness for web-based learning and their success in the online learning environment. Using Kerr, Rynearson, and Kerr's (2006) framework for assessing online learning readiness, the authors found that students felt they were indeed ready for online learning, but that their self-assessment was based on their own experiences with technology, leaving room for additional variables to be examined.

The authors of this article were familiar with previously conducted surveys presenting information on student readiness for web-based learning; however, the results of those surveys provided limited information, and translating that information into tangible data was challenging, as the researchers stated. Therefore, the authors conducted a study to develop a more detailed tool for determining student readiness for online learning through a three-phase study: a survey development phase, in which faculty and experts reviewed questions for clarity; an item analysis phase, in which the content of the tool and the research questions were refined through focus groups and interviews; and finally, a survey validation phase, in which questions from previous surveys were combined with new questions to cover topics relating to student demographics, learner characteristics, and technological ability (Dray, Lowenthal, Miskiewicz, Ruiz-Primo, & Marczynski, 2011).
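The article describes an item analysis phase but does not spell out the statistics behind it. Purely as an illustration, here is a minimal sketch of one reliability check commonly used when validating survey instruments of this kind, Cronbach's alpha; the function and the Likert responses below are my own example, not the authors' instrument or data.

```python
# Cronbach's alpha sketch with invented data (not from the Dray et al. study).
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (respondents x items) matrix of Likert scores."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                          # number of items in the scale
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of respondents' totals
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical responses: 6 students x 4 readiness items on a 1-5 scale.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 4, 5],
    [2, 3, 3, 2],
    [4, 4, 5, 4],
    [3, 2, 3, 3],
])
print(f"alpha = {cronbach_alpha(responses):.2f}")
# Values around 0.7 or higher are conventionally read as acceptable
# internal consistency for a scale.
```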

The participants in the study were 26 graduate students pursuing a degree in educational computing. The results showed that many of the students scored as “ready” for online learning, yet the implications of the study were limited by the lack of sub-groups based on age, sex, or socioeconomic status. This matters because results may show that certain ages, genders, or socioeconomic factors play a role in determining whether a student is prepared for web-based learning. For example, a student under the age of thirty may rank as prepared for online learning because it is reasonable to assume that students in this age group use online communication daily through social networking. The researchers also determined that the term “readiness” needed further clarification for the study's purposes, as ambiguities were discovered as to whether readiness was determined by one's technical ability or by one's use of, and engagement with, web-based tools, equipment, and material (Dray, Lowenthal, Miskiewicz, Ruiz-Primo, & Marczynski, 2011).

This article proves beneficial to my study in that the literature review coherently presented previous research on the topic, along with critiques of the strengths and weaknesses of each article reviewed. The authors found that the literature gave information on general learner characteristics, interpersonal communication abilities, and technological skills (word processing, using spreadsheets, use of search engines, etc.). Noticeably absent, however, was information on students' work schedules, access to technology, and the expectations for being successful in an online course. I found it interesting that the authors identified an unexplored angle of questioning based on self-concept, esteem, and efficacy, which could lead to quite different survey results and prove an excellent contribution to the field. The authors likened their study to that of Kerr, Rynearson, and Kerr (2006), whose article, “Student Characteristics for Online Learning Success,” also discussed student esteem, efficacy, and self-confidence as a means of determining success in online learning.

The authors make a stimulating argument, showing that readiness is a complex term that must be defined as more than general characteristics. I found the first phase of their survey to carry the strongest argument: skills regarded as part of traditional learning, such as writing and expression, time management, and responsibility, can be easily carried over into online learning. Additionally, the authors were diligent enough to alter the questionnaires where inconsistencies were present. For example, during the first phase of the study, they found that students were answering questions based on their personal experience with web-based tools rather than within the educational context, as expected. The authors therefore revised the prompts to require students to answer the questions from their educational experience.

The article presented the questionnaire through its various stages of reconstruction, showing how questions were revised during each phase of the study. However, the article lacked a clear description of how the surveys were administered. Was the survey a paper form on which students entered answers longhand? Was it administered online through a course management system, or was it collected in a focus-group setting in a qualitative manner? These questions present areas of concern, as the setting in which a survey is administered could produce differing data. While the authors presented information on the ages, ethnicities, and majors of the participants, the study failed to report whether participants were taking an online course for the first time, and what distance learning model was used for the course in which the survey was given (completely online, hybrid, etc.). Additional studies could compare undergraduate and graduate students' preparedness in online learning environments to see whether the results vary between the two populations. Another area of exploration could center on how participants' level of social media experience impacts online learning success. Finally, the study could be extended to present data on minority student success in online learning environments, including whether one's socioeconomic status has an impact on online learning. Such further study would provide an effective analysis for researchers and teacher-educators examining underrepresented populations.

This article can be compared to Lau and Shaikh's (2012) article, “The Impacts of Personal Qualities on Online Learning Readiness at Curtin Sarawak Malaysia,” in which the authors developed a questionnaire to gather information on students' personality characteristics as a diagnostic tool for both faculty and instructional designers (Lau & Shaikh, 2012). Where Lau and Shaikh's study shows a higher level of evidence is in scale: they surveyed over 350 participants, compared to the 26 graduate students surveyed by Dray, Lowenthal, Miskiewicz, Ruiz-Primo, and Marczynski. The findings of Lau and Shaikh's (2012) study were that students were less satisfied with online learning than with traditional learning environments and felt less prepared for the objectives of the course. Both articles support my research in different but equally necessary ways: Lau and Shaikh's (2012) article presents compelling statistical data on online learning readiness, while Dray et al.'s (2011) article provides information on how to compose efficient survey questionnaires.

References

Dray, B. J., Lowenthal, P. K., Miskiewicz, M. J., Ruiz-Primo, M., & Marczynski, K. (2011). Developing an instrument to assess student readiness for online learning: A validation study. Distance Education, 32(1), 29–47. Retrieved from http://web.b.ebscohost.com.ezproxy1.lib.asu.edu/ehost/pdfviewer/pdfviewer?sid=2ae58e61-960f-4907-be25-93dcd5ba5c38%40sessionmgr114&vid=2&hid=120

Kerr, M., Rynearson, K., & Kerr, M. (2006). Student characteristics for online learning success. The Internet and Higher Education, 9(2), 91–105.

Lau, C. Y., & Shaikh, J. M. (2012, July). The impacts of personal qualities on online learning readiness at Curtin Sarawak Malaysia. Educational Research and Reviews, 7(20), 430–444.