“Words, words, words, I’m so sick of words!” (Eliza Doolittle in “My Fair Lady”)

 

As I dive deeper into my first doctoral class, I hear Eliza Doolittle in the musical “My Fair Lady” singing the above.  All these made-up words that don’t speak plainly – “processual” (Lave, 2012, p. 158), “historicism” (Lave, 2012, p. 159), and “positionality.”  It almost seems as though the writers are working to keep readers out rather than draw us in.  I love to read, yet this week’s readings have been torturous.  Pivovarova’s paper (2014) is a great original study about the effects of tracking high- and low-achieving students, yet reading it left me confused.  Rosaldo’s (1994) mini-ethnographic stories entertained, but some of his sentences are convoluted and absurd, e.g., “This chapter uses a series of examples to explore the consequences of thus understanding the factors that condition social analysis” (p. 169).  Where’s the subject?  Where’s the verb?  I am confused.  If researchers want to make their writing and thoughts accessible to an average person, they need to write for the average person.  I’m so sick of words.

As McCarty writes in her editorial introduction to the special issue of Anthropology & Education Quarterly (2005), “…the shift toward English represents a shift away from the Indigenous” (p. 3).  As researchers focus on English even when working with native peoples, history, culture, and connection are lost.  In my current state of frustration, I could rewrite that sentence as “the shift toward academic writing represents a shift away from coherent language.”  Academicians are creating their own words and a language that is inaccessible to those who aren’t in the circle.  It feels oppressive to me.  This must be how some community college students feel when they hear instructors mention “Blackboard” and “MEID” and “SIS.”  It’s a new vocabulary as well as new, never-performed-before actions – e.g., “blog” and “submit electronically.”

“Show Me” is the name of the song referred to in the title of this blog.  Youth Participatory Action Research seems geared to do that.  Researchers “show” each other what is important and what they need/want to know.  Adult researchers show the youth how to use research tools while the youth show the adults what’s important to study and how to relate.  That is dialogue.  McCarty writes that she is looking for dialogue in the theme issue of Anthropology & Education Quarterly (2005).  However, if the writers all have doctoral degrees, and if people with doctoral degrees make up less than 1% of the population (as Sue Henderson advised us last week), does that 1% isolate itself with a language not understood by most of the rest of the world?  I can see researchers wanting (and needing) to develop their own language; yet it seems antithetical to the idea of social justice when this language cannot be understood by 99% of the population.

Perhaps what I am experiencing is akin to the tracking that Pivovarova (2014) writes about.  Those of us who are “low-achievers” are put in class with the “high-achievers.”  We don’t bring down the high achievers too much (depending on the distribution), but we low-achievers can be brought up.  I feel I am benefiting from all this wordy reading and writing, but I am wary of becoming one of the oppressors.  I want to relate with my colleagues and students in an authentic way.  Parker Palmer’s classic book, The Courage to Teach (1998), gently encourages the reader to be authentic: to teach in dialogue with students.  Though I believe many of the authors we read are hoping to establish dialogue, with such convoluted writing, dialogue is a distant dream for this tyro doctoral student.  Words, words, words…(when I can relate) I love them.

 

References
Lave, J. (2012). Changing Practice. Mind, Culture, and Activity, 19(2), 156–171.

McCarty, T. L. (2005). Indigenous Epistemologies and Education — Self-Determination, Anthropology, and Human Rights. Anthropology & Education Quarterly, 36(1), 1–7.

Palmer, P. J. (1998). The courage to teach: Exploring the inner landscape of a teacher’s life. San Francisco, CA: Jossey-Bass.

Pivovarova, M. (2014). Should we track or should we mix them? Tempe, AZ: Mary Lou Fulton Teachers College, Arizona State University.

Rosaldo, R. (1994). Culture and Truth: The remaking of social analysis. Boston, MA: Beacon Press.


Tracking v. Mixing; Seeking Humanization

Classroom dynamics are shaped by many things, including the attributes, dispositions, knowledge, and skills of the teacher; students’ personalities and backgrounds; and, among other things, students’ academic abilities. Margarita Pivovarova (2014) seeks to isolate and measure the effect of students’ academic abilities, as measured by a low-stakes standardized test, on the performance of other students on subsequent exams, in effect attempting to address the title question of whether students should be homogeneously tracked or heterogeneously mixed according to academic ability. As I read this article, I took issue with several things, each of which I want to address in turn.

Relatively early in the article, Pivovarova (2014) begins to use certain words to describe students’ academic performance that I found rather unsettling and dehumanizing; by reducing students to statistics on a page and using descriptors such as good, bad, average, and marginal, she fails to recognize that the data she is interpreting represent individual humans who are much more complex than simple adjectives. In an attempt to distance herself from the inescapable connotations of such words, she includes the following in her notes section, nearly three-fourths of the way through the article:

In order to make the interpretation easier, instead of labeling students by the level of achievement as 1 to 4, I will call students at the lowest level of achievement “bad” students without attaching the actual meaning of the word “bad”; students at the highest level – “good” students, and students in the middle of the achievement distribution – “average”. Among “average” students, I will distinguish between “marginal” (those whose achievement is below provincial standards, or level 2) and just “average” (level 3). (Pivovarova, 2014, p. 29)

While I recognize that it can often be easier to simplify data for ease of writing and communication with the reader, I found the inclusion and repeated use of these words to be an oversimplified and indelible choice. The visceral reaction I had when reading them underscores the importance of being very conscious and intentional in my own word choices when making qualitative judgments and assessments about data points, and of always remembering what the data I describe actually represent, which, in this case, are students.

When I began to reflect on the situative context behind Pivovarova’s (2014) work to better understand her dispositions and analyses, it became clear that her approach was quite distanced from any actual interaction with the students themselves. With such a large sample size (n = 228,947 students), it seems likely that this information was reported to her by, or obtained through, an institution involved with Ontario’s standardized testing, as opposed to being collected by her. While there were likely human-to-human interactions in the collection and analysis of these data, it seems as though one could reproduce such a study without ever seeing an actual person represented by the data. I see this as a major weakness of her methods; by never interacting with the ‘subjects’ of a study, it becomes quite easy to make qualitative assessments that fail to acknowledge the humanity of the data points. As I read more about the author and her background, I found that her focus is in the field of economics, a field that can carry a cold, sterile distance, which was the feeling conveyed through this article.

Despite my above sentiments, when I began to reflect on the motivation behind Pivovarova’s work, I saw the most noble of intentions behind it. The article intends to dispel current thinking that the effects peers have on an individual student’s learning are linear. This is done in the name of improving school effectiveness and efficiency, with the ultimate goal of improving student achievement.  If current practice dictates that students be grouped into classrooms in a certain way (homogeneously by achievement, for example), and new information suggests that more students would benefit to a greater extent if they were grouped using a different method, then the paper fills a vital need and could improve outcomes for a significant number of students, something of tremendous value. An example of this, and another particular strength of Pivovarova’s (2014) article, is the manner in which she debunks an oft-used excuse of educators: the idea that a “bad apple” student can ruin the learning environment for all other students. Her data suggest that an increased number of low-performing students does not, on the whole, negatively impact the outcomes of high-achieving students (Pivovarova, 2014). To use data to soundly reject such a dehumanizing notion about students is one of my most valuable takeaways from this research.

Through reading this article, I have gained valuable insight into how I will implement my innovation in my own practice. I hope that, by collecting data through participatory action research and utilizing methods that actively seek to humanize my participants, my innovation will not reduce the lives and abilities of those involved to mathematical formulas, algorithms, and simple numbers on paper, but rather that my methods will always refer to the data in ways that respectfully acknowledge the various backgrounds and stories of those involved in the study.

 

Works Cited:

Pivovarova, M. (2014). Should we track or should we mix them? Tempe, AZ: Mary Lou Fulton Teachers College, Arizona State University.

“Damned to be concrete”: Considering productive uncertainty in data visualization

Marx, V. (2013). “Data visualization: Ambiguity as a fellow traveler.” Nature Methods, 10(7), 613-615.

In their musings on the importance of uncertainty with regards to social networks and educational attainment, Jordan and McDaniel (in press) bring to the forefront the interesting concept of “productive uncertainty” (p. 5).  This idea allows that while uncertainty is not always pleasant—and while learners will often seek to minimize it—the experience is not without value.  Marx (2013), while discussing the complexities and shortcomings common among data visualizations, expands upon this concept: uncertainty, particularly within a statistical realm, can illuminate new characteristics of the data or new methodologies that address shortcomings in collection or analysis.  However, data visualizations themselves can obscure or outright hide this level of detail.  So how do we visualize data in a way that is both simple and transparent?

“[With visuals], we are damned to be concrete” (Marx, 2013, p. 613).

Marx (2013), using examples from genomic and biomedical research, makes an interesting observation: in discussing scientific results, researchers often feel compelled to gloss over, if not exactly obscure, uncertainty in their data.  These uncertainties can arise from inconsistent collection, imperfect aggregation, or even unexpected results.  However, these “unloved fellow travelers of science” (p. 613) cannot exist visually in the type of “grey area” treatment that Marx contends they often receive in text.  When faced with creating an honest visualization, then, researchers must decide to what extent they will account for study uncertainty.  Marx, in explaining the potential impacts of this decision, advocates that researchers strongly consider two points: first, that uncertainty may have implications for the data itself; and second, that a transparent consideration of uncertainty strongly impacts “what comes next.”

Thus, Marx (2013) is explicitly pushing productivity over negativity when reflecting upon uncertainty in data or the wider study; however, she is also acknowledging that even within the specific context of biomedical research, the pull to minimize uncertainty when broadly discussing results exists.

Down the rabbit hole: Analysis can create uncertainty too

One should also consider the process—largely mathematical, in this context—of moving from a raw dataset to a clean visualization.  Common steps for creating data visualizations, particularly in genomics and the biomedical sciences, include aggregating data from different sources (and thus different methods of collection) and summarizing large and complex markers into something more easily digestible.  By standardizing disparate collection methods into something more uniform, or by summarizing disparate study groups or grouped variables, researchers lose an important level of detail.  These processes themselves can obscure data, which in turn obscures uncertainty for the end audience, whose exposure to the study may lie wholly in the visualization.  Going somewhat down the rabbit hole, this can therefore create new uncertainty of its own.
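To make the point concrete, here is a minimal sketch of how a routine aggregation step can hide the very spread that signals uncertainty. The numbers are entirely made up (they are not from Marx’s examples); the point is only that pooling discards the per-source variability:

```python
import numpy as np

# Hypothetical measurements of the same quantity from three sources,
# each collected with a different method (made-up numbers).
source_a = np.array([98, 102, 100, 101])   # tight, consistent assay
source_b = np.array([80, 120, 95, 105])    # noisier collection method
source_c = np.array([60, 140])             # sparse and highly variable

# A common aggregation step: pool everything and report one summary.
pooled = np.concatenate([source_a, source_b, source_c])
print(f"pooled mean = {pooled.mean():.1f}")  # looks clean and certain

# ...but the per-source spread, which carries the uncertainty story,
# has been discarded. Reporting it alongside the mean keeps it visible.
for name, src in [("A", source_a), ("B", source_b), ("C", source_c)]:
    print(f"source {name}: mean = {src.mean():.1f}, sd = {src.std(ddof=1):.1f}")
```

The pooled mean alone tells one tidy story; the per-source standard deviations tell the reader how much that story should be trusted.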

Certainly, simplicity is important in a data visualization; however, as Marx argues, researchers also have an obligation to consider that by glossing over details of uncertainty, or by creating new sources of uncertainty through their analyses, they may lead the wider community to understand their work less, or to make unfounded assumptions about their findings.

In particular, missing data presents a complex dilemma.  Marx (2013) gives the example of a genomic sequencing experiment, seeking to map a stretch of genetic material that contains 60 million bases:

“The scientists obtain results and a statistical distribution of sequencing coverage across the genome.  Some stretches might be sequenced 100-fold, whereas other stretches have lower sequencing depths or no coverage at all…But after an alignment, scientists might find that they have aligned only 50 million of the sought-after 60 million bases…This ambiguity in large data sets due to missing data—in this example, 10 million bases—is a big challenge” (p. 614).

As opposed to data that is statistically uncertain, or uncertain by virtue of its collection methods, missing data is a true negative whose effect is difficult to truthfully express and explain.

So how do we show uncertainty visually?

Marx suggests several methods for including uncertainty visually when discussing data.  Broadly, she suggests including some representation of uncertainty within a visualization; this can be layered on top of the data visualized—for example, using color coding or varying levels of transparency to indicate more and less certain data.  A visualization can also account for uncertainty separately from the data, for example by using an additional symbol to denote certainty or its absence.  She also discusses presenting contrasting analyses of similar (or the same) data that have reached differing conclusions; taking into account their methods of analysis, this inclusion of multiple viewpoints can also round out a discussion of uncertainty.
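As a rough illustration of the “layered on top” approach, here is a minimal matplotlib sketch. The data and the certainty mapping are invented for illustration (they are not drawn from Marx’s examples); it encodes uncertainty twice, using transparency for overall certainty and explicit error bars for magnitude:

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented measurements, each with an uncertainty estimate.
x = np.arange(8)
y = np.array([3.1, 4.0, 2.6, 5.2, 4.8, 3.9, 6.1, 5.5])
err = np.array([0.2, 0.3, 1.1, 0.4, 0.9, 0.3, 1.5, 0.6])

# Arbitrary mapping: larger error -> lower certainty -> fainter bar.
certainty = 1.0 / (1.0 + err)

fig, ax = plt.subplots()
for xi, yi, ei, ci in zip(x, y, err, certainty):
    # Layered on the data itself: transparency encodes certainty...
    ax.bar(xi, yi, color="steelblue", alpha=ci)
    # ...while an explicit error bar shows the magnitude of uncertainty.
    ax.errorbar(xi, yi, yerr=ei, fmt="none", ecolor="black", capsize=3)

ax.set_xlabel("measurement")
ax.set_ylabel("value")
ax.set_title("Transparency and error bars as uncertainty encodings")
plt.show()
```

The redundancy is deliberate: a reader skimming the chart registers the faint bars as “less trustworthy” even before reading the error bars.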

In addition to understanding how to represent uncertainty visually, however, one should also consider how and when (during a study or study analysis) one should tabulate uncertainty.  One platform looking to incorporate uncertainty holistically into data visualization is Refinery.  In particular, Marx notes that this system seeks to find “ways to highlight for scientists what might be missing in their data and their analysis steps” (p. 614), addressing uncertainty situated in both data and analysis.  As shown below, this system considers uncertainty at all steps throughout the data analysis, rather than only at the end, giving a more rounded picture of how uncertainty has influenced the study at all levels.

“The team developing the visualization platform Refinery (top row) is testing how to let users track uncertainty levels (orange) that arise in each data analysis step” (Marx, 2013, p. 615).

In the graphic, the blue boxes represent data at different stages of analysis.  Orange, in the top row, represents the types of uncertainty that may arise during each analytical step, culminating in the orange error bars in the bar graph at the far right, which are much more comprehensive in their calculation.  The light blue bar in the bottom row shows, theoretically, the disparity when error is only taken into account at the end of an analysis.  While the magnitude of uncertainty may not differ as significantly as shown in the graphic, researchers in the top row are better able to account for what causes or has caused error; they are better able to situate their uncertainty.
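The underlying idea, stripped of Refinery’s actual machinery, can be sketched in a few lines of Python. Everything here is hypothetical—the step names, transformations, and error figures are invented for illustration, and this is in no way Refinery’s code—but it shows why logging uncertainty per step preserves a history that an end-only estimate throws away:

```python
import numpy as np

# A toy three-step pipeline. Each step transforms the value and carries
# its own (invented) relative uncertainty.
steps = [
    ("align",     lambda v: v * 0.83, 0.04),  # e.g., alignment losses
    ("normalize", lambda v: v / 1.9,  0.02),
    ("summarize", lambda v: v * 1.1,  0.05),
]

value = 60e6     # starting quantity (e.g., bases sequenced)
rel_var = 0.0    # accumulated relative variance

for name, fn, rel_err in steps:
    value = fn(value)
    rel_var += rel_err ** 2  # combine independent errors in quadrature
    spread = value * np.sqrt(rel_var)
    print(f"{name:>9}: value = {value:,.0f}  +/- {spread:,.0f}")

# An end-only estimate would report a single error figure with no record
# of which step contributed what; the per-step log keeps that history.
```

The final error bar is the same either way; what the per-step log adds is attribution, which is exactly the “situated” uncertainty described above.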

A picture may be worth a thousand words, but do they have to tell one story?

Analyzing data is often a narrative process; however, as Marx (2013) alludes, there can be consequences to how one tells their story.  Glossing over uncertainty, in both preparing and discussing results, can be misleading, limiting both a researcher’s true understanding of their own data and the collaborations or theories that use the data as a foundation for further study.  Marx, however, is not disparaging as dishonest those researchers who fail to consider uncertainty; she is promoting the idea that considering uncertainty positive—or productive—can lead research in novel directions.

Sources

Jordan, M.E. & McDaniel, R.R. (in press). “Managing uncertainty during collaborative problem solving in elementary school teams: The role of peer influences in robotics engineering activity.” The Journal of the Learning Sciences, 1-49.

Many parts, One body

One body, Many parts

The intentional destruction of cultures and annihilation of people through imperialism, colonization, and neglect has been devastating to the world.  When one group sees themselves as greater than others and as a consequence believes they must wipe out or at least subjugate others, that faulty thinking kills spirit and life.  In preparation for liturgy this Sunday I was reading the scriptures that my husband and I were to proclaim to the assembled.  In our church it is the feast of Pentecost, a time when the Holy Spirit is believed to have inSpired followers of Jesus to take his story and message of peace and respect for the marginalized to the world.  For me, the following passage connected with the readings for our Introduction to Doctoral Studies class, TEL 706: “The body is one and has many members, but all the members, many though they are, are one body” (1 Corinthians 12:12 New American Bible).

That passage is hopeful for me.  Despite the beliefs of some that White is right, that everyone else should try to imitate the majority culture in power, and that some people are not worthy of going to college, if we focus on communities’ cultural wealth (Yosso, 2005), we may recognize one community’s parts (or wealth) as different from another’s yet necessary to make the “body” complete.

Uncertainty is necessary for learning (Piaget, in Jordan & McDaniel, in press), and managing that uncertainty is necessary in collaborative learning (Jordan & McDaniel, in press).  Research requires collaborative learning.  If researchers are anything like fifth graders working on robot projects, then by expressing uncertainty about established research methods or the causality of “racial” problems, as Zuberi and Bonilla-Silva (2008) do, they open the path for other researchers to explore those uncertainties as well and to create new methods or explanations.  Uncertainty allows a step back to “see” with fresh eyes a sharper, more focused image.  It’s like when you lose something and get frantic searching for it – so frantic that you can’t see it’s right in front of you.  Stepping away and then coming back to contentious research questions when you are calmer often brings the “lost” item into focus.

I may be naive, but I would like to believe the “lost” item is the viewpoint of indigenous people throughout the world who, through imperialism, colonization, and neglect, lost their culture and ways of knowing.  It will take more than just stepping away to reclaim culture and ways of knowing, but that’s a start.  Being open to stepping away and seeing research methods, ways of knowing, or teaching with new eyes may allow White folks and indigenous peoples to see what’s been right in front of them – a narrative, cultural capital, learning by engaging with the earth.  Because ultimately, we are all of the same body – just many parts: Africans, Maori, Anglos; one an eye, another an ear, another a foot – all parts that are needed to complete one body that functions effectively in the world.

“If the ear should say, ‘Because I am not an eye, I do not belong to the body,’ would it then no longer belong to the body?  If the body were all eye what would happen to our hearing?  If it were all ear, what would happen to our smelling?” 1 Corinthians 12:16-17

 

References

Jordan, M. E., & McDaniel, R. (in press). Managing uncertainty during collaborative problem solving in elementary school teams: The role of peer influence in robotics engineering activity. Journal of the Learning Sciences. doi: 10.1080/10508406.2014.896254

Yosso, T. J. (2005). Whose culture has capital? A critical race theory discussion of community cultural wealth. Race Ethnicity and Education, 8(1), 69–91. doi:10.1080/1361332052000341006

Zuberi, T., & Bonilla-Silva, E. (2008). White logic, white methods: Racism and methodology. New York: Rowman & Littlefield Publishers, Inc.

“Learning styles” and education in a controlled environment

Pashler, H., et al. (2009). “Learning styles: Concepts and evidence.” Psychological Science in the Public Interest, 9(3), 105-119.

People learn best in different ways.  This is a deceptively simple and interestingly familiar idea in modern educational research and curriculum design.  It’s also a concept accepted—or at least understood—by a wider general public, and it fits nicely within the twenty-first-century cultural (and technological) context in which personalization is easily available, expected, and considered best.  But regardless of this wider acceptance, is there quantitative evidence to support the theory?  Pashler et al. (2009) set out to explore the current literature, historical context, and quantitative support for what they term “learning styles.”  In what historical context did this idea germinate?  What experimental methodology would best quantitatively prove its efficacy?  Has such research been performed in the current literature, and if so, what does the evidence prove?

It’s all Jungian

The authors begin by situating the idea of categorizing people into disparate “types”; this, they explain, draws from the work of Jung, whose research in psychology and psychoanalysis led to the creation of behavioral tests—like the Myers-Briggs—that perform much the same function as learning styles.  These tests categorize people into “supposedly disparate groups” based upon a set of distinct characteristics, which in turn supposedly explain something deeper about a person.  Although the authors do not regard these tests as objectively scientific, they do note that such tests have “some eternal and deep appeal” (p. 107) with the general public.

The authors hold that this “deep appeal” partially explains what draws researchers, educators, learners, and parents to the idea of learning styles.  Beyond being a way to feel that a larger and often cumbersome system is treating a learner uniquely, the authors write that learning styles can become a scapegoat for underachievement:

“If a person or person’s child is not succeeding or excelling in school, it may be more comfortable for that person to think the educational system, not the person or the child himself or herself, is responsible” (p. 108).

Even accounting for the evidence presented, this is an unfair characterization.  In their desire to explore the objective science of learning styles, the authors have shut down consideration of a slew of external confounding factors, including socioeconomic stressors, racial background, and cultural barriers, all of which have a demonstrated influence upon classroom performance (Howard, 2003; Liou et al., 2009).  More than that, however, this passage reflects an underlying bias in the authors’ commentary—that a theory is lesser when it speaks to people emotionally.

What are learning styles really for?

However, when the authors break down the unspoken hypotheses that govern the idea of learning styles, they make an excellent point.  There are two very distinct issues at play:

  1. The idea that if an educator fails to consider the learning styles of his or her students, the instruction will be ineffective (or less effective).  The authors also consider what they term the reverse of this assumption: that “individualizing instruction to the learner’s style can allow people to achieve a better outcome” (p. 108).
  2. What the authors term the meshing hypothesis, which assumes that students are always best “matched” with instructional methods that reflect their learning style.

These represent both disparate theories of curricular design and widely differing levels of analysis; whereas the first hypothesis presented above treats the assessment of learning styles as critical to the creation of a curriculum, the meshing hypothesis treats learning styles as more of a delivery method.  Most importantly, by conflating these two ideas in exploring this theory, researchers overlook the possibility that one may prove true while the other does not.

One experimental methodology to rule them all

Before reviewing the current literature, the authors outline, in the abstract, a simple experimental methodology.  They identify this methodology as the truest way to “provide evidence” of the existence and efficacy of learning styles, and they use it as a guideline to measure the quality of data in the existing literature.  The requirements are listed below:

  1. Learners must be separated into groups reflective of their learning style; the authors suggest “putative visual learners” and “auditory learners” (p. 109).
  2. Within their groups, learners are randomly assigned one of two instructional treatments.
  3. All subjects are assessed using the same instrument.

In order to prove the quantitative efficacy of learning styles, the results of this experiment must show a “crossover interaction”: that the most effective instructional method is different for each group.  The authors note that this interaction is visible regardless of mean ability; if Group A scores wildly higher on the final assessment than Group B, a crossover interaction can still be observed.
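A toy simulation makes the design easier to see. The group labels, effect sizes, and sample sizes below are invented for illustration (they are not from the article); the sketch simply generates scores for each (group, treatment) cell and checks whether the best treatment flips across groups:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # learners per (group, treatment) cell -- an invented sample size

# Invented cell means chosen so the best treatment differs by group:
# "visual" learners do better with treatment A, "auditory" with B.
means = {("visual", "A"): 78, ("visual", "B"): 70,
         ("auditory", "A"): 68, ("auditory", "B"): 76}

# Simulate final-assessment scores for every cell of the design.
scores = {cell: rng.normal(mu, 10.0, n) for cell, mu in means.items()}

# A crossover interaction holds if the sign of (A - B) flips across groups.
diff_visual = scores[("visual", "A")].mean() - scores[("visual", "B")].mean()
diff_auditory = scores[("auditory", "A")].mean() - scores[("auditory", "B")].mean()

print(f"visual:   A - B = {diff_visual:+.1f}")
print(f"auditory: A - B = {diff_auditory:+.1f}")
print("crossover interaction:", diff_visual * diff_auditory < 0)
```

Note that shifting every “visual” score upward by a constant changes neither difference, which is the authors’ point that the interaction is visible regardless of mean ability.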

However, it seems that the authors are confounding their hypotheses in much the same way they claim the literature does; assessing the learning styles of a class and identifying which instructional tools will best speak to a particular learning style are completely different processes.  The latter suffers interference from several factors, not least of which is the assumption that all instructional methods are equally effective ways to explain the content at hand.  The authors also do not allow for these hypotheses to be proven independently true; by stating that the only acceptable outcome of this experiment is some magnitude of crossover interaction, they ignore confounding factors—the comparative strength of instructional methods relative to each other; whether all learning styles are equally effective ways to explain the content; whether students who identify either an audio or visual strength will respond to the content in the same way—and assume that either both hypotheses are true or both are false.

But what are the tools for?

In their review of the associated literature, the authors identify only one article that supports the existence of learning styles and uses their outlined experimental method.  They conclude that

“although [this study] is suggestive of an interaction of the type we have been looking for, the study has peculiar features that make us view it as providing only tenuous evidence” (p. 112).

These tenuous features include the omission from the paper of the mean scores of each group’s final assessment (learners were instead matched with a control); the measurement of learner performance by raters; and instructional treatments that vary significantly from those “more widely promoted” (p. 112).

This lack of appropriate evidence, the authors conclude, demonstrates that the theory of learning styles is untested at best and nonexistent at worst.  However, the one point that the authors decline to discuss is why experimental methodology is best for “proving” this theory in the first place.  They assume that a controlled environment will provide truer or cleaner data without recognizing a singular truth of classroom education—there is no controlled environment.  Educators at the classroom level have no control over the previous education and content exposure of their learners; over the influences learners face outside of school; or over the gender-based, racial, or cultural experiences that shape a learner’s perception.  In such an environment, why would it matter to educators that one mode of assessing learning styles, or one instructional intervention, is statistically better than another?  That environment is far removed from the situation this theory is designed to elucidate.

The authors are also unreflective about their own biases, particularly when it comes to bridging the distance between an idea in theory and in practice.  They claim in their introduction that because learning styles are so untested, meager educational resources should not be spent on studying them or including them in instructional design (p. 105).  However, they fail to consider learning styles on a concrete level.  Is it truly more expensive to personalize a curriculum based on learning styles?  Does learner benefit need to be statistically significant in a controlled environment for it to be “worth” the effort?  Although the authors are in some ways critically reflexive about the unspoken hypotheses researchers assume in discussing learning styles, they are unaware of how their personal biases have shaded their commentary, which raises the question: To whom are the authors speaking?

Sources

Howard, T.C. (2003).  Culturally relevant pedagogy: Ingredients for critical teacher reflections. Theory into Practice, 42(3), 195-202.

Liou, D.D., Antrop-Gonzalez, R.A. & Cooper, R. (2009). Unveiling the promise of community cultural wealth to sustaining Latina/o students’ college-going networks. Educational Studies, 45, 534-555.

 

Music and Technology

Carruthers, G. (2009). Engaging music and media: Technology as a universal language. Research & Issues in Music Education, 7(1), 1–9. Retrieved from http://www.stthomas.edu/rimeonline/vol7/carruthers.htm

 

This week I read “Engaging Music and Media: Technology as a Universal Language” (Carruthers, 2009). The article is about the roles of music and technology in education and how they might work together. The article doesn’t offer new research, but it does synthesize others’ research.

The first discussion is about the roles of music within education and how they might affect each other. Carruthers states that music often plays a secondary role in education. That is, we don’t teach music as part of our curriculum because music is good in and of itself; we have music within our curriculum because it supports something else. As a music teacher, I often find myself saying “This directly supports you” to other content teachers. You don’t often hear a math teacher justifying why the kids need to learn math. There is an array of reasons why music is valuable on its own. It doesn’t need to be supporting anything else.

After reading the article, I recognized that I had used the same type of reasoning as the supporters of Flores v. Arizona, as discussed in “Keeping Up the Good Fight: The Said and Unsaid in Flores v. Arizona” (Thomas, Risri Aletheiani, Carlson, & Ewbank, 2014). The supporters had many reasons why the ELL funding in Arizona should be awarded to the schools. The findings, however, showed that the reasons from the supporting side fell under a ‘you should support this because you’ll get this out of it’ mentality. With that being said, great teachers integrate all areas into their content. Students need to see how everything is interrelated. Oftentimes children are taught in compartments: math in math class, science in science class, etc., but our lives do not work this way.

Music has what Carruthers calls a division of labor: the composer, the performer, and the listener; each has a separate job, and people rarely cross over. With the addition of technology, this isn’t necessarily the case. My own children compose music with special applications that do not require them to read music. Anyone with the right software can do all three. I see this as one of the biggest impacts technology has had on music. In the past, if one didn’t read music, composing to share with others was rather difficult. Now, with software and media-sharing, it becomes relatively easy.

In order to look at the various ways technology impacts us, Carruthers defines technology as anything “from the wheel” to “a personal computer.” This immediately caught me off guard. Defining what counts as technology had never occurred to me. I had simply thought of technology as laptops, computers, and electronic devices, along with any software that goes with them; but after reading how Carruthers approaches technology, I may have to be more specific about what I’m viewing as technology within my research. The ways technology can have an impact, according to Carruthers, can be broken into four parts: technology that (1) makes things easier to do than before, (2) does things better than before, (3) allows us to do things we couldn’t do before, and (4) makes us think differently. Again, I had to consider the future of my research: at what level of impact am I going to assess? For instance, making things easier to do than before, such as multiplication practice, may not have as big an impact on student achievement as something that makes the student think in a different way.

The article was more thought provoking than I expected it to be.  Carruthers was clear from the beginning that he was reviewing previous research and that the paper would not answer many of the questions. The purpose of the paper is to create discussion, and it proved to do just that. It caused me to look at the research I’m heading into and the basics of how I will approach it. I am dealing with so many more layers than I had previously thought. Carruthers poses, “It is incumbent upon us as educators not only to evaluate the uses of technology – to extol its virtues and denounce its failings – but also to explore deeply how it encourages or causes us to think differently about the world around us.” In my research, I will have to decide whether to look at the level of technology that creates the deepest learning or not to take it into consideration at all.  Do I continue looking at the impact of music with technology on achievement, or solely at the impact of technology? If I research the impact of music and technology together, does the depth of learning within the music matter in the research? For instance, composing is a deeper depth of knowledge than identifying notes. How does one take this into consideration?  If my research does show an impact on student achievement, is it necessary or valuable to determine whether the act of utilizing technology is creating more engagement or whether the technology is deepening the students’ understanding? Either one could impact student achievement; is there a way to tell which it is? How do I approach the research in a manner that will include my community and their views? In fact, can I even account for the effects technology, and especially music, has on the community?

Carruthers said it well: “Many of the benefits of music study, some of which are imbedded in the art form itself, are intended by teachers and curriculum planners while others are not.” I suspect that this is the case with technology as well. Unfortunately, it adds another question for me: How do I consider this in my research?

Overall, the article was well written and professional. It was organized in a logical way, and Carruthers was very clear that he was presenting theories and that, as a literature review, the piece was creating more questions than could be answered in one paper. His ideas are insightful and have definitely given me pause. I have a lot to consider as I dive deeper into my research.


Thomas, M. H., Risri Aletheiani, D., Carlson, D. L., & Ewbank, A. D. (2014). “Keeping up the good fight”: The said and unsaid in Flores v. Arizona. Policy Futures in Education, 12(2), 242–261. doi:10.2304/pfie.2014.12.2.242