Pashler, H. et al. (2009). “Learning styles: Concepts and evidence.” Psychological Science in the Public Interest, 9(3), 105-119.
People learn best in different ways. This is a deceptively simple and strikingly familiar idea in modern educational research and curriculum design. It is also a concept accepted, or at least understood, by the wider general public, and it fits neatly within a twenty-first-century cultural (and technological) context in which personalization is easily available, expected, and assumed to be best. But beyond this wide acceptance, is there quantitative evidence to support the theory? Pashler et al. (2009) set out to explore the current literature, the historical context, and the quantitative support for what they term “learning styles.” Through what historical context did this idea germinate? What experimental methodology would best demonstrate its efficacy quantitatively? Has such research been performed in the current literature, and if so, what does the evidence show?
It’s all Jungian
The authors begin by situating the idea of sorting people into disparate “types”; this, they explain, draws from the work of Jung, whose research in psychology and psychoanalysis led to the creation of behavioral tests, like the Myers-Briggs, that perform much the same function as learning styles. Such tests categorize people into “supposedly disparate groups” based upon a set of distinct characteristics, which in turn are held to explain something deeper about a person. Although the authors do not regard these tests as objectively scientific, they do note that the tests have “some eternal and deep appeal” (p. 107) with the general public.
The authors hold that this “deep appeal” partially explains what draws researchers, educators, learners, and parents to the idea of learning styles. Beyond offering the feeling that a large and often cumbersome system is treating a learner as an individual, the authors write that learning styles can become a scapegoat for underachievement:
“If a person or person’s child is not succeeding or excelling in school, it may be more comfortable for that person to think the educational system, not the person or the child himself or herself, is responsible” (p. 108).
Even granting the evidence presented, this is an unfair characterization. In their desire to explore the objective science of learning styles, the authors shut down consideration of a slew of external confounding factors, including socioeconomic stressors, racial background, and cultural barriers, all of which have a demonstrated influence on classroom performance (Howard, 2003; Liou et al., 2009). More than that, however, this passage reflects an underlying bias in the authors’ commentary: that a theory is lesser when it speaks to people emotionally.
What are learning styles really for?
However, when the authors break down the unspoken hypotheses that govern the idea of learning styles, they make an excellent point. There are two very distinct issues at play:
- The idea that if educators fail to consider the learning styles of their students, their instruction will be ineffective (or less effective). The authors also consider what they term the reverse of this assumption: that “individualizing instruction to the learner’s style can allow people to achieve a better outcome” (p. 108).
- What the authors term the meshing hypothesis, which assumes that students are always best “matched” with instructional methods that reflect their learning style.
These represent both disparate theories of curricular design and widely differing levels of analysis; whereas the first hypothesis treats the assessment of learning styles as critical to the creation of a curriculum, the meshing hypothesis treats learning styles as more of a delivery concern. Most importantly, by conflating these two ideas in exploring the theory, researchers overlook the possibility that one may prove true while the other does not.
One experimental methodology to rule them all
Before reviewing the current literature, the authors outline, in the abstract, a simple experimental methodology. They identify this methodology as the surest way to “provide evidence” of the existence and efficacy of learning styles, and they use it as a yardstick for the quality of data in the existing literature. The requirements are listed below:
- Learners must be separated into groups reflective of their learning style; the authors suggest “putative visual learners” and “auditory learners” (pp. 109).
- Within their groups, learners are randomly assigned to one of two instructional treatments.
- All subjects are assessed using the same instrument.
To prove the quantitative efficacy of learning styles, the results of this experiment must show a “crossover interaction”: the most effective instructional method must differ between the groups. The authors note that this interaction is visible regardless of mean ability; even if Group A scores far higher on the final assessment than Group B, a crossover interaction can still be observed.
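The crossover criterion can be made concrete with a short sketch. The snippet below is illustrative only, not from the paper: the group labels, treatment names, and scores are all hypothetical, chosen so that one group outscores the other overall while each group still does best under a different treatment.

```python
# Illustrative sketch of the crossover-interaction check described above.
# All group labels, treatment names, and scores are hypothetical.

def crossover_interaction(cell_means):
    """cell_means maps learning style -> {treatment: mean score}.
    A crossover exists when the best treatment differs by style."""
    best = {style: max(treatments, key=treatments.get)
            for style, treatments in cell_means.items()}
    return len(set(best.values())) > 1, best

# Hypothetical data: the "visual" group far outscores the "auditory"
# group overall, yet a crossover is still visible because each group
# does best under a different instructional treatment.
scores = {
    "visual":   {"pictorial": 92, "verbal": 85},
    "auditory": {"pictorial": 58, "verbal": 66},
}

has_crossover, best_treatment = crossover_interaction(scores)
print(has_crossover)    # True
print(best_treatment)   # {'visual': 'pictorial', 'auditory': 'verbal'}
```

This is the authors’ point about mean ability in miniature: the overall gap between the groups is irrelevant, because the interaction depends only on which treatment wins within each group.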
However, it seems that the authors are conflating their hypotheses in much the same way they accuse the literature of doing; assessing the learning styles of a class and identifying which instructional tools best speak to a particular learning style are entirely different processes. The latter is subject to interference from several factors, not least the assumption that all instructional methods are equally effective ways to explain the content at hand. The design also does not allow the two hypotheses to be proven independently true. By stating that the only acceptable outcome of this experiment is some magnitude of crossover interaction, the authors ignore confounding factors (the comparative strength of the instructional methods relative to each other; whether all learning styles are equally effective routes into the content; whether students who identify an auditory or a visual strength will respond to the content in the same way) and assume that either both hypotheses are true or both are false.
But what are the tools for?
In their review of the associated literature, the authors denote only one article that supports the existence of learning styles and uses their outlined experimental method. They conclude that
“although [this study] is suggestive of an interaction of the type we have been looking for, the study has peculiar features that make us view it as providing only tenuous evidence” (p. 112).
These peculiar features include the omission of each group’s mean final-assessment scores from the paper (learners were instead matched with a control); the fact that learner performance was measured by raters; and instructional treatments that vary significantly from those “more widely promoted” (p. 112).
This lack of appropriate evidence, the authors conclude, demonstrates that the theory of learning styles is untested at best and nonexistent at worst. However, the one point the authors decline to discuss is why experimental methodology is the best way of “proving” this theory in the first place. They assume that a controlled environment will provide truer or cleaner data without recognizing a singular truth of classroom education: there is no controlled environment. Educators at the classroom level have no control over the previous education and content exposure of their learners; over the influences learners face outside of school; or over the gender-based, racial, or cultural experiences that shape a learner’s perception. In such an environment, why would it matter to educators that one mode of assessing learning styles, or one instructional intervention, is statistically better than another? That environment is far removed from the situation this theory is designed to elucidate.
The authors are similarly unreflective about their own biases, particularly about the distance between an idea in theory and an idea in practice. They claim in their introduction that because learning styles are so untested, meager educational resources should not be spent on studying them or including them in instructional design (p. 105). However, they fail to consider learning styles on a concrete level. Is it truly more expensive to personalize a curriculum based on learning styles? Does learner benefit need to be statistically significant in a controlled environment for it to be “worth” the effort? Although the authors are in some ways critically reflexive about the unspoken hypotheses researchers assume in discussing learning styles, they are unaware of how their personal biases have shaded their commentary, which raises the question: to whom are the authors speaking?
Howard, T.C. (2003). Culturally relevant pedagogy: Ingredients for critical teacher reflections. Theory into Practice, 42(3), 195-202.
Liou, D.D., Antrop-Gonzalez, R.A. & Cooper, R. (2009). Unveiling the promise of community cultural wealth to sustaining Latina/o students’ college-going networks. Educational Studies, 45, 534-555.