The Periodic Table of Data Visualizations

Lengler, R. & Eppler, M. (2007). Towards a periodic table of visualization methods for management. Proceedings of the IASTED Conference on Graphics and Visualization in Engineering, Clearwater, FL.

In this article, Lengler and Eppler (2007) discuss the current state of data visualization as an area of academic inquiry; define their focus in visualization type and usage; and develop an infomap designed to group like methods of visualization for researcher and educator ease. A visualization, for the purposes of this article, is defined by Lengler and Eppler as

“a systematic, rule-based, external, permanent and graphic representation that depicts information in a way that is conducive to acquiring insights, developing an elaborate understanding, or communicating experiences” (pp. 1).

Data visualization as a fractured field

Lengler and Eppler (2007) open with a reflection upon the current state of data visualization literature. Data visualization, described as an “emergent” (pp. 1) field, is fractured across multiple, disparate disciplines, from computer programming to education. The danger, the authors note, is that scholars may pursue theoretical work or breakthrough ideas in parallel with each other, rather than building collaboratively from each other’s work; this, in turn, could impede the development of data visualization research as its own distinct field. This characterization of a fragmented body of literature reminds me strongly of Dr. Jordan’s (2014) thoughts on her work researching educational “uncertainty.” Much of the literature foundational to her thesis comes from disciplines focused upon organization or management; likewise, this discussion of categorizing data visualizations is heavily rooted in management research, perhaps owing to Eppler and Lengler’s management backgrounds.

Overlap between management and education

Lengler and Eppler home in on visualization methods that are easily applicable within the field of management; that is, methods that are outcome oriented and favor a strong focus upon problem solving. Because of this problem-solving focus, most if not all of the visualization methods presented translate easily to an educational (or, more specifically, classroom) environment. As the authors interpret it, the “key for better execution is to engage employees” (pp. 2). Through an educational lens, the same could be said of the need for educators to engage their students; Howard (2003) would certainly agree with the importance of considering what Lengler and Eppler term the cognitive, social and emotional challenges facing managers. Visualization methods, to that end, are tools, or “advantages” (pp. 2), to better understand and incorporate the perspectives of employees, and should either help to simplify a discussion or to foment new ideas and innovations. This, of course, is also true in reverse: a good visualization will give employees as much insight into their managers as vice versa.

The data visualization of data visualizations

In order to walk the walk, so to speak, the authors create a visualization—specifically, an infomap—to categorize and explore relationships between particular methods of displaying or interacting with data.  They chose visualization methods that were problem-solving or outcome oriented, per their focus on managerial research.  They also chose visualization methods that are easy to produce (though they may vary in complexity). This infomap is visually based upon the Periodic Table of Elements.  The authors note that the Periodic Table, in particular, is an excellent example of a co-opted visual metaphor; while widely recognized and used within several scientific fields, including chemistry, the Periodic Table is also understood outside of a scientific context as a shorthand to group or describe a complex topic.  Their “Periodic Table of Visualization Methods” is given as one of many examples of nonscientific fields using both the structure and shorthand connotation of the Periodic Table to describe something completely beyond chemical elements.

[Image: The Periodic Table of Visualization Methods (Lengler & Eppler, 2007)]

To help guide their discussion, Lengler and Eppler codify visualization methods on several axes, beginning with their complexity and application. Complexity is visualized as an ordinal characteristic; that is, the authors line up like methods in columns, from simplest at the top to most complex at the bottom. Application is a bit more involved. Methods are categorized by color into one of six “groups”:

  • Data visualizations, or “visualizations of quantitative data in schematic form” (pp. 3);
  • Information visualizations, or “the use of interactive visual representations of data to amplify cognition” (pp. 3);
  • Concept visualizations, or “methods to elaborate (mostly) qualitative concepts…through the help of rule-guided mapping procedures” (pp. 3-4);
  • Metaphor visualizations, or “effective and simple templates to convey complex insights” (pp. 4), such as story lines;
  • Strategy visualizations, or the “systematic use of complementary visual representations to improve the analysis, development, formulation, communication and implementation of strategies in organizations” (pp. 4); and
  • Compound visualizations, or methods that combine two or more of the preceding groups or formats.

However, the authors also note that the categories listed above are not mutually exclusive; visualization methods can and do belong to multiple “groups.” They attempt to streamline this by focusing on both the complexity of a method (setting compound visualizations, which by definition span groups, into their own category) and its interactive intent. In addition to grouping like methods, Lengler and Eppler also attempt to systematically categorize each method in their chart. They focus on interaction, or the strengths of a visualization: Does it provide an excellent summary or overview of data, or does it better drill down into the details? The authors also take into account what they term “cognitive processes” (pp. 4): Is the visualization an aid to simplify a complex concept (convergent thinking), or does it better jumpstart new and innovative ideas (divergent thinking)? These axes lend themselves to a simple classification scheme, sketched below. To view the full infomap in all its interactive glory, with scroll-over examples of all visualization methods listed, please visit: http://www.visual-literacy.org/periodic_table/periodic_table.pdf
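
To make those axes concrete, here is a minimal Python sketch of how one might encode a method’s position along them; the field names and the two example entries are my own illustration, not a schema or placements taken from Lengler and Eppler.

```python
from dataclasses import dataclass
from enum import Enum

class Group(Enum):
    """The six color-coded 'groups' from Lengler and Eppler's table."""
    DATA = "data visualization"
    INFORMATION = "information visualization"
    CONCEPT = "concept visualization"
    METAPHOR = "metaphor visualization"
    STRATEGY = "strategy visualization"
    COMPOUND = "compound visualization"

@dataclass
class Method:
    name: str
    group: Group
    complexity: int        # ordinal: 1 = simplest, higher = more complex
    detail_oriented: bool  # True = drills into details, False = overview/summary
    divergent: bool        # True = sparks new ideas, False = simplifies (convergent)

# Hypothetical entries for illustration; the placements are my reading,
# not reproduced from the original table.
methods = [
    Method("line chart", Group.DATA, complexity=1, detail_oriented=False, divergent=False),
    Method("mind map", Group.CONCEPT, complexity=2, detail_oriented=False, divergent=True),
]

# Sorting a column from simplest to most complex, as the table arranges it:
for m in sorted(methods, key=lambda m: m.complexity):
    print(m.name, "->", m.group.value)
```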

Sources

Jordan, M. (2014, June). Managing uncertainty during collaborative problem solving in elementary school teams.  Lecture conducted from Arizona State University, Phoenix, AZ.

Linguistic cultural association and ethnography

This blog post comments on an idea prevalent in McCarty, Wyman and Nicholas’s (2014) chapter on ethnography with indigenous youth. The first ethnographic vignette in this chapter focuses on a conversation with a ninth grader, aged 16, attending a “Navajo community school” (pp. 84). One of the major themes of the conversation, and one the researcher admitted she failed to fully connect until well into the interview, is that language often cannot be disconnected from other pressing social factors: culture, racism, and environment, as examples. She noted that the student “repeatedly returned to the integrity of the human and physical landscape in which Diné identity is rooted” (pp. 87).

This idea of interconnectedness brought me back to my first ethnographic field trip. I was working as an interviewer with a group of cultural anthropologists from the University of Arizona and Native American elders from several southwestern Nations. We were traveling to the Timber Mountain Caldera, a landscape containing several pan-culturally significant sites that also happens to lie directly in the middle of the Nevada Test Site (NTS). The NTS, controlled by the federal Department of Energy, has a long history of classified weapons testing, including nuclear bomb tests in the mid-twentieth century, as well as a somewhat colorful history of working with indigenous groups (Stoffle, Zedeño & Halmo, 2001). Before any new testing projects can begin, the agency of record must conduct environmental impact assessments (EIA), which determine risks to the geological landscape and native flora or fauna, and social impact assessments (SIA), which determine cultural risks to people, either living nearby or historically associated with a place.

Many of the first SIAs were informed by the standard, and predominantly white, idea of the desert as a negative space with no connection to history or culture. When faced with the sometimes quite complicated historical and cultural associations with the land in question, federal authorities often responded with disbelief or hostility. In return, indigenous groups often became more concrete in their language about a place, and more demanding of federal consideration.

This history was beautifully telescoped in the highway sign for Mercury, NV, the NTS checkpoint town that houses offices, cafeterias and lodging for personnel at the site: it read “NO SERVICES,” and was flanked at the turn-off by several “DO NOT ENTER”s and “GOVERNMENT PERSONNEL ONLY”s. These gave way, at the end of a long dirt road, to a barricaded front office, where we were required to present two forms of ID to an official who already knew our names. Although we joked about it in the moment, it was the most singularly intimidating town entrance I’ve ever seen, and it had a tangible effect on the elders, even before any interviews were conducted. It disconnected them from land that culturally was theirs, angered them, and drastically changed the tenor of our ethnographic interviews relative to those conducted outside of the NTS.

Sources

McCarty, T.L., Wyman, L.T., & Nicholas, S.E. (2014). Activist ethnography with indigenous youth. In D. Paris & M.T. Winn (Eds.), Humanizing research: Decolonizing qualitative inquiry with youth and communities. Los Angeles: Sage Publications Inc.

Stoffle, R.W., Zedeño, M.N., & Halmo, D.B. (2001). American Indians and the Nevada Test Site: A model of research and consultation. Washington, D.C.: U.S. Government Printing Office.

“Damned to be concrete”: Considering productive uncertainty in data visualization

Marx, V. (2013). “Data visualization: Ambiguity as a fellow traveler.” Nature Methods, 10(7), 613-615.

In their musings on the importance of uncertainty with regards to social networks and educational attainment, Jordan and McDaniel (In Press) bring to the forefront an interesting concept of “productive uncertainty” (pp. 5). This idea allows that while uncertainty is not always pleasant, and while learners will often seek to minimize it, the experience is not without value. Marx (2013), while discussing the complexities and shortcomings common among data visualizations, expands upon this concept: uncertainty, particularly within a statistical realm, can illuminate new characteristics of the data or new methodologies that address shortcomings in collection or analysis. However, data visualizations themselves can obscure or outright hide this level of detail. So how do we visualize data in a way that is both simple and transparent?

“[With visuals], we are damned to be concrete” (Marx, 2013, pp. 613).

Marx (2013), using examples from genomic and biomedical research, makes an interesting observation: in discussing scientific results, researchers often feel compelled to gloss over, if not exactly obscure, uncertainty in their data. These uncertainties can arise from inconsistent collection, imperfect aggregation, or even unexpected results. However, these “unloved fellow travelers of science” (pp. 613) cannot occupy the same visual “grey area” that Marx contends they often do in text. When faced with creating an honest visualization, then, researchers must decide to what extent they will account for study uncertainty. Marx, in explaining the potential impacts of this decision, advocates that researchers strongly consider two points: first, that uncertainty may have implications for the data itself; and second, that a transparent consideration of uncertainty strongly shapes “what comes next.”

Thus, Marx (2013) is explicitly pushing productivity over negativity when reflecting upon uncertainty in data or the wider study; however, she also acknowledges that, even within the specific context of biomedical research, the pull to minimize uncertainty when broadly discussing results persists.

Down the rabbit hole: Analysis can create uncertainty too

One should also consider the process—largely mathematical, in this context—of moving from a raw dataset to a clean visualization. Common steps for creating data visualizations, particularly in genomics and the biomedical sciences, include aggregating data from different sources (and thus methods of collection) and summarizing large and complex markers into something more easily digestible. Standardizing disparate collection methods into something more uniform, or summarizing disparate study groups or grouped variables, loses an important level of detail. These processes themselves can obscure data, which in turn obscures uncertainty for the end audience, whose exposure to the study may lie wholly in the visualization. Going somewhat down the rabbit hole, this in itself can therefore create new uncertainty, as the toy example below illustrates.
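
A minimal sketch of that detail loss, with invented numbers: two hypothetical collection methods produce identical means, so a visualization built on the aggregated summary alone would erase the very spread in which their differing uncertainty lives.

```python
from statistics import mean, stdev

# Invented measurements from two hypothetical collection methods.
method_a = [9.8, 10.1, 10.0, 10.1, 10.0]   # tight spread
method_b = [6.0, 14.5, 9.5, 13.0, 7.0]     # wide spread

# Aggregating to a single summary statistic erases the difference:
print(mean(method_a), mean(method_b))    # both exactly 10.0

# The detail lost in that step is exactly where the uncertainty hides:
print(stdev(method_a), stdev(method_b))  # ~0.12 versus ~3.7
```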

Certainly, simplicity is important in a data visualization; however, as Marx argues, researchers also have an obligation to consider that by glossing over details of uncertainty, or by creating new sources of uncertainty through their analyses, the wider community may understand their work less, or may make unfounded assumptions about their findings.

In particular, missing data presents a complex dilemma.  Marx (2013) gives the example of a genomic sequencing experiment, seeking to map a stretch of genetic material that contains 60 million bases:

“The scientists obtain results and a statistical distribution of sequencing coverage across the genome.  Some stretches might be sequenced 100-fold, whereas other stretches have lower sequencing depths or no coverage at all…But after an alignment, scientists might find that they have aligned only 50 million of the sought-after 60 million bases…This ambiguity in large data sets due to missing data—in this example, 10 million bases—is a big challenge” (pp. 614).

As opposed to data that is statistically uncertain, or uncertain by virtue of its collection methods, missing data is a true negative whose effect is difficult to truthfully express and explain.
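
The arithmetic of Marx’s example is worth making explicit; the per-stretch coverage depths below are invented for illustration, but they show why zero-coverage (missing) stretches are different in kind from low-coverage (uncertain) ones.

```python
target_bases = 60_000_000
aligned_bases = 50_000_000

missing = target_bases - aligned_bases
print(f"aligned: {aligned_bases / target_bases:.0%}")  # ~83% of the target
print(f"missing: {missing:,} bases")                   # 10,000,000

# Missing data is not simply "more uncertain" data: a stretch sequenced
# 100-fold and a stretch with zero coverage differ in kind, not degree.
# Hypothetical coverage depths for five stretches:
coverage = [100, 42, 3, 0, 0]
uncertain = [c for c in coverage if 0 < c < 10]  # low depth: shaky estimates
absent = [c for c in coverage if c == 0]         # no data at all
print(len(uncertain), "low-coverage stretches;", len(absent), "with no coverage")
```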

So how do we show uncertainty visually?

Marx suggests several methods for representing uncertainty visually when discussing data. Broadly, she suggests including some representation of uncertainty within the visualization itself; this can be layered on top of the data visualized, for example by using color coding or varying levels of transparency to indicate more and less certain data. A visualization can also account for uncertainty separately from the data, for example by using an additional symbol to denote certainty or its absence. She also discusses contrasting analyses of similar (or the same) data that have reached differing conclusions; taking into account their methods of analysis, this inclusion of multiple viewpoints can also round out a discussion of uncertainty.
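
As a rough illustration of two of these devices, here is a minimal matplotlib sketch with invented values: error bars layer uncertainty directly onto the data, while per-bar transparency acts as a second cue (fainter = less certain). The particular encodings and numbers are my own, not reproduced from Marx.

```python
import matplotlib.pyplot as plt

labels = ["A", "B", "C", "D"]
values = [4.1, 6.3, 5.2, 7.8]          # invented measurements
errors = [0.2, 1.5, 0.4, 2.4]          # invented per-measurement uncertainty
confidence = [0.95, 0.45, 0.85, 0.30]  # invented 0-1 confidence scores

fig, ax = plt.subplots()
bars = ax.bar(labels, values, yerr=errors, capsize=4)

# Layer uncertainty on top of the data: fade the less certain bars.
for bar, conf in zip(bars, confidence):
    bar.set_alpha(conf)

ax.set_ylabel("measured value (arbitrary units)")
ax.set_title("Error bars + transparency as uncertainty cues")
plt.show()
```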

In addition to understanding how to represent uncertainty visually, however, one should also consider how and when (during a study or study analysis) one should tabulate uncertainty. One platform looking to incorporate uncertainty holistically into data visualization is Refinery. In particular, Marx notes that this system seeks to find “ways to highlight for scientists what might be missing in their data and their analysis steps” (pp. 614), addressing uncertainty situated in both data and analysis. As shown below, the system considers uncertainty at every step of the data analysis, rather than only at the end, giving a more rounded picture of how uncertainty has influenced the study at all levels.

“The team developing the visualization platform Refinery (top row) is testing how to let users track uncertainty levels (orange) that arise in each data analysis step” (Marx, 2013, pp. 615).

In the graphic, the blue boxes represent data at different stages of analysis. Orange, in the top row, represents the types of uncertainty that may arise during each analytical step; the orange error bars in the bar graph at the far right are accordingly much more comprehensive in what they account for. The light blue bar in the bottom row shows the disparity, theoretically, when error is only taken into account at the end of an analysis. While the magnitude of the difference may not be as dramatic as the graphic shows, researchers in the top row are better able to account for what causes or has caused error; they are better able to situate their uncertainty.
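
The idea the figure illustrates can be sketched in a few lines. To be clear, this is not Refinery’s actual mechanism, just a toy model assuming each step contributes an independent relative error, combined in quadrature, so that the final error bar can be traced back to its per-step sources.

```python
import math

# Hypothetical pipeline: each step's name and the relative error it introduces.
steps = [
    ("collection", 0.03),
    ("alignment", 0.05),
    ("normalization", 0.02),
    ("aggregation", 0.04),
]

# Track uncertainty at every step (the figure's top row): each ledger entry
# records what that step added and the running total so far.
ledger = []
running = 0.0
for name, err in steps:
    running = math.sqrt(running**2 + err**2)  # quadrature: assumes independence
    ledger.append((name, err, running))

for name, err, cum in ledger:
    print(f"{name:14s} adds {err:.0%} -> cumulative {cum:.1%}")

# Versus estimating a single figure only at the end (the figure's bottom row),
# which yields the same magnitude here but no attribution to its sources.
final_only = math.sqrt(sum(err**2 for _, err in steps))
print(f"end-only estimate: {final_only:.1%}")
```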

A picture may be worth a thousand words, but does it have to tell one story?

Analyzing data is often a narrative process; however, as Marx (2013) alludes, there can be consequences to how one tells one’s story. Washing over uncertainty, in both preparing and discussing results, can be misleading, limiting both a researcher’s true understanding of their own data and the collaborations or theories that use the data as a foundation for further study. Marx, however, is not dismissing researchers who fail to consider uncertainty as dishonest; she is promoting the idea that considering uncertainty as positive, or productive, can lead research in novel directions.

Sources

Jordan, M.E. & McDaniel, R.R. (In Press). “Managing uncertainty during collaborative problem solving in elementary school teams: The role of peer influences in robotics engineering activity.” The Journal of the Learning Sciences, 1-49.

Situating “uncertainty” in communities of practice and competency-based medical education

This blog post discusses Jordan and McDaniel’s (In Press) conceptualization of “uncertainty,” and seeks to situate that “uncertainty” within Wenger’s (2000) visualization of organizational structure. We will also apply these theories to the adoption of competency-based assessments in graduate medical education.

Jordan and McDaniel describe uncertainty as

“an individual’s subjective experience of doubting, being unsure, or wondering about how the future will unfold, what the present means, or how to interpret the past” (pp. 3).

For them, this concept is central to the process of learning. However, they also note that uncertainty may play differing roles in learning outcomes. Uncertainty can as easily be considered a desirable outcome (for example, in demonstrating the complexity of a concept, or the limits of a learner’s knowledge on a subject) as an undesirable one, where learners respond to an “impulse” to reduce their uncertainty (pp. 4).

Wenger, speaking of our communities of practice as systems, outlines two major types of knowledge: social competence, meaning the socially and historically situated understanding of our community; and experience, which captures personally acquired knowledge that may or may not align with wider societal beliefs (pp. 226-227). When social competence and experience clash, a space opens for learning to occur and for knowledge, be it societal or individual, to change (pp. 227).

How, then, does uncertainty fit in Wenger’s community of practice? Jordan and McDaniel suggest two possibilities. First, uncertainty can take the place of individual experience: as they note, uncertainty (particularly in a classroom setting) can be very experiential, and it is a common avenue through which learners see and challenge the structure of their classroom or their relationships with fellow students. Second, uncertainty can operate as a part of learning itself, allowing learners to identify questions regarding social competence and to be inquisitive about their social knowledge.

The example below, discussing core curricular expectations in graduate medical education, shows uncertainty as both a mode of experience and a situation for learning.

The Accreditation Council for Graduate Medical Education (ACGME) is the nonprofit body that accredits American “graduate” medical education programs, meaning residency programs, internships, fellowships and the like, rather than “undergraduate” medical institutions, which award the MD or DO degrees. Traditionally, “variability in the quality of resident education” was a major systemic stressor (Nasca et al. 2012, pp. 1051). In response, the ACGME historically focused upon quality of teaching and program structure when evaluating an institution. To many such institutions, however, this focus created an undue administrative burden, stifling innovation, reducing staff and faculty availability to mentor students, and lagging behind systemic changes in the wider medical system. In 1999, the ACGME introduced six core competencies that graduate medical education programs must include in their curriculum in order to remain accredited (Nasca et al. 2012):

  • Medical Knowledge (MK)
  • Patient Care (PC)
  • Interpersonal Skills and Communication (IPC)
  • Professionalism (P)
  • Systems-Based Practice (SBP)
  • Practice-Based Learning and Improvement (PBLI)

The six competencies outlined above were designed to shift administrative focus toward tangible “outcomes and learner-centered approaches” (pp. 1052). For learners, they shifted the focus of the medical curriculum closer to real-world application. With traditional didactic lecturing concentrated within one of the six categories, this system presented a unique opportunity to reduce the uncertainty that existed between rote medical knowledge and the myriad other competencies expected of a practicing physician. It mandated space within the medical curriculum to experience parts of being a physician beyond a textbook knowledge of medicine or medical procedures: displaying professionalism with patients, families and other medical professionals; clearly communicating complicated concepts to lay audiences; refining one’s bedside manner; and practicing composure in emotionally difficult situations. The addition of “Systems-Based Practice” and “Practice-Based Learning and Improvement” also gave learners room to confront uncertainty as a part of Wenger’s learning: to practice critical reflexivity, identify strengths and weaknesses in the current structure of the medical system, and situate themselves as physicians and advocates within that system.


Sources

Jordan, M.E. and McDaniel, R.R. (In Press). “Managing uncertainty during collaborative problem solving in elementary school teams: The role of peer influences in robotics engineering activity.” The Journal of the Learning Sciences, 1-49.

Nasca, T.J., et al. (2012). “The next GME accreditation system: Rationale and benefits.” New England Journal of Medicine, 366(11), 1051-1056.

Wenger, E. (2000).  “Communities of practice and social learning systems.” Organization, 7, 225-246.

“Learning styles” and education in a controlled environment

Pashler, H. et al. (2009). “Learning styles: Concepts and evidence.” Psychological Science in the Public Interest, 9(3), 105-119.

People learn best in different ways. This is a deceptively simple and interestingly familiar idea in modern educational research and curriculum design. It’s also a concept accepted, or at least understood, by the wider general public, and it fits nicely within the twenty-first century cultural (and technological) expectation that personalization is easily available, expected and best. But regardless of this wider acceptance, is there quantitative evidence to support the theory? Pashler et al. (2009) set out to explore the current literature, historical context and quantitative support for what they term “learning styles.” Through what historical context did this idea germinate? What experimental methodology would best quantitatively prove its efficacy? Has such research been performed in the current literature, and if so, what does the evidence show?

It’s all Jungian

The authors begin by situating the idea of categorizing people into disparate “types” in the work of Jung, whose research in psychology and psychoanalysis led to the creation of behavioral tests, like the Myers-Briggs, that perform much the same function as learning styles. These tests categorize people into “supposedly disparate groups” based upon a set of distinct characteristics, which in turn are taken to explain something deeper about a person. Although the authors do not regard these tests as objectively scientific, they do note that the tests have “some eternal and deep appeal” (pp. 107) with the general public.

The authors hold that this “deep appeal” partially explains what draws researchers, educators, learners and parents to the idea of learning styles. Beyond being a way to feel that a larger and often cumbersome system is treating a learner uniquely, the authors write that learning styles can become scapegoats for underachievement:

“If a person or person’s child is not succeeding or excelling in school, it may be more comfortable for that person to think the educational system, not the person or the child himself or herself, is responsible” (pp. 108).

Even granting the evidence presented, this is an unfair prognostication. In their desire to explore the objective science of learning styles, the authors shut down consideration of a slew of external confounding factors, including socioeconomic stressors, racial background and cultural barriers, all of which have a demonstrated influence upon classroom performance (Howard 2003; Liou 2009). More than that, this passage reflects an underlying bias in the authors’ commentary: that a theory is lesser when it speaks to people emotionally.

What are learning styles really for?

However, when the authors break down the unspoken hypotheses that govern the idea of learning styles, they make an excellent point.  There are two very distinct issues at play:

  1. The idea that if an educator fails to consider the learning styles of his or her students, the instruction will be ineffective (or less effective).  The authors also consider what they term the reverse of this assumption: that “individualizing instruction to the learner’s style can allow people to achieve a better outcome” (pp. 108).
  2. What the authors term the meshing hypothesis, which assumes that students are always best “matched” with instructional methods that reflect their learning style.

These represent both disparate theories of curricular design and widely differing levels of analysis; whereas the first hypothesis presented above treats the assessment of learning styles as critical to the creation of a curriculum, the meshing hypothesis treats learning styles as more of a delivery method. Most importantly, by conflating these two ideas in exploring this theory, researchers overlook the possibility that one may prove true while the other does not.

One experimental methodology to rule them all

Before reviewing the current literature, the authors outline, in the abstract, a simple experimental methodology. They identify this methodology as the truest way to “provide evidence” of the existence and efficacy of learning styles, and use it as a guideline to measure the quality of data in the existing literature. The requirements are listed below:

  1. Learners must be separated into groups reflective of their learning style; the authors suggest “putative visual learners” and “auditory learners” (pp. 109).
  2. Within their groups, learners are randomly assigned to one of two instructional treatments.
  3. All subjects are assessed using the same instrument.

In order to prove the quantitative efficacy of learning styles, the results of this experiment must show a “crossover interaction”: the most effective instructional method must differ between the groups. The authors note that this interaction is visible regardless of mean ability; if Group A scores wildly higher on the final assessment than Group B, a crossover interaction can still be observed, as the sketch below illustrates.
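
A minimal sketch of that test, with invented scores: learners are grouped by claimed style, each group splits across two treatments, all take the same assessment, and a crossover appears when the winning treatment flips between groups, regardless of either group’s overall mean. The group and treatment names are hypothetical stand-ins.

```python
from statistics import mean

# Invented assessment scores: style group -> treatment -> scores.
scores = {
    "visual": {"pictorial": [82, 85, 80], "verbal": [70, 68, 72]},
    "auditory": {"pictorial": [55, 58, 52], "verbal": [66, 63, 65]},
}

# Which treatment wins within each style group?
best = {
    style: max(treatments, key=lambda t: mean(treatments[t]))
    for style, treatments in scores.items()
}
print(best)  # {'visual': 'pictorial', 'auditory': 'verbal'}

# Crossover interaction: the winning treatment differs between groups.
crossover = len(set(best.values())) > 1
print("crossover interaction observed:", crossover)
# Note the visual group's means are higher overall; the crossover depends
# only on which treatment wins *within* each group, not on mean ability.
```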

However, it seems that the authors are conflating their hypotheses in much the same way they identify the literature as doing; assessing the learning styles of a class and identifying which instructional tools will best speak to a particular learning style are completely different processes. The latter invites interference from several factors, not least of which is the assumption that all instructional methods are equally effective ways to explain the content at hand. The authors also do not allow for these hypotheses to be proven independently true; by stating that the only acceptable outcome of this experiment is some magnitude of crossover interaction, they ignore confounding factors (the comparative strength of instructional methods relative to each other; whether all learning styles are equally effective paths to the content; whether students who identify an auditory or a visual strength will respond to the content in the same way) and assume that either both hypotheses are true, or both are false.

But what are the tools for?

In their review of the associated literature, the authors identify only one article that both supports the existence of learning styles and uses their outlined experimental method. They conclude that

“although [this study] is suggestive of an interaction of the type we have been looking for, the study has peculiar features that make us view it as providing only tenuous evidence” (pp. 112).

These tenuous features include the omission of each group’s mean scores on the final assessment from the paper (learners were instead matched with a control); the measurement of learner performance by raters; and instructional treatments that vary significantly from those “more widely promoted” (pp. 112).

This lack of appropriate evidence, the authors conclude, demonstrates that the theory of learning styles is untested at best and nonexistent at worst. However, the one point the authors decline to discuss is why experimental methodology is best for “proving” this theory in the first place. They assume that a controlled environment will provide truer or cleaner data without recognizing a singular truth of classroom education: there is no controlled environment. Educators at the classroom level have no control over the previous education and content exposure of their learners; over the influences learners face outside of school; or over the gender-based, racial or cultural experiences that shape a learner’s perception. In such an environment, why would it matter to educators that one mode of assessing learning styles, or one instructional intervention, is statistically better than another? That environment is far removed from the situations this theory is designed to elucidate.

The authors are also unreflective about their own biases, namely in bridging the distance between an idea in theory and in practice. They claim in their introduction that because learning styles are so untested, meager educational resources should not be spent on studying them or including them in instructional design (pp. 105). However, they fail to consider learning styles on a concrete level. Is it truly more expensive to personalize a curriculum based on learning styles? Does learner benefit need to be statistically significant in a controlled environment for it to be “worth” the effort? Although the authors are in some ways critically reflexive about the unspoken hypotheses researchers assume in discussing learning styles, they are unaware of how their personal biases have shaded their commentary, which raises the question: to whom are the authors speaking?

Sources

Howard, T.C. (2003). Culturally relevant pedagogy: Ingredients for critical teacher reflection. Theory into Practice, 42(3), 195-202.

Liou, D.D., Antrop-Gonzalez, R.A., & Cooper, R. (2009). Unveiling the promise of community cultural wealth to sustaining Latina/o students’ college-going networks. Educational Studies, 45, 534-555.


Critical Reflectivity and Student Agency

This blog article will focus on bridging the work of Bautista et al. (2013) and Liou et al. (2009) with Howard’s (2003) rubric for self-reflection; beyond the ability to recognize one’s individual biases and agency, it is also important for research and researchers to recognize the power built from student experience and the wider community.

Howard (2003) describes a very personal rubric to aid educators in reflecting inward: upon their current racial or cultural biases, as well as the major (personal) historical influences upon them. Bautista et al. (2013) expand upon this practice of cultural reflection, but move the focus outward; using a youth participatory action research (YPAR) program as an example, the authors situate the power of student experience and student voice in educational research. The authors’ goal was to explore which “traditional tools of research” (pp. 2) students appropriated to evaluate their program, the Council of Youth Research. As part of a wider discussion, they also note the absence of student experience from educational research as a whole.

Liou et al. (2009), likewise, expand upon the theme of critical reflexivity by focusing upon the agency that exists outside of a traditional school. How do local communities empower students to succeed (or, in this case, to seek out relevant resources and materials to apply for college) in the absence of such assistance from an underperforming school? The authors note that often, when services do not exist in underperforming schools, or when those services are not readily available to all students, students instead look to their community. This creates an interesting paradigm for school improvement; focusing upon the resources a wider community provides to students (as well as their quality) gives a school a new understanding of the services students need, as well as “improv[ing] the quality of relationships between school adults and the students they serve” (pp. 551).

These readings made me reflect upon one personal and very applicable example of the power of student agency, and how difficult it can be to build. During my time with the Arizona Department of Health Services (ADHS), I worked to build and sustain a coalition for youth anti-tobacco advocacy, made up of disparate school-based and community-based youth organizations from across the state. Historically, anti-tobacco work with youth in Arizona had focused heavily upon what we called, in shorthand, the “DARE model”: in-classroom lectures, featuring a figure of authority from the school or greater community who gave a very fact-based presentation. In focus groups with middle and high school students, however, we learned that this model was effective in passing along those facts (that cigarettes are deadly and addictive), but it did not personalize the subject, nor give students a sense of involvement in the cause. The goal of this new advocacy-based coalition was to empower students to understand what policy is, how it affects them and how they could affect it.

This was a radical change in the student-educator relationship, and one of the most difficult pieces to put into play was demonstrating to these student leaders that they had agency (within their homes, their schools and their communities) and supporting them in developing their confidence. Many, at the outset, simply asked for a list of acceptable club activities, without giving much thought to their local environment or personal interests.

Putting Howard’s rubric into play, adult educators were a vital piece of building confidence among students to tackle issues of importance to themselves and their peers. These adults, who could be anything from a homeroom teacher to someone working in outreach at the county health office to a volunteer with a community youth program, approached “advocacy” and “student agency” in very different ways; we helped all parties, including ourselves, to reflect upon our own biases and our own communities in order to formulate a better way to speak to coalition student leaders. Likewise, as Bautista et al. suggest, we guided these students through the same process, asking them to identify their individual agency, as well as the agency of their local club, and to use that to find projects that were meaningful on both a personal and a community level. This conversation was essential; without the wealth of student voices and experience added to the conversation, this coalition would never have risen past the lecture: a figure of authority telling students what they should do.

Sources

Bautista, M.A. et al. (2013). Participatory action research and city youth: Methodological insights from the Council of Youth Research. Teachers College Record, 115, 1-23.

Liou, D.D., Antrop-Gonzalez, R.A., & Cooper, R. (2009). Unveiling the promise of community cultural wealth to sustaining Latina/o students’ college-going networks. Educational Studies, 45, 534-555.

Howard, T.C. (2003). Culturally relevant pedagogy: Ingredients for critical teacher reflection. Theory into Practice, 42(3), 195-202.

The wide world of Wordles: Discussion of “Participatory visualizations with Wordle”

Viegas, F.B., Wattenberg, M. & J. Feinberg (2009). “Participatory visualizations with Wordle.” IEEE Transactions on Visualization and Computer Graphics, 15(6), 1137-1144.

In this article, Viegas et al. (2009) introduce “Wordles,” distinguish them from similar data visualizations, and describe their methodology for discovering certain characteristics of Wordle users and their wider community.

Wordles represent a popular form of tag cloud, a common data visualization generally used to represent word frequency in a text, with more frequent words set in larger type and less frequent words in smaller type. However, there are some key differences between an average tag cloud and a Wordle, in both their calculation and final appearance. In a Wordle, the relationship between text size and word frequency is linear; that is, the size of a word increases by the same amount for each time it appears. Tag clouds, by contrast, often calculate word size using the square root of frequency instead. Additionally, the Wordle algorithm allows words to appear in any free space not occupied by text, for example in the open space of an “o” or rotated vertically along the side of an “l.” The authors note that these changes were made for aesthetic reasons; however, particularly regarding how text size is calculated, the side effect may be a more straightforward relationship between size and frequency, a difference the sketch below makes visible.
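
A minimal sketch of the two scaling rules, with arbitrary constants; only the shape of the relationship is the point, and the numbers are mine rather than Wordle’s actual internals.

```python
import math

def linear_size(freq, scale=2.0):
    """Wordle-style: size grows by the same amount per occurrence."""
    return scale * freq

def sqrt_size(freq, scale=10.0):
    """Typical tag cloud: size grows with the square root of frequency."""
    return scale * math.sqrt(freq)

for freq in [1, 4, 16, 64]:
    print(f"freq {freq:3d}: linear {linear_size(freq):6.1f}pt, "
          f"sqrt {sqrt_size(freq):6.1f}pt")

# Under the linear rule, a word appearing 64x is 64x larger than one
# appearing once; under the sqrt rule it is only 8x larger, which
# compresses the visual difference between common and rare words.
```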

The authors also speak to their expectations of the Wordle community as casual infovis and a participatory culture. Casual infovis refers to situations or communities where lay users depict information in a personally meaningful way. Participatory culture refers to the tenor of conversation between the generator of information (or Wordles) and their audience; this very commonly occurs on the Internet, in the form of website user feedback, fan fiction, or comment boards on news stories or blog posts, to name a few examples.

“Wordles in the wild”: Methods and results

Because Wordle does not collect demographic information from users, who can make and download a graphic without logging in or creating an account, the site has little data to describe its users beyond the Wordles they create. To learn more about the wider community of Wordle users, the authors take a dual approach: research into “Wordles in the wild,” an Internet search of previously created graphics and how they have been used online, and a survey of current visitors to the Wordle site.

“Wordles in the wild” (pp. 1139) were initially identified through a Google search. The authors examined the first 500 sites returned for “Wordle,” and used these “prominent” (pp. 1139) examples to guide more specific research. Through this process, the authors identified several major categories for both Wordle users and how Wordle graphics are used, the largest being “education.” While this is a rather ingenious way to collect context in the face of little circumstantial data about how Wordles have been used, such snowball sampling yields very little control over either the completeness or the quality of the data found.

Wordle also placed a survey link on its homepage, asking users to provide feedback about themselves and their graphics. The survey was first piloted for two days and, following feedback and revisions, reposted for one week; the authors do not note specifically what feedback was given, or how the survey changed. During the week it was live, the survey received about 4,300 responses, which (assuming one Wordle per user per day, with no user overlap) represents a response rate of about 11%; although the authors note a margin of error of about 1%, they also recognize that, given difficulties controlling for demographic variables and self-selection bias, the results should be viewed only as “a general guide” (pp. 1140).
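
The back-of-envelope arithmetic behind those figures runs roughly as follows; the 95% confidence level and worst-case p = 0.5 in the margin-of-error formula are my assumptions, not details stated in the paper.

```python
import math

responses = 4_300
response_rate = 0.11

# Implied population of Wordle creators that week, under the authors'
# one-Wordle-per-user-per-day, no-overlap assumption:
implied_users = responses / response_rate
print(f"implied users: {implied_users:,.0f}")  # ~39,000

# Margin of error for a proportion (assuming 95% confidence, p = 0.5):
z = 1.96
moe = z * math.sqrt(0.5 * 0.5 / responses)
print(f"margin of error: {moe:.1%}")  # ~1.5%, the same order of magnitude
                                      # as the authors' rough 1% figure
```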

The authors do admit a significant selection bias in this data, among both “wild Wordles” and survey respondents; beyond sex, age and occupation, they do not delve deeply into demographic data.

Do Wordles even count as a data visualization?

Given the authors’ results, there is little question that Wordle users represent a participatory culture. The authors outline several ways that users collaborate with not only their data, but also their audience. As one example of professional use, journalists, particularly during the 2008 presidential election, used Wordle to illuminate trends in political text and speeches. There are also many examples of personal or “fun” uses, particularly Wordles as gifts: for baby showers, church groups, and so on.

The authors do note, however, that categorizing the Wordle community as “casual infovis” fails to convey some of the community’s more interesting characteristics. For example, “casual” doesn’t quite express the personal connection many users expressed toward their Wordle text; over half indicated that they had written it themselves. Also, not all users identify their graphics, or the use thereof, analytical or otherwise, as personally meaningful.

Beyond the characteristics of Wordle users, the strong focus upon creating Wordles rather than using them as an analytical tool demonstrates to the authors that Wordles are not being utilized as intended, or perhaps as expected. Particularly considering the large number of survey respondents who did not understand the significance of word size within a graphic, does this disqualify Wordles from truly being data visualizations?

This may be true in the wider community of users, particularly considering the Wordles created as Valentine’s Day cards for spouses, or as bridal gifts and birthday presents; Wordles as gifts, or Wordles created for fun, commonly seem to lack an analytical context. However, I would argue that within education, Wordle is working as intended, plus some. Educators create Wordles of new vocabulary words or Shakespearean sonnets to illuminate classroom discussion; students likewise are asked to participate in creating new Wordle graphics as an assignment or classroom activity. Bandeen and Sawain (2012) outline several concrete applications for Wordles in class, including (broadly):

  • Understanding major concepts
  • Identifying and defining unfamiliar terms
  • Connecting current passages with previous readings
  • Pointing out unexpected words
  • Identifying missing words
  • Theorizing connections among words

which pull from all levels of Bloom’s taxonomy. In addition to serving as an analytical tool to guide discussion, Wordles (or tag clouds in general) are used collaboratively to explore texts in unique or unusual ways not always apparent at first read. Whether students are creating or viewing Wordle graphics, and whether or not the graphics are used in a strictly “analytical” sense, they are actively engaging the material in a meaningful way, both as casual infovis and as a participatory culture.

Sources

Bandeen, H.M. & Sawain, J.E. (2012). Encourage students to read through the use of data visualizations. College Teaching, 60, 38-39.

Classroom cultural influences case study: Gender in a physical education classroom

Howard (2003) gives an excellent rubric for educators to explore their own cultural influences and prejudices, asking them to reflect upon five points:

  1. Their own interactions with different cultural (and particularly racial) groups growing up
  2. The primary influences upon their perspective
  3. If they harbor any prejudices against people because of race
  4. How those prejudices might affect a member of said racial background
  5. If they create negative profiles of others, based on assumptions of their race or culture

He outlines this as a necessary step to both valuing and creating an effective and culturally sensitive pedagogy, with which I absolutely agree. However, I wish his discussion had been rounded out beyond this rubric; once educators have openly analyzed their own prejudices, how do they apply that knowledge?

Howard lays out a foundation for educators to neither diminish cultural influences nor normalize them, which led me to linger on situations where cultural traditions may diminish a learner’s success; in particular, I was struck by some of the factors beyond race that Howard mentions in passing, such as gender. Is it ever acceptable to “normalize” a cultural behavior to “middle-class, European American cultural values” (pp. 198), and how does an educator recognize, prioritize and navigate that situation?

As an example: Ennis (1999) gives the case study of a physical education class in an urban high school where female students were largely disengaged. In interviews, they noted being bullied or scapegoated by male students in team activities:

“I used to like to play sports with the boys…Now, in high school, they’re like maniacs or something…They throw the ball so hard you can’t catch it.”

“They call us lame. They say we’re not trying, but we are.”

“I don’t need boys yelling at me when I make a mistake.” (Ennis 1999, pp. 33)

A program called “Sport for Peace” was instituted in the classroom; it intentionally avoided many of the tensions that arose in the traditional “team sports” model by creating teams of equally skilled students, focusing less upon rewarding skill and more upon negotiating conflict without force or violence.

This example illustrates the normalizing of two culturally influenced behaviors, attacking both the expectation that women be delicate and unathletic and the expectation that men be forceful or violent. However, this is done in the service of creating more equal opportunities for learners of both genders. A simplistic reading may equate this to a prioritization of the normative cultural expectation (that learners are equal in ability, regardless of their gender) over the prevalent cultural norm. However, this case study and its curricular solution represent a more complicated methodology and conclusion.

With its emphasis on consensus building and peaceful reconciliation, “Sport for Peace” is a textbook example of Howard’s rubric in action. It gives students the opportunity to reflect upon their own cultural biases, as well as the influences of their community growing up; it allows students to examine the prejudices they may carry about others based upon gender, and how those prejudices impact their targets. Additionally, it gives the students an agency of which Bourdieu would approve. In order to recognize your own power within a social structure, you must be able to recognize the structure itself, and this program gives learners an extraordinary power to discover and mediate cultural biases independently. And while it might not directly answer the question of prioritizing conflicting cultural influences in a classroom, it does show that the process of self-reflection, as Howard outlined it, can lead to unexpected rewards for learners and educators alike.

Sources

Bourdieu, P. (1978). Cultural reproduction and social reproduction.  In R. Brown (Ed.), Knowledge, Education and Cultural Changes (56-69).  London: Harper & Row.

Ennis, C.D. (1999). Creating a culturally relevant curriculum for disengaged girls. Sport, Education and Society, 4, 31-49.

Howard, T.C. (2003). Culturally relevant pedagogy: Ingredients for critical teacher reflection. Theory into Practice, 42(3), 195-202.