Inconclusive research = lame duck

Eynon, R., & Helsper, E. (2011). Adults learning online: Digital choice and/or digital exclusion? New Media & Society, 13(4), 534–551.

As I continue down the breadcrumb path of research in my general area of inquiry, I’ve decided to open up my research to encompass articles that touch on a range of topics:
adult online learning,
effective practices for online learning,
assessing online learning,
unfacilitated online learning,
facilitated online learning,
adult communities engaged/disengaged in online learning,
various theories and frameworks that relate to online learning,
and what I’m calling the catch-all connection to online learning.

If I cast a wide enough net, I’m sure to land on a guiding research question that makes the most sense for my work, and in the meantime I can build up a great arsenal of general knowledge about what research has been done in online learning and how it was orchestrated.

This particular article falls under the umbrella of “adult communities engaged/disengaged in online learning.” The study took place in England, Scotland, and Wales and aimed to investigate which adults were engaging in online learning activities. These activities were organized into twelve general areas, including fact checking, travel, shopping, entertainment, social networking, e-government, civic participation, informal learning, and formal learning/training. The researchers wanted to delve further into the characteristics these adults had in common and the factors that influenced them to engage or disengage with different types of online learning. They focused on a central divide in the population and wanted to distinguish between voluntary or choice reasons and involuntary or “digital exclusion” reasons (536).

The researchers’ methodology used the 2007 Oxford Internet Surveys (OxIS) with a sample size of 2,350 people. These individuals were chosen by first randomly selecting 175 regions in Britain and then randomly selecting ten addresses within each region. Participants were 14 years of age and older, and the OxIS was administered to all of them face-to-face.

As the researchers consolidated the results and captured their findings, they divided the disengaged respondents into two groups: ex-users and non-users. There were four key reasons why these individuals disengaged from using the internet: costs, interest in using the internet, skills, and access. It appeared that ex-users had made the choice to disengage because they were no longer interested in the internet, while non-users highlighted factors of digital exclusion like access and costs.

When it came to investigating why users were disengaging from the internet for learning opportunities, the users’ characteristics did tend to trend, but the picture was complex, as it often depended on the type of online learning activity. Users who were highly educated, had children over the age of 10, and had high levels of internet self-efficacy were found to be more likely to engage in formal learning activities via the internet. An underlying element that was important for informal learning was having access to the internet at home. Also, upon analyzing the data set and its trends, the researchers began to see pockets of individuals who were “unexpectedly included” or “unexpectedly excluded” (542).

They conclude the research by stating that this investigation into users’ engagement and disengagement with the internet and online learning is important because it demonstrates that the more information organizations or educational institutions have about a user, the more likely they are to be able to provide tailored, differentiated user support that increases the amount of learning activity that takes place.

If I turn my critical eye on this article, I unfortunately find more ways to improve it than strengths. One of its greatest strengths is that the content is organized clearly, and it did a thorough job of contextualizing the importance of finding answers to the research questions.

An immediate area for improvement is the literature review and theoretical framework supporting the research. It truly appears that more time was spent explaining the need for the research than on the content and theory that form the foundation of the work.

The data collection process in general appeared to be well thought out, with good randomization of quite a large sample. Unfortunately, a few elements of the methodology are missing. Why was the OxIS survey tool used, and how was it the right tool for this research? What questions are on the OxIS? Also, there was no explanation of how the surveys were conducted beyond “face-to-face” (537). Were responses recorded by hand by the participant or by the researcher? How many researchers were involved, and was there any training needed to maintain consistency across the team? What protocol was used during the survey?

The analysis of the data and the findings was good and very detailed, but it almost seemed like the research didn’t find much. And what was found mostly supports what would already seem sensible. As for the discussion and conclusion, it reads as though the conclusion was simply that this survey could be done better in the future, and that next time around they’ll also gather qualitative data. If that’s your conclusion, did the study find out anything important at all? And if the takeaway is that there should be more studies in the future, that’s not much of a conclusion. Since it lacked conclusiveness, it makes sense that the researchers weren’t able to offer suggestions for how educational organizations could provide tailored or individualized supports. It also seemed like there was no clear way to distinguish between factors of choice and those of digital exclusion.

Personally, I think the researchers have a lot of room for development when it comes to building on this research. I agree with them that a strong next step would be to combine the survey with an interview to capture qualitative data. It would also be worthwhile to partner with one of the informal or formal institutions they identified, survey/interview learners before and after their online learning experience, and capture the actions the organization took to onboard learners, orient them to the technology and learning scope, and support their ongoing learning. This data, in concert with the other data collected, could help paint a clearer picture of who these online learners are, what needs they have, and whether the organization is fulfilling those needs.

I think this second round of research could have great potential for providing more access to equitable educational opportunities. If these researchers could really home in on the factors that exclude learners from online opportunities, or even on the actions that unexpectedly help include individuals, then that information could be used directly by organizations to help these learners access learning opportunities that otherwise would not “exist” for them.

This article continues to paint the picture of how much support, thought, and detail should go into your writing and research. I learned something important tonight: if you don’t have anything conclusive in your conclusions, then something went wrong!