Reading the LMS against the Backdrop of Critical Pedagogy, Part One

“Education, which must never be neutral, can be at the service either of decision, of world transformation and of critical insertion within the world, or of immobility and the possible permanence of unjust structures, of human beings’ settling for a reality seen as untouchable.”
~ Paulo Freire

What if we were to theorize that the learning management system (LMS) is designed, not for learning or teaching, but for the gathering of data? And what if we were to further theorize that the gathering of data, as messaged and marketed through the LMS, has become conflated with teaching and learning?

Part of the work that I do as an instructional designer and critical pedagogy agonist is to ask questions of the tools used in digitally mediated education. Because critical pedagogy encourages us to consider carefully the assumptions handed to us, a critical instructional designer begins not by learning the tools before them—most digital tools work essentially the same way, with few exceptions, and are designed to be easily adopted—but by stepping back and looking at those tools from a distance. The critical instructional designer asks questions like:

  • Why was this tool created? What is its primary objective? What are its other objectives?
  • What assumptions about education and learning lie behind the design of this tool? Where do those assumptions come from?
  • How does this tool present what Freire might call a determinant, something that seems to stand beyond our agency to change or resist?
  • How do the objectives and assumptions of this tool measure up against my own? How effectively does it resist my capacity to change or hack it?

There are other considerations as well. How does this tool represent a politics of oppression—the surrender of privacy, data, authorship, authority, agency, as well as issues of representation, equity, access? Who owns the tool, and what are their goals? How is the production of this tool funded? What influence does the maker of this tool have on culture writ large? What labor is rewarded and what labor is erased? What is the relationship between this tool and the administration of the institution? Who must use this tool, who is trained to use it, and is that labor compensated? These are all important questions to ask, and the answers may play a role in the adoption of any given tool in a classroom or learning environment.

But in many cases, and especially with the LMS, adoption comes regardless of consent. In only a minority of situations are faculty and students part of the discussion around the purchase of an LMS for an institution. Where we have no say, we must abide by the use of the LMS; that doesn’t mean, however, that we must acquiesce to its politics or its pedagogy. In order to intervene, then, we must step back and, rather than learn the tool, analyze it.

When we do that with the LMS, we find that its primary operation is the acquisition of data, and the conflation of that data with student performance, engagement, and teaching success. As Beer, Clark, and Jones cheerfully report in their article, “Indicators of Engagement,”

A fortunate effect of the almost ubiquitous adoption of LMS for online course delivery in universities, is their ability to track and store vast amounts of data on student and designer behaviour (Heathcoate & Dawson, 2005). Typically, LMS record all actions made by users once they are logged into the system and this data is subsequently stored in an associated database. The process of analysing institutional data captured by an LMS for decision making and reporting purposes is called academic analytics (Campbell, Oblinger, & DeBlois, 2007) and it has been shown that analysis of captured LMS data is directly relevant to student engagement, evaluating learning activities and can usefully answer other important questions (Shane Dawson & McWilliam, 2008).

And earlier in that same report, the authors write:

It could be said that the online learning environment facilitates the interactions required for learning and therefore have an influence on student engagement. It could also be said that measuring student participation within a learning environment and contrasting this measure with student results can provide an approximation of student engagement.

In other words, usage becomes engagement and engagement gets equated with successful learning and expert teaching. But we cannot let ourselves believe that usage is anything besides usage—and even that assumption is subject to a certain questioning.
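
To ground that claim, here is a minimal sketch in Python (the names and schema are hypothetical, not drawn from any actual LMS’s internals) of what “recording all actions made by users” amounts to: every click becomes a row in a table, and nothing more.

    # A sketch only: the ClickEvent schema, track() helper, and field names
    # are invented for illustration, not taken from any real LMS.
    import time
    from dataclasses import dataclass, field

    @dataclass
    class ClickEvent:
        user_id: str
        course_id: str
        action: str                  # e.g. "page_view", "quiz_submit"
        timestamp: float = field(default_factory=time.time)

    # Stands in for the "associated database" the quotation mentions.
    event_log: list[ClickEvent] = []

    def track(user_id: str, course_id: str, action: str) -> None:
        # What gets captured is the click itself: not why the student
        # clicked, not what she understood, not her circumstances that day.
        event_log.append(ClickEvent(user_id, course_id, action))

    track("jill", "ENG101", "page_view")
    track("jill", "ENG101", "discussion_post")

Everything downstream, engagement metrics included, is arithmetic performed on rows like these.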

But when we assume that data points to behavior, and that behavior points to the means of controlling it, we become authorized to create methods, approaches, and technologies that fulfill that promise. I offer as exhibit A this promotional video for Hero K12, a student monitoring system that gathers data from student behavior in on-ground learning environments (aka, the augmented reality LMS).

I’ve shared this video out on Twitter (with a nod to Audrey Watters, who originally shared it here), and the overall response was one of horror. My network was concerned about this level of monitoring, about the reduction of students to data, about the fact that neither Jill’s home or family situation, nor her access to transportation, nor any other factor outside of her name and grade level is considered by the Hero K12 human management system. For myself, I am most concerned about the inability of students to fully understand and to resist or change the system. While I have no doubt students are capable of breaking the system, or making it work for them, Hero K12 represents a determinant, one which students must adapt to, one which requires a surrender of their agency. They become their data, and while they may find ways to feed certain data into the system, they have no power to resist their own reduction to numbers, patterns, and statistics.

The LMS threatens the same reduction of human complexity to simple data. I say “simple” because even when data is nuanced and complex, it fails to be an accurate representation of a human being. This is not to say data cannot indicate certain behaviors, nor that it is useless, only that it has limitations. But it is not those limitations that are advertised, not those limitations that we’re trained to observe; instead, we are encouraged to see data as descriptive, not just indicative. And when that happens, a surfeit of data erects a barrier between students, teachers, and administrators. But most importantly, and least spoken about, data as a determinant erects a barrier between a student and themselves.

Most LMS data isn’t different from website data: pageviews, time on page, number of posts in a discussion, the number of announcements a teacher sends out in a semester, the number of modules, quizzes, assignments, and files in a course, and so on. And as with website data, pageviews and time on page are equated with engagement. The “hits” on a given course indicate the quality of the teaching taking place, and can be aligned with the number of assignments, quizzes, files, and more in that course to point the way to best practices.

Except they don’t and can’t. Any more than Jill showing up late for school can be equated with the kind of learner she is, or whether she will learn anything from detention mandated upon her by her data. And yet it is the use of data that makes the LMS so potentially destructive, especially as that data is used to punish or correct behavior.
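
To make that thinness concrete, here is a sketch of the kind of “engagement score” an analytics dashboard might compute. The formula and its weights are entirely hypothetical, but any formula of this shape is arithmetic over usage counts, nothing else.

    # Hypothetical scoring function: the weights are invented, and that is
    # the point. Every formula of this shape measures usage, not learning.
    def engagement_score(pageviews: int, seconds_on_page: int,
                         discussion_posts: int) -> float:
        return 1.0 * pageviews + 0.01 * seconds_on_page + 5.0 * discussion_posts

    # Two very different learners produce nearly interchangeable numbers:
    skimmer = engagement_score(pageviews=120, seconds_on_page=600, discussion_posts=0)
    reader = engagement_score(pageviews=12, seconds_on_page=10_000, discussion_posts=5)
    print(skimmer, reader)  # 126.0 137.0

Nothing in either number distinguishes the student who skimmed from the student who read, let alone from the student who learned.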

B. F. Skinner, an innovator of the behavioral psychology on which most positivist approaches to education are founded, wrote that “behavior is shaped and maintained by its consequences”. He believed that, by controlling the environment in which learning happened, learning could be made more efficient, more effective, and that outcomes could be guaranteed. Put simply (too simply, I admit), a belief in Skinner’s approach has led to evidence-based teaching, which uses data to determine the effectiveness of certain pedagogical practices based on whether students achieved the desired outcomes.

This is precisely the same as testing the reactions of a rat in a maze. In Skinner’s own words: “Comparable results have been obtained with pigeons, rats, dogs, monkeys, human children, and psychotic subjects.” Shall we repeat part of that? Rats, monkeys, human children, and psychotic subjects. Collecting data has always been part of the “science” of education, control has always been the end to that effort, and learners have necessarily been equated with rats and pigeons all along.


Sean Morris
