Exploring Digital Sanctuary

Last month, I gave a talk at the University of Edinburgh on critical digital pedagogy in troubled political times. While the main purpose of the talk was to give my UK colleagues a sense of the looming impact of the current administration on higher education, I also wanted to explore the ways in which critical (digital) educators are re-orienting their work in response to these serious threats.

I shared an emerging idea that builds on the sanctuary city/campus movement: critical digital educators need to be talking about what a digital sanctuary campus might look like. Digital tools have been celebrated for bringing efficiency gains and improved learning outcomes to our campuses (claims that can and should be contested), but their presence deserves additional scrutiny in an age when so many of our students are at risk of deportation, of brutality, of harassment. A digital sanctuary initiative questions the role our technological systems play in students' safety and looks for ways to minimize the risks those technological encounters pose to students. I mentioned the University of Edinburgh's student data policy as an example of an institution taking student data privacy seriously.

I also shared some additional proposals for what a digital sanctuary approach might look like:

1) Do an audit of student data repositories and the policies associated with third-party providers. Make sure you are aware of, and have documented, every "place" student data goes and what each provider's policies are for handling your students' data. (A rough sketch of what such an inventory might look like follows this list.)

2) Have a standard, well-known policy for handling external inquiries for student data and information. I'm less worried about rogue university staff handing out student data in ways that put undocumented students at risk than about coercion and intimidation that, without clear guidelines for people to follow, could yield problematic results. This doesn't necessarily mean that people can choose whether or not to release specific data from a sanctuary campus (we may be legally bound to do so), but that there is a defined process for giving data to law enforcement and immigration enforcement, including what data can and should be given. People should understand how and when they can say no to inquiries about students, and campuses should investigate the legal limits of non-compliance with such inquiries.

3) Provide an audit of data to any individual who wants to know what data are kept on them, how and where they are kept, and who else has access. That is, if a student asks what data we keep, how we keep them, and who can see them, we should be able to give them that information.

4) Have clear guidelines and regulations for how data are communicated and transmitted between offices. How can we better protect student data as we transmit it between people who should have access (e.g., perhaps not via email)? We should have clear policies and guidelines for protecting student data on devices. I was really proud of Stanford's School of Medicine when it undertook a major initiative to protect devices used by faculty, staff, and students, via measures like two-factor authentication and encryption, to better protect student and patient data. We need more of that kind of thinking.

5) Take seriously the data policies of your third-party vendors. Don't work with vendors whose contracts stipulate that they can use and share your students' data without your consent or your students' consent (ahem, I'd call out Turnitin here, but we all know they're just a flagrant example of terrible practices across educational technology). Common Sense Media has some good resources on this: https://privacy.commonsense.org/

6) Closely examine and rethink student tracking protocols. How necessary are the dashboards we use? How problematic are our “acceptable use” policies? How long do we need to keep data? Do we really need all of the data we’re collecting?
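
To make the first and sixth proposals more concrete, here is a minimal sketch, in Python, of what a machine-readable inventory of student data flows might look like, with a simple check that flags third-party sharing and stale policy reviews. Every system name, field, and threshold below is hypothetical, invented for illustration; a spreadsheet would serve just as well. The point is that every "place" student data goes is documented in one auditable location.

    from dataclasses import dataclass
    from datetime import date, timedelta

    @dataclass
    class DataFlow:
        """One "place" student data goes: a campus system or third-party vendor."""
        system: str                      # hypothetical system name
        data_collected: list             # what student data it holds
        shares_with_third_parties: bool  # does its contract allow external sharing?
        retention_days: int              # how long we actually need the data
        last_reviewed: date              # when its policies were last audited

    # Hypothetical inventory entries; a real audit would build these from
    # procurement records and vendor contracts.
    inventory = [
        DataFlow("learning_management_system", ["logins", "page views", "grades"],
                 shares_with_third_parties=False, retention_days=365,
                 last_reviewed=date(2017, 1, 15)),
        DataFlow("plagiarism_detection_vendor", ["submitted essays", "names"],
                 shares_with_third_parties=True, retention_days=90,
                 last_reviewed=date(2016, 6, 1)),
    ]

    def flag_risks(flows, review_window_days=365):
        """Surface flows that share data externally or have stale policy reviews."""
        stale_cutoff = date.today() - timedelta(days=review_window_days)
        for flow in flows:
            if flow.shares_with_third_parties:
                print(f"{flow.system}: shares student data with third parties")
            if flow.last_reviewed < stale_cutoff:
                print(f"{flow.system}: policies not reviewed since {flow.last_reviewed}")

    flag_risks(inventory)

Run against these invented entries, flag_risks would call out the plagiarism-detection vendor for third-party sharing, and (relative to a present-day run) both systems for overdue policy reviews.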

It may be straightforward to think about these questions and opportunities when protecting undocumented students. I wonder if we're also thinking about them in terms of other students who experience greater risk at our institutions. LGBTQ+ students. Students of color. Students facing poverty and economic instability. We are sold learning analytics (dashboards, percentages, key indicators) at every turn in educational technology. What if we decided that the risk those tools pose to students was greater than the benefits?

In a roundtable discussion immediately following my talk, colleagues from the University of Edinburgh shared their thoughts and questions about the idea of a digital sanctuary initiative. I am sharing some of their thoughts and responses here, anonymized because that's how my hastily typed notes turned out, to prompt further discussion.

On how much data do we really need, and how long should we keep it:

  • We have a hoarder mentality about data, and we need to think seriously about what data we actually need. How can we still learn from the past (permanence vs. ephemerality) through students' data without putting students unnecessarily at risk?
  • Can we develop processes that collect the smallest amount of data needed on students?
  • If some data are being preserved for archiving purposes, can we create an "extreme embargo" around certain kinds of data? (A toy sketch of this idea follows the list.)
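
As a toy illustration of the "extreme embargo" idea in the last bullet above: archived records could carry an embargo date, and the access layer would simply refuse to release them before that date. All names and dates here are invented; this is a thought experiment, not a description of any existing system.

    from datetime import date

    def can_release(record, today=None):
        """Refuse access to an archived record until its embargo date has passed."""
        today = today or date.today()
        embargo = record.get("embargo_until")
        return embargo is None or today >= embargo

    # A hypothetical archived record sealed for decades.
    archived = {"kind": "advising notes", "embargo_until": date(2092, 1, 1)}
    assert not can_release(archived)  # effectively inaccessible for generations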

On using data literacy as a teaching opportunity: 

  • One question educators should be asking is, "How do we teach students about their data?" We (at the University of Edinburgh) do a data walkthrough with students, showing them what data we collect on them through the learning management system and the ways their data are presented in dashboards. We show students their data body.
  • Could we build an inventory of all of the digital tools that collect data and then surface that information to students as part of the curriculum? (A sketch of this idea follows the list.)
  • What if what we teach students about analytics/data reflects poorly on our institutions? What are the ethics of our profession around student data we collect and the teaching of data literacy?
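
Building on the hypothetical inventory sketched after the proposals above, the idea of surfacing data collection to students might look something like this: the same audit records, rendered as a plain-language summary that could anchor a data-literacy lesson or a "data walkthrough." Again, everything here is illustrative and assumes the DataFlow records from the earlier sketch.

    def student_data_summary(flows):
        """Render the audit inventory (from the earlier sketch) as a
        student-facing summary: what is collected, how long it is kept,
        and who else sees it."""
        lines = []
        for flow in flows:
            shared = ("shared with third parties" if flow.shares_with_third_parties
                      else "not shared outside the institution")
            lines.append(f"- {flow.system} collects {', '.join(flow.data_collected)}; "
                         f"kept {flow.retention_days} days; {shared}.")
        return "\n".join(lines)

    print(student_data_summary(inventory))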

On resisting governmental/third-party policies and regulations about student data:

  • What does refusal to record certain kinds of data about students look like?
  • How can we build better processes (e.g., procurement, security screening) that help our institutions avoid working with third-party vendors who put our students' data at risk?

On our students at risk:

  • Besides undocumented students, what are other communities at risk? What are future communities at risk?
  • What lessons can we take away from the tactics communities and people at risk have used in the past to avoid or resist surveillance?

I hope this is the start of a conversation, a much-needed conversation at Middlebury and in higher education. Please share your thoughts and ideas via the Comments section below. I’ll share our progress on these discussions in the near future.

Amy Collier
