How to Start Learning Analytics?

Friday, January 5, 2018 - 13:24

One of my guidelines for when consent may be an appropriate basis for processing personal data is whether the individual is able to lie or walk away. If they can, that practical possibility may indicate a legal possibility too.

When we are using learning analytics as a production service, to identify when students could benefit from some sort of personalisation of their learning experience, consent is not the right basis for the analysis itself. Those opportunities should be offered to all students who might benefit from them, with the option to refuse once they know exactly what alteration or intervention is being proposed. Hence Jisc's recommended model uses consent only at the point of intervention (and, by the same "can lie" test, where we invite students to provide self-declared input data for our models).

Legally, and morally too, if we are imposing processing on individuals then we must ensure that it does not create unjustified risks for them. That should not be a problem when we know what objective we are aiming at and what information is likely to be relevant to it. However, this creates a chicken-and-egg problem: how do we find out what objectives are possible and what data might help achieve them?

For this sort of exploratory investigation, consent may be a more appropriate option. At this preliminary stage inclusiveness may be less important (though we need to beware of self-selecting inputs producing biased models) and we may indeed be able to offer the option to walk away at any time. Participants who do so must not suffer any detriment. One way to ensure this, and to satisfy the requirement that individuals know the detailed consequences of participation, is to state that the outputs from pilot systems will not be used to make any decisions or to offer any interventions: no consequences, so no detriment. Learning which types of data can inform which types of outputs should be sufficient for the pilot stage; we can then use that knowledge to assess and implement our production algorithms and processes.

These thoughts were explored in my talk at the Jisc Learning Analytics Network meeting in November.