
Wellbeing analytics: legal explorations

Friday, May 3, 2019 - 10:16
Wellbeing analytics falls into a legal gap

While colleagues are looking at whether data can be used to pick up early signs of mental health and wellbeing problems, I'm exploring possible legal frameworks for doing that safely. As the diagram shows, trying to deliver an early warning service to all students falls into a gap between three reasonably familiar areas of data protection law:

  • We'd like a framework that could include services that cover all students, not just those who consent to their personal data being processed for this purpose;
  • We want to act (or alert the students) early, well before doing so is necessary to protect their vital interests; and
  • At least at present, such a service is considerably more than necessary for universities' legal duty of care under common law.

In trying to find a legal basis that fits the space between these three, the fact that such a service may well be inferring health data – i.e. special category data – from non-special-category activity data is actually helpful. Whereas Article 6 offers just six broad legal bases for processing personal data, GDPR Article 9 sets out ten conditions for special category data, and Schedule 1 of the Data Protection Act 2018 (DPA) more than twenty-five. These are, of course, much more narrowly defined. But if our processing fits one of these narrow definitions, then we can work back to the appropriate Article 6 basis, and identify at least a minimum set of safeguards for the processing.

Going through the DPA Schedule – a process memorably described by one legal scholar as "like wading through treacle-coated spaghetti" – the best fit appears to be paragraph 17 in Part 2 – that the processing is "necessary for the provision of confidential counselling, advice or support or of another similar service provided confidentially". A further promising sign is the restriction in 17(2)(c) that that paragraph can only be used where "obtaining the consent of the data subject would prejudice the provision of the service". As above, that does indeed describe our situation. So, as discussed in an earlier post, we've chosen this particular area of spaghetti as the likely basis for a Code of Practice. Comparing with our legal framework for Learning Analytics, paragraph 17 and its associated Article 6 justification (which appears to be Public Interest) would cover the orange Collection and Analysis (of Wellbeing) stages.

Since this is a new area for holding data-informed conversations, it's particularly important to test and validate the results – are we using the right data sources? Are we extracting the right signals from them? Are we informing students (and others) of those signals in constructive ways? But it's debatable, in strict legal terms, whether that testing is actually "necessary" to deliver the counselling service. Since validation, in particular, may well require processing additional data – for example about historic outcomes for those who both were and were not flagged by the system – it may in any case be preferable to do it under a different legal basis, with a strong Chinese wall between the two activities. This should reduce the risk of leakage in both directions: for example, of testers becoming aware of individual identities, or of validation data being incorporated into the live early warning system. Rather than stretching Schedule 1 paragraph 17, these requirements are a much better match for the rules on handling data for "scientific or historical research or statistical purposes" in GDPR Article 89 and DPA section 19, so we are likely to use those as the basis for this part of the Code of Practice.
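One way to give that Chinese wall some technical teeth – a minimal sketch only, not anything a Code of Practice would prescribe, and with all names and fields invented for illustration – is to pseudonymise identifiers before records cross to the validation side, keeping the key with the live service so the validation team cannot work back to individual students:

```python
import hmac
import hashlib

# Hypothetical secret held only by the live early-warning service;
# the validation team never sees it, so pseudonyms cannot be reversed
# without the service's cooperation.
PSEUDONYM_KEY = b"replace-with-a-securely-stored-secret"

def pseudonymise(student_id: str) -> str:
    """Derive a stable pseudonym from a student ID using keyed HMAC-SHA256."""
    return hmac.new(PSEUDONYM_KEY, student_id.encode(), hashlib.sha256).hexdigest()

def validation_extract(records: list[dict]) -> list[dict]:
    """Strip direct identifiers, keeping only the fields the validation
    exercise needs (whether a student was flagged, and the later outcome)."""
    return [
        {
            "pseudonym": pseudonymise(r["student_id"]),
            "flagged": r["flagged"],
            "outcome": r["outcome"],
        }
        for r in records
    ]
```

Because the pseudonym is stable, validators can still link a student's flag to their later outcome across extracts; because it is keyed, a leaked extract on its own does not identify anyone.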

If there are any other explorers of this area of law out there, I'd love to compare maps. And many thanks to the Jiscmail subscriber who pointed me at paragraph 17 when I was stuck in a GDPR dead-end.