In a workshop at last week's AMOSSHE conference, we discussed how wellbeing analytics might assist existing Student Support services.
Earlier this week I gave a presentation to a group from Dutch universities on the ethics work that Jisc has done alongside its studies, pilots and services on the use of data.
With the GDPR having now been in force for more than six months, my talk at this week's EUNIS workshop looked at some of the less familiar corners of the GDPR map. In particular, since EUNIS provided an international audience, I was looking for common, or at least compatible, approaches across the international endeavours of education and research.
Topics covered: What is a University? Network and Information Security; Research; Learning Analytics; Intelligent Campus; and Wellbeing.
An interesting observation from a Dutch colleague earlier this week: the arrows in my standard model of learning analytics (here rearranged and recoloured to match the "swimlane" visualisation of the learning process) all mark "gatekeeper" points where information flow is filtered and reduced.
Recently I've been presenting our suggested legal framework for learning analytics to audiences involved in teaching, rather than to lawyers. For that I've been trying out a different visualisation, which considers the teaching process as involving three layers:
In thinking about the legal arrangements for Jisc's learning analytics services we consciously postponed incorporating medical and other information that Article 9(1) of the General Data Protection Regulation (GDPR) classifies as Special Category Data (SCD): "personal data revealing racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, and the processing of genetic data, biometric data for the purpose of uniquely identifying a natural person, data concerning health or data concerning a natural person's sex life or sexual orientation".
Reflecting on the scope chosen by Blackboard for our working group - "Ethical use of AI in Education" - it's worth considering what, if anything, makes education different as a venue for artificial intelligence. Education is, I think, different from commercial businesses because our measure of success should be what pupils/students achieve. Educational institutions should have the same goal as those they teach, unlike commercial settings where success is often a zero-sum game.
Last week I was invited to a fascinating discussion on ethical use of artificial intelligence in higher education, hosted by Blackboard. Obviously that's a huge topic, so I've been trying to come up with a way to divide it into smaller ones without too many overlaps. So far, it seems a division into three may be possible:
One of the concerns commonly raised about artificial intelligence is that it may not be clear how a system reached its conclusion from the input data. The same could well be said of human decision-makers: AI at least lets us choose an approach based on the kind of explainability we want. Discussions at last week's Ethical AI in HE meeting revealed several different options:
One of my guidelines for when consent may be an appropriate basis for processing personal data is whether the individual is able to lie or walk away. If they can, then that practical possibility may indicate a legal possibility too.
