Explaining AI algorithms

Wednesday, January 17, 2018 - 15:54

One of the concerns commonly raised about Artificial Intelligence is that it may not be clear how a system reached its conclusion from the input data. The same could well be said of human decision makers: AI at least lets us choose an approach based on the kind of explainability we want. Discussions at last week's Ethical AI in HE meeting revealed several different options:

  • When we are making decisions such as awarding bursaries to students, regulators may well want to know in advance that those decisions will always be made fairly, based on the data available. This kind of ex ante explainability seems likely to be the most demanding, probably restricting the choice of algorithm to those that use known, human-meaningful parameters to convert inputs to outputs;
  • Conversely, for decisions such as which course to recommend to a student, the focus is likely to be explaining to the individual affected which characteristics led to that decision. Here it may be possible to use more complex models, so long as some sort of retrospective sensitivity analysis can be performed (for example using the LIME approach) to discover which characteristics of the particular individual carried most weight in the recommendation they received;
  • A variant of the previous type occurs where a student's future performance has been predicted and they, and their teachers, want to know how to improve it. This is likely to require combining information from the algorithm with human knowledge about the individual and their progress;
  • Finally, there are algorithms – for example deciding which applicants are shown social media adverts – where the only test is whether the algorithm delivers the planned results, and we don't care how it achieves that.
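
The retrospective sensitivity analysis in the second option can be sketched in a few lines. The sketch below is illustrative only: the recommender model, the feature names and all the numbers are hypothetical, and a real deployment would use an established implementation such as the LIME library rather than this toy. The idea is to perturb one student's features, query the opaque model, and fit a local linear surrogate whose coefficients indicate which characteristics carried most weight for that individual:

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box_model(X):
    """Stand-in for an opaque recommender: a logistic score per row.

    Note it ignores the third feature (forum posts) entirely, so a good
    local explanation should assign that feature near-zero weight."""
    z = 0.05 * X[:, 0] + 0.03 * X[:, 1] - 3.0
    return 1.0 / (1.0 + np.exp(-z))

def explain_locally(x, n_samples=500, scale=5.0):
    """Fit a linear surrogate around x; return per-feature local weights."""
    X = x + rng.normal(0.0, scale, size=(n_samples, x.size))  # perturb
    y = black_box_model(X)                                    # query model
    A = np.column_stack([np.ones(n_samples), X])              # intercept + features
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)              # least squares
    return coef[1:]

# One hypothetical student: attendance %, assignment average, forum posts.
student = np.array([70.0, 60.0, 12.0])
weights = explain_locally(student)
for name, w in zip(["attendance", "assignment_avg", "forum_posts"], weights):
    print(f"{name}: {w:+.4f}")
```

Here attendance comes out with the largest local weight and forum posts with almost none, which is the kind of per-individual account the second bullet describes.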

Explainability won't be the only factor in our choice of algorithms: speed and accuracy are other obvious considerations. But it may well carry some weight in deciding the most appropriate techniques for particular applications.

Finally, it's interesting to compare these requirements of the educational context with the "right to explanation" contained in the General Data Protection Regulation and discussed on page 14 of the Article 29 Working Party's draft Guidance. It seems that education's requirements for explainability may be significantly wider and more complex.

Comments

Related to this, I've just found a fascinating paper investigating how people like algorithms to be explained. It's well worth reading the whole thing (a preprint is available open access), as there are some variations between the scenarios tested. But it seems that telling the individual what needs to change, and by how much, to alter the outcome is often helpful. Explaining that the algorithm's statistics are sound doesn't seem to be.
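
That "what needs to change, and by how much" style of explanation can be sketched as a simple search over a single feature. Everything below is hypothetical – the pass/fail scorer, the features and the numbers are made up for illustration, and real counterfactual explanations would search over combinations of features with realistic constraints:

```python
import math

def score(x):
    """Hypothetical pass/fail predictor from attendance % and assignment average."""
    return 1.0 / (1.0 + math.exp(-(0.05 * x[0] + 0.03 * x[1] - 4.93)))

def minimal_change(score_fn, x, feature, threshold=0.5, step=1.0, limit=100):
    """Smallest increase in one feature that lifts the score past the threshold."""
    candidate = list(x)
    for _ in range(limit + 1):
        if score_fn(candidate) >= threshold:
            return candidate[feature] - x[feature]
        candidate[feature] += step
    return None  # no counterfactual found within the search limit

# A hypothetical student currently predicted to fail: attendance 60%, average 55.
delta = minimal_change(score, [60.0, 55.0], feature=0)
print(f"Raising attendance by {delta:.0f} percentage points would flip the prediction")
```

The answer ("raise attendance by 6 percentage points") is actionable in exactly the way the paper's participants found helpful, where a statistical justification of the model would not be.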

Reuben Binns, Max Van Kleek, Michael Veale, Ulrik Lyngs, Jun Zhao and Nigel Shadbolt (2018) ''It's Reducing a Human Being to a Percentage': Perceptions of Justice in Algorithmic Decisions'. ACM Conference on Human Factors in Computing Systems (CHI '18), April 21–26, Montreal, Canada. doi: 10.1145/3173574.3173951