Phishing exercises?

Wednesday, August 12, 2015 - 14:52

Recently I had a thought-provoking discussion on Twitter (thanks to my guides) on the practice of setting your users phishing tests: sending them e-mails that tempt them to do unsafe things with their passwords, then providing feedback. I've always been deeply ambivalent about this. Identifying phishing messages is hard (see how you do on OpenDNS's quiz), and creating "teachable moments" may well be a good way to help us all learn. But if what we learn is "can't trust IT, they’re out to trick us" or "this looks like a phishing mail, but it's probably only IT running another test" then it will have gone horribly wrong.

It seems to me that the difference between success and failure is going to be less about technology and much more about how the organisation treats the exercise. Whether you want to host a programme in house or use a commercial service, there are plenty of technology options available. So here are some very tentative thoughts on how we might make success more likely. I'd love to hear if anyone has tried these and whether or not they worked.

Fundamentally, the word "test" worries me. We all get plenty of phishing tests in our inboxes already. And some of us who are caught out by those will then report ourselves to the helpdesk. If we're running an internal exercise, we ought to be doing something different: first motivating users to look out for phish, and second improving our ability to accurately distinguish phish from genuine e-mails. Shaming (either privately or publicly) those who fall for frauds doesn't seem a great way to do either of those. Clearly they need to have training materials brought to their attention, but that can be done within the computerised part of the system ("you clicked on a phishing link, here's how not to fall for it next time…"). So I wonder whether the organisation actually needs to know the identities of those who clicked at all? Statistics might well be useful, not least to see whether the organisation overall is reducing its risks, but might users view the exercise less negatively if we promise that that's all we'll collect? That does mean we can't use the exercise results to target those who just can't help clicking, but we can probably find them already in our helpdesk or system logs.
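As a very rough illustration of that idea, here is a sketch of what "statistics only" collection might look like: clicks and reports are counted per campaign, never per user, and the training material is returned to the clicker automatically by the system. All of the names here (`record_click`, `record_report`, `campaign_stats`, `click_rate`) are hypothetical, not any real phishing-exercise product's API.

```python
from collections import Counter

# Hypothetical sketch: record phishing-exercise outcomes as aggregate
# counts only, never tying a click to a user identity.

campaign_stats = Counter()  # keys like (campaign_id, "clicked")

TRAINING_MESSAGE = (
    "You clicked on a simulated phishing link. "
    "Here's how not to fall for it next time…"
)

def record_click(campaign_id: str) -> str:
    """Count the click against the campaign, not the user,
    and return the training feedback to show them."""
    campaign_stats[(campaign_id, "clicked")] += 1
    return TRAINING_MESSAGE

def record_report(campaign_id: str) -> None:
    """Count a correctly reported phish - the behaviour we
    want to encourage."""
    campaign_stats[(campaign_id, "reported")] += 1

def click_rate(campaign_id: str) -> float:
    """Organisation-wide statistic: is the overall risk falling
    from one exercise to the next?"""
    clicked = campaign_stats[(campaign_id, "clicked")]
    reported = campaign_stats[(campaign_id, "reported")]
    total = clicked + reported
    return clicked / total if total else 0.0
```

Comparing `click_rate` across successive campaigns would give the organisation-level trend mentioned above, without ever storing who clicked.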

On the other hand, we do want to recognise the individuals who can quickly and accurately spot and report phishing e-mails, helping to keep both themselves and others safer online. That behaviour is well worth rewarding, whether the phish they report are real ones or part of the exercise. Rewards – whether traditional chocolate or twenty-first century "gamification" – feel like a promising area to investigate. And if those rewards are public, then we need to support their recipients too. If we get the exercise right, then colleagues will be asking them "so how do you tell the difference?". If that happens, then the exercises really have been a success, and maybe we won't need to run them any more!