Learning from Software Vulnerabilities

Friday, April 1, 2016 - 16:25

The slides from our Networkshop session on Learning from Software Vulnerabilities are now available. All three talks showed how managing the process of finding, reporting and fixing vulnerabilities can improve the quality of software and the security of our systems.

Graham Rymer and Jon Warbrick presented a case study of discovering and fixing a bug in the university's authentication system. Although the system is robust in the areas that normally cause problems, examination of the source code identified two assumptions that might not always hold. In combination, these could have been used to persuade a server to accept authentication messages with forged signatures. A detailed report of the problem allowed a fix to be developed and deployed on university services in less than a week; assistance was also provided to other software developers known to use derived code. This quick response was possible thanks both to the quality of the investigation and report, and to the reporter knowing who to contact.
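The talk's specific assumptions aren't detailed here, but the general class of problem can be sketched. In the following illustrative Python example (the protocol, key and function names are all invented for the purpose), a verifier makes two individually plausible assumptions: that the algorithm named in a message is one it supports, and that unsigned "legacy" traffic is harmless. Either alone looks defensible; together they let a forger through. Pinning the algorithm server-side and comparing signatures in constant time closes the gap:

```python
import hashlib
import hmac

SECRET = b"server-secret"  # hypothetical shared signing key


def sign(message: str) -> str:
    return hmac.new(SECRET, message.encode(), hashlib.sha256).hexdigest()


def verify_naive(message: str, alg: str, signature: str) -> bool:
    # Assumption 1: the algorithm named in the message is one we support.
    # Assumption 2: an empty signature only occurs on "legacy" traffic,
    # which we allow. Combined, a forger simply sends alg="none" with no
    # signature at all and is accepted.
    if alg == "none":
        return signature == ""
    return hmac.new(SECRET, message.encode(), hashlib.sha256).hexdigest() == signature


def verify_strict(message: str, alg: str, signature: str) -> bool:
    # Pin the accepted algorithm server-side and compare in constant time.
    if alg != "hmac-sha256":
        return False
    return hmac.compare_digest(sign(message), signature)
```

The naive verifier accepts `verify_naive("transfer funds", "none", "")`; the strict one rejects it while still accepting genuinely signed messages.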

Giles Howard examined how an organisation's policies and practices might help the discovery and fixing of vulnerabilities. How, for example, would reporting a vulnerability be handled under a university's disciplinary or whistleblowing policies? Commercial software and systems providers increasingly offer guidance and incentives to research and report vulnerabilities constructively: for example, making test systems and accounts available, or publishing the process that will be followed when bugs are reported through official channels. Such measures could address reasonable management concerns, such as unmanaged testing of critical systems, or the need to report some bugs to software vendors and depend on their response. This suggests an opportunity for universities to create a third layer of testing, between commissioned professional tests of critical systems and the random external "testing" to which any Internet-connected device is exposed. By recruiting interested students and guiding them towards systems and times where managed testing is an acceptable risk, the organisation could both improve the security of those systems – for which professional penetration testing may be unacceptably expensive – and give the students valuable practical and ethical experience.

Finally, Richard Fuller described a project to review more than 17,000 scripts present on the university's web servers. No automated testing was available, but it turned out to be relatively simple to train students and staff to spot the main types of vulnerability (as identified by OWASP) in ColdFusion code. Students and staff were recruited and trained, and one day a week was declared "Code Review Day" to develop a sense of teamwork. Around 80% of applications had at least one vulnerability but, thanks to other security measures, the vast majority were protected against exploitation. Those responsible were contacted and offered support, including an internal training course featuring both theory and practical exercises. As awareness spread within the university, other developers started asking for help and training as well. Deadlines were needed, however, to ensure the necessary changes were actually made: "fix the code within two weeks or it will be disabled" turned out to be an effective way to identify scripts that were no longer used, and produced far fewer complaints or consequences than the team had feared. As well as developer awareness and training, the university identified code review as a useful measure to reduce problems in future. Review has the incidental benefit of ensuring that at least two people know what a piece of code does and why, so it should also result in fewer "orphan" scripts.
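The project concerned ColdFusion, but the pattern reviewers learn to spot first is language-agnostic. As a minimal sketch in Python with SQLite (the `users` table is invented for illustration), here is the most common OWASP finding, SQL injection, alongside its parameterised fix:

```python
import sqlite3


def find_user_unsafe(conn, name):
    # The pattern code reviewers learn to flag: user input concatenated
    # straight into the query string, allowing SQL injection.
    query = "SELECT id FROM users WHERE name = '" + name + "'"
    return conn.execute(query).fetchall()


def find_user_safe(conn, name):
    # The fix: a parameterised query; the driver handles quoting.
    return conn.execute("SELECT id FROM users WHERE name = ?", (name,)).fetchall()


# Demonstration against an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

payload = "x' OR '1'='1"
print(find_user_unsafe(conn, payload))  # injected OR clause matches every row
print(find_user_safe(conn, payload))    # no user is literally named like the payload
```

The unsafe version returns every user for the injected payload; the parameterised version returns nothing, while still finding real users by name.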

The Dutch National Cyber Security Centre has published guidelines and a best practice guide for managed vulnerability disclosure. These contain many ideas for policies and incentives to develop vulnerability discovery as a tool for improving security. I'd be interested to hear from anyone using these, or any other approaches to vulnerability management, in an educational institution.