
Department of Computer Science and Technology

Undergraduate

Course pages 2021–22

Software and Security Engineering

Principal lecturer: Prof Ross Anderson
Taken by: Part IA CST
Hours: 11
Suggested hours of supervisions: 3
This course is a prerequisite for: the Part IB Group Project, Cybercrime, Security
Past exam questions

Aims

This course aims to introduce students to software and security engineering, and in particular to the problems of building large systems, safety-critical systems and systems that must withstand attack by capable opponents. Case histories of failure are used to illustrate what can go wrong, while current software and security engineering practice is studied as a guide to how failures can be avoided.

Lectures


1. What is a security policy or a safety case? Definitions and examples; one-way flows for both confidentiality and safety properties; separation of duties. Top-down and bottom-up analysis methods. What architecture can do, versus benefits of decoupling policy from mechanism.
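One-way confidentiality flows are classically captured by the Bell-LaPadula model. A minimal illustrative sketch (not taken from the course notes; the level names are made up) of its two rules:

```python
# Bell-LaPadula-style one-way information flow: confidential data may
# flow up the classification lattice but never down.
LEVELS = {"public": 0, "confidential": 1, "secret": 2}


def can_read(subject_level, object_level):
    # "No read up": a subject may only read objects at or below its level.
    return LEVELS[subject_level] >= LEVELS[object_level]


def can_write(subject_level, object_level):
    # "No write down": a subject may only write objects at or above its
    # level, so secrets cannot leak into lower-classified documents.
    return LEVELS[subject_level] <= LEVELS[object_level]
```

A safety policy can use the dual rules (Biba integrity: no read down, no write up), which is one reason the same one-way-flow machinery serves both confidentiality and safety properties.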

2. Examples of safety and security policies. Safety and security usability; the pyramid of harms. Predicting and mitigating user errors. The prevention of fraud and error in accounting systems; the safety usability of medical devices.

3. Attitudes to risk: expected utility, prospect theory, framing, status quo bias. Authority, conformity and gender; mental models, affordances and defaults. The characteristics of human memory; forgetting passwords versus guessing them.
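The contrast between expected utility and prospect theory can be made concrete with the Kahneman-Tversky value function. A small sketch, using their commonly cited parameter estimates (α ≈ 0.88, λ ≈ 2.25):

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of an outcome x relative to a reference point.

    Gains are concave (diminishing sensitivity); losses are convex and
    scaled by lam > 1, so losses loom larger than equal gains.
    """
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** alpha)


# Expected utility treats a £100 gain and a £100 loss symmetrically;
# prospect theory does not: the loss feels more than twice as bad.
gain = pt_value(100)
loss = pt_value(-100)
```

Here `abs(loss)` is about 2.25 times `gain`, which is why framing the same choice as a loss rather than a forgone gain changes behaviour.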

4. Security protocols; how to enforce policy using structured human interaction, cryptography or both. Middleperson attacks. The role of verification and its limitations.
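A simple example of a cryptographic protocol is challenge-response authentication. The sketch below (illustrative only; the key and function names are invented) uses Python's standard `hmac` and `secrets` modules; the fresh random nonce is what defeats straightforward replay:

```python
import hashlib
import hmac
import secrets

KEY = b"shared-secret-key"  # assumed pre-shared between prover and verifier


def make_challenge():
    # The verifier sends a fresh random nonce, so a recorded response
    # to an earlier challenge is useless to a replaying attacker.
    return secrets.token_bytes(16)


def respond(key, challenge):
    # The prover demonstrates knowledge of the key without sending it.
    return hmac.new(key, challenge, hashlib.sha256).digest()


def verify(key, challenge, response):
    # Constant-time comparison avoids a timing side channel.
    return hmac.compare_digest(respond(key, challenge), response)


challenge = make_challenge()
response = respond(KEY, challenge)
```

Note what this sketch does not protect against: a middleperson who relays challenges and responses in real time still succeeds, which is why protocol analysis must consider the whole run, not just the cryptography.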

5. Attacks on TLS, from rogue CAs through side channels to Heartbleed. Other types of software bugs: syntactic, timing, concurrency, code injection, buffer overflows. Defensive programming: secure coding, contracts. Fuzzing.
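Code injection and its defensive-programming remedy can be shown in a few lines. A minimal sketch using Python's stdlib `sqlite3` (the table and payload are invented for illustration): concatenating attacker-controlled input into a query lets it rewrite the SQL, while a parameterized query treats the same input as plain data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, secret TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "' OR '1'='1"  # classic SQL-injection payload

# Unsafe: the payload is spliced into the query text, so the WHERE
# clause becomes: name = '' OR '1'='1'  -- true for every row.
rows_unsafe = conn.execute(
    "SELECT secret FROM users WHERE name = '" + payload + "'"
).fetchall()

# Defensive: a parameterized query binds the payload as a value,
# so it can never change the structure of the SQL statement.
rows_safe = conn.execute(
    "SELECT secret FROM users WHERE name = ?", (payload,)
).fetchall()
```

The unsafe query leaks every secret; the parameterized one matches no user. The same principle - keep code and data separate - underlies defences against buffer overflows and other injection bugs.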

6. The software crisis. Examples of large-scale project failure, such as the London Ambulance Service system and the NHS National Programme for IT. Intrinsic difficulties with complex software.

7. Software engineering as the management of complexity. The software life cycle; requirements analysis methods; modular design; the role of prototyping; the waterfall, spiral and agile models.

8. The economics of Software as a Service (SaaS) and its impact on software engineering. Continuous integration, release engineering, behavioural analytics and experiment frameworks; rearchitecting systems while in operation.

9. Critical systems: safety as an emergent system property. Examples of catastrophic failure, from the Therac-25 to the Boeing 737 Max. The problems of managing redundancy. The overall process of safety engineering.

10. Managing the development of critical systems: tools and methods, individual versus group productivity, economics of testing and agile development, measuring outcomes versus process, the technical and human aspects of management, post-market surveillance and coordinated disclosure. The sustainability of products with software components.

At the end of the course students should know how writing programs with tough assurance targets, in large teams, or both, differs from the programming exercises they have engaged in so far. They should understand the different models of software development described in the course, as well as the value of various development and management tools. They should understand the development life cycle and its basic economics. They should understand the various types of bugs, vulnerabilities and hazards, how to find them, and how to avoid introducing them. Finally, they should be prepared for the organisational aspects of their Part IB group project.

Recommended reading


Anderson, R. (2020). Security Engineering (3rd ed.), Part 1 and Chapters 27-28. Wiley. Available at: http://www.cl.cam.ac.uk/users/rja14/book.html