Reliable Software and Security Engineering with Unreliable Tools

Written by David Khachaturov (dgk27) as a 4-part informal course, intended as a supplement to the Software and Security Engineering course taught in Easter Term of Part IA of the Computer Science Tripos at the University of Cambridge. It is heavily inspired by the original lecture course given by the late Professor Ross Anderson, and draws upon the research conducted for my PhD thesis, Practical and societal implications of machine learning security, as well as anecdotes I collected during my time at Cambridge.

This is optional, but if you would like to attend, please fill in the expression of interest form (log in with your CRSid).

Please email any questions or suggestions to dgk27(AT)cam.ac.uk; if you would like to submit feedback anonymously, you can do so via this form.


Syllabus

Hours: 4
Format: In-person lectures
Location: Rushmore Room, St Catharine’s College
Timings: 6-7pm, Thursdays, Weeks 5-8 (19th Feb, 26th Feb, 5th Mar, 12th Mar 2026) ical
Suggested hours of supervisions: N/A
Prerequisites: Foundations of Computer Science

Aims

This course aims to bridge the gap between classical security engineering and the emerging paradigm of AI-assisted software development. It seeks to equip students with a critical understanding of how generative tools shift the engineering focus from syntax (writing code) to semantics (verifying system intent). The course demonstrates why foundational computer science skills, such as systems design, formal verification, and threat modelling, are becoming more, not less, critical. Students will learn to treat AI not as an oracle, but as an unreliable component that requires architectural guardrails, testing, and ethical oversight.

Lectures

Broadly, the narrative arc of the course — theory, attacks, defences, consequences — corresponds with the lecture order.

  1. Introduction to Security; history of security engineering; motivations; brittle failure and graceful degradation; the problem with trusting trust; types of attacks and attack surfaces; formal security syntax; moving fast and breaking things.
    Recommended reading: Chapters 1 and 3 of 1; 2
  2. Death of Syntax; black-box code generation; adversarial machine learning and its consequences; hallucinations and related supply chain attacks; compounding of security failures; case studies.
    Recommended reading: 3; 4; 5
  3. Constraints and Verification; practical engineering techniques; formal verification; type systems as guardrails; testing 2.0; the art of code review.
    Recommended reading: Chapters 27 and 28 of 1; 6; Chapters 12 and 13 of 7
  4. Law, Ethics, Accountability, and The Future; liability and negligence in non-deterministic systems; cognitive offloading and the erosion of expertise; system safety vs. security; the future of the CS profession.
    Recommended reading: Chapters 2, 3, and 25 of 1; 8; Chapter 9 of 9; 10; 11

1 Anderson, R. (Third Edition 2020). Security engineering. Wiley. Available at: http://www.cl.cam.ac.uk/users/rja14/book.html

2 Thompson, K. Reflections on Trusting Trust. Turing award lecture. Available at: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_ReflectionsonTrustingTrust.pdf

3 Kiribuchi, N. et al. “Securing AI Systems: A Guide to Known Attacks and Impacts”. Japan AI Safety Institute. Available at: https://arxiv.org/abs/2506.23296v1

4 Perry, N. et al. “Do Users Write More Insecure Code with AI Assistants?”. Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, ACM, 2023, pp. 2785–99. Available at: https://arxiv.org/abs/2211.03622

5 Pearce, H. et al. “Asleep at the Keyboard? Assessing the Security of GitHub Copilot’s Code Contributions”. Commun. ACM 68, 2 (February 2025), 96–105. Available at: https://arxiv.org/abs/2108.09293

6 King, A. Parse, don’t validate. Blog. Available at: https://lexi-lambda.github.io/blog/2019/11/05/parse-don-t-validate/

7 Adkins, H. et al. Building Secure and Reliable Systems. O’Reilly. Available at: https://google.github.io/building-secure-and-reliable-systems/raw/toc.html

8 Rogaway, P. The Moral Character of Cryptographic Work. Essay. Available at: https://web.cs.ucdavis.edu/~rogaway/papers/moral.pdf

9 Leveson, N. Engineering a Safer World. MIT Press. Available at: https://direct.mit.edu/books/oa-monograph/2908/Engineering-a-Safer-WorldSystems-Thinking-Applied

10 Elish, M.C. “Moral Crumple Zones: Cautionary Tales in Human-Robot Interaction”. Engaging Science, Technology, and Society Journal. Available at: https://estsjournal.org/index.php/ests/article/view/260

11 Asimov, I. Profession. Astounding Science Fiction. Available at: https://archive.org/details/Astounding_v59n05_1957-07_Gorgon776/page/n9/mode/2up


Course materials

Lectures will be recorded and posted here, but in-person attendance is recommended.

  1. Introduction to Security. slides. recording.
  2. Death of Syntax. slides. recording.
  3. Constraints and Verification. slides. recording.
  4. Law, Ethics, Accountability, and The Future. slides.

Last updated: 01/02/2026

Copyright © 2026 David Khachaturov. All rights reserved. Do not distribute without explicit permission.