Important dates
- 3 September 2025: Applications open
- 2 December 2025: Applications close
- 27 February 2026: Notifications sent
- 1 October 2026: Course starts
Computer Architectures of the Future
PhD applicants to the University of Cambridge must write a research proposal, laying out details of the research they wish to undertake as part of the PhD. More information is available on the applications page. Research proposals from applicants to CASCADE must address one or more of the research challenges laid out below.
We are currently accepting applications to research future systems that look beyond the RISC paradigm to rethink the ISA abstraction. However, as the Centre expands in future years, we expect to broaden into other research areas.
Programs contain a significant amount of semantic information about their operation. Some of this is encoded through programming-language constructs, some is inferred by the compiler through static analysis, and some can be determined during execution by the runtime system. However, modern toolchains, by design, throw this information away as a program is compiled, leaving just the operations encoded in the processor's usually simple, RISC-based ISA. To obtain high performance at run-time, microarchitectures have to reconstruct this information, enabling them to predict how future execution will proceed. This consumes a significant number of transistors, with associated power costs, and adds complexity and performance overheads when predictions inevitably, if only occasionally, go wrong.
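As a concrete, purely illustrative example, consider the C loop below: the compiler can prove that the two pointers never alias and that the iteration count is a fixed constant, yet once the loop is lowered to a simple RISC ISA only loads, stores, arithmetic and a backwards branch remain, and the processor must rediscover those facts at run-time through branch prediction and memory disambiguation.

```c
#include <stddef.h>

#define N 1024  /* iteration count known at compile time */

/* 'restrict' guarantees that dst and src never alias, so every
 * iteration is provably independent.  None of this survives
 * compilation to a conventional RISC binary: the core sees only
 * loads, stores, an increment and a conditional branch, and must
 * infer the loop's behaviour again using its predictors. */
void scale(float *restrict dst, const float *restrict src, float k)
{
    for (size_t i = 0; i < N; i++)
        dst[i] = src[i] * k;
}
```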
While classical hardware and software design trade-offs favoured a narrow interface between applications and hardware, there is no fundamental reason why application binaries must pass through a funnel in which known information about a program is thrown away during compilation and execution. In fact, if we could efficiently pass this information on to the processor, it could execute the application with knowledge of the future that it did not have to infer itself. In effect, by widening the communication channel between software and hardware we can build more efficient and less complex processors that still exploit this foresight and know more about the application than they could easily infer on their own.
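Today's toolchains already offer a few narrow, ad-hoc versions of such a channel, for example the software-prefetch intrinsic provided by GCC and Clang. The sketch below is illustrative only, not a proposal from the Centre: it shows a future memory access that the compiler can see, but that a conventional hardware prefetcher would struggle to predict, being hinted explicitly to the hardware; a systematic, much wider channel is the kind of direction this research asks applicants to explore.

```c
#include <stddef.h>

/* Existing, narrow example of software passing foresight to hardware:
 * the indirect access pattern table[idx[i]] is visible in the source,
 * but is hard for a stride-based hardware prefetcher to anticipate.
 * Here an explicit prefetch hint is issued a few iterations ahead. */
long sum_indirect(const long *table, const int *idx, size_t n)
{
    long acc = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + 8 < n)
            __builtin_prefetch(&table[idx[i + 8]],
                               0 /* read */, 1 /* low temporal locality */);
        acc += table[idx[i]];
    }
    return acc;
}
```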
Challenges
We seek to improve the performance and efficiency of processors running existing compute-intensive workloads, written in industry-standard languages such as C and C++, without source-level modification. The overall aim is to consider how augmenting program binaries with performance-relevant semantic information can directly improve execution speed, resource utilisation, or power usage.
- Are there aspects of program structure, data usage and dependencies, algorithmic insights, parallelism opportunities, and so on that the compiler knows about but currently discards during compilation, and that we could make use of at run-time?
- How should relevant performance-enhancing program information be transferred from compiler to processor?
- How might conventional microarchitectures be extended to exploit this augmented information, without penalising the performance or compatibility of non-augmented binaries?
- Can augmented program information enable exploitation of new forms of parallelism?