
Department of Computer Science and Technology



Course pages 2023–24

Large-scale data processing and optimisation

Principal lecturer: Dr Eiko Yoneki
Taken by: MPhil ACS, Part III
Code: R244
Term: Michaelmas
Hours: 16
Class limit: max. 15 students


This module provides an introduction to large-scale data processing and optimisation, and their impact on computer systems architecture. Large-scale distributed applications with high-volume data processing, such as the training of machine learning models, will grow ever more important. Supporting the design and implementation of robust, secure, and heterogeneous large-scale distributed systems is essential. For distributed systems with large and complex parameter spaces, tuning and optimising computer systems is becoming an important and complex task, one that must also account for the characteristics of the input data and of the algorithms used in the applications. Algorithm designers are often unaware of the constraints imposed by systems and of the best ways to consider them when designing algorithms over massive volumes of data. Conversely, computer systems often miss advances in algorithm design that could cut down processing time and scale up systems in terms of the size of the problems they can address. This course also explores the integration of machine learning approaches (e.g. Bayesian Optimisation, Reinforcement Learning) for system optimisation.


This course offers perspectives on large-scale data processing, including data-flow programming, graph data processing, probabilistic programming, and computer systems optimisation, especially using machine learning approaches, thus providing a solid basis for work on the next generation of distributed systems.

The module consists of 8 sessions, 5 of which cover specific aspects of large-scale data processing research. Each of these sessions discusses 3-4 papers, led by the assigned students. One session is a hands-on tutorial covering MapReduce-style data-flow programming and the training of Deep Neural Networks with Google TensorFlow, together with the basics of Bayesian Optimisation. The first session gives advice on how to read and review a paper, together with a brief introduction to different perspectives on large-scale data processing and optimisation. The last session is dedicated to student presentations of the open-source project studies.

  1. Introduction to large-scale data processing and optimisation
  2. Data flow programming: Map/Reduce to TensorFlow
  3. Large-scale graph data processing: storage, processing model and parallel processing
  4. Map/Reduce and Deep Neural Network using TensorFlow hands-on tutorial
  5. Probabilistic Programming
  6. Many Aspects of Optimisation in Computer Systems
  7. Optimisation of Computer Systems using ML
  8. Presentation of Open Source Project Study
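As a flavour of the Map/Reduce model discussed in sessions 2 and 4, a word count can be sketched in plain Python with explicit map, shuffle, and reduce phases. This is a toy illustration only, assuming nothing beyond the standard library; the hands-on sessions use real frameworks such as TensorFlow rather than this sketch:

```python
# Toy word-count in the MapReduce style (illustrative only).
from collections import defaultdict

def map_phase(documents):
    """Map: emit (word, 1) pairs from each input document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework would do."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's list of values into a final count."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"], counts["fox"])  # prints: 3 2
```

In a real framework the map and reduce phases run in parallel across many machines, and the shuffle is a distributed group-by over the network; the programming model, however, is exactly this pair of functions.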


On completion of this module, students should:

  • Understand key concepts of scalable data processing approaches in future computer systems.
  • Obtain a clear understanding of building distributed systems using data-centric programming and large-scale data processing.
  • Understand the large and complex parameter spaces in computer systems optimisation and the applicability of machine learning approaches to them.


Reading Club:

  • The preparation for the reading club will involve 1-3 papers every week. At each session, around 3-4 papers are selected under the given topic, and the students present their review work.
  • A hands-on tutorial session on data-flow programming, including writing an application that processes streaming Twitter data and/or trains Deep Neural Networks with Google TensorFlow on a compute cluster.
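The "Bayesian Optimisation basics" covered in the tutorial can be illustrated with a minimal, self-contained sketch: a Gaussian-process surrogate over one tunable parameter, with a lower-confidence-bound acquisition rule choosing the next measurement. This is illustrative toy code only, not course material; the function names and the toy cost function are invented for this sketch:

```python
# Minimal Bayesian Optimisation loop (toy sketch, numpy only).
import numpy as np

def cost(x):
    """Toy stand-in for an expensive system measurement (e.g. runtime)."""
    return (x - 0.3) ** 2 + 0.05 * np.sin(15 * x)

def rbf(a, b, length=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

def gp_posterior(X, y, Xs, noise=1e-6):
    """GP posterior mean and variance at candidate points Xs."""
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y
    var = np.diag(rbf(Xs, Xs) - Ks.T @ Kinv @ Ks)
    return mu, np.maximum(var, 0.0)

rng = np.random.default_rng(0)
X = rng.uniform(0, 1, 3)           # three random initial measurements
y = cost(X)
grid = np.linspace(0, 1, 200)      # candidate configurations

for _ in range(10):                # BO loop: model, acquire, measure
    mu, var = gp_posterior(X, y, grid)
    lcb = mu - 2.0 * np.sqrt(var)  # lower confidence bound (minimising)
    x_next = grid[np.argmin(lcb)]
    X = np.append(X, x_next)
    y = np.append(y, cost(x_next))

best = X[np.argmin(y)]
print(f"best parameter ~ {best:.3f}, cost ~ {y.min():.4f}")
```

In real system tuning (compare BOAT in the reading list), the measured cost would be an actual system metric such as query latency, the parameter space is high-dimensional, and structured probabilistic models replace the plain RBF kernel.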


The following three reports are required; each may extend work from the reading club assignments, within the scope of data-centric systems.

  1. Review report on a full length paper (max 1800 words)
    • Describe the contribution of the paper in depth with criticisms
    • Crystallise the significant novelty in contrast to other related work
    • Suggestions for future work
  2. Survey report on sub-topic in large-scale data processing and optimisation (max 2000 words)
    • Pick up to 5 papers as the core papers within the survey's scope
    • Read these, then expand your reading through related work
    • Synthesise the different views and write your own survey paper
  3. Project study and exploration of a prototype (max 2500 words)
    • What is the significance of the project in the research domain?
    • Compare with similar and succeeding projects
    • Demonstrate the project by exploring its prototype

Reports 1 and 2 should be handed in by the end of the 5th and 7th weeks of the course, respectively. Report 3 should be handed in by the end of the Michaelmas Term.


The final grade for the course will be provided as a percentage, and the assessment will consist of two parts:

  1. 25%: for reading club (participation, presentation)
  2. 75%: for the three reports:
    • 15%: Intensive review report
    • 25%: Survey report
    • 35%: Project study

Recommended reading

  1. M. Abadi et al. TensorFlow: A System for Large-Scale Machine Learning. OSDI, 2016.
  2. D. Van Aken et al. Automatic Database Management System Tuning Through Large-scale Machine Learning. SIGMOD, 2017.
  3. J. Ansel et al. OpenTuner: An Extensible Framework for Program Autotuning. PACT, 2014.
  4. V. Dalibard, M. Schaarschmidt, E. Yoneki. BOAT: Building Auto-Tuners with Structured Bayesian Optimization. WWW, 2017.
  5. J. Dean et al. Large Scale Distributed Deep Networks. NIPS, 2012.
  6. G. Malewicz, M. Austern, A. Bik, J. Dehnert, I. Horn, N. Leiser, G. Czajkowski. Pregel: A System for Large-Scale Graph Processing. SIGMOD, 2010.
  7. A. Mirhoseini et al. Device Placement Optimization with Reinforcement Learning. ICML, 2017.
  8. D. Murray, F. McSherry, R. Isaacs, M. Isard, P. Barham, M. Abadi. Naiad: A Timely Dataflow System. SOSP, 2013.
  9. M. Schaarschmidt, S. Mika, K. Fricke, E. Yoneki. RLgraph: Modular Computation Graphs for Deep Reinforcement Learning. SysML, 2019.
  10. Z. Jia, O. Padon, J. Thomas, T. Warszawski, M. Zaharia, A. Aiken. TASO: Optimizing Deep Learning Computation with Automated Generation of Graph Substitutions. SOSP, 2019.
  11. H. Mao et al. Park: An Open Platform for Learning-Augmented Computer Systems. OpenReview, 2019.

A complete list can be found on the course materials web page. See also the 2019-2020 materials for the previous version of this course, Large-Scale Data Processing and Optimisation.