Computer Laboratory

Systems Research Group – NetOS

Student Projects (2016-2017)

NetOS

This page collects together various Part II project suggestions from the Network and Operating Systems part of the Systems Research Group. In all cases there is a contact e-mail address given; please get in touch if you want more information about the project.

Under construction: Please keep checking back, as more ideas will hopefully be added to this page during the coming weeks.

Note: there are a number of stumbling blocks in Part II project selection, proposal generation and execution. Some useful guidance from a CST alumnus (and current NetOS PhD student) is here.

Current project suggestions

1. P4 on NetFPGA

Contact: Andrew Moore, Noa Zilberman (email)

P4 is a programming language designed for programming packet-forwarding dataplanes. In contrast to a general-purpose language such as C or Python, P4 is a domain-specific language with a number of constructs optimized for network data forwarding. The NetFPGA is a line-rate, flexible, open platform for research and classroom experimentation. More than 2,000 NetFPGA systems have been deployed at over 150 institutions in over 40 countries around the world. The NetFPGA SUME is the third-generation NetFPGA platform, developed in Cambridge, with I/O capabilities for 10 and 100 Gbps operation. It can be used as a NIC, multiport switch, firewall, test/measurement environment, and more.
This project aims to enable P4 on the NetFPGA SUME platform. It will use P4FPGA to compile from P4 to Verilog, and will require creating NetFPGA templates for P4FPGA, mapping the design onto the platform. The generated P4 module will fit into the NetFPGA datapath, enabling new NetFPGA designs written in P4.

References:
[1] NetFPGA
[2] Noa Zilberman, Yury Audzevich, Adam Covington, Andrew W. Moore. NetFPGA SUME: Toward Research Commodity 100Gb/s. IEEE Micro, vol. 34, no. 5, pp. 32-41, Sept.-Oct. 2014
[3] P4
[4] P4FPGA

Pre-requisites: This project requires basic knowledge of computer networks and Verilog.


2. Modelling Interconnect Bottlenecks

Contact: Andrew Moore, Noa Zilberman (email)

New computer systems architectures seek to build hyper-converged, large systems, where a large number of compute nodes are connected through a dedicated fabric. Current computer architecture simulators provide only a simplistic model of the interconnect, failing to reflect interconnect bottlenecks.
This project aims to extend the gem5 simulator by providing accurate modelling of the PCIe interconnect. The extended simulator will then be used to show how interconnect bottlenecks affect overall system performance.
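To give a feel for what such modelling must capture, here is a first-order sketch of PCIe transfer cost of the kind a gem5 extension could be calibrated against. It is not gem5 code; the overhead and payload parameters are assumptions for a notional PCIe Gen3 x8 link.

```python
# Illustrative first-order model of PCIe transfer cost.
# Parameter values are assumptions for a Gen3 x8 link, not measurements.

TLP_OVERHEAD = 24          # assumed header/framing bytes per transaction-layer packet
MAX_PAYLOAD = 256          # bytes per TLP (a common Max_Payload_Size setting)
LANE_RATE = 8e9            # Gen3: 8 GT/s per lane
ENCODING = 128 / 130       # Gen3 128b/130b line encoding

def transfer_time(payload_bytes, lanes=8):
    """Seconds to move payload_bytes over the link, ignoring flow control."""
    tlps = -(-payload_bytes // MAX_PAYLOAD)           # ceiling division
    wire_bytes = payload_bytes + tlps * TLP_OVERHEAD  # payload plus per-TLP overhead
    bits_per_sec = lanes * LANE_RATE * ENCODING
    return wire_bytes * 8 / bits_per_sec

def effective_bandwidth(payload_bytes, lanes=8):
    """Goodput in bytes/second for a stream of such transfers."""
    return payload_bytes / transfer_time(payload_bytes, lanes)
```

Even this toy model shows the bottleneck the project targets: small transfers pay proportionally more header overhead, so goodput is well below the nominal link rate.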

References:
[1] gem5

Pre-requisites: This project requires basic knowledge of computer networks and computer architecture.


3. Modelling Networking Bottlenecks

Contact: Andrew Moore, Noa Zilberman (email)

New computer systems architectures seek to build hyper-converged, large systems, where a large number of compute nodes are connected through a dedicated fabric. Current computer architecture simulators provide a simplistic model of networking devices, failing to reflect networking bottlenecks and properties. This project aims to extend the dist-gem5 simulator by providing accurate modelling of networking devices. The extended simulator will then be used to simulate rack-scale system performance.
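As a sketch of what a less simplistic device model must capture, consider queueing at a single switch output port. A constant-delay model misses the fact that back-to-back packets contend for the link; even a minimal FIFO model (illustrative only, not dist-gem5's API) exposes this:

```python
def fifo_delays(arrivals, sizes, link_bps):
    """Per-packet delay (seconds) through one output port modelled as a
    FIFO queue: each packet waits for the link to free, then serialises.
    arrivals: arrival times in seconds; sizes: packet sizes in bytes."""
    free_at = 0.0
    delays = []
    for t, size in zip(arrivals, sizes):
        start = max(t, free_at)              # wait if the link is busy
        free_at = start + size * 8 / link_bps
        delays.append(free_at - t)
    return delays
```

For example, two 1250-byte packets arriving simultaneously on a 10 Gb/s link see 1 us and 2 us of delay respectively; a model without queueing would report 1 us for both.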

References:
[1] Mohammad Alian, Daehoon Kim, Nam Sung Kim, Gabor Dozsa, and Stephan Diestelhorst. dist-gem5 Architecture. MICRO-48 Tutorial.

Pre-requisites: This project requires basic knowledge of computer networks and computer architecture.


4. Power Analysis of Rack-Scale Systems

Contact: Andrew Moore, Noa Zilberman (email)

Rack-scale computing is an emerging technology in networked systems, replacing the server as the basic building block of ICT infrastructure in enterprises and data centres. It provides superior performance compared with rack enclosures fitted with stand-alone servers, yielding scalable, high-performance networked systems. Power efficiency is of utmost importance to rack-scale computing: the power budget of the system is bounded by the rack enclosure (typically 10kW-20kW), thus any increase in performance must not increase overall system power consumption. We are building an apparatus for the evaluation of rack-scale system implementations at scale. This project will focus on instrumenting the system for power consumption measurement and analysis, mainly through the instrumentation already present on the NetFPGA SUME platform.

References:
[1] NetFPGA
[2] Noa Zilberman, Yury Audzevich, Adam Covington, Andrew W. Moore. NetFPGA SUME: Toward Research Commodity 100Gb/s. IEEE Micro, vol. 34, no. 5, pp. 32-41, Sept.-Oct. 2014
[3] Rack-scale Computing (Dagstuhl Seminar 15421)

Pre-requisites: This project requires basic knowledge of Verilog.


5. Rack-Scale Fabric Topologies

Contact: Andrew Moore, Noa Zilberman (email)

Rack-scale computing is an emerging technology in networked systems, replacing the server as the basic building block of ICT infrastructure in enterprises and data centres. It provides superior performance compared with rack enclosures fitted with stand-alone servers, yielding scalable, high-performance networked systems. Rack-scale computing brings together researchers from a large number of fields: HPC, distributed systems, computer architecture, storage and more. Consequently, different types of network fabric topologies are being promoted, from supercomputer-style topologies to data-centre topologies. However, rack-scale computing has some properties that differentiate it from traditional systems, such as the high locality of data and the small latency between nodes. This project will build a simulator that takes real workloads as input and simulates their performance under different rack-scale fabric topologies.
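The comparison the simulator must make can be illustrated with a toy metric: average hop count between node pairs under different fabric topologies. The sketch below (an assumption-laden illustration, not the project's simulator) compares a full mesh against a 2D torus:

```python
import itertools

def avg_hops(nodes, dist):
    """Average hop count over all unordered node pairs, given a
    topology-specific distance function."""
    pairs = list(itertools.combinations(nodes, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def torus_dist(a, b, k):
    """Hop count on a 2D k x k torus: Manhattan distance with wrap-around."""
    return sum(min(abs(x - y), k - abs(x - y)) for x, y in zip(a, b))

nodes = [(x, y) for x in range(4) for y in range(4)]
mesh_hops = avg_hops(nodes, lambda a, b: 1)               # full mesh: 1 hop
torus_hops = avg_hops(nodes, lambda a, b: torus_dist(a, b, 4))
```

A real workload-driven simulator would weight these distances by actual traffic between node pairs, which is exactly where rack-scale locality changes the picture.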

References:
[1] Rack-scale Computing (Dagstuhl Seminar 15421)

Pre-requisites: This project requires basic knowledge of computer networks.


6. 120G Switch

Contact: Andrew Moore, Noa Zilberman (email)

The bandwidth of network switching silicon has increased by several orders of magnitude over the last decade. This rapid increase has called for innovation in datapath architectures, with many solutions competing for their place. NetFPGA SUME [1], the third generation of the NetFPGA [2] open-source networking platforms, is a technology enabler for 100Gb/s and datacentre research. As a reconfigurable hardware platform, it allows rapid prototyping of various architectures and exploration of various trade-offs.
In this project you will extend the NetFPGA SUME 40G (4x10G) reference switch to a 120G (12x10G) switch and demonstrate its operation. This includes adding 8 new interfaces to the switch design and extending its datapath to support 120Gbps bandwidth. You will use the NetFPGA SUME Python-based test harness to simulate and test the hardware that you design.

References:
[1] Noa Zilberman, Yury Audzevich, Adam Covington, Andrew W. Moore. NetFPGA SUME: Toward Research Commodity 100Gb/s. IEEE Micro, vol. 34, no. 5, pp. 32-41, Sept.-Oct. 2014
[2] http://www.netfpga.org

Pre-requisites: This project requires basic knowledge of computer networks and Verilog.


7. Low Latency Switch

Contact: Andrew Moore, Noa Zilberman (email)

The bandwidth of network switching silicon has increased by several orders of magnitude over the last decade. This rapid increase has called for innovation in datapath architectures, with many solutions competing for their place. NetFPGA SUME [1], the third generation of the NetFPGA [2] open-source networking platforms, is a technology enabler for 100Gb/s and datacentre research. As a reconfigurable hardware platform, it allows rapid prototyping of various architectures and exploration of various trade-offs.
In this project you will design a low-latency switch using the NetFPGA SUME platform. The project will modify the NetFPGA store-and-forward switch design into a cut-through switch architecture, which reduces latency significantly.
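The latency advantage follows from the textbook formulas: store-and-forward pays full-packet serialisation at every hop, whereas cut-through forwards as soon as the header has arrived. A small sketch of the two models (per-hop switch processing time is a free parameter here):

```python
def store_and_forward_latency(pkt_bytes, hops, link_bps, per_hop_s=0.0):
    """Full packet is received, then retransmitted, at every hop."""
    return hops * (pkt_bytes * 8 / link_bps + per_hop_s)

def cut_through_latency(pkt_bytes, header_bytes, hops, link_bps, per_hop_s=0.0):
    """Forwarding starts once the header is in: full serialisation is paid
    once, and only the header delay accumulates at intermediate hops."""
    return (pkt_bytes * 8 / link_bps
            + (hops - 1) * header_bytes * 8 / link_bps
            + hops * per_hop_s)
```

For a 1500-byte packet crossing three 10 Gb/s hops with a 64-byte forwarding header, cut-through cuts end-to-end latency from 3.6 us to about 1.3 us.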

References:
[1] Noa Zilberman, Yury Audzevich, Adam Covington, Andrew W. Moore. NetFPGA SUME: Toward Research Commodity 100Gb/s. IEEE Micro, vol. 34, no. 5, pp. 32-41, Sept.-Oct. 2014
[2] http://www.netfpga.org

Pre-requisites: This project requires basic knowledge of computer networks and Verilog.


8. Micro-services RITA

Contact: Richard Mortier (with Ian Lewis) (email)

We have access to a near-realtime feed of bus data for Cambridge public buses, currently handled using the RITA system. At present, data is received via POST to a webserver, which then both archives and serves it. This project would design and build a scalable custom webserver to perform these tasks as a set of microservices, built using MirageOS and/or Docker containers with DataKit. This would mean building a set of microservices to collect, record and then apply various processing to the data, all interconnected with DataKit. Extensions would include building various visualisations (e.g., of the MBTA metro dataset, or following Bacon et al. (2011), "Using Real-Time Road Traffic Data to Evaluate Congestion", LNCS 6875:93-117), or using Jitsu to auto-scale the services.


9. TripIt! for Papers

Contact: Richard Mortier (email)

Recently introduced rules from the UK Research Councils require that all published academic outputs are made available as open access within 3 months of publication, typically via institutional repositories. Unfortunately most institutional repository submission processes are rather cumbersome, involve extensive human interaction, and are prone to being forgotten or delayed, potentially rendering an academic output inadmissible for the next Research Excellence Framework (REF) exercise.

TripIt is a rather useful service that allows a traveller to manage travel plans by forwarding email confirmations, tickets, receipts, etc. to an email address. Upon receipt, the TripIt service aggregates and parses the information concerned into a sequence of "trips" which can then be exported as (e.g.) a Google Calendar. This makes it quite straightforward to have trip details appear in one's calendar without needing to go through the tedious and error-prone process of re-entering all details manually.

This project will design and build a service that provides TripIt-like interaction for tracking academic outputs. Elaborating details of the workflow forms part of the project, with care needing to be paid to the requirements for REF eligibility as well as common academic working practices for conference and journal publication. The service can be implemented using any appropriate tools, though implementation as a MirageOS unikernel using the Irmin storage backend would be particularly welcome.


10. Spoofing TCP with MirageOS

Contact: Richard Mortier (email)

MirageOS is a framework developed by the group in which to write unikernels: compact, application-specific OS kernels. As traditional OS services, such as the network stack, are provided as libraries, there is an opportunity to manipulate those libraries to customise them for particular application demands.

Coupled with the project above, it would be interesting to write a TCP stack (or stacks) for MirageOS that replicates the distinctive signature behaviours of other stacks (Linux, BSD, etc.). This would enable the web measurement referenced above to be extended to present as different host OSes as well as simply different browsers.
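To make the idea of a "signature behaviour" concrete, tools like p0f distinguish stacks by fields of the initial SYN. The sketch below is purely illustrative: the initial TTLs are the well-known defaults, but the option orderings are simplified assumptions, not authoritative fingerprints, and a MirageOS stack would emit real packets rather than dictionaries.

```python
# Illustrative, p0f-style SYN profiles. Field values are rough
# assumptions for demonstration, not authoritative fingerprints.
PROFILES = {
    "linux":   {"ttl": 64,  "options": ["MSS", "SACK", "TS", "NOP", "WS"]},
    "windows": {"ttl": 128, "options": ["MSS", "NOP", "WS", "NOP", "NOP", "SACK"]},
}

def syn_fields(os_name):
    """Return the header fields a spoofing stack would set for a SYN."""
    p = PROFILES[os_name]
    return {"ttl": p["ttl"], "options": list(p["options"])}
```

A spoofing TCP stack would parameterise its packet construction over such a profile table, selecting a profile per connection.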


11. MirageOS Protocol Servers

Contact: Richard Mortier

MirageOS is a unikernel framework using the OCaml language. Its primary targets include building small, lightweight cloud-hosted network services. This project will design and build such a service, evaluating its performance along a number of axes. Suitable services include the XMPP messaging standard and the BGP v4 routing protocol.


12. OCaml meets WebKit

Contact: Richard Mortier (email)

WebKit2 is an incompatible API layer for WebKit, the web content processing library from Apple, supporting OS X, iOS and Linux. It implements a split-process model where web content lives in a separate process from the application UI. PhantomJS is a headless wrapper for WebKit, scriptable via JavaScript and used for screen capture, page automation, site testing, etc., with CasperJS available as a higher-level API wrapper.

Web automation scripts in JavaScript for CasperJS can rapidly become rather complex -- it would be nice to have a more modern, feature-rich language for this, such as OCaml. One way forward would be to use the Ctypes library, which enables binding C libraries using pure OCaml.

This project is complex, and you will benefit from having experience with at least one of OCaml or WebKit, as well as familiarity with C programming.


13. Niche Social Networks

Contact: Richard Mortier (email)

Social networks such as Facebook support a wide range of interactions and purposes. However, there are times when it is not appropriate to push smaller-scale, more niche social groups onto such generic platforms, while it would still be nice to take advantage of some of their features.

This project will build a simple social-network-as-a-library that can provide the standard features of a social network (pseudo-identity, tracking followed/following relations) while interfacing with a range of other services such as email. A particular demonstrator application will also be built that allows members of a group (e.g., a sports society, a College fellowship) to express interest in a subset of other members' behaviours, and be notified when those members perform a certain action (e.g., sign up to attend a regular social event). A number of different notification channels could be integrated (e.g., email, SMS, telephone).

Ideally this will be built as one or more microservices using the MirageOS unikernel framework, but other implementation platforms can be considered.


14. Coq4j: A Neo4j database for Coq dependency graphs.

Contact: Tim Griffin

The Coq Proof Assistant is a widely used system for interactively developing formal proofs in a logic based on dependent types.

The existence of large bodies of formalised computer science and mathematics gives rise to the opportunity to treat these libraries as objects of study in themselves. That is, we can explore the structure of mathematical theories using the same kinds of algorithms we might use to explore a social network graph.

One example is coq-dpdgraph, a tool that generates directed graphs capturing the dependencies between Coq objects (definitions, lemmas, proofs). Currently this can be used to generate static graphs that can be inspected with dot. This is very useful, but as theories get large it quickly becomes difficult to gain useful insights from enormous, complex graphs.

The Coq4j project aims to go a step further. This project will develop code to translate any graph generated by coq-dpdgraph into a Neo4j database. We can then use the Cypher query language to interactively explore the structure of the associated theory. We will test this approach on a number of existing Coq theories and develop a library of Cypher queries especially tailored to exploring the structure of mathematical theories. We might call this empirical meta-mathematics.
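The translation step might look like the following sketch, which emits Cypher statements from an in-memory dependency graph. The input representation here (a dict of node kinds plus an edge list) is invented for illustration; a real tool would first parse coq-dpdgraph's actual output format.

```python
def to_cypher(nodes, edges):
    """Turn a dependency graph into Cypher statements.
    nodes: {name: kind}, e.g. {"my_lemma": "Lemma"} (hypothetical input);
    edges: (user, used) pairs meaning `user` depends on `used`."""
    stmts = []
    for name, kind in nodes.items():
        # MERGE is idempotent, so re-running the import is safe
        stmts.append(f"MERGE (:{kind} {{name: '{name}'}})")
    for src, dst in edges:
        stmts.append(
            f"MATCH (a {{name: '{src}'}}), (b {{name: '{dst}'}}) "
            f"MERGE (a)-[:DEPENDS_ON]->(b)"
        )
    return "\n".join(stmts)
```

Once loaded, queries such as "which lemmas does nothing depend on?" become one-line Cypher patterns over the `DEPENDS_ON` relationship.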

Our moonshot experiment will be an attempt to apply this framework to the formalisation of the Odd Order Theorem, a tour de force Coq development comprising over 15K definitions and 4K theorems.

No previous knowledge of Coq, Neo4j, or group theory will be assumed.


15. BGP4j: A Neo4j database for analysis of inter-domain routing in the Internet

Contact: Tim Griffin

The Border Gateway Protocol (BGP) is the routing protocol used to glue together all of the autonomous networks in the global Internet to provide one large interconnected network.

There are several public data archives that have recorded all BGP updates at various Internet exchange points (see, for example, RIS raw data). These archives contain BGP updates and table dumps in a binary format which can be parsed with tools such as bgpdump and bgpreader.

This project will transform BGP archives into Neo4j databases. It will develop a graph-oriented data model that captures BGP relationships between networks (Autonomous Systems) as well as some of the rich temporal data contained in the BGP update archives. The project will develop a library of Cypher queries that reveal some of the static and dynamic information locked up in these archives.


16. Network Profiling of Big Data Applications

Contact: Andrew Moore, Noa Zilberman (email)

Big data applications play an increasing part in our everyday life, from shopping online to social networks. These applications usually run in the cloud rather than on users' devices, which means that knowledge of these applications' behaviour is limited to data centre operators. As data centre operators keep their data confidential, very little information has been published (e.g. [1], [2]). Lacking such ground truth, academic research is limited in its ability to develop novel system and networking solutions for data centres.
In this project you will profile the networking characteristics of different big data applications: from standard Memcached benchmarks to astronomy projects. You will run the applications within the local data centre and collect traces of all communication, later analysing them and creating a network profile for each application.

References:
[1] Theophilus Benson, Aditya Akella and David Maltz. Network Traffic Characteristics of Data Centers in the Wild. Proceedings of the Internet Measurement Conference (IMC), 2010.
[2] Berk Atikoglu, Yuehai Xu, Eitan Frachtenberg, Song Jiang, and Mike Paleczny. Workload analysis of a large-scale key-value store. SIGMETRICS 2012.


17. Distributed Stochastic Gradient Descent on Naiad

Contact: Eiko Yoneki

Keywords: Data flow programming, Machine Learning, Parallel Computing in Distributed Systems

Naiad is a distributed system framework that allows for automatic parallelisation of programs in a distributed environment [1]. In this project, you will use the Rust implementation of Naiad to build a distributed implementation of Stochastic Gradient Descent (SGD) [2], a common algorithm used in machine learning. The project will provide efficient parallel processing for SGD in an optimised fashion, where different iterations of the algorithm for different data points can be run in parallel. An efficient distributed implementation using Naiad can take advantage of incremental and iterative computation. You will also write an application to evaluate the project.
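The parallelisation pattern can be sketched serially: each worker computes an SGD update on its shard of the data, and the updates are averaged each round. This is a minimal, single-process stand-in for what a timely-dataflow implementation would pipeline and incrementalise, shown here for one-parameter least squares:

```python
def sgd_step(w, batch, lr):
    """One SGD step for least-squares fitting of y ~ w*x on a minibatch."""
    g = sum(2 * (w * x - y) * x for x, y in batch) / len(batch)
    return w - lr * g

def parallel_sgd(data, workers=4, epochs=50, lr=0.05):
    """Naive data-parallel SGD: each worker takes a step on its own shard
    and the results are averaged (parameter averaging) every epoch."""
    w = 0.0
    shards = [data[i::workers] for i in range(workers)]
    for _ in range(epochs):
        w = sum(sgd_step(w, shard, lr) for shard in shards) / workers
    return w
```

In Naiad this round structure maps naturally onto an iterative dataflow loop, with the averaging step as the loop's coordination point.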

[1] D. Murray, F. McSherry, R. Isaacs, M. Isard, P. Barham, and M. Abadi. Naiad: a timely dataflow system. SOSP, 2013.
[2] Kevin P Murphy. Machine learning: a probabilistic perspective. MIT press, 2012.


18. Building Graph Query Function using Functional Programming

Contact: Eiko Yoneki

Keywords: Graph, Functional programming, Database, NOSQL

Demand for storing and searching online data with graph structure is growing. Such data range from online social networks to web links, and require efficient query processing in a scalable manner. In this project, you will build a graph query function (layer) to achieve efficient graph data processing. The graph query function builds on a lazy graph loaded from multiple sources and performs queries at the same speed as, or faster than, a relational database. The function should be written in Clojure or another functional language to support practical lazy loading from the data source into an in-memory graph database. Efficient representations for graph queries should also be investigated. The evaluation should include comparisons with an existing graph database and a comparison of query capability against SQL. You can start the project from our existing work in this area.
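The core idea of a lazy graph is that neighbours are fetched from the backing source only on first access and then memoised, so queries never materialise unvisited parts of the graph. A minimal sketch (in Python rather than Clojure, with a toy in-memory dict standing in for real data sources):

```python
from functools import lru_cache

# Toy backing store; in practice this would be files, a REST API,
# or relational tables (all assumptions for illustration).
EDGES = {"a": ["b", "c"], "b": ["c"], "c": []}

def fetch_neighbours(node):
    """Stand-in for an expensive fetch from the data source."""
    return EDGES.get(node, [])

@lru_cache(maxsize=None)
def neighbours(node):
    """Lazy, memoised neighbour access: fetched at most once per node."""
    return tuple(fetch_neighbours(node))

def reachable(start):
    """BFS that only ever loads the part of the graph it actually visits."""
    seen, frontier = {start}, [start]
    while frontier:
        nxt = []
        for n in frontier:
            for m in neighbours(n):
                if m not in seen:
                    seen.add(m)
                    nxt.append(m)
        frontier = nxt
    return seen
```

In Clojure the same effect falls out naturally from lazy sequences plus `memoize`, which is one reason the project suggests a functional language.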

[1] Microsoft Research, Trinity project: Distributed graph database, http://research.microsoft.com/en-us/projects/trinity/
[2] Neo Technology, Java graph database, http://neo4j.org/


19. Dynamic Task Scheduling on Heterogeneous CPU/GPU Environment using ML for Parallel Processing

Contact: Eiko Yoneki

Keywords: GPU Clusters, Heterogeneous many/multi-core, Parallel Computing, OpenCL, Task Scheduling

In this project, various aspects of parallel processing will be explored using a new generation of CPU/GPU integrated board, where more than one GPU cluster is placed on a chip. We use the ARM-based Mali-T628 MP6, in the Exynos 5422 [1][2]. Using OpenCL, tasks can be dispatched to GPU and CPU code in parallel. These new GPUs make it possible to cluster the GPU nodes at different scales of parallel processing. We use a simulator on top of the hardware to experiment with various task-scheduling strategies, explored using machine-learning methodologies for prediction of workload, vector instructions, and mixtures of model parallelism and data parallelism. The application running on top could be image analysis or graph processing. Graph processing can take advantage of processor heterogeneity to adapt to structural data patterns. The overall aim of graph processing can be seen as scheduling irregular tasks to optimise data-parallel heterogeneous graph processing, by analysing the graph at runtime and dispatching graph elements to appropriate computation units. Efficient scheduling underlies the vision of a heterogeneous runtime platform for graph computation, where a data-centric scheduler is used to achieve an optimal workload.

[1] http://www.anandtech.com/show/8234/arms-mali-midgard-architecture-explored
[2] www.samsung.com/global/business/semiconductor/product/application/detail?productId=7978&iaId=2341


20. Approximate Algorithms Determining Local Clustering Coefficients Anonymously

Contact: Eiko Yoneki

Keywords: Sampling, Approximation, Privacy, Cluster Coefficient

Anonymous social networks are a new phenomenon in an increasingly privacy-conscious world. A natural question to ask in this setting is whether we can continue to apply known principles of network science where neighbourhood information is concealed both between nodes and from external observers. This project is to work on approximate algorithms that determine the clustering coefficient in such settings. Clustering coefficients provide a way to order nodes by relative importance in a network, determine densely connected neighbourhoods, and distinguish social graphs from web graphs. Algorithms to measure clustering coefficients have hitherto required global knowledge of graph links, which requires individual nodes in the graph to give up the identity of their neighbours. This project investigates an algorithm for estimating the clustering coefficient of a vertex by exchanging only anonymised set summaries of neighbours, from which it is difficult to reverse-engineer individual links. The bulk of the project will consist of working on and improving sampling techniques to arrive at accurate local clustering coefficients without exchanging explicit neighbour lists in social networks.
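The summary-exchange idea can be sketched as follows, with exact Python sets standing in for cardinality sketches such as HyperLogLog [1]. Each neighbour u of v shares only enough to compute the cardinality of the union N(v) ∪ N(u); v then recovers the intersection size by inclusion-exclusion, without ever seeing u's neighbour list. Summing these intersections counts each triangle through v twice.

```python
def local_clustering(v, adj):
    """Local clustering coefficient of v, computed from neighbour-set
    cardinalities only. adj maps each node to its set of neighbours;
    with HLL sketches, len(nv | adj[u]) becomes a sketch merge."""
    nv = adj[v]
    k = len(nv)
    if k < 2:
        return 0.0
    twice_triangles = 0
    for u in nv:
        union = len(nv | adj[u])                 # cardinality of N(v) ∪ N(u)
        inter = len(nv) + len(adj[u]) - union    # inclusion-exclusion
        twice_triangles += inter                 # common neighbours of v and u
    # coefficient = triangles / (k choose 2) = twice_triangles / (k*(k-1))
    return twice_triangles / (k * (k - 1))
```

Replacing the exact sets with sketches introduces the estimation error that the project's sampling work would aim to control.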

[1] P. Flajolet, Eric Fusy, O. Gandouet, and et al. Hyperloglog: The analysis of a near-optimal cardinality estimation algorithm. In Proceedings of the International Conference on Analysis of Algorithms, 2007.


21. RasPi-Net: Building Stream Data Processing Platform over RasPiNET

Contact: Eiko Yoneki

Keywords: Raspberry Pi, Delay Tolerant Networks, Satellite Communication, Stream Processing

We have built a decentralised Raspberry Pi network (RasPiNET [1]), which can be deployed in wild and remote regions as a standalone network. The gateway Raspberry Pi nodes are integrated with satellite communication devices, in which a light version of the Delay Tolerant Network (DTN) bundle protocol is embedded. RasPiNET could consist of 10-100 nodes. As an example, a remote sensing application could run either on the RasPi nodes or on smartphones that connect to a RasPi. Collected data could be processed within RasPiNET to reduce the data size streamed over the satellite link to the base location. Crowd-sourcing applications can run on top of RasPiNET, too. The goal of this project is to build a stream processing platform operating in both directions: from data collection at RasPiNET nodes to the data processing nodes, possibly via a satellite gateway; and from bulk data delivery at the satellite gateway node to dissemination of necessary information to RasPiNET nodes. Good filtering functions and RasPiNET in-network data aggregation could be developed.

[1] E. Yoneki: RasPiNET: Decentralised Communication and Sensing Platform with Satellite Connectivity. ACM CHANTS, 2014.
[2] Delay Tolerant Network Bundle Protocol: http://tools.ietf.org/html/rfc6255
[3] RockBlock Technology:http://rockblock.rock7mobile.com/


22. Clustering Entities across Multiple Documents in Massive Scale

Contact: Eiko Yoneki

Keywords: Clustering, Graph Partitioning, Random Walk, Distributed Algorithms

Many large-scale distributed problems include the optimal storage of large sets of graph-structured data over several hosts - a key problem in today's Cloud infrastructure. However, in very large-scale distributed scenarios, state-of-the-art algorithms are not directly applicable, because frequent global operations over the entire graph are difficult. In [1], balanced graph partitioning is achieved by a fully distributed algorithm, called Ja-be-Ja, that uses local search and simulated annealing techniques. The algorithm is massively parallel: each node is processed independently, and only the direct neighbours of a node and a small subset of random nodes in the graph need to be known locally. Strict synchronisation is not required. These features allow Ja-be-Ja to be easily adapted to any distributed graph processing system. This project starts by understanding Ja-be-Ja, and investigates further performance improvements. A case study is a graph-based approach to coreference resolution, where a graph representation of the documents and their context is used; applying a community detection algorithm based on [1] can speed up the task of coreference resolution by a very large degree.
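The core move in Ja-be-Ja is a colour exchange: two nodes in different partitions swap partitions when the swap reduces the number of cut edges, which preserves partition sizes by construction. The sketch below is a serial, greedy stand-in for the real algorithm (which runs fully distributed and adds simulated annealing to escape local optima):

```python
import random

def cut_edges(colour, adj):
    """Number of edges whose endpoints lie in different partitions.
    adj is a symmetric adjacency dict; each edge is seen twice."""
    return sum(colour[u] != colour[v] for u in adj for v in adj[u]) // 2

def swap_gain(u, v, colour, adj):
    """Exact reduction in cut edges if u and v exchanged colours."""
    before = cut_edges(colour, adj)
    colour[u], colour[v] = colour[v], colour[u]   # trial swap
    after = cut_edges(colour, adj)
    colour[u], colour[v] = colour[v], colour[u]   # undo
    return before - after

def jabeja_sketch(adj, colour, rounds=300, seed=0):
    rng = random.Random(seed)
    nodes = list(adj)
    for _ in range(rounds):
        u, v = rng.sample(nodes, 2)
        # swapping colours (rather than moving one node) keeps the
        # partition balanced, the key invariant of Ja-be-Ja
        if colour[u] != colour[v] and swap_gain(u, v, colour, adj) > 0:
            colour[u], colour[v] = colour[v], colour[u]
    return colour
```

In the distributed setting each node evaluates the gain using only local neighbourhood information and a small random sample of peers, rather than the global cut computed here.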

[1] Fatemeh Rahimian, Amir H. Payberah, Sarunas Girdzijauskas, Mark Jelasity and Seif Haridi: JabeJa: A Distributed Algorithm for Balanced Graph Partitioning, IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO), 2013.
[2] Fatemeh Rahimian, Amir H. Payberah, Sarunas Girdzijauskas, and Seif Haridi: Distributed Vertex-Cut Partitioning, DAIS, 2014.


23. Graph Compression in the Semi-External Memory Environment

Contact: Eiko Yoneki

Keywords: Graph Compression, Encoding

This project explores graph compression mechanisms as part of a project looking into high-performance semi-external memory graph processing (see [1] to get an idea of semi-external memory approaches). The graph compression work will build on an in-house graph representation format that we have developed, which allows succinct representation of graphs that show hierarchical structure. The project will explore ways to improve the representation, yielding smaller graphs on disk that are less costly to traverse. A key element of the representation scheme is a recursive graph partitioning step that minimises the number of edges between partitions. This is a rich space for exploration of suitable algorithms. We are concerned primarily with experimentally evaluating I/O costs on large graphs and measuring the compression performance. However, a student with a theoretical background might consider designing algorithms with provable bounds on compression performance, which would be a big plus. If time allows, you could also implement an efficient transformation tool based on the developed graph compression algorithm using parallel processing tools (e.g. map/reduce).
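As background, a standard building block for on-disk adjacency compression (which a partition-aware format could layer on top of) is delta-gap encoding of sorted neighbour ids followed by variable-length byte codes. This sketch illustrates that baseline only; the in-house format mentioned above is not shown here.

```python
def encode_varint(n):
    """Encode a non-negative int as little-endian base-128 varint bytes."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))
        if not n:
            return bytes(out)

def decode_varints(buf):
    """Decode a concatenation of varints back into a list of ints."""
    vals, n, shift = [], 0, 0
    for b in buf:
        n |= (b & 0x7F) << shift
        shift += 7
        if not b & 0x80:
            vals.append(n)
            n, shift = 0, 0
    return vals

def encode_adj(neighbours):
    """Sorted neighbour ids -> delta gaps -> varint bytes.
    Gaps are small when ids are clustered, so good partitioning
    (locality) directly improves compression."""
    prev, out = 0, bytearray()
    for v in sorted(neighbours):
        out += encode_varint(v - prev)
        prev = v
    return bytes(out)

def decode_adj(buf):
    """Inverse of encode_adj: prefix-sum the decoded gaps."""
    acc, out = 0, []
    for d in decode_varints(buf):
        acc += d
        out.append(acc)
    return out
```

The link back to the project's partitioning step: renumbering vertices so that each partition holds contiguous ids shrinks the gaps, and hence the encoded size.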

[1] R. Pearce, M. Gokhale and N. Amato: Multithreaded Asynchronous Graph Traversal for In-Memory and Semi-External Memory, ACM/IEEE High Performance Computing, Networking, Storage and Analysis, 2010. http://dl.acm.org/citation.cfm?id=1884675


26. A Smart Smoking Cessation Intervention Mobile App

Contact: Cecilia Mascolo and Felix Naughton (Department of Public Health and Primary Care)

In this project we are seeking to develop a mobile application that uses the device sensors to track user behaviour (such as context and location) in order to help change the behaviour of smokers who are trying to stop smoking. A substantial component of the project would be machine-learning logic to "learn" user behaviour for more efficient delivery of interventions.
The challenges of the project involve the development of a sensible UI, the efficient and smart use of the device sensors in terms of battery and data accuracy, and, if possible, some data analytics during a pilot deployment with some users.


27. Mood Tracking Mobile App

Contact: Cecilia Mascolo and Jason Rentfrow (Department of Psychology)

In this project we are seeking to develop a mobile application that uses the device sensors to track the user's mood and mental state, as well as behaviour (such as context and location), through the day. Aspects of whether to compute locally or in the cloud, so as to limit risks to user privacy, will be studied. Machine learning models will be used to decide when to interrupt the user.
The challenges of the project involve the development of a sensible UI, the efficient and smart use of the device sensors in terms of battery and data accuracy, considerations of where to compute (locally or remotely), and which models can be used locally for inference. A pilot user study with students will be run and the data analysed.


28. DTrace support for OCaml

Contact: Hannes Mehnert (website)

Keywords: Programming Languages, Runtime, Tracing

Do you program in OCaml? Ever wondered which are the hot functions in your code? Ever got stuck debugging live systems?
The aim of this project is to extend the OCaml runtime with dynamic tracing facilities using DTrace. There are two flavours of probes: dynamic ones on function entry and exit, and static ones. To support the former, the OCaml runtime needs to be extended, whereas to support the latter an OCaml library can be developed.
DTrace is a comprehensive dynamic tracing framework, originally developed for Solaris and also used in FreeBSD.
OCaml is a functional programming language.
See also: Python DTrace.
Pre-requisites: This project requires basic knowledge of the OCaml runtime.



More Systems Projects at the DTG Project Page

Contact: Ripduman Sohan

Please see the DTG project suggestions page for a number of interesting systems projects.