Department of Computer Science and Technology

Systems Research Group – NetOS

ACS Projects (2013–2014)


This page collects together various ACS project suggestions from the Network and Operating Systems part of the Systems Research Group. In all cases there is a contact e-mail address given; please get in touch if you want more information about the project.

Under construction: please keep checking back, as more ideas will hopefully be added to this page during the coming weeks.

  1. Architecting the Internet for the Challenged
    Contact: Arjuna Sathiaseelan (with Jon Crowcroft and Dirk Trossen)

    The current Internet architecture is progressively approaching saturation in meeting users' growing expectations, and it is increasingly unable to respond efficiently to new challenges:

    (1) The economic models for accessing the current Internet are built on the basic Best-Effort service model (which forms a paying user’s basic Service Level Agreement (SLA)), and the transport protocols that govern the transmission of data were adapted to this Best-Effort service, contending for whatever resources are available. This makes it impossible to support service models that could lower the cost of Internet access, e.g. through adaptive QoS. Such models are vital to the notion of Universal Access and to reducing the economic barriers to access.

    (2) The current Internet is architected in such a way that it requires end-to-end always-on connectivity to transmit information; receivers are required to be prepared to accept data at any time, never knowing when transmissions will be infrequent and when more energy-efficient operating modes may be safely entered. This scheduling uncertainty therefore wastes energy, raising a barrier to implementation of time-windowed – or time-shifted – access which could bring in new lower cost access opportunities for, e.g., utilising unused capacity [1].

    (3) With the growing need to access more content, the host-centric model of the Internet leads to wasted resources, such as redundant transmissions that cause congestion, while missing opportunities to cache content both on- and off-path.

    This project proposes to address these challenges with the vision of making the Internet ubiquitous, accessible and energy-efficient. This is done by traversing a range of connectivity options that ensure universal coverage, while providing a single unifying communication architecture with a single set of abstractions that not only spurs innovation for a wide range of new services and applications but also encompasses existing successful Internet services. We will utilise advances in information-centric networking (ICN) [2] to provide this abstraction: an abstraction driven by access to and provisioning of information rather than by the connection to explicitly identified endpoints.

    The concept of overarching ICN enables us to pursue multiple complementary connectivity options, specifically including Delay Tolerant Networking (DTN) [3], as distinct dissemination strategies, each of which constitutes a set of protocols that optimally utilise local resources. Integrating multiple concurrent dissemination strategies enables the utilisation of connected and disconnected modes of access under a single architectural (information-centric) abstraction.

    The student will gain hands-on experience with the ICN Blackadder platform and will extend it to include a DTN dissemination strategy. The student is also expected to demonstrate the extended platform through testbeds and/or simulations in ns-3. There are also opportunities to collaborate with international research organisations such as JPL/NASA and Aalto University.
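
    To make the dissemination-strategy idea concrete, here is a minimal Python sketch. It is not Blackadder's actual API (all class and method names are invented): information items are named, and each item is bound to a strategy that decides whether data is delivered immediately over an always-on path or buffered DTN-style for disconnected subscribers.

```python
class ImmediateStrategy:
    """Deliver to subscribers as soon as data is published (always-on path)."""
    def disseminate(self, item, data, subscribers, connected):
        delivered = [s for s in subscribers if connected.get(s, False)]
        return delivered, []  # (delivered-to, still-pending)

class DTNStrategy:
    """Store-and-forward: buffer data for currently disconnected subscribers."""
    def __init__(self):
        self.buffer = []
    def disseminate(self, item, data, subscribers, connected):
        delivered = [s for s in subscribers if connected.get(s, False)]
        pending = [s for s in subscribers if not connected.get(s, False)]
        if pending:
            self.buffer.append((item, data, pending))  # forward on next contact
        return delivered, pending

class Node:
    """An ICN node: names are bound to dissemination strategies, not hosts."""
    def __init__(self):
        self.strategies = {}   # item name -> strategy
        self.subs = {}         # item name -> set of subscriber ids

    def subscribe(self, item, who, strategy):
        self.subs.setdefault(item, set()).add(who)
        self.strategies.setdefault(item, strategy)

    def publish(self, item, data, connected):
        strat = self.strategies[item]
        return strat.disseminate(item, data, sorted(self.subs[item]), connected)
```

    The point of the abstraction is that the publisher's code does not change when the dissemination strategy does; connected and disconnected modes of access sit under the same publish/subscribe interface.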

    [1] A. Sathiaseelan, J. Crowcroft, LCD-Net: Lowest Cost Denominator Networking. ACM Computer Communication Review, April 2013.
    [2] D. Trossen, G. Parisis, “Designing and Realizing an Information-Centric Internet”, Communications Magazine, IEEE , vol.50, no.7, pp.60,67, July 2012.
    [3] K. Fall, A Delay-Tolerant Network Architecture for Challenged Internets, IRB-TR-03-003, February 2003.

  2. Flexible Transport of Time-shifted Video Content
    Contact: Arjuna Sathiaseelan (with Jon Crowcroft and Toby Moncaster)

    Note: Two possible projects.

    The exponential growth in on-demand television has imposed a significant strain on ISPs' core and backhaul networks. In the UK, BBC iPlayer received 62 million requests in January 2009, rising to 159 million by May 2011 (a growth rate of 3.5% per month, or 50% per year) [1]. Recent studies have shown that most requests for iPlayer content arrive during peak periods [2]. This can lead to a significant degradation in performance at those times (for instance, HD content becoming unavailable). The user, ISP and content provider would therefore all benefit from the ability to time-shift demand and to cache content nearer the destination.

    Streaming content from services such as iPlayer usually uses an application-layer transport protocol running over the top of TCP. Time-shifting the content opens up the chance to use alternative transports that may offer benefits to the operator (and user).

    This project explores an intelligent and flexible transport approach for time-shifted content. The project assumes the content server is co-located within the ISP’s network.

    (1) The server partitions the content into several sub-flows and associates an individual deadline with each sub-flow based on importance/timeliness.

    (2) The server uses network level information to determine the quality and condition of the network.

    (3) The server uses the above information to model the achievable throughput and level of fairness if each individual sub-flow were transported by different transport protocols, such as:

    * TCP and its variants

    * Scavenger transports such as LEDBAT

    * Network coding for lossy wireless links

    * Remy

    (4) The server then chooses the optimal transport for each sub-flow based on its importance, timeliness and current network conditions. For example, if video content needs to be delivered within a few hours, the server could start streaming it using a scavenger transport such as LEDBAT; if the deadline is very near, it can switch to a different transport, possibly using multiple TCP connections to transport the video.

    (5) The system should keep the final deadline in mind and make intelligent optimized decisions that ensure a fair transmission of the content and efficient use of the network resources (to the extent of delaying transmission when the network is congested and when the deadline is not near).
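
    As a toy illustration of step (4), the sketch below picks the least network-aggressive transport that still meets a sub-flow's deadline, falling back to the fastest option otherwise. The transport names and the idea of a flat achievable-rate estimate per transport are illustrative assumptions, not part of the project specification.

```python
def choose_transport(size_bytes, deadline_s, estimated_rates):
    """Pick a transport for one sub-flow.

    estimated_rates: transport name -> achievable bytes/s under current
    network conditions (step (3) of the scheme would supply these).
    Prefer the least aggressive transport that still meets the deadline;
    if none can, fall back to the fastest available transport.
    """
    # Ordered from least to most network-aggressive (assumed ordering).
    preference = ["ledbat", "tcp", "parallel-tcp"]
    for t in preference:
        if t in estimated_rates and size_bytes / estimated_rates[t] <= deadline_s:
            return t
    return max(estimated_rates, key=estimated_rates.get)
```

    A loose deadline lets a scavenger transport carry the flow in the background; as slack shrinks, the choice migrates towards more aggressive options, matching the behaviour described in step (4).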

    There are two main parts to this system, each of which probably contains enough content for a Part II project in its own right:

    1) Creating the optimisation model for the network (and creating simulations to show it works).

    2) Implementing a client and server that can switch between transports depending on network conditions.

    [1] YD. Maynard, “BBC iPlayer, Monthly Performance Pack”, BBC Marketing, Communications & Audiences, February 2011.
    [2] Gianfranco Nencioni, Nishanth Sastry, Jigna Chandaria and Jon Crowcroft, Understanding and decreasing the network footprint of over-the-top on-demand delivery of TV content, 22nd International World Wide Web Conference (WWW), 2013.

  3. RasPi-Net: Building a Distributed Raspberry Pi Network together with Satellite Communication
    Contact: Eiko Yoneki (with Abraham Martin)
    Keywords: Raspberry Pi, Delay Tolerant Networks, Satellite Communication, Stream Processing  

    This project aims to build a decentralised Raspberry Pi network (an opportunistic network) that can be deployed in wild and remote regions. The gateway Raspberry Pi will be integrated with a satellite communication device and will run a lightweight version of the Delay Tolerant Networking (DTN) bundle protocol. RasPi-Net could consist of 10-100 nodes. As an example, a remote sensing application could run either on the Raspberry Pis or on smartphones that connect to them. Collected data could be processed within RasPi-Net to reduce the volume of data streamed over the satellite link to the base location. Crowdsourcing applications could also run on top of RasPi-Net; you could write a specific application to help in a disaster situation, or build a typical streaming platform.
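
    The gateway's role can be sketched in a few lines. This is a toy model, not the bundle protocol itself: readings are buffered as bundles while the satellite link is down, reduced in-network (here, simply averaged, standing in for whatever data reduction the application needs), and flushed when the link comes up.

```python
class Gateway:
    """Store-and-forward gateway between the Pi network and the satellite link."""
    def __init__(self):
        self.bundles = []          # buffered sensor readings (toy "bundles")

    def receive(self, reading):
        """A sensor node hands a reading to the gateway."""
        self.bundles.append(reading)

    def flush(self, link_up):
        """Return the reduced payload sent over the satellite link,
        or None if the link is down. Buffered data survives until contact."""
        if not link_up or not self.bundles:
            return None
        payload = sum(self.bundles) / len(self.bundles)  # in-network reduction
        self.bundles = []
        return payload
```

    The store-and-forward behaviour is what makes the design delay-tolerant: disconnection of the expensive satellite link loses no data, it only delays delivery.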
    [1] Delay Tolerant Network Bundle Protocol:
    [2] RockBlock Technology:

  4. Raspberry-Pi Bulk Synchronous Processing: Distributed Data (Graph) Processing
    Contact: Eiko Yoneki (with Karthik Nilakant/ Abraham Martin)
    Keywords: Graph Processing, Data Parallel, Bulk Synchronous Processing

    This project aims to replicate, for graph mining, the benefits of an array of low-power processors backed by persistent storage (see FAWN [1]). The aim is to build a software stack for the Raspberry Pi that allows a set of such devices, each equipped with either an SD card or a USB flash drive, to act as nodes for Bulk Synchronous Processing of graphs (see Pregel [2] for an example of bulk synchronous processing). All the necessary data structures will reside on flash, with the Raspberry Pi's small 256 MB of RAM acting as a scratchpad.

    This means that a failure of a Raspberry Pi unit can be tolerated by simply swapping its flash memory over to another good unit. However, this will need careful integration of a two-phase commit protocol with bulk synchronous processing (BSP). Another goal of this project is to integrate a prefetching algorithm we have developed for graphs stored on flash drives with the software stack built for this project. The final aim will be to show that, when processing time, energy consumption and cost are taken together, this solution is competitive with running BSP on a cluster of PCs. Additionally, you can explore the impact of data locality in such environments.
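
    The superstep structure the software stack must provide can be illustrated with a toy, single-process sketch: connected components by min-label propagation, with a global barrier between supersteps. On the real system each vertex's state and inbox would live on flash, with one Pi per partition; here everything is in plain dicts.

```python
def bsp_min_labels(vertices, edges, max_steps=30):
    """Pregel-style BSP sketch: label every vertex with the minimum label
    reachable from it (i.e. a connected-components computation).

    vertices: id -> current label; edges: id -> list of neighbour ids.
    Messages sent in one superstep are only visible in the next.
    """
    # Superstep 0: every vertex broadcasts its own label to its neighbours.
    inbox = {v: [] for v in vertices}
    for v in vertices:
        for n in edges.get(v, []):
            inbox[n].append(vertices[v])

    for _ in range(max_steps):
        outbox = {v: [] for v in vertices}
        active = False
        for v in vertices:                     # "compute" phase
            new = min([vertices[v]] + inbox[v])
            if new != vertices[v]:
                vertices[v] = new
                active = True
                for n in edges.get(v, []):     # message only on change
                    outbox[n].append(new)
        inbox = outbox                         # global barrier between supersteps
        if not active:                         # all vertices halted
            break
    return vertices
```

    The barrier is the natural point to hook in a two-phase commit: a superstep's messages and updated vertex state are flushed to flash before the next superstep begins, so a failed unit can be replaced by re-attaching its flash to a spare Pi.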

    [1] D. Andersen, J. Franklin, A. Phanishayee, L. Tan and V. Vasudevan: FAWN: A Fast Array of Wimpy Nodes, Communications of the ACM, July 2011.
    [2] G. Malewicz, M. Austern, A. Bik, J. Dehnert, I. Horn, N. Leiser, and G. Czajkowski: Pregel: A System for Large-Scale Graph Processing, SIGMOD, 2010.

  5. Graph Compression
    Contact: Eiko Yoneki (with Weixiong Rao)
    Keywords: Graph Compression

    This project explores graph compression mechanisms as part of a project looking into high-performance semi-external memory graph processing (see [1] for an idea of semi-external memory approaches). The graph compression work will build on an in-house graph representation format we have developed that allows succinct representation of graphs exhibiting hierarchical structure. The project will explore ways to improve the representation, yielding smaller graphs on disk that are less costly to traverse. A key element of the representation scheme is a recursive graph partitioning step that minimises the number of edges between partitions, and this is a rich space for exploring suitable algorithms. We are concerned primarily with experimentally evaluating I/O costs on large graphs and measuring compression performance. However, a student with a theoretical background might also consider designing algorithms with provable bounds on compression performance, which would be a big plus. If time allows, you could also implement an efficient transformation tool based on the developed graph compression algorithm using parallel processing tools (e.g. map/reduce).
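
    Why partitioning helps compression can be seen with a standard gap-plus-varint adjacency encoding. This is not our in-house format, just a sketch: a partitioning that assigns nearby vertices nearby ids keeps the gaps between sorted neighbour ids small, and small gaps need fewer varint bytes.

```python
def varint(n):
    """Encode a non-negative integer as a little-endian base-128 varint."""
    out = bytearray()
    while True:
        b = n & 0x7F
        n >>= 7
        out.append(b | (0x80 if n else 0))   # high bit = "more bytes follow"
        if not n:
            return bytes(out)

def encode_adjacency(neighbours):
    """Sort, gap-encode and varint-pack one adjacency list."""
    out = bytearray()
    prev = 0
    for v in sorted(neighbours):
        out += varint(v - prev)   # store the gap, not the absolute id
        prev = v
    return bytes(out)
```

    The same three edges cost very different numbers of bytes depending on how close together the neighbour ids are, which is exactly the property the recursive partitioning step tries to create.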

    [1] R. Pearce, M. Gokhale and N. Amato: Multithreaded Asynchronous Graph Traversal for In-Memory and Semi-External Memory, ACM/IEEE High Performance Computing, Networking, Storage and Analysis, 2010.

  6. Develop Scale-Out SSD based Graph Traversal Platform
    Contact: Eiko Yoneki (with Karthik Nilakant)
    Keywords: Graph processing, I/O optimisation, Graph structure

    This project contributes to our ongoing work on high-performance semi-external memory graph processing (see [1]). We have developed an in-house prefetching algorithm for mining large graphs stored on solid state drives. A solid state drive can service many random reads in parallel, provided that the requesting application delivers requests at a rate that keeps the drive sufficiently busy. Thus, prefetching multiple pages based on the current request in an “intelligent” manner can greatly improve performance. Our current prefetcher implementations are aligned with the graph traversal approach (that is, with the “graph iterator” being employed).

    Furthermore, the prefetcher implementation could be extended to utilise multiple storage devices in parallel. The aim in this setting would be to outperform standard storage parallelisation techniques such as hardware or software RAID. The performance of this approach would also depend on how graph data is divided between the available storage devices. Your task is to develop a scale-out version of our current work. You could also extend it to distributed graph traversal, to study how a very large graph (e.g. over 2 billion vertices) could benefit from the scale-out version, even though the current version can handle large graphs on a single node. You could also explore generating a large test graph by combining various entities of social network data (e.g. what happens when every "Like", "Comment" and "Photo" on Facebook is treated as a vertex in the graph, not just "People" and "Pages"? Such graphs would be huge).
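
    A toy sketch of the underlying idea (our actual prefetcher is considerably more sophisticated): instead of fetching one adjacency page per step, batch the reads for several frontier vertices at once, so a parallel-capable SSD stays busy and the number of synchronous I/O rounds drops.

```python
from collections import deque

def bfs_with_prefetch(adj_pages, root, lookahead=8):
    """BFS over adjacency lists that stand in for pages on an SSD.

    Up to `lookahead` frontier vertices have their pages requested in one
    batch, modelling reads issued in parallel to the drive.
    Returns (visit order, number of I/O batches issued).
    """
    seen = {root}
    frontier = deque([root])
    order, batches = [], 0
    while frontier:
        # One batch = one round of parallel page reads for the frontier.
        batch = [frontier.popleft() for _ in range(min(lookahead, len(frontier)))]
        batches += 1
        for v in batch:
            order.append(v)
            for n in adj_pages.get(v, []):
                if n not in seen:
                    seen.add(n)
                    frontier.append(n)
    return order, batches
```

    With lookahead 1 this degenerates to one synchronous read per vertex; the speed-up from batching is, roughly, what an intelligent prefetcher recovers without needing the traversal to expose its frontier explicitly.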

    [1] E. Yoneki and A. Roy: Scale-up Graph Processing: A Storage-centric View. ACM SIGMOD - GRADES, 2013.
    [2] GraphLab:

  7. Building Graph Query Function using Functional Programming
    Contact: Eiko Yoneki (with Weixiong Rao)
    Keywords: Graph, Functional programming

    Demand for storing and searching online data with graph structure is growing. Such data range from online social networks to web links, and they require efficient, scalable query processing. In this project, you will build a graph query function (layer) to achieve efficient graph data processing. The function builds on a lazy graph loaded from multiple sources and should perform queries at the same speed as, or faster than, a relational database. It should be written in Clojure or another functional language to support practical lazy loading from the data source into an in-memory graph database. Efficient representations for graph queries should also be investigated. The evaluation includes comparison with existing graph databases and comparison of the query capability with SQL. You can start the project from our existing work in this area.
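
    The laziness at the heart of the project can be sketched as follows. The project itself would use Clojure's lazy sequences; this Python stand-in only illustrates the property that matters: a query touches, and therefore loads, just the vertices it needs from the backing source.

```python
class LazyGraph:
    """In-memory graph whose adjacency lists are fetched on first use."""
    def __init__(self, fetch):
        self.fetch = fetch      # vertex -> neighbour list (e.g. a DB read)
        self.cache = {}
        self.loads = 0          # count backing-store reads, for the tests

    def neighbours(self, v):
        if v not in self.cache:
            self.cache[v] = self.fetch(v)   # load lazily, once
            self.loads += 1
        return self.cache[v]

    def two_hop(self, v):
        """An example query: distinct vertices within two hops of v."""
        out = set()
        for n in self.neighbours(v):
            out.add(n)
            out.update(self.neighbours(n))
        out.discard(v)
        return out
```

    Against a large multi-source graph, the win is that a local query never pays the cost of materialising the whole graph, which is the behaviour the Clojure layer should preserve while matching relational-database query speed.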

    [1] Microsoft Research, “Trinity project: Distributed graph database,”
    [2] Neo Technology, “Java graph database,”

  8. The impact of social influence on Information Diffusion - Understanding Social Influence by analysing EEG and Eye Tracking from Cognitive Perspectives
    Contact: Eiko Yoneki (with Arman Idani)
    Keywords: Information Diffusion, Behaviour Model, Perception, Cognitive Science

    Social networks like Facebook and Twitter have brought new entertainment experiences. People and companies publish information in a social network through the network's topology, by sharing it with their neighbours. The published information can either be quickly ignored or widely spread through the network. The network structure plays an important role in the diffusion process; however, the extent to which information is propagated depends highly on the social influence that members have on each other when each member decides whether to propagate the content further.

    This project investigates the impact of social influence on the information diffusion process from a cognitive perspective. First, you would analyse the eye-tracking and EEG data we have collected in our experiments and model the decision-making process under social influence, including perception; you could then extend this to analysing online social network activities to validate the model. You could compare the different states of EEG activity with psychological models of decision making. You could also write your own experimental application for online social networks (e.g. Facebook) to extract unconscious social behaviour in comparison to feedback information (e.g. how other people rated an item, or the average rating) from the participant's social clique.

    The analysis requires an understanding of statistical modelling/machine learning as well as digital signal processing. Prior knowledge and experience in speech recognition is very useful.

    [1] Evan T.R. Rosenman: Retweets, but Not Just Retweets: Quantifying and Predicting Influence on Twitter. Thesis, Harvard University.
    [2] Boyd, Golder, and Lotan, Tweet, Tweet, Retweet: Conversational Aspects of Retweeting on Twitter, HICSS-43, 2010.
    [3] Balicer, R.:Modeling Infectious Diseases Dissemination Through Online Role-Playing Games. Epidemiology, 18(2), 2007.
    [4] Roman Bednarik , Markku Tukiainen, An eye-tracking methodology for characterizing program comprehension processes, Proceedings of the 2006 symposium on Eye tracking research & applications, March 27-29, 2006, San Diego, California
    [5] Hayes, T. R., Petrov, A. A., & Sederberg, P. B. (2011) A novel method for analyzing sequential eye movements reveals strategic influence on Raven's Advanced Progressive Matrices. Journal of Vision, 11 (10:10), 1-11,
    [6] Janis, I. L., & Mann, L. (1977). Decision making: A psychological analysis of conflict, choice, and commitment. Free Press.
    [7] Klein, G. A., Orasanu, J. E., Calderwood, R. E., & Zsambok, C. E. (1993). Decision making in action: Models and methods.

  9. Trust Relationships and Content Filtering
    Contact: Eiko Yoneki (with Arman Idani)
    Keywords: Social Trust, Content Filtering

    It is now challenging for most users to keep up with the growing amount of information in their news feeds on social networks such as Facebook or Twitter, where people have many first-hop connections. In this project you will investigate the relevance of social trust, defined as the confidence users have in their first-hop connections on different subjects, to their reaction to the content they see in their feeds. The aim is to answer questions such as: (i) How do users choose which content to care about, and does it matter who is sharing the content with them? (ii) Can the influence of users on each other's behaviour be inferred? (iii) What makes a user trustworthy?

    This project requires prior knowledge of machine learning and statistical modelling (ideally experience with inference and learning of graphical models, or structural equation modelling), a programming language for data collection (Java or Python), and familiarity with statistics tools (e.g. R, SPSS, MATLAB).

    [1] Yarden Katz and Jennifer Golbeck. 2006. Social network-based trust in prioritized default logic. In proceedings of the 21st national conference on Artificial intelligence - Volume 2 (AAAI'06), Anthony Cohn (Ed.), Vol. 2. AAAI Press 1345-1350.
    [2] Ugur Kuter and Jennifer Golbeck. 2010. Using probabilistic confidence models for trust inference in Web-based social networks. ACM Trans. Internet Technol. 10, 2, Article 8 (June 2010), 23 pages.

  10. Opinion and Behaviour Dynamics with Role Game
    Contact: Eiko Yoneki (with Andre Ribeiro)
    Keywords: Information Cascade, Opinion Dynamics, Twitter

    The Internet has made communication (and content creation) dramatically faster, cheaper and easier. Some argue that this is an evolution of the media technology we use to solve problems. This project investigates the communicative behavioural dynamics, information flow and decision making constrained by the media and information people have available. The investigation thus covers behaviour modelling and analysis from an opinion dynamics perspective, and can provide answers to questions such as "do people change their opinion or share information based on something similar to page-ranking or some game-theoretic algorithm?" or "which factors have the largest influence?"

    In this project you would write a role-playing game to study the above problem. Millions of people worldwide play interactive online role-playing games, forming complex and rich networks among their virtual characters. Such games can mirror the real world and predict what might occur in it. For example, an unexpected outbreak of an infectious communicable disease (unplanned by the game's creators) once occurred inside the virtual world of a role-playing game; the outbreak bore surprising similarities to real-world epidemics [3].

    You could write a role-playing game application for mobile devices and the web, and gain insight into the communicative behaviour of the players under different conditions (when do they choose to communicate? with whom? for what reason?). The game could be "what's the most popular music?" or "what's the best mobile phone to buy?", or similar to any popular game on the market. Vinayak Dalmia's master's thesis on 'Investigating Herding Behaviour in an Experimental Setup' (Judge Business School) is a good starting point, and papers [1], [2] and [4] are good general background reading.

    [1] Boyd, Golder, and Lotan, Tweet, Tweet, Retweet: Conversational Aspects of Retweeting on Twitter, HICSS-43, 2010.
    [2] Christo Wilson, Bryce Boe, Alessandra Sala, Krishna P. N. Puttaswamy and Ben Y. Zhao, User Interactions in Social Networks and their Implications, ACM EuroSys 2009.
    [3] Balicer, R.:Modeling Infectious Diseases Dissemination Through Online Role-Playing Games. Epidemiology, 18(2), 2007.
    [4] Sastry, Yoneki and Crowcroft " Buzztraq: Predicting geographical access patterns of social cascades using social networks". EuroSys workshop on Social Network Systems, 2009.

  11. Building Dynamic Time-Dependent Multicast Tree in Twitter Networks
    Contact: Eiko Yoneki (with Valentin Dalibard and Weixiong Rao)
    Keywords: Joint Diagonalisation, Multicast Tree, Time-Dependent Spread Mode, Content Distribution Simulation, Twitter

    In online social networks like Facebook and Twitter, people publish information through the network’s topology by sharing it with their neighbours. The source node appears spontaneously and, based on the network topology formed by the followers, the content cascades. This project investigates the characteristics of such dynamic, temporal cascade trees, which may appear as multiple different spanning trees, even from the same source node, depending on the content or on the time. The ultimate goal of the project is to build multicast trees by aggregating such spanning trees, in order to accelerate the cascade process and potentially provide a more efficient content delivery process.

    The methodology for extracting such spanning trees can be taken from our previous work on joint diagonalisation (JD) [1]. JD is a technique for estimating the average eigenspace of a set of matrices. Applying JD to matrices of spanning trees of a network extracts multiple modes; note that in most real-world networks there is no single underlying static graph. The average eigenspace may be used to construct a graph that represents the ‘average spanning tree’ of the network, or a representation of the most common propagation paths. Examining the distribution of deviations from this average reveals that the distribution is multi-modal, indicating several modes in the underlying network. These modes can be identified and are found to correspond to particular times.
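
    As a crude stand-in for joint diagonalisation (see [1] for the real method), the sketch below averages the snapshot matrices, takes the leading eigenvectors of the mean as an "average eigenspace", and measures each snapshot's deviation from it. Large deviations are the kind of signal that, in the real analysis, reveals the distinct modes.

```python
import numpy as np

def average_eigenspace(mats, k=2):
    """Leading k eigenvectors of the mean of a set of symmetric matrices
    (e.g. adjacency matrices of observed spanning trees)."""
    mean = sum(mats) / len(mats)
    vals, vecs = np.linalg.eigh(mean)            # symmetric input assumed
    return vecs[:, np.argsort(vals)[::-1][:k]]   # columns = top-k eigenvectors

def deviation(mat, basis):
    """Energy of `mat` lying outside the span of `basis`. Snapshots whose
    deviation is large do not fit the average propagation structure."""
    proj = basis @ (basis.T @ mat @ basis) @ basis.T
    return float(np.linalg.norm(mat - proj))
```

    Clustering snapshots by their deviation (or by which of several averaged subspaces fits them best) is one simple way to separate, say, a daytime cascade mode from a night-time one before building a multicast tree per mode.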

    You can explore this project in two ways: 1) modelling the multicast tree from an aggregation of spanning trees in a more theoretical manner, or 2) using OSN data to demonstrate the extracted trees and the efficiency of the multicast tree for content dissemination in a content distribution network simulation.

    [1] D. Fay, J. Kunegis, and E. Yoneki: Centrality and Mode Detection in Dynamic Contact Graphs; a Joint Diagonalisation Approach. IEEE/ACM ASONAM, 2013
    [2] Meeyoung Cha, Hamed Haddadi, Fabrício Benevenuto, P. Krishna Gummadi: Measuring User Influence in Twitter: The Million Follower Fallacy. ICWSM 2010.
    [3] Evan T.R. Rosenman: Retweets, but Not Just Retweets: Quantifying and Predicting Influence on Twitter. Thesis, Harvard University.

  12. Cross-cultural Interactions
    Contact: Christos Efstratiou (with Daniele Quercia)

    Face-to-face interactions within the workplace can be influenced by a wide range of factors, including spatial variations (e.g., the location of a person's desk) and cultural differences (e.g., a person's country of origin). In July 2012, we collected co-location information for 60 colleagues in the Computer Lab building. Participants were asked to wear active RFID tags that were able to record face-to-face encounters in the building. From this dataset, it is possible to identify social interactions and the corresponding interaction strength (defined as contact frequency). The aim of this project is to identify how cultural differences among participants have affected their interactions. Cultural differences will be encoded using Hofstede's five dimensions, which conveniently correspond to five publicly available numbers for each nation of origin. The work will consist of selecting appropriate network metrics, extracting them from our face-to-face network, and correlating their values with the five cultural dimensions. Desirable skills include a willingness to learn data mining techniques and engage with basic statistics.
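
    In miniature, the analysis pipeline might look like this (all data here is invented; the real study would use the RFID contact records and the published Hofstede scores): build an interaction-strength network from contact counts, extract a per-person metric such as weighted degree, and correlate it with one cultural dimension.

```python
def weighted_degree(contacts, person):
    """Interaction strength of one person.
    contacts: frozenset({a, b}) -> number of face-to-face encounters."""
    return sum(n for pair, n in contacts.items() if person in pair)

def pearson(xs, ys):
    """Pearson correlation between a network metric and a Hofstede score."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)
```

    The project's substance is in choosing which network metrics to extract (degree, centrality, clustering within national groups, and so on) and interpreting the correlations; the mechanics above stay the same.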

    [1] G. Hofstede, G. J. Hofstede, M. Minkov. Cultures and Organizations: Software of the Mind.

  13. Trust and Influence on Facebook
    Contact: Jon Crowcroft

    Two possible projects:
    1. A quick comparison of two different clustering tricks: the CNR one, based on interaction data, and the MPI one, based on homophily. One could also build some more tools for this (like the MPI ones): see the CNR paper on ego networks and discovering clusters of users via their interactions, and the MPI paper on the same problem approached via the "likeness" of users.

    To make this into a nice Masters-level project, one could then use the nested clusters (Dunbar's circles; see Roberts, S., Dunbar, R.: Communication in social networks: Effects of kinship, network size, and emotional closeness. Personal Relationships 18(3) (2011) 439-452) to build default Facebook privacy settings (presumably, trust decreases as you move out through each level). One could then run a user study to see how well the privacy settings matched users' understanding of trust; coupled with the myPersonality Facebook app, one could control for various quirks of users (extrovert, introvert, etc.).

    A further step might be to link privacy and circle-setting failures to unfriending behaviours.
    2. A longer project on intervention in social networks to "flatten" the double impact that highly popular users enjoy: they have greater degree and centrality, but also inherently greater influence (which is why they acquire greater degree and centrality in the first place), so their opinion is effectively double-counted. There is a paper about such sample biases in medical work, though not on opinion dynamics, so it might be interesting to see whether one can obtain a fairer measure of opinion.

    This could be done by slowing down, or randomly dropping, tweets from popular users (to some randomly selected subset of their followers). This is interesting both ethically and technically.

    See this paper on Encouraging moderation: Clues from a simple model of ideological conflict

    Originator: Andrea Passarella; see also the Recognition project.

  14. Mobility Prediction in Transport Networks from Sparse Data
    Contact: Neal Lathia

    While Oyster cards have become rich sources of historical public transit mobility data, there are a number of cases where it would be useful to predict where transit passengers will travel in the future. These include, for example, giving passengers timely information about their destinations as they travel, or giving TfL information about where customer segments of interest will be travelling. One of the main problems we have identified in preliminary research is how to successfully predict trips for passengers who have a very sparse transaction history. This research proposes to use (a) historical Oyster card data and (b) a sample of Wi-Fi traces from the London Underground to design, test, and evaluate the extent to which passengers' future trips can be predicted, and to analyse how these predictions differ as the availability of historical transactions varies.
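
    One simple baseline for the sparse-history problem (purely illustrative; station names and the back-off threshold are made up) is a per-passenger frequency model that backs off to the population-wide destination distribution whenever an individual's history is too short to be trusted:

```python
from collections import Counter, defaultdict

class TripPredictor:
    """Predict a passenger's next destination from Oyster-style trip records."""
    def __init__(self, min_history=3):
        self.per_user = defaultdict(Counter)   # user -> destination counts
        self.global_counts = Counter()         # population-wide prior
        self.min_history = min_history         # back-off threshold (assumed)

    def observe(self, user, destination):
        self.per_user[user][destination] += 1
        self.global_counts[destination] += 1

    def predict(self, user):
        history = self.per_user[user]
        # Sparse history: fall back to what the population does.
        source = history if sum(history.values()) >= self.min_history \
                 else self.global_counts
        return source.most_common(1)[0][0] if source else None
```

    The project's evaluation question is exactly how quickly predictions improve as a passenger's history grows, and whether signals such as Wi-Fi traces can stand in for the missing transactions; richer models (time-of-day conditioning, origin-conditioned Markov chains) slot into the same observe/predict interface.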

  15. The Language of Influence
    Contact: Diarmuid O Seaghdha (with Daniele Quercia)

    While macro-level studies of Twitter have concluded that "Twitter is not a social network but a news media" (Kwak et al., 2010), micro-level analyses of users and user-user interactions suggest that the Twittersphere functions as a set of real communities according to well-established sociological definitions of community (Cheng et al., 2011; Quercia et al., 2011). Twitter facilitates conversations between users, each of whom has a particular degree of status online (which may or may not reflect real-life status). This project would study social power relationships on Twitter, with a focus on linguistic indicators. Notions of influence on Twitter have been studied before (Liu et al, 2010; Danescu-Niculescu-Mizil et al, 2011) but there are still many potential aspects to investigate. Recently, Bramsen et al. (2011) looked at identifying power hierarchies as manifested in email data; here the sociolinguistic motivation would be similar but the problem would be quite different (and novel!). The NLP techniques involved would start with basic lexical, topic-model and sentiment analysis, but could extend to sophisticated dialogue act modelling (Ritter et al., 2010) and discourse effects such as accommodation (Danescu-Niculescu-Mizil et al, 2011; Danescu-Niculescu-Mizil et al, 2012) and coherence.

  16. Polypath Transport Protocols
    Contact: Anil Madhavapeddy or Toby Moncaster

    Cloud computing has become a central part of modern business. Services such as Amazon's EC2 offer users "Infrastructure as a Service": the ability to lease virtual servers to run their business applications on. These virtual servers are actually virtual machines running on top of a hypervisor (such as Xen or VMware) on a physical server within a data centre. Applications that communicate between multiple such VMs usually use TCP (or a related variant such as DCTCP), as they don't know where the machines are physically located. Work such as [1] shows that TCP is rather slow for many tasks (but it always works). Shared-memory (shmem) transports are much quicker for inter-process communication; however, they can only be used between VMs residing on the same physical host.

    We have been working on a new approach that builds on Multipath TCP [2] and offers the ability to set up shmem links between VMs regardless of their physical location. The key aspect is that it always maintains an underlying TCP flow for resilience, which means the connection can survive live migration of a VM between physical hosts. This work is still in its early stages: we have completed the basic protocol design, but only limited implementation and evaluation to date. This makes it a very open-ended project that is targeted at publication in a major conference (such as SIGCOMM or INFOCOM). As a minimum, we need a more complete implementation of the protocol. This could simply build directly on the mechanisms of MPTCP (in which case it should take a relatively modest amount of effort), but a student might also want to significantly enhance the implementation and improve the protocol design to address issues such as how to deal with short flows (where control traffic can dominate the actual data) and how to use hardware offload without sacrificing the performance benefits it offers [3]. One key aspect of this approach is that it should offer transparent integration with existing system libraries such as OpenSSL. We also intend to integrate it with name services such as DNSSEC to provide a mechanism for identifying which hosts support the protocol.
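
    The failover behaviour at the heart of the design can be caricatured in a few lines. This is a sketch of the idea only, not the protocol (the real system negotiates subflows via MPTCP mechanisms): the TCP subflow is never torn down, and the shmem path is used only while the two VMs share a physical host.

```python
class PolypathConnection:
    """Toy model of a polypath connection between two VMs."""
    def __init__(self):
        self.tcp_subflow_open = True   # resilience subflow, never torn down
        self.same_host = False         # are the endpoints colocated?
        self.log = []                  # which path carried each send

    def migrate(self, same_host):
        """Live migration only changes colocation; the TCP subflow survives,
        so the connection itself never breaks."""
        self.same_host = same_host

    def send(self, data):
        # Prefer the fast shmem path when available, else fall back to TCP.
        path = "shmem" if self.same_host else "tcp"
        self.log.append(path)
        return path
```

    The interesting engineering is everything this sketch hides: detecting colocation, switching paths without reordering or losing in-flight data, and keeping the fallback cheap enough that short flows are not dominated by control traffic.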

    [1] The case for reconfigurable I/O channels. S. Smith, A. Madhavapeddy, C. Smowton, M. Schwarzkopf, R. Mortier, R.M. Watson, S. Hand. In ReSoLVE'12, March 2012.
    [2] The IETF Multipath TCP Working Group Homepage
    [3] How hard can it be? Designing and implementing a deployable Multipath TCP. C. Raiciu, C. Paasch, S. Barre, A. Ford, M. Honda, F. Duchene, O. Bonaventure, M. Handley. In NSDI'12.

  17. Scheduling on speed: doping processes for fairness
    Contact: Malte Schwarzkopf

    Operating system schedulers currently enforce a notion of fairness by making sure that processes receive an equal share of CPU time (modulo their priority level). However, modern CPUs are deeply hierarchical systems with many levels of shared micro-architectural resources, and co-scheduling processes can (and often does) lead to performance interference between them [4, 5].

    Recent Intel and AMD architectures support forms of "dynamic overclocking", by which they can dynamically increase the clock frequency of one or several cores on a chip, with differences as high as 500 MHz [2, 3]. However, to remain within the overall TDP limit, the processor may only apply this "doping" to a subset of cores, or for part of the time. The operating system can control the boosting of cores' clocks by setting ACPI performance states (P-states) for them, where the highest performance state corresponds to the highest clock frequency. The control exerted by the OS is, however, limited by throttling mechanisms on the CPU: if it gets too hot, it will force the frequency down to a lower level.

    The aims of this project would be to (i) monitor unfairness arising from performance interference (e.g. cache misses, memory bus contention), meaning that processes cannot make full use of their CPU time, and (ii) compensate for such unfairness by regularly giving the disadvantaged process a "shot of dope", so that it can achieve more during its allocated time. This would be a similar approach to that described in [1], except that, instead of increasing the time the process is scheduled for, the frequency of the core it runs on is increased. In further work, you could evaluate how this is best balanced between different cores in order to avoid any being throttled; and look into trading off the cost of migrating a process to a different core against the benefit of getting doped.
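    The compensation step above can be sketched as a small calculation (base frequency, thresholds, and function names are all illustrative, not from the project): estimate how much progress a process lost to interference from its cycles-per-instruction, then map that loss to a frequency boost, capped by the chip's turbo headroom:

    ```python
    # Illustrative sketch: proportional frequency "dope" for an
    # interference-disadvantaged process. All numbers are invented.

    BASE_FREQ_MHZ = 2000    # assumed nominal core clock
    MAX_BOOST_MHZ = 500     # turbo headroom, per the ~500 MHz figure above

    def slowdown_estimate(solo_cpi, observed_cpi):
        """Fraction of progress lost to interference, comparing
        cycles-per-instruction measured solo vs. co-scheduled."""
        return max(0.0, (observed_cpi - solo_cpi) / observed_cpi)

    def boost_for(slowdown):
        """Clock increase that would recover the lost throughput:
        with slowdown s, instructions retire at (1 - s) of the solo
        rate, so the clock must scale by 1 / (1 - s), i.e. a boost of
        BASE * s / (1 - s) MHz, capped at the turbo limit."""
        if slowdown >= 1.0:
            return MAX_BOOST_MHZ
        wanted = BASE_FREQ_MHZ * slowdown / (1.0 - slowdown)
        return min(MAX_BOOST_MHZ, round(wanted))
    ```

    A real implementation would feed this from hardware performance counters (cache misses, memory bus stalls) and apply the result via P-state requests, subject to the thermal throttling caveat above.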

    This project, if completed well and with good performance results, is likely to lead to a conference or workshop publication.

    [1] Improving Performance Isolation on Chip Multiprocessors via an Operating System Scheduler. A. Fedorova, M. Seltzer, M.D. Smith. In Proceedings of PACT 2007.
    [2] Intel TurboBoost, Wikipedia
    [3] AMD Bulldozer turbo mode, Wikipedia
    [4] Resource-Conscious Scheduling for Energy Efficiency on Multi-Core Processors, A. Merkel, J. Stoess, F. Bellosa. In Proceedings of EuroSys 2010.
    [5] Dynamic Classification of Program Memory Behaviors in CMPs, Y. Xie, G. Loh. In Proceedings of CMP-MSI 2008.

  18. Executing Hardware or: How I learned to run a 40G network on a wimpy laptop
    Contact: Matthew P. Grosvenor

    Device drivers are highly complex pieces of software. They have to contend with asynchronous event notifications from both hardware and software (often simultaneously), subtle kernel threading and locking models, complex hardware offload state management and notoriously poorly documented interfaces to both the kernel and the hardware itself. Additionally, device drivers are often written to support a multitude of subtle hardware variations. The popular Intel 1G (e1000) and 10G (ixgbe) Linux device drivers both support more than 25 different PCI endpoint identifiers each.

    Faced with this range and complexity of devices, driver developers rarely test their device drivers against all possible devices and all variants thereof. A recent study found that over two dozen Linux and BSD driver patches were marked as "compile tested only."

    Ultraviolet (UV) [1] is an ongoing research project designed to facilitate in-kernel testing of device drivers against software models of hardware. Currently, Ultraviolet implements a fully operational NetFPGA 10G [2] hardware model (in C) and device driver.

    The aim of this project is to implement a driver and hardware model for the more complex Intel 1G (e1000) or 10G (ixgbe) devices, for one or more of the supported chipsets (e.g. the Intel 82599 [3]), and to show that device driver bugs introduced into the kernel can be detected and fixed using Ultraviolet.
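    The core idea of such a hardware model can be illustrated in a few lines (register offsets and bit names below are made up, not the real e1000/ixgbe layout): the driver performs MMIO-style register accesses, and the model reproduces the side effects the silicon would have, so driver logic can be exercised without any hardware present:

    ```python
    # Toy software model of a NIC register file, with invented registers.

    CTRL, STATUS = 0x00, 0x08        # hypothetical register offsets
    RESET_BIT, LINK_UP = 0x1, 0x2    # hypothetical bit definitions

    class NicModel:
        def __init__(self):
            self.regs = {CTRL: 0, STATUS: 0}

        def write(self, offset, value):
            self.regs[offset] = value
            if offset == CTRL and value & RESET_BIT:
                # Model the side effect of a device reset: the reset bit
                # self-clears, then STATUS reports link-up, as real
                # hardware would after renegotiation.
                self.regs[CTRL] = 0
                self.regs[STATUS] |= LINK_UP

        def read(self, offset):
            return self.regs[offset]

    def driver_init(dev):
        """Toy driver init path: reset the device, then check for link."""
        dev.write(CTRL, RESET_BIT)
        return bool(dev.read(STATUS) & LINK_UP)
    ```

    A model of a real chipset would fill in the documented register map and behaviours, so that a buggy driver (e.g. one that forgets the reset, or polls the wrong register) visibly fails against the model.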

    As part of an ongoing research project, it is hoped that successful completion of this project may lead to publishable outcomes.

    [1] Ultraviolet Project
    [2] NetFPGA 10G Project
    [3] Intel 82599 Controller

  19. Dr Frankenstein's Monster or: How I learned to duct-tape software and hardware together
    Contact: Matthew P. Grosvenor

    When designing new hardware, designers often use simulators to test their implementations before targeting real hardware. A number of tools exist for such work. Verilator [1] takes a unique approach, translating Verilog code into an executable C++ program for testing. Currently, this tool tests the hardware design only; it does not also test the device drivers associated with the hardware.

    Ultraviolet (UV) [2] is an ongoing research project designed to facilitate in-kernel testing of device drivers against software models of hardware. Currently, Ultraviolet implements a fully operational NetFPGA 10G [3] hardware model in C and runs it against the production device driver.

    The aim of this project is to connect a cycle-accurate simulation of the NetFPGA hardware description, written in Verilog, to the Ultraviolet framework, possibly by using Verilator to "glue" the two together. This would allow in-development hardware to be tested against a live, running kernel.
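    The shape of that "glue" can be sketched as follows (interfaces invented for illustration): Verilator produces a model that is advanced one clock edge at a time via an eval() call, so an adapter must translate the framework's functional "advance N cycles" requests into per-edge stepping:

    ```python
    # Sketch of an adapter between a functional test framework and a
    # cycle-accurate model, mimicking Verilator's eval() clocking style.

    class CycleAccurateCounter:
        """Stand-in for a Verilated model: a 4-bit counter that
        increments on each rising clock edge."""
        def __init__(self):
            self.count = 0

        def eval(self, clk):
            if clk:                      # act only on the rising edge
                self.count = (self.count + 1) & 0xF

    class Adapter:
        """Bridges 'advance N cycles' calls onto per-edge eval() steps."""
        def __init__(self, model):
            self.model = model

        def run_cycles(self, n):
            for _ in range(n):
                self.model.eval(0)       # falling edge
                self.model.eval(1)       # rising edge
            return self.model.count
    ```

    In the real project the stand-in counter would be the Verilated NetFPGA design, and the adapter would also marshal register accesses and DMA traffic between the kernel-facing framework and the simulation.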

    As part of an ongoing research project, it is hoped that successful completion of this project may lead to publishable outcomes.

    [1] Verilator Project
    [2] Ultraviolet Project
    [3] NetFPGA 10G Project

  20. Reclaiming Productivity or: How I learned to debug hardware with software
    Contact: Matthew P. Grosvenor

    Ultraviolet (UV) [1] is an ongoing research project designed to facilitate in-kernel testing of device drivers against software models of hardware. The aim of this project is to implement and test a new Direct Memory Access (DMA) engine in real hardware using the Ultraviolet framework/methodology, with a NetFPGA 10G [2] as the deployment target.

    Using Ultraviolet, designers first specify and model a new device in a high-level language such as C or Python. They can then write a functioning device driver, and the whole system can be fully functionally tested before any hardware work begins. Once the system is running, the real device is written in a Hardware Description Language (such as Verilog), and simulations of it can be tested against the functioning software model. Finally, the real device can be deployed and tested against the functioning driver. It is hoped that Ultraviolet will speed up the development of new hardware by providing a mechanism to differentiate between hardware and software bugs.
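    The first step of that methodology, a high-level model of the DMA engine itself, could look something like this (the descriptor format and interface are invented for illustration): software posts (source, destination, length) descriptors to a ring, and the "engine" consumes them, copying between regions of a flat memory:

    ```python
    # Toy high-level model of a descriptor-ring DMA engine.

    class DmaEngine:
        def __init__(self, memory):
            self.memory = memory          # flat, byte-addressable memory
            self.ring = []                # pending descriptors, in order

        def post(self, src, dst, length):
            """Software side: enqueue a copy descriptor."""
            self.ring.append((src, dst, length))

        def run(self):
            """Hardware side: drain the ring, performing each copy,
            as the engine would after a doorbell write."""
            while self.ring:
                src, dst, length = self.ring.pop(0)
                self.memory[dst:dst + length] = self.memory[src:src + length]
    ```

    A functional model at this level lets the driver and its descriptor bookkeeping be debugged first; the Verilog implementation then only needs to match behaviour the model has already validated.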

    As part of an ongoing research project, it is hoped that successful completion of this project may lead to publishable outcomes.

    [1] Ultraviolet Project
    [2] NetFPGA 10G Project

  21. Mad Hatter's Assistant
    Contact: Jon Crowcroft

    The Hat Project is building a range of things for the Internet of Things. This project is about the toolchain for logging things and people's use of things, and for correlating those logs (e.g. looking at all the social media activity, tweets, diary entries, etc., alongside the use of heating, fridges, water, and so on). The project might also engage with the Cambridge Makespace to see what sort of things people want to build to monitor their lives.

  22. Zero Power Sensing
    Contact: Jon Crowcroft

    This project is about designing a sensor network of devices that run on scavenged power; the Engineering Department has some prototypes. We need to attach these to all the axles on all the trains in England to sense vibration (e.g. that caused by wheel-bearing wear and tear, which can cause track damage, which can cause train derailment, and so on). We have some hardware constraints and some design equations for sensor networks, and would like a simulation/modelling tool to try ideas out.
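    One back-of-envelope relation such a modelling tool would capture (all figures below are illustrative, not from the prototypes): a node's sustainable sampling rate is bounded by the harvested power left over after its idle drain, divided by the energy each sample costs:

    ```python
    # Illustrative energy-budget model for a vibration sensor running on
    # scavenged power. All parameter values are invented.

    def max_sample_rate_hz(harvest_uw, sleep_uw, sample_cost_uj):
        """Samples per second a node can sustain indefinitely:
        (harvested power - sleep drain) in microwatts, divided by the
        energy of one sample (sense + radio) in microjoules.
        Note the units work out: uW / uJ = (uJ/s) / uJ = 1/s."""
        spare_uw = harvest_uw - sleep_uw
        if spare_uw <= 0:
            return 0.0               # node cannot even idle indefinitely
        return spare_uw / sample_cost_uj
    ```

    A full simulation would add the time-varying harvest profile of a moving train, storage capacity, and radio duty-cycling, but this kind of budget check is the core design equation.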

  23. More Systems Projects at the DTG Project Page
    Contact: Rip Sohan
    Keywords: 10G, Networking, Provenance, Kernel, User, DNS

    The DTG has a number of interesting systems projects in the following systems areas: