Department of Computer Science and Technology

Systems Research Group – NetOS

ACS Projects (2014–2015)

NetOS

This page collects various ACS project suggestions from the Network and Operating Systems part of the Systems Research Group. In all cases a contact is given; please get in touch if you want more information about the project.

Under construction: please keep checking back, as more ideas will hopefully be added to this page during the coming weeks.




  1. Fidelity assessment infrastructure for network emulation
    Contact: Dimosthenis Pediaditakis, Charalampos Rotsos

    Keywords: network emulation, fidelity, time-dilation, xen, measurements, invariants

    Modern networks have been growing in speed (10GbE, 40GbE), size (data centre deployments) and complexity (network function virtualisation, software-defined networking). Common network experimentation platforms, however, have not developed at the same pace, and fail to faithfully replicate even the results obtained from trivial medium-sized real-world scenarios.
    A new network experimentation platform, SELENA [1, 2], was designed to fill this gap. SELENA builds on Xen [3] and aims to improve key properties of network experimentation:
    Reproducibility
    - experiment scenarios are described in high-level terms (Python API)
    - automatic deployment and execution of experiments
    - configurations and performance results can be replicated across different runs and platforms
    Fidelity
    - emulation-based solution (unmodified applications, real OS components)
    - faithfully replicates high aggregate-throughput scenarios (100+ Gbps)
    - supports common OpenFlow [4] components and incorporates empirical OpenFlow switch models
    Scalability
    - users control the execution speed of an experiment, and can slow down (time-dilation [5]) the progression of virtual time to obtain better fidelity as experiments grow in size

    The aim of the proposed project is to explore automated techniques for assessing the fidelity of the results obtained with SELENA. More specifically, a set of network performance invariants will have to be defined and tracked during the execution of an experiment via non-intrusive, lightweight techniques. Given a desired level of fidelity (e.g. for throughput, latency, etc.) there will be a maximum tolerable degree of invariant violation, expressed in statistical terms. The project will include both single-machine and multi-machine emulation scenarios. The ultimate goal is to dynamically tune the execution speed of an experiment based on the feedback obtained from the fidelity assessment infrastructure.
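
    A minimal Python sketch of this feedback loop is given below. The control hooks (sample_throughput, set_time_dilation, tdf) are hypothetical stand-ins for whatever interface SELENA's controller actually exposes:

      import statistics

      FIDELITY_TOLERANCE = 0.05   # maximum tolerable relative invariant violation

      def invariant_violation(samples, reference):
          # Relative deviation of a measured invariant (e.g. throughput)
          # from its reference value.
          return abs(statistics.mean(samples) - reference) / reference

      def tune_execution_speed(experiment, reference_throughput):
          # Sample the invariant non-intrusively during the run...
          samples = experiment.sample_throughput()              # hypothetical hook
          # ...and slow virtual time down (a larger time-dilation factor)
          # whenever the violation exceeds the tolerated level.
          if invariant_violation(samples, reference_throughput) > FIDELITY_TOLERANCE:
              experiment.set_time_dilation(experiment.tdf * 2)  # hypothetical hook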

    Required skills:
    - good knowledge of C and Python
    - good understanding of common internet protocols
    - familiarity with network measurement (estimation) techniques
    - familiarity with Linux networking and Xen virtualisation

    References
    [1] The SELENA project, http://selena-project.github.io
    [2] Faithful reproduction of network experiments, http://dl.acm.org/citation.cfm?id=2658274
    [3] Xen, http://www.xenproject.org
    [4] OpenFlow, http://dl.acm.org/citation.cfm?id=1355746
    [5] To infinity and beyond: time warped network emulation, USENIX NSDI, 2006.

  2. High Precision Distributed Measurement System
    Contact: Gianni Antichi

    Keywords: measurement, distributed, PTP, FPGA

    Computer networks are the hallmark of 21st Century society and underpin virtually all infrastructures in the modern world. Consequently, society relies on the correct operation of these networks. To achieve compliant and functional equipment, effort is put into all parts of the network-equipment lifecycle. Testing validates new designs. Equipment is tested throughout the production process and new deployments are rigorously tested for compliance and correctness. In addition, many owners of network equipment employ a relentless battery of testing and measurements to ensure the infrastructure operates correctly.
    Starting from the OSNT [1] system, developed on the NetFPGA-10G platform [2], we will explore the possibility of coordinating large numbers of OSNT probes, synchronised by a common time-base, to provide the resources and port count needed for testing larger network systems.

    References
    [1] Antichi G., Shahbaz M., Geng Y., Zilberman N., Covington A., Bruyere M., Feamster N., McKeown N., Felderman B., Blott M., Moore A.W., Owezarski P., "OSNT: Open Source Network Tester", IEEE Network Magazine, Special issue on Open Source for Networking: Tools and Applications.
    [2] http://www.netfpga.org


  3. PIG: Packet Inspector for multi-Gigabit networks
    Contact: Gianni Antichi

    Keywords: deep packet inspection, regular expression matching, DFA, FPGA

    Many important services in current networks are based on payload inspection, in addition to header processing. Intrusion Detection/Prevention Systems, as well as traffic monitoring and layer-7 filtering, require an accurate analysis of packet content in order to match it against predefined sets of patterns. Traditionally, these data sets consisted of a number of signatures to be searched for using string-matching algorithms; nowadays, regular expressions are used instead, owing to their greater expressiveness and ability to describe a wide variety of payload signatures.
    In this project you will implement a packet inspector over the NetFPGA [1] platform and explore the trade-offs between different regular expression matching algorithms. A real networking hardware platform will be used for experimentation.
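
    As a software illustration of the per-byte state-machine traversal that the hardware datapath would implement, the Python sketch below compiles a single literal pattern into a DFA via the KMP failure function (a special case of regex-to-DFA compilation; the project itself targets full regular expressions):

      def build_dfa(pattern):
          # Failure function of the KMP algorithm: fail[i] is the length of
          # the longest proper border of pattern[:i+1].
          m = len(pattern)
          fail = [0] * m
          k = 0
          for i in range(1, m):
              while k and pattern[i] != pattern[k]:
                  k = fail[k - 1]
              if pattern[i] == pattern[k]:
                  k += 1
              fail[i] = k

          def step(state, byte):
              # One DFA transition per payload byte.
              while state and byte != pattern[state]:
                  state = fail[state - 1]
              if byte == pattern[state]:
                  state += 1
              return state

          return step, m

      def payload_matches(payload, pattern=b"attack"):
          step, accept = build_dfa(pattern)
          state = 0
          for byte in payload:
              state = step(state, byte)
              if state == accept:     # accepting state reached: pattern seen
                  return True
          return False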

    References
    [1] http://www.netfpga.org


  4. Scalable, Fine-Grained Monitoring Architecture for SDN-enabled IXP
    Contact: Gianni Antichi

    Keywords: monitoring, high-speed, IXP, FPGA

    The operation of an IXP [1] is complex, given the volume of data involved, and the number and diversity of members that peer through its infrastructure. Currently, IXPs lack the awareness and manageability to make their operations more efficient and resilient.
    In this project, we will develop a monitoring platform for SDN-enabled IXPs, to support their operation alongside new networking services and technologies. Such monitoring tools need to be flexible in what, and how much, information is collected, while operating at the high data rates and massive data volumes of current and future IXPs. In this project you will focus on implementing the aforementioned architecture on the real NetFPGA [2] platform.

    References
    [1] https://www.euro-ix.net/what-is-an-ixp
    [2] http://www.netfpga.org


  5. Exploring Architectures for SDN Switching
    Contact: Noa Zilberman

    Keywords: SDN, Switching, Chip architecture, FPGA

    The concept of Software Defined Networking was introduced less than a decade ago, and has quickly taken on a significant role in today's networking. The NetFPGA [1] platform was the open-source platform used to implement OpenFlow, the first example of SDN [2].
    NetFPGA-SUME [3] is the third generation of NetFPGA and a technology enabler for 100Gb/s research. In this project you will implement an OpenFlow 1.4.0 switch [4] over the NetFPGA-SUME platform, explore various architectures for high-bandwidth designs, and research performance trade-offs between implementations, experimenting on real networking platforms.
    This project requires basic knowledge of computer networks. Prior knowledge of Verilog is required as well.

    References
    [1] http://www.netfpga.org
    [2] N. McKeown, T. Anderson, H. Balakrishnan, G. Parulkar, L. Peterson, J. Rexford, S. Shenker, and J. Turner, Openflow: enabling innovation in campus networks, ACM SIGCOMM CCR, vol. 38, no. 2, pp. 69–74, 2008.
    [3] Noa Zilberman, Yury Audzevich, Adam Covington, Andrew W. Moore. NetFPGA SUME: Toward Research Commodity 100Gb/s, IEEE Micro, vol.34, no.5, pp.32,41, Sept.-Oct. 2014
    [4] Open Networking Foundation, OpenFlow Switch Specification, October 2014, https://www.opennetworking.org/sdn-resources/onf-specifications/openflow


  6. Exploring 100Gb/s Datapath Architectures
    Contact: Noa Zilberman

    Keywords: Networking, Chip architecture, SoC, FPGA

    The bandwidth of network switching silicon has increased by several orders of magnitude over the last decade. This rapid increase has called for innovation in datapath architectures, with many solutions competing for their place.
    NetFPGA-SUME [1], the third generation of the NetFPGA [2] open-source hardware platform, is a technology enabler for 100Gb/s research. As a reconfigurable hardware platform, it allows rapid prototyping of various architectures and makes it possible to explore various trade-offs.
    In this project you will implement multiple 100Gb/s datapath architectures over the NetFPGA-SUME platform and explore different types of high-bandwidth designs. You will research trade-offs between implementations, such as bandwidth performance, latency, power and resource usage. Real networking hardware platforms will be used for experimentation.
    This project requires basic knowledge of computer networks. Prior knowledge of Verilog is required as well.

    References
    [1] Noa Zilberman, Yury Audzevich, Adam Covington, Andrew W. Moore. NetFPGA SUME: Toward Research Commodity 100Gb/s, IEEE Micro, vol.34, no.5, pp.32,41, Sept.-Oct. 2014
    [2] http://www.netfpga.org


  7. Clustering Entities across Multiple Documents at Massive Scale

    Originator/Supervisor: Eiko Yoneki

    Keywords: Clustering, Graph Partitioning, Random Walk, Distributed Algorithms

    Many large-scale distributed problems, including the optimal storage of large sets of graph-structured data over several hosts, are crucial for today's Cloud infrastructure. However, in very large-scale distributed scenarios, state-of-the-art algorithms are not directly applicable, because frequent global operations over the entire graph are difficult. In [1], balanced graph partitioning is achieved by a fully distributed algorithm, called Ja-be-Ja, which uses local search and simulated annealing techniques for graph partitioning. The algorithm is massively parallel: each node is processed independently, and only the direct neighbours of a node and a small subset of random nodes in the graph need to be known locally. Strict synchronisation is not required. These features allow Ja-be-Ja to be easily adapted to any distributed graph processing system. The project will involve understanding Ja-be-Ja as a starting point, and then investigating further performance improvements. A possible case study is a graph-based approach to coreference resolution, in which documents and their context are represented as a graph; applying a community detection algorithm based on [1] can speed up the task of coreference resolution by a very large degree.
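
    A much-simplified Python sketch of the local swap step at the heart of Ja-be-Ja [1] is shown below (the real algorithm also samples random non-neighbours, exchanges colours between hosts, and anneals the temperature over time):

      import random

      def degree(graph, v, colour, colours):
          # Number of v's neighbours currently holding `colour`.
          return sum(1 for u in graph[v] if colours[u] == colour)

      def try_swap(graph, colours, p, q, temperature=1.0, alpha=2.0):
          # Swap the partition colours of p and q if that increases the
          # (annealing-scaled) number of same-colour neighbours.
          cp, cq = colours[p], colours[q]
          if cp == cq:
              return
          before = degree(graph, p, cp, colours) ** alpha + degree(graph, q, cq, colours) ** alpha
          after = degree(graph, p, cq, colours) ** alpha + degree(graph, q, cp, colours) ** alpha
          if after * temperature > before:
              colours[p], colours[q] = cq, cp

      def jabeja_round(graph, colours, temperature):
          # One round: every vertex attempts a swap with a random neighbour.
          for v in graph:
              try_swap(graph, colours, v, random.choice(list(graph[v])), temperature)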

    References:

    [1] Fatemeh Rahimian, Amir H. Payberah, Sarunas Girdzijauskas, Mark Jelasity and Seif Haridi: JabeJa: A Distributed Algorithm for Balanced Graph Partitioning, IEEE International Conference on Self-Adaptive and Self-Organizing Systems (SASO), 2013.

    [2] Fatemeh Rahimian, Amir H. Payberah, Sarunas Girdzijauskas, and Seif Haridi: Distributed Vertex-Cut Partitioning, DAIS, 2014.

    [3] Fatemeh Rahimian: Gossip-Based Algorithms for Information Dissemination and Graph Clustering, PhD Thesis, 2014.

  8. Sampling and Approximation for Triangle Counting in Massive Graph Computation

    Originator/Supervisor: Eiko Yoneki

    Keywords: Sampling, Approximation, Triangle Counting, Cluster Coefficient, Big Data

    Graphs are becoming an important tool for data analysis. Two key metrics used to characterise a graph are its triangle count (TC) and its clustering coefficient (CC); both give an intuition of the community structure. When the data volume gets larger or the data rate becomes higher, we need sampling and approximation to speed up data processing. We have extended the methods presented in [1] to allow the approximation of the TC and CC of graphs stored in external memory on a single machine. These methods are based on wedge sampling; a wedge is a path of length 2, i.e. a pair of edges sharing a vertex. As is often the case for graph computations, the algorithms we use show a lack of locality in memory accesses. Key to our approach is therefore the parallel prefetching of the memory accesses that will be needed next. This project addresses this parallel prefetching, which requires a smart indexing system in which the graph is stored in indexed form (e.g. CSR). The project will fully explore sampling and approximation techniques and investigate the format of the graph data, including reading the data in a streaming manner, as described for X-Stream [2]. As an extension, local TC and CC could be explored.
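
    The following Python sketch shows the basic in-memory wedge-sampling estimator from [1] (the project works with out-of-core graphs, but the statistics are the same):

      import random

      def estimate_cc_tc(adj, samples=10000):
          # adj: dict mapping each vertex to its set of neighbours.
          wedge_count = {v: len(n) * (len(n) - 1) // 2 for v, n in adj.items()}
          total_wedges = sum(wedge_count.values())
          centres = list(wedge_count)
          weights = [wedge_count[v] for v in centres]
          closed = 0
          for _ in range(samples):
              # Pick a wedge centre with probability proportional to its
              # wedge count, then two distinct random neighbours.
              v = random.choices(centres, weights=weights)[0]
              u, w = random.sample(list(adj[v]), 2)
              if w in adj[u]:        # the wedge (u, v, w) is closed
                  closed += 1
          cc = closed / samples          # estimated global clustering coefficient
          tc = cc * total_wedges / 3     # every triangle closes exactly 3 wedges
          return cc, tc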

    References:

    [1] C. Seshadhri, A. Pinar, and T. G. Kolda. Triadic measures on graphs: the power of wedge sampling. CoRR, abs/1202.5230, 2013.

    [2] Amitabha Roy, Ivo Mihailovic, Willy Zwaenepoel: X-Stream: Edge-centric Graph Processing using Streaming Partitions. SOSP 2013

  9. Approximate Algorithms Determining Local Clustering Coefficients Anonymously

    Originator/Supervisor: Eiko Yoneki (with Amitabha Roy (EPFL))

    Keywords: Sampling, Approximation, Privacy, Cluster Coefficient

    Anonymous social networks are a new phenomenon in an increasingly privacy-conscious world. A natural question to ask in this setting is whether we can continue to apply known principles of network science where neighbourhood information is concealed both between nodes and from external observers. This project is to work on approximate algorithms that determine clustering coefficients in such settings. Clustering coefficients provide a way to order nodes by relative importance in a network, to determine densely connected neighbourhoods, and to distinguish social graphs from web graphs. Algorithms to measure clustering coefficients have hitherto required global knowledge of graph links, which requires individual nodes in the graph to give up the identity of their neighbours. This project investigates an algorithm for estimating the clustering coefficient of a vertex by exchanging only anonymised set summaries of neighbours, from which it is difficult to reverse-engineer individual links. The bulk of the work will consist of working on and improving sampling techniques to arrive at accurate local clustering coefficients without exchanging explicit neighbour lists in social networks.
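
    The reference below uses HyperLogLog summaries; the Python sketch here uses MinHash signatures instead, purely because they fit in a few lines, to show how pairwise neighbourhood overlap can be estimated from anonymised summaries alone:

      import hashlib

      K = 64  # signature size; larger K lowers the estimation variance

      def h(seed, item):
          return int(hashlib.sha1(f"{seed}:{item}".encode()).hexdigest(), 16)

      def signature(neighbours):
          # Published summary: the minimum hash value per seed, never the IDs.
          return [min(h(seed, n) for n in neighbours) for seed in range(K)]

      def jaccard(sig_a, sig_b):
          # Estimated |A ∩ B| / |A ∪ B| from the two summaries alone.
          return sum(a == b for a, b in zip(sig_a, sig_b)) / K

    Estimated overlaps between the summaries of a vertex's neighbours, together with their published degrees, can then be turned into an estimate of the local clustering coefficient without any explicit neighbour list changing hands.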

    References:

    [1] P. Flajolet, É. Fusy, O. Gandouet, and F. Meunier: HyperLogLog: The Analysis of a Near-Optimal Cardinality Estimation Algorithm. In Proceedings of the International Conference on Analysis of Algorithms, 2007.

  10. Graph Compression

    Originator/Supervisor: Eiko Yoneki (with Weixiong Rao)

    Keywords: Graph Compression

    This project explores graph compression mechanisms as part of a project looking into high-performance semi-external memory graph processing (see [1] for an idea of semi-external memory approaches). The graph compression work will build on an in-house graph representation format that we have developed, which allows succinct representation of graphs that exhibit hierarchical structure. The project will explore ways to improve the representation, yielding smaller graphs on disk that are less costly to traverse. A key element of the representation scheme is a recursive graph partitioning step that minimises the number of edges between partitions; this is a rich space for exploration of suitable algorithms. We are concerned primarily with experimentally evaluating I/O costs on large graphs and measuring the compression performance. However, a student with a theoretical background might consider designing algorithms with provable bounds on compression performance, which would be a big plus. If time allows, you could also implement an efficient transformation tool based on the developed graph compression algorithm using parallel processing tools (e.g. map/reduce).
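
    The in-house format is not reproduced here, but the Python sketch below illustrates the principle that makes the partitioning step pay off: once vertices in the same partition receive consecutive IDs, adjacency lists contain small gaps, and gap-plus-varint encoding shrinks the on-disk size:

      def varint(n):
          # Standard LEB128-style variable-length encoding: small gaps
          # take one byte, larger gaps take more.
          out = bytearray()
          while True:
              byte, n = n & 0x7F, n >> 7
              out.append(byte | (0x80 if n else 0))
              if not n:
                  return bytes(out)

      def encode_adjacency(neighbours):
          # Delta-encode a sorted neighbour list; gaps are small after a
          # locality-aware (partition-based) vertex reordering.
          prev, out = 0, bytearray()
          for v in sorted(neighbours):
              out += varint(v - prev)
              prev = v
          return bytes(out)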

    References:

    [1] R. Pearce, M. Gokhale and N. Amato: Multithreaded Asynchronous Graph Traversal for In-Memory and Semi-External Memory, ACM/IEEE High Performance Computing, Networking, Storage and Analysis, 2010. http://dl.acm.org/citation.cfm?id=1884675

  11. Developing a Scale-Out SSD-Based Graph Traversal Platform

    Originator/Supervisor: Eiko Yoneki (with Karthik Nilakant)

    Keywords: Graph processing, I/O optimisation, Graph structure

    This project contributes to our ongoing work on high performance semi-external memory graph processing (see [1]). We have developed an in-house prefetching algorithm for mining large graphs stored on Solid State Drives. A solid state drive can service many random reads in parallel, provided that the requesting application is able to deliver requests at a rate that keeps the drive sufficiently busy. Thus, intelligently prefetching multiple pages based on the current request can greatly improve performance. Our current prefetcher implementations are aligned with the graph traversal approach (that is, with the graph iterator being employed).
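
    As a simplified Python illustration of iterator-aligned prefetching (offset_of is a hypothetical index from vertex to file offset): during a BFS, the next frontier is known one step ahead, so the pages holding those adjacency lists can be hinted to the OS while the current frontier is still being processed:

      import os

      PAGE = 4096

      def prefetch_frontier(fd, next_frontier, offset_of):
          # Ask the kernel (on Unix) to read ahead the adjacency-list pages
          # of the vertices we are about to visit.
          for v in next_frontier:
              off = offset_of(v)
              os.posix_fadvise(fd, off - off % PAGE, PAGE,
                               os.POSIX_FADV_WILLNEED)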

    Furthermore, the prefetcher implementation could be extended to utilise multiple storage devices in parallel. The aim in this setting would be to out-perform standard storage parallelisation techniques such as hardware or software RAID. The performance of this approach would also depend on the method of dividing graph data between the available storage devices. Your task is to develop a scale-out version of our current work. You could also extend it to distributed graph traversal, to study how a very large graph (e.g. over 2 billion vertices) could benefit from the scale-out version, even though the current version can handle large graphs on a single node. You could also explore generating a large graph for testing by combining various entities of social network data (e.g. what happens when every "Like", "Comment" and "Photo" on Facebook is treated as a vertex in the graph, not just "People" and "Pages"; such graphs would be huge).

    References:

    [1] E. Yoneki and A. Roy: Scale-up Graph Processing: A Storage-centric View.  ACM SIGMOD - GRADES, 2013.

    [2] GraphLab: http://graphlab.org/

  12. Building a Graph Query Function using Functional Programming

    Originator/Supervisor: Eiko Yoneki (with Karthik Nilakant)

    Keywords: Graph, Functional programming

    Demand for storing and searching online data with graph structure is emerging. Such data range from online social networks to web links, and require efficient query processing in a scalable manner. In this project, you will build a graph query function (layer) to achieve efficient graph data processing. The graph query function builds on a lazy graph loaded from multiple sources and performs queries at the same speed as, or faster than, a relational database. The function should be written in Clojure or another functional language to support practical lazy loading from the data source into an in-memory graph database. Efficient representations for graph queries should also be investigated. The evaluation includes comparisons with existing graph databases, and of the query capability against SQL. You can start the project from our existing work in this area.
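
    The project calls for Clojure or another functional language; the Python sketch below (illustrative names only) merely shows the lazy-loading idea, where neighbour lists are fetched from the backing source on first access and cached, so a query touches only the part of the graph it needs:

      class LazyGraph:
          def __init__(self, fetch_neighbours):
              # fetch_neighbours: e.g. a SQL- or file-backed lookup function.
              self._fetch = fetch_neighbours
              self._cache = {}

          def neighbours(self, v):
              if v not in self._cache:
                  self._cache[v] = self._fetch(v)   # loaded only on demand
              return self._cache[v]

          def two_hop(self, v):
              # Example query: lazily enumerate vertices two hops from v.
              for u in self.neighbours(v):
                  yield from self.neighbours(u)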

    References:

    [1] Microsoft Research: Trinity project: Distributed graph database, http://research.microsoft.com/en-us/projects/trinity/

    [2] Neo Technology, Java graph database, http://neo4j.org/

  13. RasPi-Net: Building a Stream Data Processing Platform over RasPiNET

    Originator/Supervisor: Eiko Yoneki

    Keywords: Raspberry Pi, Delay Tolerant Networks, Satellite Communication, Stream Processing

    We have built a decentralised Raspberry Pi network (RasPiNET [1]), which can be deployed in wild and remote regions as a standalone network. The gateway Raspberry Pi nodes are integrated with satellite communication devices, in which a lightweight version of the Delay Tolerant Networking (DTN) bundle protocol is embedded. RasPiNET could consist of 10-100 nodes. As an example, a remote sensing application could be written either for the RasPis or for smartphones that connect to them. Collected data could be processed within RasPiNET to reduce the volume of data streamed over the satellite link to the base location. Crowd-sourcing applications can also run on top of RasPiNET. The goal of this project is to build a stream processing platform operating in both directions: from data collection on RasPiNET nodes to the data processing nodes, possibly via a satellite gateway, and from bulk data delivery at the satellite gateway node to the dissemination of the necessary information to RasPiNET nodes. Good filtering functions and in-network data aggregation within RasPiNET could be developed.
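
    As a Python illustration of the kind of in-network aggregation the platform might perform (the names are illustrative), a gateway could reduce each window of sensor readings to a fixed-size summary before it crosses the expensive satellite link:

      def summarise(readings):
          # Reduce an arbitrary-size window of numeric sensor readings to a
          # fixed-size record, so the uplink cost is independent of the
          # in-network sampling rate.
          return {
              "count": len(readings),
              "min": min(readings),
              "max": max(readings),
              "mean": sum(readings) / len(readings),
          }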

    References:

    [1] E. Yoneki: RasPiNET: Decentralised Communication and Sensing Platform with Satellite Connectivity.  ACM CHANTS, 2014.

    [2] Delay Tolerant Network Bundle Protocol: http://tools.ietf.org/html/rfc6255

    [3] RockBlock Technology: http://rockblock.rock7mobile.com/

  14. Raspberry-BSP: Cheap and Safe Bulk Synchronous Processing on Wimpy Nodes

    Originator/Supervisor: Eiko Yoneki  (with Karthik Nilakant)

    Keywords: Graph Processing, Data Parallel, Bulk Synchronous Processing

    This project is inspired by FAWN [1] and aims to bring the benefit of an array of low-power processors backed by persistent storage to graph mining. The aim is to build a software stack for the Raspberry Pi that allows a set of such devices, each equipped with either an SD card or a USB flash drive, to act as nodes for Bulk Synchronous Processing of graphs (see Pregel [2] for an example of bulk synchronous processing). All the necessary data structures will reside on flash, with the small 256MB RAM on the Raspberry Pi acting as a scratchpad. The aim will be to show that, when processing time, energy consumption and cost are taken together, this solution is competitive with running BSP on a cluster of PCs.
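
    The Python sketch below shows the Pregel-style [2] superstep loop each node would run (the names are illustrative, not an existing API); vertex state would live on flash, with the RAM used only as a scratchpad:

      def superstep(vertices, inbox, compute, barrier):
          # One bulk-synchronous superstep on one Raspberry Pi node.
          outbox = []
          for v in vertices:
              messages = inbox.get(v.id, [])
              outbox += compute(v, messages)   # may read/write flash-resident state
          barrier()   # synchronise with all other nodes before the next superstep
          return outbox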

    References:

    [1] D. Andersen, J. Franklin, A. Phanishayee, L. Tan and V. Vasudevan: FAWN: A Fast Array of Wimpy Nodes, Communications of the ACM, July 2011.

    [2] G. Malewicz, M. Austern, A. Bik, J. Dehnert, I. Horn, N. Leiser, and G. Czajkowski: Pregel: A System for Large-Scale Graph Processing, SIGMOD, 2010.

  15. Building Dynamic Time-Dependent Multicast Trees in Twitter Networks

    Originator/Supervisor: Eiko Yoneki (with Valentin Dalibard)

    Keywords: Joint Diagonalisation, Multicast Tree, Time-Dependent Spread Mode, Content Distribution Simulation, Twitter

    Content diffusion occurs spontaneously in online social networks: based on the network topology built by follower relationships, content cascades from node to node. This project aims to investigate the characteristics of such dynamic, temporal cascading trees, which may appear as multiple different spanning trees even from the same source node, depending on the content or on the time. The ultimate goal of the project is to build multicast trees by aggregating such spanning trees, in order to accelerate the cascade process and potentially provide a more efficient content delivery process.

    The methodology for extracting such spanning trees can be taken from our previous work on joint diagonalisation (JD) [1]. JD is a technique used to estimate an average eigenspace of a set of matrices; applied to matrices of spanning trees of a network, it extracts multiple modes. Note that there is no single underlying static graph in most real-world networks. The average eigenspace may be used to construct a graph which represents the average spanning tree of the network, or a representation of the most common propagation paths. Examining the distribution of deviations from the average reveals that this distribution is multi-modal, indicating several modes in the underlying network; these modes can be identified and are found to correspond to particular times.

    You can explore this project in two ways: (1) modelling multicast trees from the aggregation of spanning trees in a more theoretical manner, or (2) using OSN data to demonstrate the extracted trees and the efficiency of the multicast tree for content dissemination in a content distribution network simulation.

    References:  

    [1] D. Fay, J. Kunegis, and E. Yoneki: Centrality and Mode Detection in Dynamic Contact Graphs; a Joint Diagonalisation Approach.  IEEE/ACM ASONAM, 2013

    [2] Meeyoung Cha, Hamed Haddadi, Fabricio Benevenuto, P. Krishna Gummadi: Measuring User Influence in Twitter: The Million Follower Fallacy. ICWSM 2010.

    [3] Evan T.R. Rosenman: Retweets, but Not Just Retweets: Quantifying and Predicting Influence on Twitter. Thesis, Harvard University.



More Systems Projects at the DTG Project Page

Contact: Rip Sohan

Keywords: 10G, Networking, Provenance, Kernel, User, DNS

The DTG has a number of interesting systems projects in the areas listed above; see the DTG project page for details.