University of Cambridge Computer Laboratory
Thursday June 21st, 2007 - 4.30pm

San Fermin: Aggregating Large Data Sets using Dynamic Binomial Trees

Justin Cappos

Content aggregation is an important sub-problem in distributed monitoring, distributed database queries, and software debugging. In this problem a large number of systems each hold information, and the requester wants not the result from each individual machine but the aggregated result across all machines. Existing solutions have focused on the case where the aggregate data is small (typically only a few bytes) and usually aggregate by running a multicast tree in reverse.
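
The abstract describes the reverse-multicast-tree approach only in words; as an illustration, the following minimal Python sketch simulates it in-process: leaves send their values up and each interior node merges its children's partial results, so the root ends with the aggregate over all nodes. The function name tree_aggregate and the tuple-based tree encoding are assumptions for this sketch, not part of any system described in the talk.

    # Illustrative sketch only: in-process simulation of aggregating up a
    # tree. Real systems run this over a network and must handle slow or
    # failed children.
    def tree_aggregate(node, merge):
        """node: (value, children); merge: combines two partial aggregates."""
        value, children = node
        agg = value
        for child in children:
            # Each child contributes the aggregate of its whole subtree.
            agg = merge(agg, tree_aggregate(child, merge))
        return agg

    # Example: summing over a 7-node tree gives 1 + 2 + ... + 7 = 28.
    tree = (1, [(2, [(4, []), (5, [])]),
                (3, [(6, []), (7, [])])])
    print(tree_aggregate(tree, lambda a, b: a + b))  # 28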

This talk describes San Fermin, a novel algorithm for aggregating large data sets. San Fermin returns the answer from more nodes, computes the result faster, and scales better than existing solutions. Our evaluation compares aggregation techniques using mathematical modeling, simulation, and a prototype deployed on PlanetLab, and shows that San Fermin scales well as either the number of nodes or the data size increases. San Fermin is also highly resilient to failures: even when 10% of the nodes fail during aggregation, it still returns the answer from over 97% of the nodes.
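
The title names dynamic binomial trees, and while the abstract does not spell out the mechanics, the basic binomial-tree (hypercube) aggregation pattern can be sketched as follows, assuming a power-of-two node count and ignoring San Fermin's dynamic pairing and failure handling. In round i each node pairs with the node whose ID differs in bit i, and the pair merge their partial aggregates; after log2(N) rounds every node holds the full result. The name binomial_tree_aggregate is hypothetical.

    # Minimal sketch of binomial-tree (hypercube) aggregation, simulated
    # in-process; this is NOT San Fermin itself, which pairs nodes
    # dynamically and tolerates failures.
    def binomial_tree_aggregate(values, merge):
        """values: one datum per node (length must be a power of two);
        merge: commutative, associative combination of two aggregates."""
        n = len(values)
        assert n & (n - 1) == 0, "node count must be a power of two"
        partial = list(values)  # partial[i] = node i's current aggregate
        bit = 1
        while bit < n:  # one round per bit of the node ID
            # Node i exchanges with partner i ^ bit; both merge the results.
            partial = [merge(partial[i], partial[i ^ bit]) for i in range(n)]
            bit <<= 1
        return partial  # every node now holds the aggregate of all N values

    # Example: eight nodes summing their values; each ends with 36.
    print(binomial_tree_aggregate([1, 2, 3, 4, 5, 6, 7, 8],
                                  lambda a, b: a + b))

Note that in this topology every node, not just a single root, ends with the complete aggregate, which gives natural redundancy and is consistent with the resilience figures quoted above.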

Bio:

Justin Cappos is currently working on his Ph.D. at the University of Arizona with John Hartman and Beichuan Zhang. His research focuses on improving the security and efficiency of real-world networks of computer systems. He has led a number of projects, including Stork.