Department of Computer Science and Technology

Security Group

2011 seminars

21 November 15:00 Real-Time and Real Trustworthiness: Timing Analysis of a Protected OS Kernel / Bernard Blackham (NICTA)

FW26, Computer Laboratory, William Gates Building

Protected operating systems have been an elusive target of static worst-case execution time (WCET) analysis, due to a combination of their size, unstructured code and tight coupling with hardware. As a result, critical hard real-time systems are usually developed without memory protection, in order to provide guarantees on their response time.

In this talk, I will explore a WCET analysis of seL4, a third-generation microkernel. seL4 is the world’s first formally-verified operating-system kernel, featuring machine-checked correctness proofs of its complete functionality. This makes seL4 an ideal platform for security-critical systems. Adding temporal guarantees makes seL4 also a compelling platform for safety- and timing-critical systems. It enables hard real-time systems with less critical time-sharing components to be integrated on the same processor, supporting enhanced functionality while keeping hardware and development costs low.

The talk will focus on the more interesting aspects of the analysis and, in particular, on properties of the seL4 code base which made life easier in the process.
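
The abstract does not say how the bound is computed; static WCET tools commonly use the implicit path enumeration technique (IPET), which casts the search for the costliest path as a linear program over basic-block execution counts (real tools use integer linear programming). A minimal sketch, with a made-up three-block CFG, costs, and loop bound:

    # Toy IPET formulation: maximise total cycle cost over basic-block
    # execution counts, subject to control-flow and loop-bound constraints.
    # Hypothetical CFG: entry -> loop body (at most 10 iterations) -> exit.
    from scipy.optimize import linprog

    cost = [5, 20, 3]               # cycles per execution of entry, body, exit
    c = [-x for x in cost]          # linprog minimises, so negate to maximise

    A_eq = [[1, 0, 0],              # entry executes exactly once
            [0, 0, 1]]              # exit executes exactly once
    b_eq = [1, 1]
    A_ub = [[-10, 1, 0]]            # loop bound: x_body <= 10 * x_entry
    b_ub = [0]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * 3)
    print("WCET bound (cycles):", -res.fun)   # 5 + 10*20 + 3 = 208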

This work was presented at: Real-Time Systems Symposium 2011 (Vienna, Austria)

Bio: Bernard is a PhD candidate at the University of New South Wales and NICTA in Sydney, Australia. His PhD relates to real-time aspects of the seL4 microkernel. Bernard's research interests include static analysis, process checkpointing, and generally messing with anything executable. Bernard also trains the Australian team for the International Olympiad in Informatics.

17 November 16:00 Quantifying Location Privacy / George Theodorakopoulos (EPFL, University of Derby)

SS03, Computer Lab, William Gates Building

The popularity of personal communication devices leads to serious concerns about privacy in general, and location privacy in particular. As a response to these issues, a number of Location-Privacy Protection Mechanisms (LPPMs) have been proposed during the last decade. However, their assessment and comparison remain problematic because of the absence of a systematic method to quantify them. In particular, the assumptions about the attacker's model tend to be incomplete, with the risk of a wrong estimation of the users' location privacy.

I will talk about how we address these issues by providing a formal framework for the analysis of LPPMs; it captures, in particular, the prior information that might be available to the attacker, and various attacks that he can perform. By formalizing the adversary's performance, we propose and justify the right metric to quantify location privacy. We find that popular privacy metrics, such as k-anonymity and entropy, do not correlate well with the success of the adversary in inferring users' locations.
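
To make the metric concrete: the measure argued for in the paper this talk is based on is the adversary's expected estimation error, i.e. the expected distance between the user's true location and the adversary's best guess under his posterior. A toy computation, with an invented posterior and a one-dimensional distance:

    # Location privacy as the adversary's expected estimation error:
    # privacy = min over guesses g of sum_l posterior(l) * d(g, l).
    # The posterior and the one-dimensional "map" are invented.
    posterior = {0: 0.1, 1: 0.6, 2: 0.3}   # P(true location | observations)

    def d(a, b):                            # distance between locations
        return abs(a - b)

    def expected_error(guess):
        return sum(p * d(guess, loc) for loc, p in posterior.items())

    best_guess = min(posterior, key=expected_error)
    print("adversary guesses", best_guess,
          "-> location privacy =", expected_error(best_guess))

On this view, a high-entropy posterior can still yield low privacy if all the probable locations are close together, which is one reason entropy correlates poorly with the adversary's success.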

Joint work with R. Shokri, J.-Y. Le Boudec, and J.-P. Hubaux.


Bio: George Theodorakopoulos is a Lecturer/Senior Lecturer at the University of Derby. He received his B.Sc. at the National Technical University of Athens (NTUA), Greece, and his M.Sc. and Ph.D. at the University of Maryland, in 2002, 2004 and 2007, respectively. From 2007 to 2011, he was a senior researcher at EPFL working with Prof. Jean-Yves Le Boudec.

His research is on network security, privacy, and trust. Together with his Ph.D. advisor, John S. Baras, he received the best paper award at WiSe'04 and the 2007 IEEE ComSoc Leonard Abraham prize, and he has co-authored the book "Path Problems in Networks" on algebraic (semiring) generalizations of shortest path algorithms and their applications to networking problems.

31 October 13:00 Web mining and privacy: foes or friends? / Bettina Berendt

SS03, William Gates Building

Web mining (i.e., data mining applied to Web content, link, or usage data) is often regarded as a premier foe of privacy, and techniques for "privacy-preserving data mining" are seen as remedies. In this talk, I want to challenge this view by investigating different notions of privacy and different forms and stages of Web mining. As part of this, I will highlight the importance of different perspectives and present tools we developed for analysing data in this way.

25 October 16:15 Facial Analysis for Lie Detection / Hassan Ugail (University of Bradford)

Lecture Theatre 2, Computer Laboratory, William Gates Building

This talk will centre around our recent work on computer-based human facial analysis for lie detection. The talk will focus on how cues from both the visual and thermal domains can be identified to detect potential deception. Discussions will also focus on the experimental setup through which sufficient data has been collected and analysed to determine the individual baseline which serves as the ground truth for interrogation. Further, our current integrated setup for non-invasive lie detection will be outlined, and future directions of this research will be discussed.

06 October 14:15 Building Trusted Systems with Protected Modules / Bryan Parno (Microsoft Research)

Lecture Theatre 2, Computer Laboratory, William Gates Building

As businesses and individuals entrust more and more sensitive tasks (e.g., paying bills, shopping online, or accessing medical records) to computers, it becomes increasingly important to ensure this trust is warranted. However, users are understandably reluctant to abandon the low cost, high performance, and flexibility of today's general-purpose computers. In this talk, I will describe Flicker, an architecture for constructing protected modules. Flicker demonstrates that we can satisfy the need for features and security by constructing an on-demand secure execution environment, using a combination of software techniques and recent commodity CPU enhancements. This provides a solid foundation for constructing secure systems that must coexist with standard software; the developer of a security-sensitive code module need only trust her own code, plus as few as 250 lines of Flicker code, for the secrecy and integrity of her code's execution. However, for many applications, secrecy and integrity are insufficient; thus, I'll discuss techniques for providing practical state continuity for protected modules. To ensure the correctness of our design, we develop formal, machine-verified proofs of safety. To demonstrate practicality, we have implemented our architectures on Linux and Windows running on AMD and Intel.
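
As a rough illustration of the measurement chain such architectures build on (a toy simulation, not Flicker's code): a late launch resets a dynamic TPM register (PCR), the protected module is hashed into it, and a verifier later compares the attested value against the one it expects.

    # Toy simulation of a TPM-style measured launch (illustrative only).
    # A PCR is extended as SHA-1(old_value || measurement); after a late
    # launch the dynamic PCR starts from a known value, so its final value
    # depends only on what was actually measured and executed.
    import hashlib

    def extend(pcr, measurement):
        return hashlib.sha1(pcr + measurement).digest()

    pal = b"...bytes of the security-sensitive module..."  # hypothetical module

    pcr = b"\x00" * 20                      # dynamic PCR reset by late launch
    pcr = extend(pcr, hashlib.sha1(pal).digest())

    # A verifier who knows the module it expects recomputes the value; in
    # reality the TPM signs the PCR (attestation) before this comparison.
    expected = extend(b"\x00" * 20, hashlib.sha1(pal).digest())
    print("attestation matches:", pcr == expected)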

Bio

Dr Bryan Parno, Microsoft Research Redmond, received the 2010 Doctoral Dissertation Award from ACM for "resolving the tension between adequate security protections and the features and performance that users expect in a digitized world" and has recently co-authored the book "Bootstrapping Trust in Modern Computers" with Jon McCune and Adrian Perrig.

2010 ACM doctoral dissertation award:

http://www.acm.org/press-room/news-releases/2011/dd-award-2010

Bootstrapping Trust in Modern Computers:

http://www.springerlink.com/content/k16537/

29 September 16:00 Twitter bots / Miranda Mowbray (HP Labs Bristol)

FW26, Computer Laboratory, William Gates Building

A particular feature of some social networks, including Twitter, is that software programs can act within them in a similar way to human beings; indeed, in some cases it may not be obvious whether you are communicating with a human being or a piece of software.

There has been a rapid increase in the amount of automated use of Twitter. I will give some examples of such use, and discuss some potential implications for Twitter data mining, and for security/privacy. My talk will include both some older results and results from some very recent data analysis.

08 September 13:30 The IITM Model and its Application to the Analysis of Real-World Security Protocols / Ralf Küsters, University of Trier

Large lecture theatre, Microsoft Research Ltd, 7 J J Thomson Avenue (Off Madingley Road), Cambridge

A prevalent way to design and analyze cryptographic protocols in a modular fashion is the simulation-based approach. Higher-level components of a protocol are designed and analyzed based on lower-level idealized components, called ideal functionalities. Composition theorems then allow one to replace the ideal functionalities by their realizations, altogether resulting in a system without idealized components.
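
Schematically, and glossing over the technical side-conditions of the IITM model, the composition step looks like this:

    If protocol P realizes ideal functionality F      (P   <= F), and
    protocol Q, using F as a subroutine, realizes G   (Q^F <= G),
    then Q, with F replaced by P, still realizes G    (Q^P <= G).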

In this talk, I first provide some background on the simulation-based approach and then briefly introduce the Inexhaustible Interactive Turing Machine (IITM) model, a model which, compared to other models for simulation-based security, is particularly simple and expressive. Although modularity is key to taming the complexity of real-world security protocol analysis, simulation-based approaches have rarely been used to analyze such protocols. In the past few years, we have developed a framework for the faithful and modular analysis of real-world security protocols based on the IITM model. I will present this framework and also discuss what has hindered the use of the simulation-based approach before.

02 August 16:15 The great censorship war of 2011: are we winning? / Mystery speaker

Lecture Theatre 2, Computer Laboratory, William Gates Building

A report from the front line

19 July 16:15 Evolutionary Software Repair / Stephanie Forrest, University of New Mexico

Lecture Theatre 2, Computer Laboratory, William Gates Building

Bio: Stephanie Forrest is Professor of Computer Science at the University of New Mexico in Albuquerque, and she is Co-Chair of the Santa Fe Institute Science Board. Her research studies adaptive systems, including immunology, evolutionary computation, biological modeling, computer security, and software. Professor Forrest received M.S. and Ph.D. degrees in Computer and Communication Sciences from the University of Michigan and a B.A. from St. John's College. Before joining UNM in 1990 she worked for Teknowledge Inc. and was a Director's Fellow at the Center for Nonlinear Studies, Los Alamos National Laboratory. She currently serves on the Computing Research Association CCC Council.

17 May 16:15 Practical Linguistic Steganography using Synonym Substitution / Ching-Yun (Frannie) Chang & Stephen Clark, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

Linguistic Steganography is concerned with hiding information in a natural language text, for the purposes of sending secret messages. A related area is natural language watermarking, in which information is added to a text in order to identify it, for example for the purposes of copyright. Linguistic Steganography algorithms hide information by manipulating properties of the text, for example by replacing some words with their synonyms. Unlike image-based steganography, linguistic steganography is in its infancy with little existing work. In this talk we will motivate the problem, in particular as an interesting application for Natural Language Processing (NLP) and especially natural language generation. Linguistic steganography is a difficult NLP problem because any change to the cover text must retain the meaning and style of the original, in order to prevent detection by an adversary.

Our method embeds information in the cover text by replacing words in the text with appropriate substitutes. We use a large database of word sequences collected from the Web (the Google n-gram data) to determine if a substitution is acceptable, obtaining promising results from an evaluation in which human judges are asked to rate the acceptability of modified sentences.
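
A rough sketch of that acceptability check in Python; the n-gram counts and threshold below are invented stand-ins for the Google data:

    # Accept a synonym substitution only if the substituted phrase is
    # sufficiently frequent in a large n-gram collection. The counts and
    # threshold are hypothetical stand-ins for the Google n-gram data.
    NGRAM_COUNTS = {
        ("a", "marvelous", "view"): 120,
        ("a", "wonderful", "view"): 95000,
        ("a", "grand", "view"): 41000,
    }
    THRESHOLD = 1000    # illustrative cut-off

    def acceptable(left, candidate, right):
        return NGRAM_COUNTS.get((left, candidate, right), 0) >= THRESHOLD

    # Each substitution point with n acceptable alternatives can carry
    # floor(log2(n)) secret bits: the sender's choice encodes the bits.
    synonyms = ["marvelous", "wonderful", "grand"]
    usable = [w for w in synonyms if acceptable("a", w, "view")]
    print("usable substitutes:", usable)              # ['wonderful', 'grand']
    print("bits per substitution:", max(0, len(usable).bit_length() - 1))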

09 May 14:00 (Research) Influences of wind on radiowave propagation in foliated fixed wireless system / (Research) Information flow control for static enforcement of user-defined privacy policies / Sören Preibusch and Tien Han Chua

SS03, William Gates Building

Influences of wind on radiowave propagation in foliated fixed wireless system, Tien Han Chua

From field measurement data collected over a two-year period, the influences of wind speed and wind direction on temporal fading in foliated fixed wireless links will be presented. The physical wind-foliage interactions and radiowave propagation mechanisms which could contribute to such fading events will be discussed. Finally, the possibilities to model the temporal fading through ray tracing based on geometrical optics and uniform theory of diffraction will be investigated.

Information flow control for static enforcement of user-defined privacy policies, Sören Preibusch

Web sites for retailing or social networking could turn privacy into a competitive advantage by implementing superior data protection practices compared to alternative service providers. One important prerequisite is the enforceability of privacy guarantees. In the past, information leaks at companies that promote themselves as privacy-friendly have demonstrated that current certification practices are insufficient.

Information flow control (IFC) allows software programmers and auditors to detect and prevent the sharing of information between different parts of a program which, as a matter of policy, should be kept logically separate. However, the lack of widespread use of IFC suggests technology and usability barriers to adoption.

I will review pragmatic issues and systematic limitations of using JIF, a programming language that provides IFC on top of Java. The emphasis will be on personal experiences and lessons learnt in implementing the first Web-based IFC case-study with customer-negotiated restrictions on data recipients and usage. As an outlook, I'll consider how combining server-side information flow control with client-side scripting could implement the sticky privacy policy paradigm.
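
Jif enforces this statically at compile time; as a rough illustration of the underlying idea (not Jif syntax), here is a toy dynamic checker in which every value carries a label, computation propagates the more restrictive label, and a flow to a less restricted channel is rejected:

    # Toy dynamic information-flow check; Jif does this statically at
    # compile time with a richer label model. Labels and the scenario
    # are invented for illustration.
    LEVELS = {"public": 0, "customer-private": 1}

    class Labeled:
        def __init__(self, value, label):
            self.value, self.label = value, label
        def __add__(self, other):   # results carry the join of the labels
            join = max(self.label, other.label, key=LEVELS.get)
            return Labeled(self.value + other.value, join)

    def output(channel_label, v):   # release a value on a channel
        if LEVELS[v.label] > LEVELS[channel_label]:
            raise PermissionError(f"{v.label} value on {channel_label} channel")
        print(v.value)

    price = Labeled(100, "public")
    address = Labeled("10 Downing St", "customer-private")
    output("public", price)                    # allowed
    try:
        output("public", address + Labeled(", London", "public"))
    except PermissionError as e:               # join is customer-private
        print("blocked:", e)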

05 May 14:15 Introduction to MILS and the LynuxWorks Separation Kernel / Rance DeLong, LynuxWorks Inc.

Lecture Theatre 2, Computer Laboratory, William Gates Building

Abstract not available

04 May 15:30 The Bluespec hardware definition language / Joe Stoy, Bluespec Inc.

Lecture Theatre 2, Computer Laboratory, William Gates Building

Abstract not available

04 May 14:15 Reflection on Java Security and Its Practical Impacts / Li Gong

Lecture Theatre 1, Computer Laboratory, William Gates Building

In this talk I look back at a (then) new Java security architecture that was designed 15 years ago and is now standard across all Java platforms, and draw lessons from that experience. For example: design security technologies that are appropriate for the target set of "customers" (e.g., programmers or users?); manage the constant conflict between the want (of the enforcers) to protect and the desire (of the enforced) for freedom; and recognize why lasting impact is often practical rather than theoretical, given that no useful security is absolute. This will not be a typical research talk, but I will throw in some anecdotal stories to (try to) make it worthwhile.

Speaker's Bio: Li Gong was in the PhD program at the Computer Lab from 1987 till 1990. He had a flourishing research career before joining the newly formed JavaSoft in 1996, where he became Chief Java Security Architect and led the design and implementation of a new Java security architecture that is in common use today. His corporate career included general manager of Sun Microsystems' China R&D center, general manager of the online division of MSN in China for Microsoft, and now CEO of Mozilla Online Ltd., the Beijing-based subsidiary of the Mozilla Corporation. He also has an entrepreneurial side and has participated in a number of startups in Silicon Valley and in China.

He served as both Program Chair and General Conference Chair for ACM CCS, IEEE S&P, and IEEE CSFW. He was Associate Editor of ACM TISSEC and Associate Editor-in-Chief of IEEE Internet Computing. He held visiting positions at Cornell and Stanford, and was a Guest Chair Professor at Tsinghua University, Beijing. He has 14 issued US patents (2 of which were among the 7 patents that Oracle cited in the lawsuit against Google in August 2010), co-authored 3 books (published by Addison Wesley and O’Reilly) and many technical articles, and received the 1994 Leonard G. Abraham Award given by the IEEE Communications Society for “the most significant contribution to technical literature in the field of interest of the IEEE.”

03 May 15:45 Architectures for Practical Client-Side Security / Virgil Gligor, Carnegie Mellon University

Lecture Theatre 2, Computer Laboratory, William Gates Building

Few of the security architectures proposed over the past four decades (e.g., fine-grain domains of protection, security kernels, virtual machines) have made a significant difference to client-side security. In this presentation, I examine some of the reasons for this and some of the lessons learned to date. Focus on client-side security is warranted primarily because it is substantially more difficult to achieve in practice than server security, since clients interact with human users directly and have to support their security needs. I argue that system and application partitioning to meet user security needs is now feasible [2,3,5], and that special focus must be placed on how to design and implement trustworthy communication between users and their partitions and between partitions themselves.

Trustworthy communication goes beyond secure channels, firewalls, guards and filters. The extent to which one partition accepts input from or outputs to another depends on the trust established with the input provider and output receiver. It also depends on input-rate throttling and output propagation control, which often require establishing some degree of control over remote communication end points. I illustrate some of the fundamental challenges of trustworthy communication at the user level, and introduce the notion of optimistic trust with its technical requirements for deterrence for non-compliant input providers and output receivers. Useful insights for trustworthy communication are derived from the behavioral economics, biology [1] and social [4] aspects of trust.

References

[1] E. Fehr, "On the Economics and Biology of Trust," Journal of the European Economic Association, April-May 2009, pp. 235-266.

[2] B. Lampson, "Usable Security: How to Get It," Communications of the ACM, vol. 52, no. 11, Nov. 2009.

[3] J. McCune, Y. Li, N. Qu, Z. Zhou, A. Datta, V. Gligor, and A. Perrig, "TrustVisor: Efficient TCB Reduction and Attestation," Proc. of the IEEE Symposium on Security and Privacy, Oakland, CA, May 2010.

[4] F. Stajano and P. Wilson, "Understanding Scam Victims: Seven Principles for Systems Security," University of Cambridge Computer Laboratory, UCAM-CL-TR-754, Aug. 2009.

[5] A. Vasudevan, B. Parno, N. Qu, V. Gligor, and A. Perrig, "Lockdown: A Safe and Practical Environment for Security Applications," Technical Report CMU-CyLab-09-011, July 2009.

03 May 14:45 CTSRD: Capability CPUs revisited / Peter Neumann, SRI International / Robert Watson, University of Cambridge

Lecture Theatre 2, Computer Laboratory, William Gates Building

Abstract not available

03 May 14:15 An overview of the DARPA CRASH research programme / Howie Shrobe, DARPA

Lecture Theatre 2, Computer Laboratory, William Gates Building

Abstract not available

28 April 13:15 Using the Cambridge ARM model to verify the concrete machine code of seL4 / Magnus Myreen (University of Cambridge)

Computer Laboratory, William Gates Building, Room SS03

The L4.verified project has proved the functional correctness of C code which implements a general-purpose operating system. The C code is about 10,000 lines long and is designed to run on ARM processors. The 200,000-line L4.verified proof currently bottoms out at the level of C code, i.e. the C compiler is currently a trusted component in the intended workflow.

In this talk, we will describe how we are using the Cambridge model of the ARM instruction set architecture (ISA) to remove the C compiler from the trusted computing base. That is, we are extending the existing L4.verified proof downwards so that it bottoms out at a much lower level, namely, the concrete ARM machine code which runs directly on ARM hardware.

The L4.verified project and the Cambridge ARM project have for years been developed independently of one another. The main challenge is now: how do we bridge the gap between these separate projects? Our solution is to apply a technique we call decompilation into logic. Our tool, a decompiler, translates ARM machine code into functional programs that are automatically verified to be functionally equivalent with respect to the Cambridge model of the ARM ISA. We apply our decompiler to the output of the C compiler to turn the seL4 binary into a large functional program. A connection can then be proved semi-automatically between this functional program and the semantics of the C code used in the L4.verified proof.
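
A loose illustration of the decompilation-into-logic idea (the real tool emits HOL4 functions and proves the correspondence against the Cambridge ARM model; the mini-ISA, program and testing below are invented):

    # Illustration only: "decompile" a two-instruction program for a
    # made-up mini-ISA into a pure function, then check the two agree.
    import random

    MASK = 0xFFFFFFFF

    def step(regs, instr):
        """Tiny interpreter standing in for an ISA-level semantics."""
        op, rd, rn, rm = instr
        if op == "add":
            regs[rd] = (regs[rn] + regs[rm]) & MASK
        elif op == "mov":
            regs[rd] = regs[rm]

    PROGRAM = [("add", 0, 0, 1), ("mov", 2, 0, 0)]   # r0 += r1; r2 = r0

    def decompiled(r0, r1):
        """Functional program extracted from PROGRAM: final (r0, r2)."""
        s = (r0 + r1) & MASK
        return s, s

    for _ in range(10000):          # random testing stands in for the proof
        regs = [random.getrandbits(32) for _ in range(3)]
        expect = decompiled(regs[0], regs[1])
        for ins in PROGRAM:
            step(regs, ins)
        assert (regs[0], regs[2]) == expect
    print("decompiled function matches the interpreter on 10000 states")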

This talk describes ongoing work which, when complete, will remove the need to trust the C compiler and the C semantics. The new proof will instead have the Cambridge ARM model as a trusted component.

This is joint work with Thomas Sewell, Michael Norrish and Gerwin Klein of NICTA, Australia.

13 April 16:00 Mobile Social Networks and Context-Awareness: Usage, Privacy and a Solution to Texting While Driving? / Janne Lindqvist (Carnegie Mellon University)

Lecture Theatre 2, Computer Laboratory, William Gates Building

Many location-sharing systems have been developed over the past 20 years, and only recently have these systems started to be adopted by consumers. One of the major successes so far in terms of user adoption is Foursquare, which reports having ca. 7.5 million users as of March 2011. We studied both qualitatively and quantitatively how and why people use Foursquare, and how they manage their privacy. We will report on our user studies and surprising uses of Foursquare. Furthermore, we will discuss our ongoing work in using context-awareness and location sharing to nudge people not to use their mobile phones while driving.

Speaker's bio:
Janne Lindqvist is a Postdoctoral Fellow with the Human-Computer Interaction Institute at Carnegie Mellon University. Janne works at the intersection of mobile computing, systems security and human-computer interaction. His current projects include usable privacy interfaces for mobile phones, context-aware mobile mashups, and mitigating problems with mobile phone usage while driving. Before joining academia, Janne co-founded a wireless networks company, Radionet, which was represented in 24 countries before being sold to Florida-based Airspan Networks in 2005.

12 April 16:15 What is Software Assurance? / John Rushby, SRI International

Lecture Theatre 2, Computer Laboratory, William Gates Building

Safety-critical systems must be supplied with strong assurance that they are, indeed, safe. Top-level safety goals are usually stated quantitatively -- for example, "no catastrophic failure in the lifetime of all airplanes of one type" -- and these translate into probabilistic requirements for subsystems, and hence for software. In this way, we obtain quantitative reliability requirements for software: for example, the probability of failure in flight-critical software must not exceed 10⁻⁹ per hour.

But the methods by which assurance is developed for critical systems are mostly about correctness (inspections, formal verification, testing etc.) and these do not seem to support quantitative reliability claims. Furthermore, more stringent reliability goals require more extensive correctness-based assurance. How does more assurance of correctness deliver greater reliability?

I will resolve this conundrum by arguing that what assurance actually does is provide evidence for assessing a probability of "possible perfection." Possible perfection does relate to reliability and has other attractive properties that I will describe. In particular, it allows assessment of the reliability of certain fault-tolerant architectures. I will explain how formal verification can allow assessment of a probability of perfection, and will discuss plausible values for this probability and consequences for the correctness of verification systems themselves.
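
The abstract leaves the link implicit; the conditioning step behind it (following Littlewood and Rushby's published account, writing p_np for the assessed probability that the software is imperfect, and assuming truly perfect software never fails) is:

    P(software fails) = P(fails | perfect)   * (1 - p_np)
                      + P(fails | imperfect) * p_np
                      = 0 + P(fails | imperfect) * p_np
                      <= p_np

So evidence of correctness, by driving down the assessed probability of imperfection, yields a quantitative bound on the failure probability.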

This is joint work with Bev Littlewood of City University, London UK.

06 April 14:00 Netalyzr: Network Measurement as a Network Security Problem / Nicholas Weaver, ICSI and UC Berkeley

Lecture Theatre 2, Computer Laboratory, William Gates Building

Netalyzr, at http://netalyzr.net, is a widely used network measurement and debugging tool, with over 180,000 executions to date. Netalyzr is a signed Java applet coupled to a custom suite of test servers in order to detect and debug problems with DNS, NATs, hidden HTTP proxies, and other issues. Netalyzr has revealed many problems in the Internet landscape, ranging from broken NAT DNS resolvers, hidden caches and malfunctioning proxies, to deliberate ISP manipulations of DNS results, including some ISPs which use DNS to man-in-the-middle search properties like Yahoo, Google, and Bing. Although Netalyzr is a network measurement tool, writing it was a network security process, designed to detect unusual conditions by deliberately bending (or outright breaking) protocol specifications, using unintended features of Java, and applying a general dose of "sneaky".

This talk discusses the design of Netalyzr and interesting cases observed during development, and highlights some of the results, including HTTP caches, hidden proxies, chronic overbuffering, and DNS misbehaviors.
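
To give the flavour of "bend the protocol and see what is in the path" (an invented sketch, not Netalyzr's code; echo.example.net stands in for a cooperating test server that replies with the raw request bytes it received):

    # Invented sketch of in-path HTTP proxy detection (not Netalyzr code).
    # echo.example.net is a hypothetical test server that echoes back the
    # raw request bytes it received, so the client can diff sent vs. seen.
    import socket

    HOST = "echo.example.net"
    request = (b"GET /probe HTTP/1.1\r\n"
               b"Host: echo.example.net\r\n"
               b"X-Probe: UnUsUaL-CaSiNg\r\n"    # proxies often normalise this
               b"Connection: close\r\n\r\n")

    with socket.create_connection((HOST, 80), timeout=10) as s:
        s.sendall(request)
        received = b"".join(iter(lambda: s.recv(4096), b""))

    if request in received:
        print("request arrived verbatim: no in-path rewriting observed")
    else:
        print("request was modified in flight: hidden proxy/cache suspected")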

15 March 16:15 Caveat coercitor: towards coercion-evident elections / Mark Ryan (University of Birmingham)

Lecture Theatre 2, Computer Laboratory, William Gates Building

It has proved very difficult, and is perhaps impossible, to design an electronic voting system which satisfies the three desired properties of voter incoercibility, results verifiability, and usability. Therefore, we have looked at forgoing incoercibility and replacing it with "coercion evidence" -- after an election, it will be possible for observers to see how much coercion has taken place, and therefore whether the results constitute a mandate for the winner. The system we describe is intended to be practical to use.

The talk will include an introduction to the concerns and issues of electronic voting, as well as a brief survey of existing systems. The body of the talk describes ongoing, unpublished work. I will welcome comments during and after the seminar.

03 March 16:00 Promoting location privacy... one lie at a time / Daniele Quercia (University of Cambridge)

SS03 of the Computer Lab

Nowadays companies increasingly aggregate location data from different sources on the Internet to offer location-based services such as estimating current road traffic conditions, and finding the best nightlife locations in a city. However, these services have also caused outcries over privacy issues. As the volume of location data being aggregated expands, the comfort of sharing one's whereabouts with the public at large will unavoidably decrease. Existing ways of aggregating location data in the privacy literature are largely centralized in that they rely on a trusted location-based service. Instead, we propose a piece of software (SpotME) that can run on a mobile phone and allows privacy-conscious users of location-based services to report, in addition to their actual locations, also some erroneous locations. The erroneous locations are selected by a randomized response algorithm in a way that makes it possible to accurately collect and process aggregated location data without affecting the fidelity of the result.

We evaluate the accuracy of SpotME in estimating the number of people in a certain location on two very different realistic mobility traces: the mobility of vehicles in urban, suburban and rural areas, and the mobility of subway train passengers in Greater London. We find that erroneous locations have little effect on the estimations (in both traces, the error is below 18% for a situation in which more than 99% of the locations are erroneous), yet they guarantee that users cannot be localized with high probability. Also, the computational and storage overheads for a mobile phone running SpotME are negligible, and the communication overhead is limited (SpotME adds an overhead of 21 byte/s).
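
The abstract does not spell out the algorithm; the classical randomized-response scheme it builds on works roughly as follows, with the reporting probability, location set and population invented for illustration:

    # Randomized-response sketch: report the true location with probability
    # p, otherwise a uniformly random one; the aggregator can still compute
    # an unbiased per-location estimate. All parameters are invented.
    import random
    from collections import Counter

    LOCATIONS = list(range(50))     # hypothetical cell ids
    p = 0.05                        # 95% of individual reports are lies

    def report(true_loc):
        return true_loc if random.random() < p else random.choice(LOCATIONS)

    # Simulate 100,000 users, a quarter of whom are truly in cell 7.
    truth = [7] * 25_000 + [random.choice(LOCATIONS) for _ in range(75_000)]
    counts = Counter(report(t) for t in truth)

    # E[count_l] = p * n_l + (1 - p) * N / |L|  =>  solve for n_l.
    N, L = len(truth), len(LOCATIONS)
    estimate = (counts[7] - (1 - p) * N / L) / p
    print(f"true count in cell 7: {truth.count(7)}, estimate: {estimate:.0f}")

Each individual report is overwhelmingly likely to be erroneous, so a single report reveals little about its sender, while the aggregate estimate stays accurate.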

23 February 14:15 Reasoning about Software Safety Integrity and Assurance / Tim Kelly, University of York

Lecture Theatre 1, Computer Laboratory

With increasing amounts of software being used within safety-critical applications, there is growing concern as to how designers and regulators can justify that this software is sufficiently safe for use. At the system level, it is reasonable and sensible to talk in terms of risk mitigation, and to establish arguments that the probability of occurrence of identified risks is acceptably low. Whilst it is not difficult to cascade these risk-based requirements to software, it becomes extremely difficult to reason about software system failure probabilistically (for all but trivial examples). Instead, qualitative arguments and evidence (concerning the satisfaction of specific software safety properties and requirements) are typically offered up. These can be test-based arguments, or analytic (e.g. proof-based) arguments. However, these arguments (even when deductive reasoning is employed) cannot be established with absolute certainty. There remains epistemic uncertainty surrounding such approaches: Has the software (and its interface with the real world) been modeled adequately? Can the abstractions used be justified? Are the tools used in the process qualified? This talk will examine the problems of exchanging safety arguments concerning real-world risk (associated with aleatoric uncertainty) for issues of confidence associated with software safety arguments (associated with epistemic uncertainty). We'll present these concerns in the context of structured (but informal) argumentation approaches used within software safety justifications, and the guidance that we have developed for safety-critical industries as part of the Software Systems Engineering Initiative (www.ssei.org.uk).


Biography

Dr Tim Kelly is a Senior Lecturer within the Department of Computer Science at the University of York. He is Academic Theme Leader for Dependability within the Ministry of Defence-funded Software Systems Engineering Initiative, and was Deputy Director of the Rolls-Royce Systems and Software Engineering University Technology Centre. His research interests include safety case management, software safety analysis and justification, software architecture safety, certification of adaptive and learning systems, and the dependability of "Systems of Systems". He has supervised a number of research projects in these areas with funding and support from the European Union, EPSRC, Airbus, the Railway Safety and Standards Board, Rolls-Royce, BAE Systems and the Ministry of Defence. Dr Kelly has published over 140 papers on safety-critical systems development and assurance issues.

01 February 13:00 A report on the IAB/W3C Internet Privacy Workshop / Dr. David Evans (Computer Laboratory)

Computer Laboratory, William Gates Building, Room FW11

Back in December I went to the IAB/W3C Internet Privacy Workshop (http://www.iab.org/about/workshops/privacy/) at MIT. I'll outline what was said and where the emphasis lay. In summary: the browser is really important, and authors of standards documents should include a section on relevance to privacy.