Rambles around computer science

Diverting trains of thought, wasting precious time

Thu, 28 May 2009

ICSE day 3, and subsequent talks

It's been a few days since the end of the conference already, so I should probably write up my reflections on the final day. Friday began with a very interesting keynote by Pamela Zave, arguing that more software engineers should be paying attention to the current work (and funding drive) on next-generation internet architectures. The rationale is that the networking community doesn't pay much attention to the concerns of application authors, and arguably doesn't come up with designs that have good engineering properties in the sense of compositionality, abstraction and so on. I liked the sentiment but was less sure about her argument... it was more at a tentative “possible approaches” stage than an inspirational, rabble-rousing keynote. I'm about to write up a summary of her argument and send it round the netos list at Cambridge to see what those hungry hounds make of it.

Later sessions were a mixed bag, as always, but contained some nice stuff. In the code generation session, some guys from MIT presented a lower-effort, higher-assurance way of generating correct implementations for “standard” methods found in OO languages (like equals() or hashCode()). I missed the start of the talk, so didn't quite follow the details, and am not sure how significant a contribution this is, but it was thought-provoking nonetheless: was it a poor-value decision for the Java and .NET library designers to mandate these methods, given that they're difficult to get right? What can we learn from this? Well, probably something, but I should read the paper first. Second in the code generation session was a paper from Peking University about applying some analyses to identify which string constants, in a large program, make it to user output devices (i.e. a certain set of UI library calls), accounting for substrings, concatenation and various other complications. It was a decent paper tackling a small but nontrivial problem, although again I wasn't sure whether there was a huge contribution in bolting together a few preexisting analyses for solving such a specific problem. And lastly there was a prize-winning paper about using genetic programming techniques to automatically synthesise a patch for fixing a newly discovered bug, given only a test case triggering the bug. It works by finding snippets of code that already exist elsewhere in the source and randomly splicing them in near the bug site, or by randomly removing lines not found on test-passing paths. An iterative process then homes in on a solution, using the test case, together with a control-path weighting derived from test-case executions, to guide the mutations. This is very clever stuff and deservedly won a couple of prizes.
If I had to make a criticism, then rather like the (admittedly awesome) Steve Reiss paper of the first day, these approaches feel somewhat unsatisfactory in that they're proposing the use of source code that is the output of essentially random processes. But of course, human programmers can be modelled as random processes fairly well, in that they naturally insert errors every so often for no clear reason, so rationally there's no reason to object to these approaches on those grounds.
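To make the splice-or-delete search loop concrete, here is a toy sketch in Java. It is very much a simplification under my own assumptions: a “program” is just a list of statement strings, mutation sites are chosen uniformly at random (where the real system weights them using the control paths exercised by the test executions), and fitness is collapsed to a simple pass/fail predicate over the whole test suite. All names here (RepairSketch, mutate, repair) are illustrative, not from the paper.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;
import java.util.function.Predicate;

class RepairSketch {
    // A "program" is just an ordered list of statements for this sketch.
    record Program(List<String> stmts) {}

    // One random mutation: either delete a statement, or splice in a copy
    // of a statement already present elsewhere in the same program --
    // no new code is ever invented, mirroring the paper's key idea.
    static Program mutate(Program p, Random rnd) {
        List<String> s = new ArrayList<>(p.stmts());
        int site = rnd.nextInt(s.size());
        if (rnd.nextBoolean() && s.size() > 1) {
            s.remove(site);                            // delete a statement
        } else {
            s.add(site, s.get(rnd.nextInt(s.size()))); // splice in a copy
        }
        return new Program(s);
    }

    // Keep generating mutated candidates until one makes the failing test
    // pass without breaking the previously passing tests, or the budget runs
    // out. (The real system evolves a population rather than restarting.)
    static Program repair(Program buggy, Predicate<Program> passesAllTests,
                          Random rnd, int maxTries) {
        for (int i = 0; i < maxTries; i++) {
            Program candidate = mutate(buggy, rnd);
            if (passesAllTests.test(candidate)) {
                return candidate;
            }
        }
        return null; // no repair found within the budget
    }
}
```

The crucial design point the paper exploits is that the mutation operators only move or remove existing code, which keeps the search space small enough for the weighted random walk to find plausible patches in practice.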

The concurrency session was pretty interesting. First there was something from Hong Kong University of Science and Technology about using aspects to insert synchronization logic. There was more to it than that, and the talk was delivered fairly well, but for some reason I lost the thread early on, so should really read the paper. Second was a prize-winning paper about Effective Static Deadlock Detection, by some Intel/UC Berkeley guys. This was a pretty neat tool which combines a bunch of well-known static analyses in surprisingly simple but neatly complementary ways to provide a decent static deadlock detector for Java programs. As often happens, the talk managed to make the work seem trivial, where in fact there's a major contribution simply in the problem set-up and approach (in this paper, the six necessary conditions for deadlock that their analyses target). Something which marks out the “software engineering” approach from a more “programming language” approach is that their tool is neither sound nor complete---it just does a decent job in practice for a decent range of input programs. Finally in this session was a talk by Danny Dig about a refactoring tool which can automatically make the necessary source-level updates for migrating a concurrent program to use a new set of concurrency-optimised library interfaces like those in java.util.concurrent. I found the example use-cases very limited. There were three: use of AtomicInteger instead of ++, use of the additional atomic operations in ConcurrentHashMap, and a fairly simplistic conversion of easily-detectable divide-and-conquer algorithms to use a parallel evaluation strategy. All fair enough, but it's not clear whether the refactoring support for these generalises in any way, and the detection of divide-and-conquer algorithms seemed terribly simplistic.
(I'd argue that it's something best done on an intermediate representation, perhaps using some sort of CFG pattern-matching techniques much like a decompiler---rather than the easily-foiled source-level detection.)
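As a concrete instance of the first use-case, the refactoring replaces a lock-protected int increment with java.util.concurrent.atomic.AtomicInteger. This hand-written before/after (the Counter class and its field names are my own illustration, not taken from the talk) shows the shape of the transformation:

```java
import java.util.concurrent.atomic.AtomicInteger;

class Counter {
    // Before the refactoring this would read:
    //   private int hits;
    //   synchronized int record() { return ++hits; }
    // The transformation replaces the int field with an AtomicInteger and
    // rewrites each read-modify-write into a single atomic library call,
    // so the lock can be dropped.
    private final AtomicInteger hits = new AtomicInteger();

    int record() {
        return hits.incrementAndGet();
    }
}
```

The subtlety, and presumably where the tool earns its keep, is proving that every access to the field goes through a rewritable read-modify-write pattern; a plain `hits++` left behind would silently reintroduce a race.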

In the Software Process session there was a paper from MSR attempting to answer the question of whether “distributed development harms software quality”. Unfortunately the presentation didn't really answer it---it could only say “not the way Microsoft did it for Windows Vista, but that's probably because they took several measures to minimise the problems, and we're not sure which ones are the important ones”. So all a bit unsatisfactory, though perhaps there's more in the paper. I then skipped over to the second Program Analysis session (happening concurrently) and caught a couple of talks there, but with it being the last session on the Friday, not much stayed with me, so I won't defame the authors by attempting to regurgitate any of their work here.

Reflecting on ICSE: it was good, although I didn't enjoy it quite as much as I enjoyed FSE in 2007, nor did I find quite so much exciting work in it. Perhaps it's because in those days I was a wide-eyed first-year student, whereas now I'm a hard-headed should-be-writing-a-thesis cynic. But I'd also venture to say that ICSE is slightly unwieldy in its size, and would be better with a smaller, slightly narrower program that reduces the concurrency factor in the session scheduling.

Following the conference I've been sticking around in Beautiful British Columbia (as the license plates rightly describe it) to give my talk about ongoing work on Cake. On Monday I went to UBC, hosted by Andrew Warfield, and today I've been at UVic hosted by Yvonne Coady after Andy very kindly put me in contact. Both talks were useful experiences, and between the two, the contrast could not have been more marked. After UBC I was left contemplating quitting my PhD. I turned up, gave my talk and felt like I did a reasonable job delivery-wise, but conspicuously absent was any sign from the audience of the slightest appreciation, or any nonnegative remark. Perhaps I'm paranoid, but the questions seemed to come with an undercurrent of contempt. The audience buggered off without any attempt at even a pretence of sociability, and I went home in a rather frayed state. Today was a much better story: following extremely helpful pre-arrival e-mails, I received an almost (but not!) embarrassingly warm welcome at UVic. Even better, my ambition has been mostly restored thanks to the stunning generosity, enthusiasm and genuine helpfulness of Yvonne and the other UVic people. Victoria is a particularly beautiful place too, with a quaint character of its own, so it's very sad that I have to leave tomorrow. Hopefully I'll get time to poke around in the morning, and I'm also hoping it won't be terribly long before I come back here.
