Rambles around computer science

Diverting trains of thought, wasting precious time

Mon, 14 Mar 2011

Writing a Part II dissertation

Not that long after I did Part II of the Tripos in Cambridge, I wrote a piece about how to do a successful Part II project, based on bitter experience of having done a less-than-completely successful one. Now it's a few years later, and I've supervised four Part II dissertations and proof-read a couple more (and now left Cambridge!). I thought I'd write down some hints about what makes a good dissertation. As before, the pink book is well-meaning but misses a lot of this.

I should also add that this advice is not prescriptive---it's a mixture of my personal preferences as a reader of dissertations, and a conservative underapproximation of “known ways to write a good dissertation” according to my experience. Not being part of the examination process myself, I can't say it's the only way, nor that it's sufficient, nor even that it's always correct---only that these recommendations seem to have some correlation with good marks.

Introduction

This chapter is for you to identify the problem space of the work, and what specifically your work seeks to achieve in that space. You should outline the technical background of the work here, but only at a high level---save the details for the next chapter. The emphasis should be on context: motivating use-cases, and covering a selection of alternative approaches to the problem. Once you've covered that, you can give a specific taster of the one you adopted---probably in the form of a simplified summary, perhaps a preview-style example or two (of the work you did and its supported use-cases) and an outline of the rest of the dissertation.

Preparation

This chapter should consist mostly of technical background, at a level deeper than the introduction. Think of it as mirroring the preparatory reading (and perhaps experimentation) you had to do before beginning the project work proper. You should describe the specification of the system you built, outline its overall design, and mention any changes from your original proposal. It's particularly worth emphasising anything hard that you had to learn in order to undertake the project---such as unfamiliar tools, languages, libraries, programming styles, algorithms, bits of mathematics, and so on.

For pitching your work, my rule of thumb is to assume your reader knows basic computer science (say, an outline understanding of the first two years' Tripos content) but is a bit slow to catch on. In other words, try to make the reasoning in your write-up extremely clear and easy to follow. This is not because the examiners are slow to catch on. Rather, it's because they are probably in a hurry, and because you're almost certain to underestimate how difficult your own write-up is to understand.

Avoid assuming that the reader knows any particular language, tool or detailed technique that you use. For example, if your work is based on implementing some particular programming language, you can assume the reader has basic degree-level knowledge about programming languages in general, but you should use this chapter to give an outline introduction to this specific language and its peculiarities, with references to additional material.

This chapter is somewhat infamous for requiring that you write some “boring” stuff about development practices: what tools you used, your testing strategy, how you took backups, what development practices you followed, and so on. I say “infamous” because we researchers, when we are proof-reading dissertations, find this all a bit incongruous---most of a dissertation reads a bit like a research write-up, whereas this stuff doesn't. Since we're not accustomed to seeing this kind of thing written down, it thuds a bit. However, it's the researcher's perspective that's at fault here. Most dissertation authors are about to go off into industry, and from the perspective of professional software development, including this small amount of practice- and process-oriented content is a very reasonable requirement.

Implementation

To a first approximation, this chapter should be describing the clever data structures and algorithms that appeared in your project. Think “meaty”---a really good Implementation chapter will come across as chock-full of interesting, nontrivial technical material, clearly described and comprehensible. It should also have reasoned discussion of how your techniques work and why they are well-suited to the problem.

When explaining how your system works, there is a question of what level of explanation is appropriate, and how much detail you should go into. I'd advise that you completely avoid descending to syntactic code-level details (e.g. names of particular classes and methods). There could be an exception if some class or method was so conceptually significant and pervasive that mentioning it by name is the clearest presentation, but this is relatively rare. Think of this chapter as providing descriptions sufficient for a reader to go away and reimplement your system algorithm-for-algorithm, but not identifier-for-identifier. Including a UML class diagram, or similar, is okay---particularly when it's showing something interesting!---but I'd advise against going any deeper than that.

Instead of code-level detail, explain your implementation using pseudocode, data-structure diagrams, flow-charts, time-sequence diagrams, and the like. A good project will have meat here that is outside the syllabus, or at least drawn from the more specialised end of Part II courses, so you should make these especially clear and assume your reader is unfamiliar.

You may well have used some third-party software as part of your system. You should explain how it fits in. Moreover, it's okay (and generally good) to include a summary of how it works, even though you didn't write it. This is to show that you could have implemented it yourself if you'd had to, but that there was no good reason to!

I would personally avoid talking about evaluation harnesses, methods or support coding here. That is best saved for the Evaluation chapter.

Look at existing technical writing, perhaps research papers in reading lists of your lecture courses, for more ideas about how to present technical content clearly and at the right level.

Evaluation

Couch your evaluation in terms of claims that your work supports, or goals that you claim your work has fulfilled. The chapter then becomes an exercise in substantiating these claims.

Explain your method of evaluation. This requires answering at least three questions. What data are you collecting? How will you be interpreting it? Finally, how are your claims decided by this interpretation of your data? That final question is the trickiest to get right---usually it involves an argument that the interpretation of the data soundly represents the abstract truth of the claim in some way. (However, you don't need to belabour the point on that last issue---as long as your approach is not completely off-the-wall, it's generally enough to show that you've thought about the validity of your evaluation method and that it's not obviously fatally flawed.)

There should be results, or data. Since these will hopefully be quantitative, they are usually presented as tables and/or graphs. Pay attention to precise presentation of results: units, appropriate precision, error bars, discrete-versus-continuous graph conventions, and so on. For each piece of evidence you present, say why you're interested in it (first!) and (then) what it shows and your confidence in that.

Present results so that they can be reproduced. So if you give program execution times, say what machine you were running on. If you can abstract this to something of more robust meaning than numbers relevant only to your particular experimental rig, then do so, although unless the mapping is absolutely trivial, include the raw data too. One example: if you measure execution times of some low-level code in CPU microseconds, say, then it might be worth abstracting that to CPU cycles if you have the relevant data---but keep the original measurement in there too. More importantly, explain what you are doing! One of the easiest ways to write a bad Evaluation chapter about good work is not to explain the origin or significance of the numbers or graphs you're presenting.

Have something to compare your system against. If this really doesn't seem feasible, you can sometimes get around this by defining your own goals up-front (e.g. “at least 70% precision...”---including some argued justification for the number!), then showing that you satisfy them. In this case, the goal should be mentioned in the Introduction or Preparation chapters. It is nevertheless nicer to show that your system does better than some other system or approach, even if it's a somewhat naive or trivial one. For example, comparing a query-routing algorithm against a random algorithm is better than no comparison at all.

Conclusions

Say what you did, in summary. It's okay to repeat yourself from earlier sections. Also, take the opportunity to talk about future work: what you learned, what could be done better, how your system could be extended to do new and wonderful things, or what better ideas you've had since about how to tackle your chosen problem and its close relatives.

Bibliography

The bibliography should consist of references to archival material. There are some things you'll want to “reference” that are not archival material---for example, software packages you've used. I personally avoid trying to “cite” these in the bibliography, although some authors do. So that point, and much of the following, is mostly personal advice.

Since URLs are not (under current web technology) stable enough for archival use, avoid URLs except as a last resort, when they're the only way of identifying a published document. Unlike paper publications, web pages are living documents, so if you use a URL, include the date on which you looked at the content (e.g. “retrieved on 2010-07-30”) so that a reader can (hopefully) use an archiving search engine to see the precise page that you saw.

I prefer to use footnotes for referencing software by URL. (But generally use footnotes sparingly---only for places where they'd really interrupt the main flow of text, but are nevertheless relevant and interesting.)

This is very personal advice, but I'd say: in the main text, avoid using citation markers as nouns. A lot of writers do this, but I find it jarringly bad style. Instead, use the author's name. For example, don't write “as shown by [42]”, but instead write “as shown by Smith et al. [42]”. If you're using author--year citation markers (which I generally prefer to plain numbers), there are usually citation styles you can use to avoid repeating the author's name: for example, “as shown by Smith et al. [1990]” is preferable to “as shown by Smith et al. [Smith 1990]”.

Style

Following are a few other pieces of advice on style that I've had cause to dispense in the past.

Don't feel the need to tell a complete, linear story. It's very tempting to write what you did as a narrative. However, this usually has two problems. Firstly, by emphasising the what, it's easy to understate the why, which can leave the reader baffled by a succession of sentences saying “I did this, then I did this, then I did this...” without much insight into the overall goal or the motivation behind each step. Secondly, some of the twists and turns that you took are probably not interesting at all---no matter how much of a struggle they were for you at the time---so can be left out. So, instead of a narrative, try to write in what I call a “progressive encoding” or “iterative deepening” style. Think about the most significant high-level points you want to make. As a rule of thumb, each of these should be stated concisely at the beginning of its own paragraph. The rest of the paragraph can fill in additional detail and back-story in successive sentences. This results in a structure where the fact and the explanation occur close together. Curiously, they usually turn up in reverse order (fact first, then explanation), and this is still very readable. It's a very common style in good scientific writing. It also has a rhetorical benefit: making a contentious or surprising statement, then delving progressively deeper into a justification of it, is a trick that can keep the reader engaged when used judiciously.

As an example, instead of saying “I tried running the JVM with the default options, but this suffered from out-of-memory errors, so I ran with an increased heap size of 384MB”, say “the JVM was configured with the larger heap size of 384MB. The smaller default heap size was discovered to be insufficient when constructing the intermediate representation for larger input files.” Again, you can hoist more significant facts sooner, and leave more peripheral matters until later in the paragraph. Note that there will still be a temporal progression to some extent, particularly between paragraphs. Note also that by this approach you will naturally end up leaving out the redundant twists and turns that are not interesting. If you think something interesting has been left out unfairly by this structuring, you can always put it in a paragraph towards the end of a section or chapter, clearly signposted as a curiosity rather than part of the main flow. (Sometimes, a paragraph will end with a bracketed note like this one, to further outline the distinction between more- and less-important content.)

Avoid editorialising, by which I mean expressing opinions (e.g. about particular designs, technologies, approaches, companies, products etc.) that you don't intend to scientifically validate.

Underneath any heading in the dissertation, it's good style to include at least a summary sentence before descending a level in the heading structure (e.g. between a heading and its first subheading).

Finally, keep your writing grammatical. Every sentence should contain a verb!

That's all I have to offer---please do let me know any suggestions, queries, corrections, counterexamples or contrary experience you may have about anything in this post.

[Update, 2011-10-13: Andrew Moore supplies this very helpful set of notes from his duties as examiner in 2011!]


Tue, 22 Feb 2011

Explaining covariance and contravariance by data flow

A few years ago when I was pressed by my students to give them a good explanation of Java's wildcard types, like Stack<? super Circle> or List<? extends Ellipse>, I came up with a simple code example concerning the direction of data flow. The idea seems both to be more general than I first thought, and to apply to more wide-ranging problems with subtyping relations than the case of Java wildcards. Since I've never seen it written down anywhere else, here it is.

In a nutshell, the idea is that direction of data flow is key. (If that was obvious to you already, you can probably stop reading now!) In object-oriented programming, all interactions across object interfaces can be seen as sending messages. In a procedural language like Java, each interaction will in general involve two messages---the call and the return value. In a parametrically polymorphic type system like modern Java's, one of the key roles of the type system is to check (conservatively, as always) that any invocation will not try to send the wrong kind of data, in either direction.

Suppose I have some code that works by pulling objects out of a queue. To work correctly, these objects have to be of class Circle or some subclass thereof. So, we need a queue whose designated element type is Circle or some subtype of Circle, i.e. ? extends Circle. We say that the type of the queue here is covariant in its element type, meaning the subtyping relationships go “the same way”.

Meanwhile, suppose the same code, after it processes the Circle, wants to push the object onto another queue for some other code to deal with. We want to ensure that this queue can accept the objects we give it, which might be Circles or instances of any subclass. So we need a guarantee that it can take any such objects. So, a queue of UnitCircle, say, would not be good enough---if we gave it a plain Circle this would be a type error. So our output queue has to have element type Circle or any supertype thereof, that is, element type ? super Circle. Here the type of the queue is contravariant in the element type, meaning the subtyping relationships go opposite ways.
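
For concreteness, here is a minimal Java sketch of the two queues. This is only a sketch: the shape classes and the process method are my own invented names, not taken from any real code.

    import java.util.ArrayDeque;
    import java.util.Queue;

    class Ellipse {}
    class Circle extends Ellipse {}
    class UnitCircle extends Circle {}

    public class QueueFlow {
        // Pulls Circles (or instances of subclasses) out of 'in' and pushes
        // them onto 'out'. The input queue is covariant in its element type:
        // Queue<Circle>, Queue<UnitCircle>, ... are all acceptable, because
        // we only take data out of it. The output queue is contravariant:
        // Queue<Circle>, Queue<Ellipse>, Queue<Object>, ... are all
        // acceptable, because we only put data into it.
        static void process(Queue<? extends Circle> in, Queue<? super Circle> out) {
            Circle c;
            while ((c = in.poll()) != null) {
                // ... do something with c ...
                out.add(c);
            }
        }

        public static void main(String[] args) {
            Queue<UnitCircle> unitCircles = new ArrayDeque<>();
            unitCircles.add(new UnitCircle());
            Queue<Ellipse> ellipses = new ArrayDeque<>();
            process(unitCircles, ellipses);    // both directions type-check
            // process(ellipses, unitCircles); // rejected: wrong way round in both positions
        }
    }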

I won't claim that this explanation covers all sane use-cases of wildcard types. On the other hand, if you know of any that aren't in some way similar, I'd like to hear about them. The idea is certainly more general than just code which processes items in queues, or just code that pushes data from one data structure to another.

This is also a neat way of considering a classic problem in typed object-oriented programming, called the “Circle--Ellipse problem”. The question is simple. We wish to define two mutable data types: Circle for representing circles and Ellipse for representing ellipses. Which way, if any, should the subtyping relation be between Circle and Ellipse?

The trick is to remember that a mutable object is not just a value: it is a container for a value that can be retrieved and replaced. In other words, we can put data in and get data out. The presence of these two distinct directions is the source of the problem. Put simply: when used in the “out” direction, for reading the parameters of the circle, Circle is a subtype of Ellipse (every valuation of a circle also represents an ellipse); but in the “in” direction, for updating the parameters stored in the object, Ellipse is a subtype of Circle, since we can give an ellipse any set of parameters that we might give a circle, whereas the converse is not true---circles cannot be updated with the parameters of ellipses having unequal minor and major radii.
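
One way to see the two directions concretely is to split the read-facing (“out”) and write-facing (“in”) views into separate Java interfaces. Again this is only an illustrative sketch, and the interface names are invented:

    // "Out" direction: reading parameters. Every circle can be read as an
    // ellipse, so CircleReader is a subtype of EllipseReader.
    interface EllipseReader {
        double majorRadius();
        double minorRadius();
    }
    interface CircleReader extends EllipseReader {
        double radius();   // for a circle, major == minor == radius
    }

    // "In" direction: updating parameters. Any update that is valid for a
    // circle (setting a single radius) is also valid for an ellipse, but not
    // vice versa, so here the relationship flips: EllipseWriter is a subtype
    // of CircleWriter.
    interface CircleWriter {
        void setRadius(double r);
    }
    interface EllipseWriter extends CircleWriter {
        void setRadii(double major, double minor);
    }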

An even simpler case which illustrates the in/out distinction is the usual subtyping rule for functions, which I will illustrate formal-style just because I can:

       s1 <= t1        t2 <= s2
    ------------------------------
          t1->t2  <=  s1->s2 

In English: a function expression's type t1->t2 is a subtype of another function expression's type s1->s2 if t2 (denoting the set of result values) is a subtype of its counterpart s2, and the type t1 (denoting the set of argument values) is a supertype of its counterpart s1. Intuitively (or Liskovly), this holds because to be a subtype, any term typed t1->t2 must be substitutable into any context where a term of type s1->s2 could be used, meaning the full range of argument values denoted by s1 must be acceptable to the function (which, optionally, may also accept other argument values), and the full range of return values denoted by t2 must be acceptable to the call context (which, optionally, may also accept other return values). I said “if” but turning the rule upside down is also valid, so “iff” would have been correct (but a stronger statement than the inference rule itself).

I'd like to add that this is the first time I've written down an inference rule in any of my writing.
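
The same rule can be seen at work in Java through wildcard-typed java.util.function.Function parameters. Here is a sketch, with invented class and method names:

    import java.util.function.Function;

    class Shape {}
    class Ellipse extends Shape {}
    class Circle extends Ellipse {}

    public class FunctionSubtyping {
        // A context expecting (in effect) a function of type Ellipse -> Ellipse,
        // i.e. s1 = Ellipse and s2 = Ellipse in the rule above.
        static Ellipse transform(Function<? super Ellipse, ? extends Ellipse> f, Ellipse e) {
            return f.apply(e);
        }

        public static void main(String[] args) {
            // t1 = Shape (a supertype of Ellipse) and t2 = Circle (a subtype of
            // Ellipse): a Shape -> Circle function is usable where an
            // Ellipse -> Ellipse function is expected.
            Function<Shape, Circle> f = s -> new Circle();
            transform(f, new Ellipse());

            // The other way round is rejected: an Ellipse -> Shape function may
            // return shapes that the calling context cannot accept.
            // Function<Ellipse, Shape> g = e2 -> new Shape();
            // transform(g, new Ellipse());   // compile error
        }
    }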

None of this says anything about how Circle and Ellipse should be implemented. Abstractly, they share a lot, so in any sane implementation we would want to share whatever code we could. In many object-oriented programming languages, inheritance is provided as a code-sharing mechanism. Which way should the inheritance relationship go? It doesn't matter, as long as we treat this as separate from the subtyping relationship.

Here is the real problem. Some languages, particularly Java, make this difficult for us by assuming that these two relationships should be the same. Most people who know a little about object-oriented programming will know that inheritance and subtyping are distinct concepts, although they are often conflated. Historically, C++ can probably be blamed for this conflation, in that it popularised the overloading of a single mechanism (class derivation) for both. However, in C++'s defence, class inheritance is private by default precisely because of this distinction. Sadly, most texts which teach the language use public inheritance nearly everywhere without ever explaining why this can be a bad idea. Java made public inheritance the default, so is far more culpable in my opinion.

Many people take this as an opportunity to lambast implementation inheritance. I won't do this. Inheritance is a useful language feature for capturing commonality of implementation. It's great that I can write a class C which shares a lot of implementation with a pre-existing class B, simply by describing what is different. It's very handy that there is no need to write forwarding functions to the code in B that can be used directly in C. It makes for readable code and discourages copy-and-paste code cloning.
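
A tiny sketch of what I mean, with invented class names---C reuses everything from B and describes only the difference:

    class B {
        void commonOperation()  { /* lots of shared logic */ }
        void customisableStep() { /* default behaviour */ }
    }

    class C extends B {
        // Everything else, including commonOperation(), is inherited from B
        // with no forwarding code; only the difference is written down.
        @Override
        void customisableStep() { /* specialised behaviour */ }
    }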

Subtyping is more subtle, because it is a semantic property. It relates to the meaning of code, not the raw code itself. Things like direction of data flow don't turn up in the more syntactic matter of code re-use (inheritance's territory) but, as we saw, very much do in the semantic world of types.

I said I've never seen this explanation written down by anyone else, but actually, a bit of Googling reveals that another blogger has covered similar ground. You can decide whose explanation you like best!


Sat, 07 Nov 2009

Teaching despondency

I usually say I enjoy teaching---meaning supervising, since that's the main kind of teaching that I do. I certainly like the idea of teaching. When it goes well, it's satisfying. However, I've admitted to myself recently that I usually don't enjoy it in practice. It fragments my time badly, and as a result I usually don't prepare properly for it. It then puts me into the uncomfortable position of having the success of others dependent on my doing a good job, which would be fine if I was giving it my best, but it's really just a distraction from my main work.

To blame something aside from my own lack of conscientiousness, I also don't think the supervision format works very well for much computer science. I've already ranted here about how usually supervisions entail rushing through all the material, with students “sort of getting it” but mostly not gaining a visceral understanding. It's much harder to debug the students' mistakes and weaknesses in CS than in many other disciplines, particularly with on-paper exercises. It's doubly nerve-wracking when I'm teaching stuff that I don't fully understand myself, like a lot of concurrent programming material---my experience with concurrent programming is limited, and I've certainly never implemented a transaction commit manager or many of the other systems described in the course.

What's doubly disheartening is that there seems to be very little will to improve teaching in this department. The concurrent systems teaching has just taken a giant step backwards in reverting to a textbook-driven approach, whereas the previous course was much more a “look at the code, write the code” affair, and any new course should in fact have moved to something more practical, not less. The material could easily be made into something very hands-on, no more time-consuming, far easier to teach (running student code in test harnesses is far easier than trying to evaluate some jotted-down pseudocode) and far better for learning quality. There's just no will.

As usual, support materials for supervisors are absolutely lacking, despite a decade-old resolution by the Head of Department (two heads ago) that every course should provide a supervisor's handbook. The very basic suggestion of a wiki for supervisors to share their original support materials, references and wisdom was glibly rejected when I suggested it three years ago, and despite promises that the department is “looking into ways” of improving things, hardly anything has changed. I will probably rant about this in the upcoming Teaching Committee meeting.

When I visited UMD recently, I caught the end of a first-year graduate-level class on type theory. It was a format that seemed like a far better compromise than our supervision system, and would work for a wide variety of courses. There were about ten students in the class---few enough to make the experience an interactive one, but sufficiently many to make it worth using an experienced faculty member rather than any old graduate student (including myself in that description). It was made very concrete by the use of the Coq proof assistant to mechanise the theory. Overall it just seemed to be offering a far better quality of learning. Cambridge needs to get over the complacency that results from attracting the best students---which it manages out of prestige and the peer-competitive admissions challenge, not because the quality of its courses is any better---and actually take seriously its moral obligations to provide high-quality teaching that answers the needs of the median student, not just the top five percent. Sigh. Rant over.


Sat, 02 May 2009

Doing the mathsssss: supervisions

I just finished my supervising for the year (since I avoid supervising Easter term courses). As with so many things, there seems to be a major difference between how supervising CS is “supposed” to work and how it actually works. The usual rule of thumb is that there should be one contact hour of supervision for every three or four hours of lectures. What's “supposed” to happen is that the students do some self-study on the lectured material, attempt some exercises, hand in their answers, and let the supervisor go over the subset of material which they have revealed, by their attempts, that they didn't manage to understand by self-study. In my first year as an undergraduate, when I studied physics, supervisions followed exactly this pattern, and they were usually pleasant and relaxed, with time left over to talk about “interesting” discursive matters.

Unfortunately, during CS supervisions (both when I give them, and when I was the student), what usually happens is that the supervisor seems to go over almost everything in supervision. Why? Surely the students don't get nothing out of self-study? The whole supervising experience is much less pleasant because there's a constant feeling of a need to rush through the material to cover the necessary amount in an hour. Although the official advice that “it's better to cover a few things well than everything poorly” is undoubtedly true, it's hard to put this to the students, who can become anxious and/or somewhat indignant if their supervisor declines to cover a particular topic. So usually I end up rushing, and letting supervisions overrun, neither of which are particularly pleasant.

One problem might be that my students are (as my former self presumably was) lazy and don't put so much effort into self-study. Another is that, in a group of two or three, there's less overlap in what the students “didn't get” by themselves, so the union is close to the full set of course material. Related to this is the problem that, since computer science is mostly about concepts rather than techniques, learning failure modes are “softer” than, say, in physics---it's quite common for students to “almost get it”, but not quite well enough to avoid going over it in the supervision.

The problems I just mentioned are not easily fixable. However, another contributory one might be. CS students at Cambridge typically have (at least in their first two years) between 12 and (yes, sometimes) 15 hours of lectures per week. A cautious estimate would say that to keep pace with the lecture courses, a student needs two hours of self-study per hour of lecture time, so that's 24--30 hours of self-study. There's typically three or four hours of supervision contact time. Suppose each supervisor sets three past-paper Tripos questions---these are supposed to take 36 minutes each to answer for a fully-prepared student, so we have to add this on top of the self-study time: 108 minutes per supervision, which across three or four supervisions is call it five to seven hours. Then there'll be at least two ongoing practical series or assignments, accounting for at least six hours per week in total.

What's the sum of all this? Well, it's between 50 and 62 hours... not a particularly pleasant workload for anyone, and especially not for students who, with good reason, will be wanting to have some time left over for socialising, getting involved with extracurricular interests and so on. Not to mention eating, sleeping and the rest of it. For a University consisting of, and allegedly run by, some very smart people, it's rather shameful that those in charge of teaching either haven't done, or are flagrantly disregarding, the basic arithmetic I've just described.
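
For the record, that arithmetic tallies up as follows (taking the lower and upper ends of each estimate above):

    lectures                  12--15 hours
    self-study                24--30 hours
    supervision contact         3--4 hours
    Tripos question practice    5--7 hours  (3 questions x 36 minutes, per supervision)
    practical work                  6 hours
                              -------------
    total                     50--62 hours per week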

More happily, I have a feeling that extending Full Term to nine weeks is being proposed at the moment, which would definitely be progress... but only if the extra time is used to spread out the workload rather than simply to pack more in. I wouldn't count on this attitude prevailing, but I live in hope.


Fri, 23 Jan 2009

There's something about CS

Within at least one University (the one I'm at), both teaching and research in computer science seem to have a very different feel and a very different culture from those in other sciences. There are some obvious things: a less regimented administrative structure, a fairly free-form (almost humanities-esque) “work anywhere, anytime” philosophy, and a general laid-backness which springs understandably from the fact that most computer scientists are not Lab-bound, but instead working in the realm of “thought-stuff”.

It's not always a good thing though. We seem to have less teaching budget than other sciences, which might explain why practical classes are too few (though this appears to be improving slowly). In the research world, here's another peculiar way in which we lose out: unlike many other scientific disciplines, computer scientists are expected to typeset their own publications. As you probably know, this is not fun when you're camera-readying something for an exacting publisher (who may well not have bothered to provide a sufficiently complete LaTeX style file for their typesetting rules). Why do we have this responsibility, when in other disciplines (so I'm led to believe) the publishers take on much more responsibility for typesetting? Presumably because, historically, computer scientists were almost universally also computer technicians of some sort or another, in the same way that the Computing Service here at the University of Cambridge used to be the same organisation as the Computer Laboratory. It's no longer true, of course, so it's a bit annoying that publishing anything means hours of LaTeX-twiddling pain.

There's a similarly bizarre phenomenon when doing any sort of practical computer science research: you are effectively your own technician. You're required to have a huge range of practical skills which you will not have learnt from formal teaching, nor even from practical classes. I'm thinking mainly about programming and Unix skills here. Everyone coming into my research group seems to have them, or at least the beginnings of them. I have them too, so I'm not complaining. But I imagine practically nobody learnt them by taking a degree, or even by doing a job. You could probably reach a reasonable level by a combination of introductory programming courses and summer work, but it seems like a stretch. Instead, I'd bet that everyone coming into our group learnt them in their spare time by being a bit nerdy.

What other science displays this phenomenon? It can't be a good one, since it creates a big barrier to entry for people who would otherwise be good computer scientists, but aren't quite nerdy enough to have spent hundreds of hours of their life hacking around. This is perhaps partly to blame for the poor showing of women in CS research. Clearly, any practical CS researcher has to learn this stuff at some point, but given the high variance in nerdiness, will the non-nerds ever get enough experience to reach the required level? Summer jobs aren't long enough to teach programming---I guess that any summer job which involves programming will expect you to have the skills already. So, clearly the universities have a big responsibility. But then, practical experience is a different animal from principled understanding. We clearly need some sort of scheme which ensures that incoming non-nerdy (but interested and bright) students can acquire the practical skills they need to do practical CS research, most likely by a combination of degree courses, summer work and summer/winter schools. I don't really have a solution, except “more of that”. Maybe systems research departments should have more encouragement or obligation to take undergraduate interns, and give them coaching (by paying PhD students, most likely)?


Mon, 18 Aug 2008

This Is Cambridge

One of my main beefs with the Tripos is that it claims to “teach in three years what most Universities teach in four”. This is unfortunately a rather different statement from saying that the students learn in three years what they would learn in four at other Universities. Since exams are competitive affairs, there's no established mechanism for gauging the actual quality of learning that students acquire.

I get the impression that the exclamation “this is Cambridge!” is used as a blanket excuse for cramming lots of advanced material into courses. This is a shame, because such cramming can be counterproductive. The aforesaid statement can easily be used to gloss over any areas where the course genuinely fails to adequately serve most or all of its students. The response---“then they're not smart enough!”---is not good enough. Even if the top students do well by this approach, I'd rather have a teaching course that was optimised for the median student than for the top 5%. If a talented student isn't sufficiently stretched, there are always informal channels (taking [extra-curricular] Master's-level courses, research seminars, self-study) by which to gain more stimulation, and always higher courses of study (and/or research) to pursue afterwards. But if you're not hanging on even to the baseline of the course, you really are out at sea.


Mon, 02 Oct 2006

Some thoughts and tips about Part II Projects

Part II projects are a decidedly tricky part of the Tripos course, and it's easy to unwittingly make mistakes early on which really compromise your chances later. It's also the case that the Lab's official advice, as contained in the “pink book” and meted out at the Project Briefing, is incomplete and in some cases misleading. Although much of it is good advice, a very few points are (in my humble opinion) not so good.

As someone who, despite being fairly conscientious and (I like to think) fairly competent, ended up with a decidedly average project mark when I took Part II in 2005, I've written up a few pieces of hard-learnt advice. Please don't take them as gospel: if you follow them and subsequently aren't as successful as you hoped to be, I won't be held responsible for your perceived failure. Nevertheless, I hope they're useful.

I hope some of the above proves helpful. If you have any questions, comments or disagreements, please contact me.

(You might cynically wonder why, after all these pearls of supposed wisdom, I'm not suggesting any project ideas this year. I guess it's because I consider finding suitable ideas a hard problem, and not one to which I'm prepared to claim any good solutions right now. Maybe next year....)

[Update, 2008-08-15: thanks to the netos list for a productive discussion about these topics. And this year, I have even put forward some suggestions, found on this page!]

[Update, 2011-10-13: Andrew Moore supplies this very helpful set of notes from his duties as examiner in 2011! It's worth reading ahead of time, but arguably even more useful when writing up---see my later blog post about that.]


Wed, 19 Jul 2006

Thoughts on writing supervision exercises

As an undergrad I often bemoaned the fact that Computer Science lecturers at Cambridge, unlike their counterparts in Natural Sciences, did not generally provide exercises to accompany their lectures. Instead, most lecturers expect supervisors to set Tripos questions. I found this unhelpful, and still do, for reasons I will now explain.

For one, relevant Tripos questions are often few in number, either because a course is relatively new or because the syllabus has recently undergone significant changes. Secondly, even when a good number of questions are available, they often leave some parts of the course untouched---but there is no guarantee that the upcoming year's questions will do the same. And finally, Tripos questions are often discouragingly difficult, or infuriatingly tedious, for a student who is learning the material for the first time.

All this has led me to conclude that questions which are good for examining a course are typically not good at helping students learn. (Note that I'm sidestepping the question of whether most Tripos questions actually are particularly good for anything.) So, as a supervisor, I've been having a go at writing some questions of my own. It's difficult! But nevertheless, I feel that the results are mostly half-decent as first attempts. The questions themselves are linked from my teaching section, but the following is a list of goals which I try to ensure are satisfied by the questions I write.

Clearly, some of the above goals are mutually reinforcing, while others are mutually conflicting. Striking the right balance is difficult, and one reason why good questions are more likely to be the result of a collaborative, incremental process than of a one-time effort. On that note, if you have any comments on my questions, please contact me!


