Here are my project suggestions for Part II or Masters students in the academic year 2012/2013. Some of the information on past years' suggestions may also be relevant. I have supervised over 30 Part II and Diploma projects in previous years, with several being singled out for special commendation by the examiners. I have also co-authored five academic papers with former project students of mine. In short, my project suggestions are likely to be challenging, but I am fully committed to putting in the effort needed to provide the best support I can and make sure the project is completed successfully. Who knows, you might end up having a lot of fun too!
The platforms of choice for implementing most projects are Matlab and OpenCV, although Java might also be an option (especially for UI components). No previous experience of image and video processing is required, just enthusiasm. Most projects are challenging in that they relate to interesting research problems, but plenty of support will be available. Apart from an interest in the project, a reasonable grounding in continuous mathematics and probability theory would be helpful, as would proficiency with high-level programming languages such as Java, C++, Python, or the Matlab environment. See also the prerequisites for my Part II Computer Vision course.
Some of the following descriptions are still a bit brief and open to interpretation; watch this space or (better) contact me to find out more. I may add or change project suggestions as the various deadlines approach. Some of the suggestions are based upon projects I have supervised in the past.
The Raspberry Pi is a very flexible, small, low-cost computer designed as an experimental platform for computing and electronics. Some people have started to look into its potential for innovative projects involving computer vision, and in fact the Raspberry Pi can even run OpenCV. It would therefore be fun to implement some interesting computer vision projects on this device, perhaps one of the projects suggested here, or one of my past years' suggestions.
Smart phones and tablets running the Android OS make an excellent platform for computer vision projects. Fortunately OpenCV has already been ported to Android, and there are many interesting computer vision student projects that have already been realised using Android.
Recreational SCUBA diving is a safe sport enjoyed by thousands around the world. One of the most important safety innovations has been the use of dive computers which constantly monitor variables such as depth, ascent/descent rate, time, and temperature. Their main purpose is the avoidance of decompression sickness ("the bends") which is caused by a build-up and subsequent release of (chiefly nitrogen) gases absorbed by various tissues and blood in the human body at depth.
Dive computers employ one or more decompression algorithms to ensure that the risk of decompression sickness is minimised. A short overview of such algorithms can be found here, and a much more thorough one can be found in the articles by Paul Chapman (see also here) and Stuart Morrison.
The implementation details of such algorithms are often specific to a given dive computer manufacturer and hence proprietary, but some open source implementations do exist, for example an implementation of VPM (the Variable Permeability Model), this tool set for the deep analysis of decompression algorithms, and JDeco (currently "pre-alpha" but with some code in its repositories).
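To give a flavour of what such an implementation involves, the sketch below shows the core of a Haldanean tissue-compartment model: each compartment's nitrogen tension approaches the inspired alveolar pressure exponentially at a rate set by its half-time. The half-times here are illustrative placeholders only (loosely resembling, but not identical to, published Buehlmann values), and the gas-exchange model is deliberately simplified; a real project would implement a full, validated parameter set.

```python
import math

# Illustrative N2 half-times (minutes) for five compartments.
# NOT a validated parameter set -- placeholders for the sketch only.
HALF_TIMES = [4.0, 8.0, 12.5, 18.5, 27.0]

SURFACE_N2 = 0.79  # inspired N2 partial pressure at the surface (bar), simplified


def gas_loading(p0, p_alv, half_time, minutes):
    """Haldanean gas loading at constant depth: the tissue tension p0
    decays exponentially toward the alveolar N2 pressure p_alv."""
    k = math.log(2) / half_time
    return p_alv + (p0 - p_alv) * math.exp(-k * minutes)


def load_tissues(depth_m, minutes):
    """Tissue N2 tensions after a square-profile dive on air."""
    ambient = 1.0 + depth_m / 10.0   # ambient pressure in bar, ~10 m per bar
    p_alv = 0.79 * ambient           # inspired N2 partial pressure on air
    return [gas_loading(SURFACE_N2, p_alv, ht, minutes) for ht in HALF_TIMES]


# After 25 minutes at 30 m, fast compartments are close to the ambient
# N2 pressure (~3.16 bar) while slow ones lag well behind.
tissues = load_tissues(depth_m=30.0, minutes=25.0)
print([round(p, 2) for p in tissues])
```

A full decompression algorithm would then compare each tissue tension against a tolerated ambient pressure (e.g. via Buehlmann's a/b coefficients) to derive ascent ceilings and stop schedules.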
Note: this project will be concerned with implementation and simulated comparative evaluation of decompression algorithms for recreational (and perhaps technical) diving, but due to the health risks involved the outcome of the project must most certainly NOT be relied upon for any actual diving.
In computer vision and image analysis, texture is defined by the existence of certain statistical correlations across image regions. Many different approaches to texture analysis and classification have been proposed, including GLCM (Grey-Level Co-occurrence Matrix) methods, steerable filter banks, Gabor wavelet features, textons, and methods based on texture synthesis.
This project will implement some of these methods with the aim of computing texture-based image similarity measures. These will be applied to interesting problems such as content-based image retrieval, the analysis of patterns occurring in nature, and texture perception in humans and primates.
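As a starting point, the GLCM approach can be sketched in a few lines: count how often pairs of grey levels co-occur at a fixed pixel offset, then summarise the resulting joint distribution with Haralick-style statistics. This is a minimal pure-numpy version (real implementations would use several offsets and more features):

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=8):
    """Grey-level co-occurrence matrix for one displacement (dy, dx):
    counts co-occurring grey-level pairs, normalised to probabilities."""
    img = (image.astype(float) / image.max() * (levels - 1)).astype(int)
    h, w = img.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[img[y, x], img[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_features(p):
    """A few classic statistics of a normalised GLCM."""
    i, j = np.indices(p.shape)
    contrast = np.sum((i - j) ** 2 * p)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

# A flat patch has zero contrast and maximal energy; a fine
# checkerboard has high contrast at a one-pixel horizontal offset.
flat = np.full((16, 16), 100, dtype=np.uint8)
checker = (np.indices((16, 16)).sum(axis=0) % 2 * 255).astype(np.uint8)
print(glcm_features(glcm(flat)))
print(glcm_features(glcm(checker)))
```

Feature vectors like these, computed over image patches, give a simple texture-based similarity measure for retrieval experiments.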
Optical character recognition (OCR) of printed documents now achieves very high levels of accuracy, but many challenging application areas remain, such as ANPR (automated number plate recognition) and the recognition of hand-written forms and documents.
The difficulties of segmenting joined-up letters and of handling slant, noise, and other sources of distortion are exploited by CAPTCHA systems (see http://en.wikipedia.org/wiki/CAPTCHA, http://www.captcha.net/). This project will investigate some techniques for handling common types of CAPTCHA using a range of image processing and computer vision techniques.
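A natural first experiment is the simplest segmentation technique: project the binarised image onto the horizontal axis and cut at blank columns. The toy sketch below (synthetic "characters", not a real CAPTCHA) shows the idea, and also why it fails on exactly the joined-up and overlapping glyphs that make CAPTCHAs hard:

```python
import numpy as np

def segment_columns(binary, min_width=2):
    """Naive character segmentation by vertical projection: a run of
    columns containing ink, bounded by empty columns, is one character.
    Defeated by touching or overlapping glyphs."""
    ink = binary.sum(axis=0)          # ink count per column
    spans, start = [], None
    for x, v in enumerate(ink):
        if v and start is None:
            start = x
        elif not v and start is not None:
            if x - start >= min_width:
                spans.append((start, x))
            start = None
    if start is not None:
        spans.append((start, len(ink)))
    return spans

# Synthetic image with three ink blobs separated by blank columns.
img = np.zeros((10, 30), dtype=int)
img[2:8, 2:6] = 1
img[2:8, 11:16] = 1
img[2:8, 22:27] = 1
print(segment_columns(img))  # [(2, 6), (11, 16), (22, 27)]
```

More robust approaches (connected components, graph cuts, or recognition-driven segmentation) would be the substance of the project.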
Optical Music Recognition (OMR) seeks to automatically analyse and recognise the information contained within an image of a music score. An overview of existing approaches can be found here, and open source implementations include Audiveris and OpenOMR.
The aim of this project is to design and implement an OMR system which, given an image of a printed music score, will automatically create a machine-based representation thereof in an open source music score format (and possibly render the music using MIDI). This will require steps such as image preprocessing, music symbol recognition, musical notation reconstruction, and final score rendering into a format such as MusicXML.
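The customary first stage of the pipeline, staff line detection, can be prototyped with a horizontal projection: rows whose ink coverage approaches the full image width are staff line candidates. This is a deliberately minimal sketch on a synthetic image (real scores need de-skewing and tolerance for broken or curved lines):

```python
import numpy as np

def find_staff_lines(binary, threshold=0.8):
    """Rows whose fraction of ink pixels exceeds threshold are
    candidate staff lines -- the usual first step before staff
    removal and symbol recognition."""
    coverage = binary.sum(axis=1) / binary.shape[1]
    return [y for y, c in enumerate(coverage) if c >= threshold]

# Synthetic fragment: five full-width staff lines four rows apart,
# plus a small "note head" blob that must not register as a line.
img = np.zeros((24, 40), dtype=int)
for y in (2, 6, 10, 14, 18):
    img[y, :] = 1
img[8:10, 12:15] = 1   # note head
print(find_staff_lines(img))  # [2, 6, 10, 14, 18]
```

The detected line positions and spacing then calibrate the sizes of note heads, stems, and other symbols for the recognition stage.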
While technologies such as tagging are being used to track and thereby learn more about the behaviour of migratory megafauna, most field work continues to rely on visual identification to catalogue and recognise particular specimens. The problems of visual identification are exacerbated in the case of large pelagic marine animals such as whales, manta rays, and sharks, many of which exhibit characteristic markings that in principle allow individuals to be identified. To this day very little is understood about the roaming behaviour of many of the most magnificent animals in our oceans, and marine scientists and conservationists face the task of matching observations against databases of hundreds or thousands of photographs, often taken thousands of miles apart and under very different conditions (lighting, visibility, viewing angle, distance from the subject, occlusions, etc.).
This project will build on existing research into using computer vision techniques to extract, analyse, and match characteristic visual patterns in large marine animals. One such system, which I developed, has been deployed for research into manta rays; see Manta Matcher. Other such systems include I3S and a number of published methods.
While the main goal will be to produce a good dissertation, it would of course be desirable to take account of the needs of marine scientists working on projects such as manta ray ecology (see also this page) and the Foundation for the Protection of Marine Megafauna in order to make a contribution towards tools such as the ECOCEAN Whale Shark Photo-identification Library.
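To illustrate the core matching idea, the toy sketch below compares two constellations of spot coordinates after normalising away translation and scale, scoring the fraction of spots that find a close counterpart. This is only a stand-in for the point-pattern matching used in real photo-ID systems (which must also cope with rotation, perspective distortion, and missing spots); all names and coordinates here are invented for illustration:

```python
import numpy as np

def normalise(points):
    """Translate to the centroid and scale to unit RMS radius, so the
    comparison is invariant to shifts and to camera distance/zoom."""
    p = np.asarray(points, dtype=float)
    p -= p.mean(axis=0)
    rms = np.sqrt((p ** 2).sum(axis=1).mean())
    return p / rms

def match_score(a, b, tol=0.1):
    """Fraction of spots in a with a counterpart in b within tol after
    normalisation -- a toy similarity measure between two sightings."""
    a, b = normalise(a), normalise(b)
    dists = np.sqrt(((a[:, None, :] - b[None, :, :]) ** 2).sum(axis=2))
    return (dists.min(axis=1) < tol).mean()

# Same (hypothetical) individual photographed twice: identical spot
# constellation, but shifted and uniformly scaled in the second photo.
sighting1 = [(10, 10), (20, 15), (12, 30), (25, 28), (18, 22)]
sighting2 = [(2 * x + 50, 2 * y - 5) for x, y in sighting1]
print(match_score(sighting1, sighting2))  # 1.0
```

Ranking database entries by such a score, and evaluating retrieval accuracy against ground truth, would form the experimental backbone of the project.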