Computer Laboratory

Course pages 2016–17

Computer Vision

Lectures

Lectures for this course are delivered in the Engineering Department:

See Department of Engineering web page for 4F12

Moodle Site

Link to the Moodle site for ACS assignments

Practical exercises

Practical exercises will be introduced and assessed by teaching and research staff in the Computer Laboratory. The total amount of self-study time devoted to the course should be approximately 64 hours. The first two practical exercises (Exercise 1 and Exercise 2) should be completed in about 8-12 hours, and the mini-project in about 40-50 hours.

Exercise 1

Step 1: Install and set up OpenCV (see notes below). Read some tutorials to gain basic familiarity with OpenCV and its core methods and capabilities. (about 1.5 hours)
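
As a quick sanity check once the installation is complete, a minimal Python sketch along the following lines (the image filename is only a placeholder) confirms that the bindings import correctly and exercises a few core routines:

    import cv2

    print(cv2.__version__)                       # confirm the bindings are importable

    img = cv2.imread('test.jpg')                 # placeholder filename
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)            # simple Canny edge detection

    cv2.imshow('edges', edges)                   # display until a key is pressed
    cv2.waitKey(0)
    cv2.destroyAllWindows()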

Step 2: Use OpenCV to carry out camera calibration following the steps described in http://docs.opencv.org/2.4/doc/tutorials/calib3d/camera_calibration/camera_calibration.html (about 1.5 hours)
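
A condensed Python sketch of the calibration loop from that tutorial is shown below; the chessboard size and the file pattern are assumptions that should be adapted to your own calibration images:

    import glob
    import numpy as np
    import cv2

    # Assumed 9x6 inner-corner chessboard; adapt to the printed pattern you use.
    pattern = (9, 6)
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_points, img_points = [], []
    for fname in glob.glob('calib_*.jpg'):       # placeholder file pattern
        gray = cv2.cvtColor(cv2.imread(fname), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)
            cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
            obj_points.append(objp)
            img_points.append(corners)

    # K is the intrinsic camera matrix; rvecs/tvecs are the per-view extrinsics.
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        obj_points, img_points, gray.shape[::-1], None, None)
    print('RMS reprojection error:', rms)
    print('Camera matrix:\n', K)
    print('Distortion coefficients:', dist.ravel())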

Step 3: Take a set of photos of the rubber stamp impressions in your ACS Research Skills logbook. Write code that uses OpenCV to load each image file in turn and detect the outer corners of the stamp. Then use the co-ordinates of those corners to produce (to a good approximation) a perspective-corrected output image for each input image, i.e. the output should show a "top-down" view of the stamp without major distortions (any "in-plane" rotation of the stamp image around the axis normal to the image plane may be ignored). (about 3 hours)
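
Once the four outer corners of the stamp have been detected (by whatever method you choose), the correction itself reduces to estimating a homography from four point correspondences and warping; a hedged Python sketch, with placeholder corner coordinates, might look like this:

    import numpy as np
    import cv2

    img = cv2.imread('stamp_photo.jpg')          # placeholder filename

    # Detected outer corners of the stamp, ordered top-left, top-right,
    # bottom-right, bottom-left (placeholder values).
    src = np.float32([[412, 307], [1280, 285], [1318, 1140], [388, 1165]])

    # Target rectangle for the "top-down" view (output size chosen arbitrarily here).
    w, h = 800, 800
    dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

    H = cv2.getPerspectiveTransform(src, dst)    # 3x3 homography from 4 point pairs
    corrected = cv2.warpPerspective(img, H, (w, h))
    cv2.imwrite('stamp_corrected.jpg', corrected)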

Step 4: Write the report. The report should have two sections of no more than two pages each:

a) Camera calibration: list the intrinsic and extrinsic camera parameters that were obtained during the camera calibration. Briefly describe, using the theory covered in the lecture course, the meaning of these parameters and how OpenCV determines their values.
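
For reference when writing this section, the standard pinhole projection model covered in the lectures can be summarised (in LaTeX notation) as:

    \mathbf{x} \sim K \, [R \mid \mathbf{t}] \, \mathbf{X}, \qquad
    K = \begin{pmatrix} f_x & s & c_x \\ 0 & f_y & c_y \\ 0 & 0 & 1 \end{pmatrix}

where K holds the intrinsic parameters (focal lengths f_x, f_y in pixels, skew s and principal point (c_x, c_y)), while R and t are the extrinsic rotation and translation of each view; OpenCV's calibration additionally estimates lens distortion coefficients.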

b) Perspective correction: List the main OpenCV methods you employed and briefly describe their purpose and underlying algorithms. Under what conditions are 3, rather than 4 or more, points on the object sufficient to perform a perspective correction? How could knowledge of the intrinsic and extrinsic camera parameters lead to an improved solution?

As an example of writing style that is appropriate for straightforward reporting of a computer vision project, you may like to refer to the following paper by Chris Town:
http://onlinelibrary.wiley.com/doi/10.1002/ece3.587/full

Step 5: Submit the report as a PDF, with an appendix containing your code, written in any high-level language with OpenCV bindings (C++, Python, Java or Matlab).

Exercise 2

1. Obtain data: [0.5h]

Download the provided subset of the "Places" dataset described by Zhou et al. It includes a training set, a testing set and class labels.

Temporarily hosted (this link will soon change) at: https://www.dropbox.com/sh/riw3s6nc8z7l6pw/AACnCnOEM1XGkMQKyDoRGTz1a?dl=0

Reference
B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva. Learning Deep Features for Scene Recognition using Places Database. Advances in Neural Information Processing Systems 27 (NIPS), 2014.

2. Feature extraction: [3h]

Choose a set of reasonable features to extract. These can include:

  • Appearance features
  • Image filters
  • Local features
  • Colour features

At least two different types of features need to be used and extracted from each of the images. Explain your choice of features. Include sample images for each feature extraction stage in your report.
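
As one hedged illustration of what two feature types might look like in code (a per-channel colour histogram and a histogram of Sobel gradient magnitudes; these particular choices are assumptions, not a prescribed feature set), a Python sketch could be:

    import numpy as np
    import cv2

    def colour_histogram(img, bins=8):
        # Concatenated per-channel colour histogram (a simple colour/appearance feature).
        hists = [cv2.calcHist([img], [c], None, [bins], [0, 256]) for c in range(3)]
        hist = np.concatenate(hists).flatten()
        return hist / (hist.sum() + 1e-6)        # normalise so image size does not matter

    def gradient_histogram(img, bins=16):
        # Histogram of Sobel gradient magnitudes (a simple filter-based feature).
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
        gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
        mag = cv2.magnitude(gx, gy)
        hist, _ = np.histogram(mag, bins=bins)
        return hist / (hist.sum() + 1e-6)

    img = cv2.imread('places_example.jpg')       # placeholder filename
    feature_vector = np.concatenate([colour_histogram(img), gradient_histogram(img)])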

3. Classification: [3h]

Using the training set, train a linear SVM on your features and calculate classification accuracy for different feature sets as follows:

  • Using single features separately.
  • Using a multimodal combination of features.

LibSVM is an open-source SVM implementation: http://www.csie.ntu.edu.tw/~cjlin/libsvm/

There is also an OpenCV tutorial for SVMs: http://docs.opencv.org/2.4/doc/tutorials/ml/introduction_to_svm/introduction_to_svm.html
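
As a minimal training/evaluation sketch, the example below uses scikit-learn's LinearSVC purely as one convenient linear-SVM implementation (LibSVM's own Python bindings or OpenCV's machine learning module would serve equally well); the random arrays are placeholders for the feature vectors and class labels produced in the previous steps:

    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.metrics import accuracy_score

    # Placeholder data; in practice X_* hold the extracted feature vectors
    # and y_* the Places class labels.
    rng = np.random.RandomState(0)
    X_train, y_train = rng.rand(200, 40), rng.randint(0, 5, 200)
    X_test, y_test = rng.rand(50, 40), rng.randint(0, 5, 50)

    clf = LinearSVC(C=1.0)                       # linear SVM; C controls regularisation
    clf.fit(X_train, y_train)

    pred = clf.predict(X_test)
    print('Classification accuracy:', accuracy_score(y_test, pred))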

4. Report: [2h]

In your report, describe your selection of features, your training procedure and your classification results. Discuss the classification results: why do you think some features performed better or worse?

5. Possible extensions:

  • Compare with more complex machine learning methods, e.g. SVMs with non-linear kernels.
  • Evaluate the generalisability of your approach on images outside the provided dataset. Take a set of photos of different scenes (within the provided image classes) and use your previously trained model to classify them (photos from the internet may also be used).

Mini-Project

Proposal Format

The first step of the Mini-Project (submitted in Michaelmas) is to prepare a one-page proposal, addressing the following questions:

  1. Your personal starting point - what have you done in this area before?
  2. General approach: will your mini-project focus on mathematical theory, novel algorithms or engineering applications?
  3. What is the original research question or goal of your mini-project?
  4. Summarise the technique you plan to use, or the theory you plan to apply, with references/citations.
  5. Explain what materials you need access to, such as data sets, cameras, specialised libraries, etc.
  6. Estimate the computational resources you will need for the project, including memory, disk and CPU hours.

Report Format

There is a degree of flexibility in the report format, as may be appropriate to the project topic you have undertaken. However, you should aim to emulate the format of a publication at any mainstream Computer Vision research venue, which would typically include the following elements:

  • The title of your report, and the author (you).
  • An abstract summarising the research contribution from your work.
  • Introduction, setting out the original research question or goal of your mini-project. This might address a question that has been raised in previous research, or an application problem with some justification of its significance.
  • A short review of relevant previous research that you are building on, with appropriate citations in the standard format of an appropriate Computer Vision research venue.
  • An overview of the original approach that you have taken in your work.
  • Details of implementation, experimental procedure, data sources and so on, as appropriate to the project you have done.
  • Results of your work, including representative images wherever useful.
  • Discussion and interpretation of your results, including the answer to (or reinterpretation of) your original research question, and a conclusion setting out the significance of your work in the context of previous research in Computer Vision.
  • Acknowledgement of any other resources or technical assistance you received in the course of carrying out the project.
  • A bibliography.

Your report should be no longer than a typical conference paper at a Computer Vision research venue - typically limited to 8 pages where double-column formats are used, with a further page allowed for the bibliography. If you have additional material beyond these lengths that might be of value in assessing the work (e.g. further images, source code, experimental data, video files, etc.), this may be included as an appendix or a separate electronic submission. Please contact the module convenor to arrange details.

Background on OpenCV

Exercises will be based on OpenCV (Open Source Computer Vision Library: http://opencv.org).

OpenCV is an open-source, BSD-licensed library that includes several hundred computer vision algorithms. OpenCV has C++, C, Python and Java interfaces and supports Windows, Linux, Mac OS, iOS and Android. You should choose the language and environment to use for your work on this course, taking into account your own experience and resources. Please consult one of the teaching staff if you are unsure about this choice.

Information on installation and setup of OpenCV for a variety of platforms is available at: http://docs.opencv.org/2.4/doc/tutorials/introduction/table_of_content_introduction/table_of_content_introduction.html

Information on the core functionality of OpenCV is available at: http://docs.opencv.org/2.4/doc/tutorials/core/table_of_content_core/table_of_content_core.html#table-of-content-core

Other resources include:

  • Robert Laganière, OpenCV 2 Computer Vision Application Programming Cookbook, Packt Publishing (available online).
  • Daniel Lélis Baggio et al., Mastering OpenCV with Practical Computer Vision Projects, Packt Publishing.