Department of Computer Science and Technology

Course pages 2022–23

Introduction to Computational Semantics

Principal lecturers: Dr Weiwei Sun, Prof Simone Teufel
Taken by: MPhil ACS, Part III
Code: L98
Term: Michaelmas
Hours: 16 (16 × 1-hour lectures)

Aims

This is a lecture-style course that introduces students to various aspects of the semantics of natural languages (mainly English):

  • Lexical Semantics, with an emphasis on theory and phenomenology (4 sessions)
  • Compositional Semantics (9 sessions)
  • Discourse and pragmatics-related aspects of semantics (3 sessions)

Learning outcomes

On completion of this module, students should be able to:

  • Give an operational definition of what is meant by “meaning” (for instance, as something above and beyond syntax);
  • Name the types of phenomena in language that require semantic consideration, in terms of lexical, compositional and discourse/pragmatic aspects, and thereby argue why semantics is important;
  • Demonstrate an understanding of the basics of various semantic representations, including logic-based and graph-based semantic representations (see the sketch after this list), their properties, how they are used and why they are important, and how they differ from syntactic representations;
  • Know how such semantic representations are derived during or after parsing, and how they can be analysed and mapped to surface strings;
  • Understand applications of semantic representations, e.g. reasoning and validation, and the methods by which these are approached;
  • When designing NL tasks that clearly require semantic processing (e.g. knowledge-based QA), be aware of and reuse existing semantic representations and algorithms rather than reinventing the wheel.
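
As a brief illustration of what a logic-based semantic representation looks like, here is a minimal sketch using NLTK's first-order logic tools. This is not an excerpt from the course materials; the predicate names dog and bark are chosen purely for illustration:

    # A minimal sketch, assuming NLTK is installed (pip install nltk).
    # "Every dog barks" as a first-order logic formula.
    from nltk.sem import Expression

    lf = Expression.fromstring(r'all x.(dog(x) -> bark(x))')
    print(lf)         # all x.(dog(x) -> bark(x))
    print(lf.free())  # set() -- a closed formula, no free variables

Graph-based representations, covered later in the course, encode comparable content as nodes and labelled edges rather than as formulas.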

Practical advantages of this course for NLP students

  • Knowledge of underlying semantic effects helps improve NLP evaluation, for instance by providing more meaningful error analysis. You will be able to link particular errors to design decisions inside your system.
  • You will learn methods for better benchmarking of your system, whatever the task may be. Supervised ML systems (in particular black-box systems such as deep learning models) are only as clever as the datasets they are trained on. In this course, you will learn to design datasets that are harder to game without real understanding, and to critique existing datasets.
  • You will be able to design tests for ML systems that better pinpoint which aspects of language an end-to-end system has “understood” (a small sketch of such a test follows this list).
  • You will learn to detect ambiguity and ill-formed semantics in human-human communication. This can help you write more clearly and logically.
  • You will learn to decompose complex semantics-reliant tasks sensibly, so that you can reuse the techniques underlying semantic analysers in a modular way. Rather than being forced to treat complex tasks in an end-to-end manner, you will profit from partial explanations and better error analysis already built into the system.
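
To make the testing idea concrete, here is a minimal sketch of a behavioural minimal-pair probe. The interface is hypothetical (not from the course materials): system stands for any callable mapping a context and a question to an answer string. A system that genuinely handles negation should flip its answer between the two items:

    # A minimal sketch of a minimal-pair probe for negation.
    # `system` is a hypothetical callable (context, question) -> answer.
    probe = [
        ("A dog is barking.",  "Is a dog barking?", "yes"),
        ("No dog is barking.", "Is a dog barking?", "no"),
    ]

    def run_probe(system, probe):
        # Returns one boolean per item: did the system answer correctly?
        return [system(context, question) == answer
                for context, question, answer in probe]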

Syllabus

  1. Introduction
  2. Event structure
  3. Discourse references
  4. Word sense
  5. Truth-conditional semantics
  6. Compositionality and syntax-semantics interface
  7. Weakly compositional phenomena 
  8. Compositional distributional semantics
  9. Clause union
  10. Scope
  11. Graph-based meaning representation
  12. Information Structure
  13. Speech act
  14. Presupposition
  15. Coherence
  16. Meaning-representation-mediated NLU

Assessment

  • Two short tasks (10% each)
  • Project (80%)

Further Information

Due to infectious respiratory diseases, the method of teaching for this module may be adjusted to cater for physical distancing and students who are working remotely. Unless otherwise advised, this module will be taught in person.