Learners' Guide: What Was Lectured and What Do I Need to Know?
Please read the Syllabus for this course. That is the definitive/baseline guide.
Further information is on this current page and will be updated as we go.
Memorise the details of the IEEE single-precision floating-point format, but treat all denormal numbers as zero.
Understand the worst-case error propagation rules and be able to apply them to examples.
Regarding the algorithms lectured, detailed coding is unlikely to be asked about, but candidates must have full knowledge of their purpose and general behaviour:
- Long multiplication and division: you should understand the cost of several algorithms and be aware of the optimisations possible when one argument is a constant.
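For concreteness (this exact code was not lectured; it is only an illustrative Python sketch), shift-and-add long multiplication looks like this:

```python
def shift_add_multiply(a, b):
    """Binary long multiplication: one conditional add per bit of b,
    so O(n) adds for an n-bit multiplier."""
    product = 0
    while b:
        if b & 1:          # current multiplier bit set: add shifted multiplicand
            product += a
        a <<= 1            # shift multiplicand up one place
        b >>= 1            # consume one bit of the multiplier
    return product
```

When one argument is a known constant, only its set bits cost an add, so a multiply by 10 can be hard-wired as `(x << 3) + (x << 1)`.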
- Base Conversions: the general approach for each of the four and why they differ.
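As an illustrative sketch (not lectured code), two of the conversions outward to a target base, showing why integer and fraction parts need different approaches:

```python
def int_to_digits(n, base):
    """Integer conversion: repeated division by the target base
    yields digits least-significant first."""
    digits = []
    while n:
        digits.append(n % base)
        n //= base
    return digits[::-1] or [0]

def frac_to_digits(f, base, places):
    """Fraction conversion: repeated multiplication by the target base
    peels off digits most-significant first."""
    digits = []
    for _ in range(places):
        f *= base
        d = int(f)
        digits.append(d)
        f -= d
    return digits
```

The converse direction (digits to value) uses Horner-style accumulation instead, which is why the four cases differ.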
- Iterations and Newton-Raphson: NR's formula, its basis and convergence, and be able to apply it. Also, be able to generate stopping conditions appropriate to a given iteration in general.
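A minimal Python sketch (illustrative, not lectured code) applying Newton-Raphson to the square root, with a relative-change stopping condition:

```python
def newton_sqrt(a, tol=1e-12, max_iter=50):
    """Newton-Raphson on f(x) = x*x - a:
    x_next = x - f(x)/f'(x) = x - (x*x - a)/(2x) = (x + a/x)/2.
    Convergence is quadratic once near the root."""
    x = a if a > 1 else 1.0                      # crude starting guess
    for _ in range(max_iter):
        nxt = 0.5 * (x + a / x)
        if abs(nxt - x) <= tol * abs(nxt):       # relative-change stopping rule
            return nxt
        x = nxt
    return x
```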
- Cordic: how the algorithm works (a decomposition into angles with easy-to-multiply tangents) and answer a question about it given a reminder of the code.
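To illustrate that decomposition (a floating-point Python sketch, not the lectured code; a hardware version would use fixed-point shifts and precomputed tables):

```python
import math

def cordic_sin_cos(theta, n=32):
    """CORDIC rotation mode: decompose theta into angles atan(2**-i),
    whose tangents 2**-i make each micro-rotation a shift-and-add."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    K = 1.0
    for i in range(n):                 # gain: product of cos(atan(2**-i))
        K /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta            # pre-scale by K so the result is unit
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0    # rotate towards zero residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return y, x                        # (sin(theta), cos(theta))
```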
- Knots, Splines and Quadrature: ...
- Chebyshev Basis: to know that a good choice of basis vectors or knot positions
will give a better result than a simplistic truncation of Taylor or
evenly-spaced (cardinal) interpolation. Anything else needed for
examination questions related to this will be provided in the
question. The Wikipedia page on Chebyshev Nodes ties up some of the theory
between knot positioning and orthogonal polynomials, but is beyond the scope of this course.
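As a small illustrative sketch (beyond what you need to memorise; the node formula would be provided in a question), the Chebyshev nodes on [-1, 1]:

```python
import math

def chebyshev_nodes(n):
    """Chebyshev nodes: roots of the degree-n Chebyshev polynomial,
    x_k = cos((2k+1)*pi/(2n)).  Their clustering towards the interval
    ends tames the oscillation that evenly-spaced (cardinal)
    interpolation suffers near the edges."""
    return [math.cos((2 * k + 1) * math.pi / (2 * n)) for k in range(n)]
```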
- Gaussian Elimination: a matrix phrasing of the standard
technique to solve simultaneous linear equations, and the need for
pivoting and forwards and backwards substitution (the minor
difference between Doolittle and Crout was not lectured). You should
understand that pivoting adds an intelligent search aspect to the
algorithm, making it a numerically sound means of finding a matrix
inverse, and also understand that its complexity is only cubic (whereas
the standard/school approach using Cramer's rule is expensive,
exponential in fact).
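For reference (an illustrative Python sketch, not lectured code), elimination with partial pivoting followed by backward substitution:

```python
def solve_gauss(A, b):
    """Gaussian elimination with partial pivoting, then backward
    substitution.  O(n^3) work, versus exponential for naive Cramer."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]   # augmented matrix
    for col in range(n):
        # partial pivot: the largest-magnitude entry in this column
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):                # eliminate below the pivot
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):                 # backward substitution
        s = sum(M[r][c] * x[c] for c in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x
```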
- Cholesky Decomposition: that it is a sort of square-root
of a symmetric matrix and it saves effort in Gaussian elimination for
multiple right-hand-sides by not having to generate and save separate
L and U matrices. One matrix serves for both. The class of matrices
where it works is relatively common (but positive definite details are
not lectured).
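A minimal illustrative sketch of the factorisation (positive-definite details, as noted, are not lectured; this assumes the input is symmetric positive-definite):

```python
import math

def cholesky(A):
    """Cholesky factorisation A = L * L^T of a symmetric
    positive-definite matrix: one triangular factor does duty
    for both L and U."""
    n = len(A)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(A[i][i] - s)   # diagonal: the 'square root'
            else:
                L[i][j] = (A[i][j] - s) / L[j][j]
    return L
```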
- FDTD Simulation: Definition of a state vector (values that need to be saved from one time step to the next). What a forward difference is, the sort of error it introduces, and that better approaches exist but require more design effort. The existence of backward stencils, Gear's approach, Runge-Kutta and so on may be mentioned but are not examinable. You should be able to sketch out code for a basic simulation of a space rocket or a Red Ball-style inertial masses simulation. You should be able to estimate how the time step and elemental quantisations affect the modelling error for some simple setups and come up with a control algorithm for dynamically adjusting step size.
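The sort of sketch expected might look like this (illustrative Python with made-up parameter values; the state vector is exactly what is carried between steps):

```python
def simulate_rocket(thrust, burn_rate, m_dry, fuel, dt, t_end):
    """Forward-difference (Euler) rocket ascent.  State vector:
    height, velocity and remaining fuel, updated each step as
    new = old + dt * rate-of-change."""
    g = 9.81
    h, v, f = 0.0, 0.0, fuel
    t = 0.0
    while t < t_end:
        m = m_dry + f                            # current total mass
        a = (thrust if f > 0 else 0.0) / m - g   # net acceleration
        h += v * dt                              # forward differences
        v += a * dt
        f = max(0.0, f - burn_rate * dt)         # burn fuel
        t += dt
    return h, v
```

Halving dt roughly halves the per-step truncation error of forward Euler; comparing one full step against two half steps gives a local error estimate that can drive a step-size control algorithm.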
- Circuit/Flow Equations: Steady-state Nodal Analysis: to understand the three flow quantities (conductance, potential and flow rate) and how to phrase these in a matrix form GV=I suitable for solving.
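For a concrete (illustrative, not lectured) example of the GV=I phrasing, a two-node resistor network, solved directly since it is only 2x2:

```python
def solve_2x2(G, I):
    """Cramer's rule is fine at this tiny size."""
    det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    return [(I[0] * G[1][1] - I[1] * G[0][1]) / det,
            (G[0][0] * I[1] - G[1][0] * I[0]) / det]

# Two nodes, each tied to ground through 1 S and to each other through
# 1 S, with a 1 A flow source into node 0.  The stamping rule:
#   G[i][i] = sum of conductances touching node i
#   G[i][j] = -(conductance between nodes i and j)
G = [[2.0, -1.0],
     [-1.0, 2.0]]
I = [1.0, 0.0]
V = solve_2x2(G, I)   # node potentials
```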
- Circuit/Flow Equations: Non-linear and time-varying components: these can be handled by introducing a conductance equal to dI/dV and a flow generator to offset the origin. Non-linear components require iteration within a time step, whereas time-varying components themselves are handled with forward differences.
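As an illustrative sketch of that iteration (not lectured code; the diode equation and component values are just a convenient example), a voltage source feeding a resistor in series with a diode:

```python
import math

def diode_voltage(vs, r, i_s=1e-12, vt=0.025, iters=40):
    """Iteration within one time step: linearise the diode
    I = Is*(exp(V/Vt) - 1) at the current guess as a conductance
    g = dI/dV plus an offset flow generator, re-solve the linear
    circuit, and repeat until V settles."""
    v = 0.6                                  # initial operating-point guess
    for _ in range(iters):
        i_d = i_s * (math.exp(v / vt) - 1.0)
        g = i_s * math.exp(v / vt) / vt      # companion conductance dI/dV
        i_eq = i_d - g * v                   # flow generator offsetting the origin
        # Linear circuit for the new node voltage:
        #   (vs - v_new)/r = g*v_new + i_eq
        v_new = (vs / r - i_eq) / (g + 1.0 / r)
        if abs(v_new - v) < 1e-12:
            break
        v = v_new
    return v
```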
- Part 11: Extra Topics: none of this section was lectured and this content is non-examinable this year.
How much of the final section of the course will be lectured is currently uncertain: this page will report that when known.
Minor Notes
In part 10, I have used the terms bucket, capacitor and tank interchangeably, but in colloquial electronics speak, the term 'tank' normally denotes a parallel combination of a capacitor and an inductor. Also, part 10 actually describes a core mechanism of the way the SPICE simulator works, though I did not use that term.
END