16-811: Math Fundamentals for Robotics, Fall 2025

Brief Summaries of Recent Lectures




Num Date Summary
01 26.August

We discussed course mechanics for about half the lecture.

We looked at the Jacobian of a simple 2-link manipulator. The Jacobian is a matrix that relates differential joint motions to differential motions of the arm's end-effector. Using a virtual work argument we showed how the transpose of this matrix relates desired Cartesian forces imparted by the end-effector to the joint torques necessary to generate those forces. A summary of our discussion appears here.
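As a concrete illustration (not the lecture's exact example; the link lengths, joint angles, and force below are made up), here is a small numerical sketch of the 2-link Jacobian and of the virtual-work relation tau = J^T F:

```python
# A minimal sketch of a planar 2-link arm with joint angles q1, q2 and
# (assumed) link lengths l1, l2.  End-effector position:
#   x = l1*cos(q1) + l2*cos(q1+q2),   y = l1*sin(q1) + l2*sin(q1+q2).
import numpy as np

def jacobian(q1, q2, l1=1.0, l2=0.5):
    """J maps joint velocities (dq1, dq2) to end-effector velocities (dx, dy)."""
    return np.array([
        [-l1*np.sin(q1) - l2*np.sin(q1 + q2), -l2*np.sin(q1 + q2)],
        [ l1*np.cos(q1) + l2*np.cos(q1 + q2),  l2*np.cos(q1 + q2)],
    ])

q1, q2 = 0.3, 0.7
J = jacobian(q1, q2)

# Differential kinematics: small joint motion -> small end-effector motion.
dq = np.array([0.01, -0.02])
dx = J @ dq

# Virtual work / statics: joint torques needed to impart a Cartesian force F
# at the end-effector are tau = J^T F.
F = np.array([0.0, 2.0])      # desired end-effector force (assumed numbers)
tau = J.T @ F
print(dx, tau)
```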

We further considered the case of multiple fingers gripping an object (or multiple legs standing on some surface) and wrote down a linear equation of the form

-F = Wc

to describe the contact forces c needed to oppose a generalized force F (force and torque) acting on some object. Here W is the so-called wrench matrix that models forces and torques induced by contact forces. A summary of that discussion appears here.
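The following is a minimal planar sketch of how such a wrench matrix can be assembled. The contact positions, force directions, and sign conventions below are assumptions for illustration, not necessarily the lecture's conventions, and friction and contact feasibility are ignored:

```python
# Planar sketch: contact j at position p_j pushes along unit direction n_j with
# magnitude c_j.  Each contact contributes the wrench [n_j ; p_j x n_j]
# (a force in the plane plus a scalar torque about the origin), so these
# contact wrenches form the columns of W and the total contact wrench is W c.
import numpy as np

def wrench_matrix(points, directions):
    """points, directions: lists of 2D arrays.  Returns the 3 x m wrench matrix W."""
    cols = []
    for p, n in zip(points, directions):
        torque = p[0]*n[1] - p[1]*n[0]          # 2D cross product p x n
        cols.append([n[0], n[1], torque])
    return np.array(cols).T

# Two fingers gripping an object from opposite sides (illustrative numbers).
points     = [np.array([-1.0, 0.0]), np.array([1.0, 0.0])]
directions = [np.array([ 1.0, 0.0]), np.array([-1.0, 0.0])]
W = wrench_matrix(points, directions)

# Given an external generalized force F = (fx, fy, torque), equilibrium requires
# W c = -F; here we just compute a least-squares c as pure linear algebra.
F = np.array([0.5, 0.0, 0.0])
c, *_ = np.linalg.lstsq(W, -F, rcond=None)
print(W, c)
```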

We discussed the importance of bases in representing linear transformations, and in particular, how the matrix used to represent a given transformation changes as one changes bases.

A basis for a vector space is a set of linearly independent vectors that spans the vector space. This means that every vector in the vector space can be written in a unique way as a finite linear combination of basis vectors.

The representation of a linear function by a matrix depends on two bases: one for the input and one for the output. As we will see in future lectures, choosing these bases well allows one to represent the linear function by a diagonal matrix. Diagonal matrices describe decoupled linear equations, which are very easy to solve.

For next time, please go back to your linear algebra course and remind yourself of the method one uses to generate linear coordinate transformations, represented as matrices: The columns of such a matrix are the vectors of one basis expressed as linear combinations of the other basis's vectors. This example illustrates some of the relevant points.
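As a small numerical sketch of that recipe (the basis and the map below are assumed examples), one can check the column convention directly:

```python
# The columns of B are the new basis vectors written in standard coordinates,
# so B maps new-basis coordinates to standard coordinates, and B^{-1} converts
# the other way.
import numpy as np

b1 = np.array([1.0, 1.0])
b2 = np.array([1.0, -1.0])
B = np.column_stack([b1, b2])         # columns = basis vectors in standard coords

v = np.array([3.0, 1.0])              # a vector in standard coordinates
coords = np.linalg.solve(B, v)        # its coordinates with respect to {b1, b2}
assert np.allclose(coords[0]*b1 + coords[1]*b2, v)

# Changing the representation of a linear map: if A acts on standard
# coordinates, then B^{-1} A B is the same map expressed in {b1, b2} coordinates.
A = np.array([[2.0, 0.0],
              [0.0, 3.0]])
A_new = np.linalg.inv(B) @ A @ B
print(coords, A_new)
```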

Briefly, at the end of the lecture, we reviewed some facts from linear algebra. We defined the column space, row space, and null space of a matrix (and linear functions more generally). As an example, we considered the matrix linked here.
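For readers who want to experiment, here is a brief sketch (on an assumed matrix, not the one linked above) of computing the rank and a null-space basis numerically:

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])

rank = np.linalg.matrix_rank(A)   # dimension of the column space (= row space)
N = null_space(A)                 # orthonormal basis for the null space of A
print(rank, N.shape)              # rank 2, so the null space is 1-dimensional
assert np.allclose(A @ N, 0)
```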

02 28.August

We started the lecture with a review of some facts from last time. Subsequently, we discussed some conditions under which a square matrix is not invertible.

During much of the lecture we discussed Gaussian Elimination. We computed the PA = LDU decomposition for some matrices. Such a decomposition exists whenever the columns of A are linearly independent (and may exist more generally for other matrices).

For an invertible matrix A, the PA = LDU decomposition makes solving Ax = b simple:

  • First, solve Ly = Pb for y. This is easy because L is lower-triangular, so one can basically just "read off" the components of y using forward substitution.

  • Second, solve Ux = D^-1 y for x. This is easy because U is upper-triangular, so one can again "read off" the components of x, now using backward substitution. Also note that D^-1 stands for the inverse of the matrix D. This matrix is easy to compute: Its entries are all 0, except on the diagonal. Where D has entry d on the diagonal, D^-1 has entry 1/d.
  • Here are the examples we considered in lecture. A small numerical sketch of the two-step solve also appears just below this list.
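The sketch below uses a made-up matrix and right-hand side (not the lecture's examples); scipy's factorization convention (A = P_s L U') is rearranged into the PA = LDU form used above:

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[0.0, 2.0, 1.0],
              [1.0, 1.0, 0.0],
              [2.0, 0.0, 1.0]])
b = np.array([3.0, 2.0, 3.0])

Ps, L, U_full = lu(A)                 # scipy returns A = Ps @ L @ U_full
P = Ps.T                              # so P @ A = L @ U_full
D = np.diag(np.diag(U_full))          # the pivots
U = np.linalg.solve(D, U_full)        # unit upper-triangular factor: U_full = D U

n = len(b)
# Step 1: forward substitution for L y = P b (L has 1s on its diagonal).
Pb = P @ b
y = np.zeros(n)
for i in range(n):
    y[i] = Pb[i] - L[i, :i] @ y[:i]

# Step 2: backward substitution for U x = D^-1 y (U has 1s on its diagonal).
rhs = y / np.diag(D)
x = np.zeros(n)
for i in reversed(range(n)):
    x[i] = rhs[i] - U[i, i+1:] @ x[i+1:]

assert np.allclose(A @ x, b)
print(x)                              # [1. 1. 1.]
```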

Near the end of lecture, we started our discussion of diagonalization based on eigenvectors. We will work through an example next week. This method serves as a springboard for Singular Value Decomposition (SVD), which we will also discuss next week.

03 2.September

Today, we first looked at diagonalization based on eigenvectors, then used that as a springboard for Singular Value Decomposition.

Here is a summary of the eigenvector discussion.

The reason one wants to diagonalize a matrix is that solving

Ax = b

is simple if A is diagonal. For then one has a set of decoupled scalar equations, each of which is solved independently as x_i = b_i / a_ii.

For many physical systems, one can obtain a basis of eigenvectors that permits diagonalization of the matrix. And in some cases, the basis may even be "nice", by which we mean that the vectors have unit length and are pairwise perpendicular. This isn't always possible, but if the matrix A has real entries and satisfies A^T A = A A^T, then it is possible. In that case,

A = S Λ S^-1,

with the columns of S being the eigenvectors of A and with Λ a diagonal matrix consisting of the corresponding eigenvalues.

Some cases are particularly nice. For instance, if A is symmetric, then the eigenvalues are real and S is orthogonal. That means S^T S = I = S S^T, so the inverse of S is very easy to compute: S^-1 = S^T.

(This idea generalizes to the complex setting; look up "unitary" matrix.)

So, solving Ax = b for x now amounts to (1) solving Λy = S^T b for y, which is easy since Λ is diagonal, then (2) converting back to x-coordinates, by x = Sy. Or, on one line: x = S Λ^-1 S^-1 b, which here equals S Λ^-1 S^T b.
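As a quick numerical sketch of the symmetric case (with an assumed matrix and right-hand side), one can carry out exactly these two steps:

```python
# np.linalg.eigh returns the eigenvalues and an orthogonal S for a symmetric A,
# so the solution is x = S Lambda^-1 S^T b.
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])

evals, S = np.linalg.eigh(A)          # A = S diag(evals) S^T, with S orthogonal
assert np.allclose(S.T @ S, np.eye(3))

y = (S.T @ b) / evals                 # step (1): solve the diagonal system Lambda y = S^T b
x = S @ y                             # step (2): convert back to the original coordinates
assert np.allclose(A @ x, b)
print(x)
```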


In the second part of lecture we began our exploration of Singular Value Decomposition (SVD):

A = U Σ V^T.

The main idea is that one can employ two possibly different coordinate transformations for the input and output spaces (domain and range), even if they are the same space, to obtain a simple diagonal representation of any matrix A (in particular, the matrix need not be square). (We considered this introductory example.)

Each coordinate transformation is given by an orthogonal matrix, thus amounting to a rotation (possibly with a reflection). SVD chooses these coordinates so that the first k columns of the output coordinate transformation U constitute an orthonormal basis for the column space of A and all but the first k columns of the input coordinate transformation V constitute an orthonormal basis for the null space of A. Here k is the rank of A. The diagonal matrix Σ has k nonzero entries, all positive. (Aside: If some of these are nearly zero, it can be convenient to artificially set them to be zero, since the corresponding row space vectors are almost vectors in the null space of A.)
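Here is a brief numerical sketch of that structure, using an assumed rank-deficient matrix:

```python
# The first k columns of U span the column space of A; the remaining columns of
# V span the null space of A.
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0],
              [0.0, 2.0, 2.0]])      # rank 2: row2 = 2*row1, row4 = row1 - row3

U, s, Vt = np.linalg.svd(A)          # A = U Sigma V^T (full matrices)
tol = 1e-10
k = int(np.sum(s > tol))             # numerical rank

col_basis  = U[:, :k]                # orthonormal basis for the column space
null_basis = Vt[k:, :].T             # orthonormal basis for the null space
assert np.allclose(A @ null_basis, 0)
print(k, s)
```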

The SVD facilitates solving linear equations. We wrote down a formal solution, which we refer to as the SVD solution:

x  =  V (1/Σ) U^T b.

A key insight is to realize that pre-multiplying a vector by an orthogonal matrix amounts to taking dot products of that vector with elements of an orthonormal basis.

With that intuition, one can see that x lies in the row space of A and minimizes ‖Ax - b‖ over all possible x. In particular, if b is in the column space of A, then x is an exact solution satisfying Ax = b. There could be more than one x that minimizes ‖Ax - b‖ (in particular, there could be more than one exact solution to Ax = b). This occurs when A has a non-trivial null space; the SVD solution x is the solution with minimum norm, i.e., the solution closest to the origin.
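The following sketch carries out the SVD solution on an assumed inconsistent, rank-deficient system, inverting only the nonzero singular values, and checks that it matches the pseudo-inverse solution mentioned in the next lecture's summary:

```python
import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [1.0, 0.0, 1.0]])      # rank 2
b = np.array([1.0, 0.0, 1.0])        # not in the column space of A

U, s, Vt = np.linalg.svd(A)
cutoff = 1e-10 * s.max()
s_inv = np.zeros_like(s)
s_inv[s > cutoff] = 1.0 / s[s > cutoff]   # "1/Sigma" on the nonzero singular values only

x_svd  = Vt.T @ (s_inv * (U.T @ b))       # x = V (1/Sigma) U^T b
x_pinv = np.linalg.pinv(A, rcond=1e-10) @ b
assert np.allclose(x_svd, x_pinv)

# x_svd minimizes ||Ax - b|| and, among all minimizers, has the smallest norm.
print(x_svd, np.linalg.norm(A @ x_svd - b))
```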

Here is a summary of one path to these insights.

We will discuss SVD further in the next lecture.

04 4.September

We reviewed some of the material from the previous lecture, including some additional details posted previously. We worked through an example.

We sketched the internals of the SVD algorithm, without going into detail, except to say that the algorithm uses Householder reflections.
We defined Householder reflections (see also page 22 of the linear algebra notes). We worked out some properties.
This example illustrates the use of Householder reflections as a method for zeroing-out entries when diagonalizing a matrix (see also page 21 of the linear algebra notes for the encompassing context).
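Here is a small sketch of a Householder reflection; the test vector and the sign convention below are assumed for illustration:

```python
# H = I - 2 v v^T / (v^T v) is orthogonal and symmetric; choosing
# v = x - ||x|| e1 makes H map x onto a multiple of e1, zeroing out all
# entries of x below the first.  (In production code one picks the sign of
# ||x|| to avoid cancellation when x is nearly parallel to e1.)
import numpy as np

def householder(x):
    """Return H such that H @ x = (||x||, 0, ..., 0)."""
    v = x.astype(float).copy()
    v[0] -= np.linalg.norm(x)
    return np.eye(len(x)) - 2.0 * np.outer(v, v) / (v @ v)

x = np.array([3.0, 4.0, 0.0])
H = householder(x)

assert np.allclose(H @ H.T, np.eye(3))   # H is orthogonal (and H = H^T = H^-1)
print(H @ x)                             # approximately (5, 0, 0)
```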

At the end of lecture, we mentioned that the "pseudo-inverse" solution one sees in some settings is the same as the SVD solution. Here is a brief summary.

We very briefly mentioned the trace operator Tr for square matrices, along with the fact that Tr(AB) = Tr(BA) for any pair of square matrices A and B of the same dimension.
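For completeness, the identity follows from writing out the diagonal entries of each product and swapping the order of summation:

```latex
\operatorname{Tr}(AB) = \sum_{i} (AB)_{ii} = \sum_{i}\sum_{j} A_{ij} B_{ji}
                      = \sum_{j}\sum_{i} B_{ji} A_{ij} = \sum_{j} (BA)_{jj} = \operatorname{Tr}(BA).
```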





Back to the webpage for 16-811