Rank-One Updates
In this post, I’m going to go over some examples of rank-one updates of matrices. To compute rank-one updates, we rely on the Sherman-Morrison-Woodbury theorem. From the previous post on [Blockwise Matrix Inversion]({% post_url 2018-05-08-blockwise-matrix-inversion %}), recall that, given a matrix and its inverse

$$
M = \begin{bmatrix} A & B \\ C & D \end{bmatrix}, \qquad
M^{-1} = \begin{bmatrix} E & F \\ G & H \end{bmatrix},
$$

we have that

$$
\left(A - BD^{-1}C\right)^{-1} = A^{-1} + A^{-1}B\left(D - CA^{-1}B\right)^{-1}CA^{-1}.
$$

Expanding this further, the Woodbury formula gives the following identity:

$$
\left(A + UCV\right)^{-1} = A^{-1} - A^{-1}U\left(C^{-1} + VA^{-1}U\right)^{-1}VA^{-1}.
$$
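As a quick numerical sanity check of the Woodbury identity, we can compare it against a direct inversion in NumPy (the matrix sizes and random draws below are arbitrary, chosen only so that all the matrices involved are well conditioned):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)  # diagonally shifted so A is invertible
U = rng.standard_normal((n, k))
C = rng.standard_normal((k, k)) + k * np.eye(k)  # likewise for C
V = rng.standard_normal((k, n))

A_inv = np.linalg.inv(A)
# Woodbury: (A + UCV)^{-1} = A^{-1} - A^{-1} U (C^{-1} + V A^{-1} U)^{-1} V A^{-1}
woodbury = A_inv - A_inv @ U @ np.linalg.inv(np.linalg.inv(C) + V @ A_inv @ U) @ V @ A_inv
direct = np.linalg.inv(A + U @ C @ V)
print(np.allclose(woodbury, direct))
```

The payoff is that when $$k \ll n$$, the only fresh inversion on the right-hand side is of a small $$k \times k$$ matrix.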
Given an initial matrix $$A$$ and its inverse $$A^{-1}$$, the Sherman-Morrison formula tells us how the inverse changes under a rank-one perturbation. For example, if we want to update our matrix by adding the outer product $$uv^\top$$ of two vectors, we get

$$
\left(A + uv^\top\right)^{-1} = A^{-1} - \frac{A^{-1}uv^\top A^{-1}}{1 + v^\top A^{-1}u},
$$

where the updated inverse is defined so long as the quadratic form $$v^\top A^{-1}u \neq -1$$.
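As a sketch, the Sherman-Morrison update takes only a few lines of NumPy (the function name and error handling here are my own, not from any library):

```python
import numpy as np

def sherman_morrison(A_inv, u, v):
    """Return the inverse of (A + u v^T), given A_inv = A^{-1}."""
    Au = A_inv @ u                 # A^{-1} u
    vA = v @ A_inv                 # v^T A^{-1}
    denom = 1.0 + v @ Au           # the scalar 1 + v^T A^{-1} u
    if np.isclose(denom, 0.0):
        raise np.linalg.LinAlgError("1 + v^T A^{-1} u = 0: updated matrix is singular")
    return A_inv - np.outer(Au, vA) / denom
```

Compared to calling `np.linalg.inv` on the updated matrix, this costs $$O(n^2)$$ rather than $$O(n^3)$$, since it involves only matrix-vector products and an outer product.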
Rank-One Updates for Linear Models
Recall the Normal equations for linear models:

$$
X^\top X \hat{\beta} = X^\top y
$$

and

$$
\hat{\beta} = \left(X^\top X\right)^{-1} X^\top y,
$$

where $$X$$ is the $$n \times p$$ design matrix and $$y$$ is the vector of $$n$$ responses.
Assume that we observe a new observation, $$(x_{n+1}, y_{n+1})$$. The Gram matrix $$X^\top X$$ then changes by the rank-one term $$x_{n+1}x_{n+1}^\top$$, so we can apply the Sherman-Morrison formula with $$u = v = x_{n+1}$$ and directly compute

$$
\left(X^\top X + x_{n+1}x_{n+1}^\top\right)^{-1} = \left(X^\top X\right)^{-1} - \frac{\left(X^\top X\right)^{-1}x_{n+1}x_{n+1}^\top\left(X^\top X\right)^{-1}}{1 + x_{n+1}^\top\left(X^\top X\right)^{-1}x_{n+1}},
$$

from which we can easily compute our new coefficient estimates with $$\hat{\beta} = \left(X^\top X + x_{n+1}x_{n+1}^\top\right)^{-1}\left(X^\top y + x_{n+1}y_{n+1}\right)$$.
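Putting the two updates together, a minimal sketch of an online least-squares update might look like the following (the function `update_ols` and its interface are my own invention for illustration):

```python
import numpy as np

def update_ols(XtX_inv, Xty, x_new, y_new):
    """Fold one new observation (x_new, y_new) into an OLS fit.

    XtX_inv : current (X^T X)^{-1}
    Xty     : current X^T y
    Returns the updated (X^T X)^{-1}, updated X^T y, and new coefficients.
    """
    Ax = XtX_inv @ x_new
    # Sherman-Morrison with u = v = x_new; the denominator is positive
    # whenever XtX_inv is positive definite, so no singularity check is needed.
    XtX_inv = XtX_inv - np.outer(Ax, Ax) / (1.0 + x_new @ Ax)
    Xty = Xty + y_new * x_new
    return XtX_inv, Xty, XtX_inv @ Xty
```

Starting from a model fit on an initial batch, each new observation is then incorporated with $$O(p^2)$$ work, instead of the full $$O(np^2)$$ refit.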
Importantly, in the case of regression, this means that we can update our linear model via simple matrix calculations, rather than having to refit the model from scratch to incorporate new data. In the next few posts, I’ll go over an implementation of rank-one updating methods that I’ve been using in the lab to study brain dynamics.