While most of my day-to-day research entails writing Python code, I also make heavy use of pre-written software. Most software comes pre-compiled, but whenever possible, I like to get access to the source code. I’m going to refer to some modifications I made to pre-existing packages – you can find those in my repository here.
I’ve been toying around with OpenCV for generating MRI images with synthetic motion injected into them. I’d never used this library before, so I tested a couple of examples. Below I detail a few tools that I found interesting, and that can quickly be used to generate image transformations.
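To give a flavor of the idea, here is a minimal sketch of injecting translational motion into an image. In the real pipeline OpenCV's `cv2.warpAffine` (with a translation matrix from `np.float32([[1, 0, dx], [0, 1, dy]])`) would do the warping with proper interpolation; this version uses plain NumPy so it runs self-contained, and the function name and `max_shift` parameter are my own inventions for illustration:

```python
import numpy as np

def inject_motion(image, max_shift=3, rng=None):
    """Inject synthetic translational motion by shifting the image a random
    number of pixels along each axis. (cv2.warpAffine with a translation
    matrix would do the same thing with interpolation and border handling.)"""
    rng = np.random.default_rng(rng)
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    # np.roll wraps around at the edges; fine for a quick sketch
    return np.roll(image, shift=(dy, dx), axis=(0, 1)), (dy, dx)

# toy "slice": a bright square on a dark background
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
moved, (dy, dx) = inject_motion(img, max_shift=3, rng=0)
```

Because the shift is a pure translation, the pixel intensities are preserved; only their locations change, which is exactly the kind of corruption head motion introduces between acquisitions.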
I recently attended Neurohackademy 2018, hosted by the University of Washington’s eScience Institute and organized by Dr. Ariel Rokem and Dr. Tal Yarkoni. This was a two-week event, beginning with a series of daily lectures and ending with a fast-paced, high-intensity scramble to put together a beta (but working) version of some project coupling neuroimaging with software development.
I just learned about Travis CI (actually, about continuous integration (CI) in general) after attending Neurohackademy 2018. We learned about CI as a way of ensuring that your code still builds properly when you update files in your packages, incorporate new methods, refactor your code, etc.
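For a taste of what that looks like in practice, a minimal `.travis.yml` for a Python package might read something like the following (the Python version and install/test commands here are illustrative, not the exact file we used):

```yaml
language: python
python:
  - "3.6"
install:
  - pip install -e .
  - pip install pytest
script:
  - pytest
```

Every push to the repository then triggers a fresh build that installs the package and runs the test suite, so a broken build surfaces immediately.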
In putting together this blog, I wanted to be able to talk about various mathematical topics that I found interesting, which inevitably led to using LaTeX in my posts. I’m currently using Atom as my editor (having converted from Sublime), and needed to install a bunch of packages first.
In my previous post on dynamic mode decomposition, I discussed the foundations of DMD as a means of linearizing a dynamical system [1, 2, 3]. In this post, I want to look at a way in which we can use rank-updates to incorporate new information into the spectral decomposition of our linear operator, $A$, in the event that we are generating online measurements from our dynamical system [4] – see the citation below if you want a more detailed overview of this topic, along with open-source code for testing this method.
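As a preview of where the post goes, the update takes a Sherman-Morrison-like form. Writing the operator as $A_{k} = Y_{k}X_{k}^{T}(X_{k}X_{k}^{T})^{-1}$ and keeping $P_{k} = (X_{k}X_{k}^{T})^{-1}$ in memory, a new snapshot pair $(x_{k+1}, y_{k+1})$ updates both quantities in closed form (this is a sketch following the online-DMD formulation in the cited reference):

```latex
\begin{aligned}
\gamma_{k+1} &= \frac{1}{1 + x_{k+1}^{T} P_{k}\, x_{k+1}} \\
A_{k+1} &= A_{k} + \gamma_{k+1}\,\bigl(y_{k+1} - A_{k}\, x_{k+1}\bigr)\, x_{k+1}^{T} P_{k} \\
P_{k+1} &= P_{k} - \gamma_{k+1}\, P_{k}\, x_{k+1}\, x_{k+1}^{T} P_{k}
\end{aligned}
```

The appeal is that neither update ever re-inverts $X_{k}X_{k}^{T}$ from scratch, so the cost per new measurement stays fixed as the data stream grows.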
In the next two posts, I want to talk briefly about an algorithm called Dynamic Mode Decomposition (DMD). DMD is a spatiotemporal modal decomposition technique that can be used to identify spatial patterns in a signal (modes), along with the time course of these spatial patterns (dynamics).
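As a rough sketch of the algorithm these posts cover, exact DMD fits in a few lines of NumPy. The snapshot layout (space × time) and the truncation rank `r` are my own conventions here, not a fixed API:

```python
import numpy as np

def dmd(X, r):
    """Exact DMD of a snapshot matrix X (space x time), truncated to rank r.
    Returns the DMD modes Phi, eigenvalues Lam, and mode amplitudes b."""
    X1, X2 = X[:, :-1], X[:, 1:]                      # time-shifted snapshot pairs
    U, S, Vh = np.linalg.svd(X1, full_matrices=False)
    U, S, Vh = U[:, :r], S[:r], Vh[:r, :]             # rank-r truncation
    Atilde = U.conj().T @ X2 @ Vh.conj().T / S        # projected linear operator
    Lam, W = np.linalg.eig(Atilde)                    # eigendecomposition
    Phi = X2 @ Vh.conj().T / S @ W                    # exact DMD modes
    b = np.linalg.lstsq(Phi, X[:, 0], rcond=None)[0]  # initial mode amplitudes
    return Phi, Lam, b

# toy system: each snapshot is the previous one scaled by 0.9,
# so the single DMD eigenvalue should recover 0.9
X = np.array([[1.0, 2.0, 3.0]]).T * 0.9 ** np.arange(10)
Phi, Lam, b = dmd(X, r=1)
```

The eigenvalues `Lam` carry the dynamics (growth/decay and oscillation per time step), while the columns of `Phi` are the spatial patterns those dynamics act on.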
In this post, I’ll be covering the basics of Multivariate Normal Distributions, with special emphasis on deriving the conditional and marginal distributions. Given a random variable $y_{i} \sim N(\mu, \sigma^{2})$ under the usual Gauss-Markov assumptions, with errors $e_{i} \sim N(0,\sigma^{2})$, and $n$ independent samples $y_{1},\dots,y_{n}$, we can define the vector $\mathbf{y} = [y_{1}, y_{2},\dots,y_{n}] \sim N_{n}(\boldsymbol{\mu},\sigma^{2}I)$ with $\mathbf{e} \sim N_{n}(\mathbf{0},\sigma^{2}I)$.
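To preview where the derivation ends up: if we partition a multivariate normal vector into two blocks, the marginal and conditional distributions take the standard forms below (the block notation is the usual one, stated here without the derivation the post works through):

```latex
\begin{aligned}
\begin{bmatrix} \mathbf{y}_{1} \\ \mathbf{y}_{2} \end{bmatrix}
&\sim N\!\left(
\begin{bmatrix} \boldsymbol{\mu}_{1} \\ \boldsymbol{\mu}_{2} \end{bmatrix},
\begin{bmatrix} \Sigma_{11} & \Sigma_{12} \\ \Sigma_{21} & \Sigma_{22} \end{bmatrix}
\right) \\
\mathbf{y}_{1} &\sim N(\boldsymbol{\mu}_{1},\, \Sigma_{11}) \\
\mathbf{y}_{1} \mid \mathbf{y}_{2} &\sim N\!\left(
\boldsymbol{\mu}_{1} + \Sigma_{12}\Sigma_{22}^{-1}(\mathbf{y}_{2} - \boldsymbol{\mu}_{2}),\;
\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}
\right)
\end{aligned}
```

Notice that the conditional covariance $\Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}$ is exactly a Schur complement, which is where blockwise matrix inversion enters the picture.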
In this post, I’m going to go over some examples of rank-one updates of matrices. To compute rank-one updates, we rely on the Sherman-Morrison-Woodbury theorem. From the previous post on [Blockwise Matrix Inversion]({% post_url 2018-05-08-blockwise-matrix-inversion %}), recall that, given a matrix and its inverse
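As a quick numerical sanity check of the kind of update the theorem gives us, here is the rank-one (Sherman-Morrison) case in NumPy; the function name and the small test matrices are mine, chosen only for illustration:

```python
import numpy as np

def sherman_morrison(Ainv, u, v):
    """(A + u v^T)^{-1} computed from A^{-1} via the Sherman-Morrison identity."""
    Au = Ainv @ u                                   # A^{-1} u
    vA = v @ Ainv                                   # v^T A^{-1}
    return Ainv - np.outer(Au, vA) / (1.0 + v @ Au)

# quick check against a direct inverse of the updated matrix
A = np.diag([2.0, 3.0, 4.0, 5.0])
u = np.ones(4)
v = np.ones(4)
updated = sherman_morrison(np.linalg.inv(A), u, v)
```

The point of the identity is cost: given $A^{-1}$, the update is $O(n^{2})$ instead of the $O(n^{3})$ of re-inverting $A + uv^{T}$ from scratch.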
I’m taking a Statistics course on the theory of linear models, which covers Gauss-Markov models and various extensions of them. When dealing with partitioned matrices, as commonly arise with Multivariate Normal Distributions, we often need to invert matrices in a blockwise manner.
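A small NumPy sketch makes the idea concrete: invert a $2 \times 2$-block matrix using only the blocks and the Schur complement, then compare against the direct inverse (the function name and example blocks are mine, for illustration):

```python
import numpy as np

def blockwise_inverse(A, B, C, D):
    """Invert [[A, B], [C, D]] using the Schur complement S = D - C A^{-1} B."""
    Ainv = np.linalg.inv(A)
    S = D - C @ Ainv @ B                            # Schur complement of A
    Sinv = np.linalg.inv(S)
    return np.block([
        [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
        [-Sinv @ C @ Ainv,                   Sinv            ],
    ])

# small deterministic example, checked against the direct inverse
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])
C = np.array([[0.0, 0.0], [0.0, 1.0]])
D = np.array([[2.0, 0.0], [0.0, 5.0]])
M_inv = blockwise_inverse(A, B, C, D)
```

This only requires inverting the two smaller matrices $A$ and $S$, which is exactly what makes the blockwise route attractive when one block is easy to invert (e.g., diagonal).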