I’m working with some multi-dimensional, float-valued data – I’ll call a single instance of this data $X \in \mathbb{R}^{n \times k}$. I have multiple samples $X_{1}, X_{2}, \ldots, X_{t}$, and want to compare the subspaces spanned by the columns of these matrices – namely, I want to compute the distance between pairs of subspaces.
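A standard way to make this concrete is via the principal angles between the column spans of each pair of matrices, which SciPy exposes directly. Here’s a minimal sketch of what I mean, with random matrices standing in for the actual samples, and the Grassmann (geodesic) distance as a single summary number:

```python
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(0)
n, k, t = 100, 5, 4

# Random stand-ins for the real samples X_1, ..., X_t.
samples = [rng.standard_normal((n, k)) for _ in range(t)]

def grassmann_distance(A, B):
    """Distance between the column spans of A and B via principal angles."""
    thetas = subspace_angles(A, B)  # principal angles, in radians
    return np.sqrt(np.sum(thetas ** 2))

# Pairwise distance matrix between the t subspaces.
D = np.zeros((t, t))
for i in range(t):
    for j in range(i + 1, t):
        D[i, j] = D[j, i] = grassmann_distance(samples[i], samples[j])
print(np.round(D, 3))
```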
What follows are the contents of part of a lab meeting presentation I gave recently. The topic of the meeting was “Python for Neuroimaging”, where I covered basic software development tools that brain imaging scientists might be interested in.
I’m fortunate enough to work in a lab with some high-performance computing infrastructure. We have a cluster of machines using the Sun Grid Engine (SGE) software system for distributed resource management. The other day, I was searching for how to wrap my Python scripts with qsub so that I could submit a batch of jobs to our cluster.
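For reference, the basic pattern I converged on looks roughly like the sketch below: write out a small SGE job script per task, then hand each one to qsub via subprocess. The analysis script name and subject IDs here are placeholders:

```python
import subprocess
from pathlib import Path

# Placeholders -- substitute your own analysis script and task list.
script = "my_analysis.py"
subjects = ["sub-01", "sub-02", "sub-03"]

Path("logs").mkdir(exist_ok=True)
for subject in subjects:
    # A minimal SGE job script: name the job, run from the current
    # working directory, and redirect stdout/stderr to per-subject logs.
    job = f"""#!/bin/bash
#$ -N {subject}_analysis
#$ -cwd
#$ -o logs/{subject}.out
#$ -e logs/{subject}.err
python {script} --subject {subject}
"""
    job_file = Path(f"{subject}.job")
    job_file.write_text(job)
    subprocess.run(["qsub", str(job_file)], check=True)
```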
I’m applying some methods developed in this paper for testing purposes in my own thesis research. Specifically, I have some float-valued data, $F$, that varies along the cortical surface of the brain. Visually, I can see that there are areas where these scalar maps change abruptly.
The other day, one of my friends and colleagues (I’ll refer to him as “Dr. A”) asked me if I knew anything about assessing biomarker diagnostic power. He went on to describe his clinical problem, which I’ll try to recount here (but will likely mess up some of the relevant detail – his research pertains to generating induced pluripotent cardiac stem cells, which I have little to no experience with):
I wanted to make a quick note about something I found incredibly helpful the other day. Lists (or ArrayLists, as new Computer Science students are often taught in their CS 101 courses) are, as a data structure, fundamentally based on arrays, but with additional methods associated with them.
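In CPython you can actually watch this array-backed design at work: because a list over-allocates its underlying array, its reported size jumps in chunks rather than growing one slot per append. A minimal sketch (the exact byte counts are implementation-specific):

```python
import sys

# Track the allocated size of a list as it grows: the size jumps in
# steps, reflecting the over-allocated array backing the list.
items = []
last_size = sys.getsizeof(items)
print(f"len=0, bytes={last_size}")
for i in range(32):
    items.append(i)
    size = sys.getsizeof(items)
    if size != last_size:
        print(f"len={len(items)}, bytes={size}")
        last_size = size
```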
Here, we’ll look at various applications of the Delta Method, especially in the context of variance stabilizing transformations, along with the confidence intervals of estimates. The Delta Method is a way to approximate the standard error of a transformation of a random variable, and is based on a Taylor series approximation.
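As a quick preview of the kind of calculation involved, here’s a toy simulation of my own (not from a specific text) checking the delta-method standard error of $\log(\bar{X})$ for exponential data against the empirical one. For $g = \log$, the approximation is $\text{SE}(g(\bar{X})) \approx \lvert g'(\mu) \rvert \cdot \sigma / \sqrt{n}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 200, 10_000
mu = 2.0  # mean of the exponential; its SD also equals mu

# Simulate the sampling distribution of log(X-bar).
xbars = rng.exponential(scale=mu, size=(reps, n)).mean(axis=1)
empirical_se = np.log(xbars).std()

# Delta method: SE(g(X-bar)) ~= |g'(mu)| * sigma / sqrt(n),
# and for g = log, g'(mu) = 1 / mu.
delta_se = (1 / mu) * (mu / np.sqrt(n))

print(f"empirical SE:    {empirical_se:.4f}")
print(f"delta-method SE: {delta_se:.4f}")
```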
For one of the projects I’m working on, I have an array of multivariate data relating to brain connectivity patterns. Briefly, each brain is represented as a surface mesh, which we represent as a graph $G = (V, E)$, where $V$ is a set of $n$ vertices and $E$ is the set of edges between vertices.
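For concreteness, here’s a minimal sketch of how such a graph might be assembled from a triangulated surface mesh; the faces array below is a toy stand-in for a real mesh:

```python
import numpy as np
import networkx as nx

# Toy stand-in for a surface mesh: each row of `faces` lists the
# three vertex indices of one triangle.
faces = np.array([[0, 1, 2],
                  [1, 2, 3],
                  [2, 3, 4]])

# Build G = (V, E) by adding an edge for each side of each triangle;
# shared edges between adjacent triangles are deduplicated by the Graph.
G = nx.Graph()
for i, j, k in faces:
    G.add_edges_from([(i, j), (j, k), (i, k)])

print(G.number_of_nodes(), "vertices,", G.number_of_edges(), "edges")
```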
I’m going over Chapter 5 in Casella and Berger’s (CB) “Statistical Inference”, specifically Section 5.5: Convergence Concepts, and wanted to document the topic of convergence in probability with some plots demonstrating the concept. From CB, we have the definition of convergence in probability: a sequence of random variables $X_{1}, X_{2}, \ldots, X_{n}$ converges in probability to a random variable $X$ if, for every $\epsilon > 0$, $\lim_{n \to \infty} P(\lvert X_{n} - X \rvert \geq \epsilon) = 0$.
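To see this numerically, here’s a minimal simulation of the classic example – the sample mean of $n$ Uniform$(0, 1)$ draws converging in probability to $1/2$ – where the estimated probability $P(\lvert \bar{X}_{n} - 1/2 \rvert \geq \epsilon)$ shrinks toward zero as $n$ grows:

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon = 0.05
reps = 2_000

# For each n, estimate P(|X_n - 0.5| >= epsilon) over many replicates,
# where X_n is the mean of n Uniform(0, 1) draws.
for n in [10, 50, 100, 500, 1_000]:
    xbars = rng.uniform(size=(reps, n)).mean(axis=1)
    prob = np.mean(np.abs(xbars - 0.5) >= epsilon)
    print(f"n={n:>5}: P(|X_n - 0.5| >= {epsilon}) ~ {prob:.4f}")
```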
In this post, I’m going to briefly cover the relationship between the Poisson distribution and the Multinomial distribution. Let’s say that we have a set of independent, Poisson-distributed random variables $Y_{1}, Y_{2}, \ldots, Y_{k}$ with rate parameters $\lambda_{1}, \lambda_{2}, \ldots, \lambda_{k}$.
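The punchline, stated up front: conditional on the total $\sum_{i} Y_{i} = n$, the vector $(Y_{1}, \ldots, Y_{k})$ is Multinomial with probabilities $p_{i} = \lambda_{i} / \sum_{j} \lambda_{j}$. Here’s a quick simulation check of that claim, with arbitrary rates:

```python
import numpy as np

rng = np.random.default_rng(0)
lams = np.array([2.0, 5.0, 3.0])
reps = 200_000
n = 10

# Draw independent Poissons, then keep only the draws whose total is n.
Y = rng.poisson(lams, size=(reps, len(lams)))
conditional = Y[Y.sum(axis=1) == n]

# Conditional on the sum, the counts should match Multinomial(n, lam / sum(lam)).
print("conditional means:", conditional.mean(axis=0).round(3))
print("multinomial means:", (n * lams / lams.sum()).round(3))
```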