A Globally Optimal Solution to Low Rank Regression

In the Spring of my first year at UW (Spring 2016), I took a course on convex optimization algorithms. I was interested in low rank regression due to a research project I was working on. I played around with the formulation

 \min_W \|Y - XW\|_F^2\ \ \text{s.t.}\ \text{rank}(W) \leq k

It turns out that this non-convex problem has a closed-form solution! The solution is based on the rank-k approximation theorem of Eckart, Young, and Mirsky. It also turns out that someone else independently discovered this in 2012 (check it out), and they analyzed its statistical properties. Anyway, I wrote a write-up with proofs, which you can find here: mid_quarter_report. I also included a closed-form solution to the slightly more general problem

 \min_W \|Y - BW\Phi\|_F^2\ \ \text{s.t.}\ \text{rank}(W) \leq k

I have not seen this result published anywhere.
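For the first problem, the closed form can be sketched in a few lines of NumPy: solve the unconstrained least-squares problem, then project its fitted values onto their top-k right singular subspace via Eckart–Young–Mirsky. This is my own minimal illustration (the function name and the full-column-rank assumption are mine), not necessarily the exact derivation in the write-up.

```python
import numpy as np

def low_rank_regression(X, Y, k):
    """Minimize ||Y - X W||_F^2 subject to rank(W) <= k.

    Sketch assuming X has full column rank. The idea: the residual splits
    into a part orthogonal to col(X) plus ||X W_ols - X W||_F^2, so the
    best rank-k W projects the OLS fit onto the top-k right singular
    vectors of the fitted values X W_ols.
    """
    # Unconstrained least-squares solution.
    W_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    # SVD of the fitted values.
    _, _, Vt = np.linalg.svd(X @ W_ols, full_matrices=False)
    Vk = Vt[:k].T  # top-k right singular vectors
    # Project onto the top-k right singular subspace; rank(W) <= k.
    return W_ols @ Vk @ Vk.T

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 8))
Y = rng.standard_normal((100, 5))
k = 2
W = low_rank_regression(X, Y, k)
print(np.linalg.matrix_rank(W))  # at most k
```

Note that naively truncating the SVD of the OLS coefficient matrix itself is *not* optimal in general; the truncation has to happen on the fitted values.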


In-Depth Variational Inference Tutorial

In my first quarter (Fall 2015) as a grad student at UW, I was working on a project that involved stochastic variational methods for time series models. But I hadn't yet learned variational inference, so I spent a few weeks reading up on it and deriving variational methods for some basic graphical models. This really helped me learn the material well, and I decided to write an in-depth tutorial to help others cross the bridge I had crossed. Most tutorials/introductions I've seen on this topic skip a lot of details. While it's great to spend time filling in those details on your own (which is what I did), I figured there should be something out there that does have all the microscopic details, so I wrote this tutorial.

Here is the PDF: In-Depth Variational Inference Tutorial. There is a corresponding GitHub repository to accompany the tutorial, which can be found here.

Please feel free to comment on this and provide feedback!