Here I’ll walk through a cool algorithm that adaptively approximates the column space of a matrix using random projections.
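Before diving in, here is a minimal sketch of the idea, assuming the standard adaptive randomized range finder (in the style of Halko, Martinsson & Tropp, 2011) — the function name and parameters are my own for illustration:

```python
import numpy as np

def adaptive_range_finder(A, tol=1e-8, max_rank=None, seed=0):
    """Grow an orthonormal basis Q for the column space of A,
    one random probe at a time, until a probe's residual norm
    falls below tol (i.e. Q already explains new random probes)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    max_rank = max_rank or min(m, n)
    Q = np.zeros((m, 0))
    while Q.shape[1] < max_rank:
        # Push a random Gaussian vector through A...
        y = A @ rng.standard_normal(n)
        # ...and remove the part already captured by the current basis.
        y -= Q @ (Q.T @ y)
        if np.linalg.norm(y) < tol:
            break  # new probes add nothing: the column space is covered
        Q = np.column_stack([Q, y / np.linalg.norm(y)])
    return Q
```

For a matrix of numerical rank r, the loop stops after about r probes, and `Q @ (Q.T @ A)` is then a good low-rank approximation of `A`.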
I study computational models of neural adaptation and gain control at NYU, supervised by Eero Simoncelli and David Heeger. I spend most of my time training and simulating neural networks, and formulating unsupervised-learning objective functions.
I'm a born-and-raised Canadian, and before coming to the US, I completed my BSc in Physiology and Physics at McGill University. I received my MSc from the University of Western Ontario, studying normalization models of attention in prefrontal cortex under the supervision of Julio Martinez-Trujillo.
My technical interests include machine learning in computer vision, numerical linear algebra, and scientific computing. Outside of research, I enjoy playing jazz guitar, cycling, and running.
Pronouncing my family name, Dương: Vietnamese is a tonal language, so it's hard to describe in text, but saying "Yuh-ng" will get you close. Most people pronounce it like "Dew-ong"/"Dwong", which is also fine.
This post covers normalizing flows, and the RealNVP invertible neural network. It’s only 150 lines of code total!
This short post will cover graphical intuition and PyTorch code for two different kinds of whitening: batch and instance.
A tutorial for an algorithm I implemented in our plenoptic PyTorch package to synthesize eigendistortions.
A Python implementation of the elegant algorithm introduced by Iain Murray et al. (2010).
What’s the best way to quantify and visualize distance between two positive definite matrices? Julia code included.
PyTorch implementation and explanation of SGD-based MCMC sampling with Langevin or Hamiltonian dynamics.