Can we rely on VAEs to generate reproducible latents?
I'm a PhD student at NYU studying computational models of neural population adaptation, supervised by Eero Simoncelli and David Heeger. My dissertation focuses on adaptive gain control in recurrent neural networks, and develops (variational) auto-encoder objectives for modeling neural data.
In the Summer of 2022, I was a PhD Intern on the Open Codecs team at Google, Mountain View, CA, hosted by Bohan Li. My research project was on adaptive auto-encoders and efficient nonlinear transforms for next-generation video compression.
I'm a born-and-raised Canadian 🍁. I received my BSc in Physiology and Physics from McGill University, where I began the work that eventually became my MSc, completed at the University of Western Ontario after our lab migrated, modeling neural correlates of visuo-spatial attention in prefrontal cortex under the supervision of Julio Martinez-Trujillo.
Outside of research, I enjoy playing jazz guitar, cycling, and running.
Pronouncing my family name, Dương: Vietnamese is a tonal language, so it's hard to describe in text, but saying "Yuh-ng" will get you close. Most people pronounce it like "Dew-ong"/"Dwong", which is also fine.
Writing L1 and L2 vector norms with reverse- and forward-mode autodiff.
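A minimal sketch of the idea, assuming JAX (the project's own framework isn't stated): `jax.grad` differentiates the norms in reverse mode, while `jax.jvp` computes a forward-mode directional derivative.

```python
import jax
import jax.numpy as jnp

def l1_norm(x):
    # L1 norm: sum of absolute values.
    return jnp.sum(jnp.abs(x))

def l2_norm(x):
    # L2 norm: Euclidean length.
    return jnp.sqrt(jnp.sum(x * x))

x = jnp.array([3.0, -4.0])

# Reverse mode: one backward pass yields the full gradient.
g1 = jax.grad(l1_norm)(x)  # sign(x) -> [1., -1.]
g2 = jax.grad(l2_norm)(x)  # x / ||x|| -> [0.6, -0.8]

# Forward mode: one JVP yields a directional derivative along v.
v = jnp.array([1.0, 0.0])
val, ddir = jax.jvp(l2_norm, (x,), (v,))  # val = 5.0, ddir = grad . v = 0.6
```

Reverse mode is the natural fit for scalar-valued norms of large vectors (one pass gives every partial), while forward mode gives one directional derivative per pass.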
Building a trainable multilayer perceptron in pure C++.
Creating a bare-bones linear algebra library to train a neural net.
Poisson-Identifiable Variational Autoencoder w/ PyTorch implementation.
How to reverse linked lists w/ hands tied behind your back while blindfolded underwater.
Python implementation of the adaptive spiking neural net proposed in Gutierrez and Denève, eLife 2019.