Kui Tang

Machine Teacher

I aim to develop fast, principled, and practical optimization algorithms for computationally challenging machine learning problems. Machine learning will play an increasingly central role in social and technological progress as the quantity and complexity of data, and our ambition to make use of them, grow without bound.

In this vein, I have worked on global optimization of Markov random fields, variational Bayesian nonparametrics, and exact Hamiltonian Monte Carlo, which I've applied to problems in energy, neuroscience, and computer vision.

I am a senior at Columbia, where my advisor is Tony Jebara. Before getting into machine learning, I worked on computer architecture with Martha Kim, focusing on performance optimization of multithreaded code. I have also collaborated with Liam Paninski, Lauren Hannah, Melanie Kambadur, Adrian Weller, and Ari Pakman.

Papers

Also on DBLP and Google Scholar.

[5] K. Tang, A. Weller and T. Jebara. "Network Ranking with Bethe Pseudomarginals". NIPS Workshop on Discrete Optimization in Machine Learning, Dec. 2013. [pdf] [bib]

[4] K. Choromanski, T. Jebara and K. Tang. "Adaptive Anonymity via b-Matching". Neural Information Processing Systems (NIPS), Dec. 2013.

[3] M. Kambadur, K. Tang, J. Lopez, and M. Kim. "Parallel Scaling Properties from a Basic Block View". International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS), Jun. 2013. [pdf] [bib]

[2] M. Kambadur, K. Tang, and M. Kim. "Parallel Block Vectors: Collection, Analysis, and Uses". IEEE Micro, 33(3):86-94, 2013. [pdf] [bib]

[1] M. Kambadur, K. Tang, and M. Kim. "Harmony: Collection and Analysis of Parallel Block Vectors". International Symposium on Computer Architecture (ISCA), Jun. 2012. [pdf] [bib]

Code

mexcpp: Easier MATLAB and C++ Integration
This library makes it easier to write MEX extensions by providing natural C++ syntax for reading and writing MATLAB structures, with no runtime penalty thanks to template metaprogramming. Arrays, cells, and structures are all supported. [github]
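For contrast, the sketch below shows what a minimal MEX file looks like against the raw C API the library wraps. The calls (mexFunction, mxGetPr, mxCreateDoubleScalar) are MATLAB's documented C interface; the example itself is illustrative and is not taken from mexcpp.

    // rawmex.cpp -- a minimal MEX file written against the raw C API,
    // to show the boilerplate a templated C++ wrapper can hide.
    // Compile from the MATLAB prompt with: mex rawmex.cpp
    #include "mex.h"

    // Computes the sum of a double matrix passed in from MATLAB.
    void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
    {
        if (nrhs != 1 || !mxIsDouble(prhs[0]))
            mexErrMsgTxt("Expected one double matrix.");

        // Manual extraction of dimensions and the data pointer.
        mwSize m = mxGetM(prhs[0]);
        mwSize n = mxGetN(prhs[0]);
        double *x = mxGetPr(prhs[0]);

        double sum = 0.0;
        for (mwSize i = 0; i < m * n; ++i)
            sum += x[i];

        plhs[0] = mxCreateDoubleScalar(sum);
    }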
Adaptive Anonymity via b-Matching
Safely release a sensitive database for analysis by censoring selected entries. We choose the censored entries by b-matching, which releases more data than the state of the art while maintaining the same privacy guarantee. See our NIPS 2013 paper. [coming soon]
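To make the guarantee concrete, here is a toy checker, my own sketch rather than the paper's code, for the condition the b-matching enforces: every censored row must remain consistent with at least b of the original rows.

    // Toy checker for the anonymity condition behind b-matching release:
    // after censoring entries (marked -1), every released row should be
    // compatible with at least b original rows. The actual algorithm
    // chooses which entries to censor by solving a b-matching; this only
    // verifies the resulting guarantee on toy data.
    #include <cstdio>
    #include <vector>

    using Row = std::vector<int>;

    // A released row matches an original row if they agree on every
    // entry that was not censored.
    bool compatible(const Row& released, const Row& original) {
        for (size_t j = 0; j < released.size(); ++j)
            if (released[j] != -1 && released[j] != original[j]) return false;
        return true;
    }

    int main() {
        const int b = 2;
        std::vector<Row> original = {{0,1,1}, {0,0,1}, {1,0,1}, {1,0,0}};
        std::vector<Row> released = {{0,-1,1}, {0,-1,1}, {1,0,-1}, {1,0,-1}};

        for (size_t i = 0; i < released.size(); ++i) {
            int deg = 0;
            for (const Row& o : original) deg += compatible(released[i], o);
            std::printf("row %zu compatible with %d rows: %s\n",
                        i, deg, deg >= b ? "ok" : "VIOLATION");
        }
    }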
Harmony
Profile multithreaded programs: measure the number of times each line of code was run and how many threads ran it at the same time. This can be used to detect concurrency bottlenecks and analyze global program efficiency. See our ISCA and Micro papers. [homepage]
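The core data structure is simple to sketch. The toy C++ below (hook names and sizes are illustrative, not Harmony's actual instrumentation) keeps, for each basic block, a histogram of executions bucketed by how many threads were running concurrently:

    // Toy "parallel block vector": per-block execution counts, bucketed
    // by the number of concurrently active threads. Illustrative only.
    #include <atomic>
    #include <cstdio>

    constexpr int kMaxThreads = 64;
    constexpr int kNumBlocks  = 1024;  // one slot per instrumented block

    std::atomic<int> active_threads{0};
    // counts[block][t] = times `block` ran while t+1 threads were active.
    std::atomic<long> counts[kNumBlocks][kMaxThreads];

    // The compiler (or a binary instrumentation tool) would insert this
    // call at the entry of every basic block.
    void on_block(int block) {
        int t = active_threads.load(std::memory_order_relaxed);
        if (t >= 1 && t <= kMaxThreads)
            counts[block][t - 1].fetch_add(1, std::memory_order_relaxed);
    }

    // Runtime hooks at thread start/exit track the concurrency level.
    void on_thread_start() { active_threads.fetch_add(1); }
    void on_thread_exit()  { active_threads.fetch_sub(1); }

    int main() {
        // Simulate two threads' worth of activity by hand.
        on_thread_start();
        on_block(7);                 // block 7 runs at concurrency 1
        on_thread_start();
        on_block(7);                 // block 7 runs again at concurrency 2
        on_thread_exit();
        on_thread_exit();

        for (int t = 0; t < kMaxThreads; ++t)
            if (long c = counts[7][t].load())
                std::printf("block 7: %ld runs at concurrency %d\n", c, t + 1);
    }

A block that only ever executes at concurrency 1 is a candidate serial bottleneck, which is the kind of question this profile answers.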

Expository

Notes on various mathematical topics, course projects, and unpublished manuscripts.

Review Notes
To review for finals, I like to condense the content of a course into 10-20 pages of notes. The following may be useful references:
Quasi Monte Carlo Inference for Log Gaussian Cox Processes
(With Jessica Forde and Liam Paninski, 2013) We give a self-contained introduction to quasi Monte Carlo, a deterministic substitute for Monte Carlo, and then use it for Bayesian inference in a computationally difficult problem, including embedding quasi Monte Carlo samples into a Metropolis-adjusted Langevin algorithm. Encouraging convergence-rate speedups are presented. [paper] [slides]
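To make the quasi Monte Carlo idea concrete, the sketch below (independent of the paper's code) estimates a one-dimensional integral with a deterministic van der Corput sequence and with ordinary pseudo-random draws; for smooth integrands the QMC error typically shrinks near O(1/n) rather than the O(1/sqrt(n)) of plain Monte Carlo.

    // Quasi Monte Carlo vs. plain Monte Carlo on E[f(U)], U ~ Uniform(0,1).
    // Illustrative sketch only.
    #include <cmath>
    #include <cstdio>
    #include <random>

    // i-th point of the van der Corput sequence: reflect the base-b
    // digits of i about the radix point.
    double van_der_corput(unsigned i, unsigned base = 2) {
        double x = 0.0, denom = 1.0;
        while (i > 0) {
            denom *= base;
            x += (i % base) / denom;
            i /= base;
        }
        return x;
    }

    int main() {
        const unsigned n = 1 << 12;
        auto f = [](double u) { return std::exp(u); };  // true integral: e - 1

        double qmc = 0.0, mc = 0.0;
        std::mt19937 rng(0);
        std::uniform_real_distribution<double> unif(0.0, 1.0);
        for (unsigned i = 0; i < n; ++i) {
            qmc += f(van_der_corput(i + 1));  // deterministic low-discrepancy point
            mc  += f(unif(rng));              // pseudo-random point
        }
        std::printf("truth %.6f  qmc %.6f  mc %.6f\n",
                    std::exp(1.0) - 1.0, qmc / n, mc / n);
    }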
Inferring Direct and Indirect Functional Connectivity Between Neurons From Multiple Neural Spike Train Data
(With Ben Shababo and Frank Wood, 2012) We aim to model the functional connectivity of neural microcircuits while accounting for (1) unobserved neurons and (2) long-range connections, two problems not yet adequately addressed in the literature. Encouraging results on simulated and lab data are analyzed. [paper]
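Problem (1) is easy to illustrate: a hidden common driver makes two unconnected neurons look coupled. The toy Bernoulli simulation below is illustrative only and unrelated to the manuscript's actual model.

    // Two neurons with NO direct connection share an unobserved driver;
    // their spike trains are nevertheless positively correlated.
    #include <cmath>
    #include <cstdio>
    #include <random>

    int main() {
        const int T = 100000;  // time bins
        std::mt19937 rng(0);
        std::uniform_real_distribution<double> u(0.0, 1.0);

        double n1 = 0, n2 = 0, n12 = 0;
        for (int t = 0; t < T; ++t) {
            bool hidden = u(rng) < 0.2;         // unobserved driver fires
            double rate = hidden ? 0.5 : 0.05;  // both targets spike more when driven
            bool s1 = u(rng) < rate;            // conditionally independent spikes:
            bool s2 = u(rng) < rate;            // no coupling between neurons 1 and 2
            n1 += s1; n2 += s2; n12 += (s1 && s2);
        }
        double p1 = n1 / T, p2 = n2 / T, p12 = n12 / T;
        // Marginal correlation is positive despite zero direct coupling.
        std::printf("corr = %.3f\n",
                    (p12 - p1 * p2) /
                    std::sqrt(p1 * (1 - p1) * p2 * (1 - p2)));
    }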
Personalized Emotion Classification with Latent Dirichlet Allocation
(2012) Work on sentiment analysis of text has focused on scalar sentiment values. We instead represent emotions multidimensionally as topic vectors and learn a personalized emotion lexicon. We take a semi-supervised approach with latent Dirichlet allocation (LDA) and a derivative, SubjLDA, initializing our model with a psychologically validated seed corpus. We demonstrate significant improvements over the baseline. [paper]
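For readers new to LDA, a minimal collapsed Gibbs sampler conveys the "documents as topic vectors" idea. The sketch below is vanilla LDA on toy data, not the semi-supervised SubjLDA variant or seed-corpus initialization used in the paper.

    // Collapsed Gibbs sampling for vanilla LDA on a toy corpus.
    // Recovers per-document topic proportions ("emotion vectors").
    #include <cstdio>
    #include <random>
    #include <vector>

    int main() {
        const int K = 2, V = 6, iters = 200;
        const double alpha = 0.5, beta = 0.1;
        // Toy corpus over word ids 0..5: docs 0-1 favor 0-2, docs 2-3 favor 3-5.
        std::vector<std::vector<int>> docs = {
            {0, 1, 2, 0, 1}, {1, 2, 0, 2}, {3, 4, 5, 4}, {5, 3, 4, 5, 3}};
        const int D = docs.size();

        std::vector<std::vector<int>> z(D);                            // token topics
        std::vector<std::vector<int>> ndk(D, std::vector<int>(K, 0));  // doc-topic counts
        std::vector<std::vector<int>> nkw(K, std::vector<int>(V, 0));  // topic-word counts
        std::vector<int> nk(K, 0);                                     // topic totals

        std::mt19937 rng(0);
        std::uniform_int_distribution<int> init(0, K - 1);
        for (int d = 0; d < D; ++d)
            for (int w : docs[d]) {                 // random initial assignment
                int k = init(rng);
                z[d].push_back(k);
                ++ndk[d][k]; ++nkw[k][w]; ++nk[k];
            }

        std::uniform_real_distribution<double> unif(0.0, 1.0);
        for (int it = 0; it < iters; ++it)
            for (int d = 0; d < D; ++d)
                for (size_t i = 0; i < docs[d].size(); ++i) {
                    int w = docs[d][i], k = z[d][i];
                    --ndk[d][k]; --nkw[k][w]; --nk[k];  // remove this token
                    double p[K], total = 0.0;           // cumulative posteriors
                    for (int j = 0; j < K; ++j)
                        total = p[j] = total + (ndk[d][j] + alpha) *
                                       (nkw[j][w] + beta) / (nk[j] + V * beta);
                    double r = unif(rng) * total;
                    for (k = 0; k < K - 1 && p[k] < r; ++k) {}
                    z[d][i] = k;
                    ++ndk[d][k]; ++nkw[k][w]; ++nk[k];  // add back with new topic
                }

        for (int d = 0; d < D; ++d)  // smoothed per-document topic proportions
            std::printf("doc %d: theta = (%.2f, %.2f)\n", d,
                        (ndk[d][0] + alpha) / (docs[d].size() + K * alpha),
                        (ndk[d][1] + alpha) / (docs[d].size() + K * alpha));
    }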

Talks

Some talks I have given, with video if available.

Statistical Machine Learning with Bayesian Networks
(Columbia Data Science Society, November 2013; hackNY Masters, New York University, September 2013.) I teach the fundamentals of Bayesian networks, MCMC inference, and probabilistic programming to undergraduate and master's students with little previous exposure to statistics or machine learning. [IPython notebook] [github]
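As a flavor of the material, here is a self-contained sketch, not taken from the talk's notebook, of posterior inference by likelihood weighting in the textbook rain-sprinkler-grass network:

    // Likelihood-weighted sampling in the classic rain/sprinkler/grass
    // Bayesian network, with the textbook CPT values. Illustrative only.
    #include <cstdio>
    #include <random>

    int main() {
        std::mt19937 rng(0);
        std::uniform_real_distribution<double> u(0.0, 1.0);

        // P(Rain), P(Sprinkler | Rain), P(GrassWet | Sprinkler, Rain)
        const double pRain = 0.2;
        auto pSprinkler = [](bool rain) { return rain ? 0.01 : 0.4; };
        auto pWet = [](bool s, bool r) {
            if (s && r) return 0.99;
            if (s)      return 0.90;
            if (r)      return 0.80;
            return 0.0;
        };

        // Estimate P(Rain = true | GrassWet = true): sample the non-evidence
        // variables from their priors, weight each sample by the likelihood
        // of the evidence given its parents.
        double num = 0.0, denom = 0.0;
        for (int i = 0; i < 100000; ++i) {
            bool rain = u(rng) < pRain;
            bool sprinkler = u(rng) < pSprinkler(rain);
            double w = pWet(sprinkler, rain);  // weight for GrassWet = true
            denom += w;
            if (rain) num += w;
        }
        std::printf("P(Rain | GrassWet) ~= %.3f\n", num / denom);  // approx 0.36
    }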
The Computational Complexity of Probabilistic Inference Using Bayesian Belief Networks
(Machine Learning Reading Group, Columbia University, June 2012.) I walk through the proof by G.F. Cooper that posterior inference in Bayesian networks is NP-hard. The proof reduces from 3SAT using an elegant gadget graph. [slides]
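In outline, Cooper's gadget works as follows (my paraphrase, not the slides):

    % Given a 3SAT instance over variables u_1,...,u_n with clauses
    % c_1,...,c_m, build a Bayesian network with a root node U_i per
    % variable, P(U_i = 1) = 1/2, a deterministic node C_j computing the
    % OR of clause j's three literals, and AND nodes D_j = D_{j-1} AND C_j
    % chaining the clauses together. Then
    \[
      P(D_m = 1) \;=\; \frac{\#\{\text{satisfying assignments}\}}{2^n},
    \]
    % so deciding whether P(D_m = 1) > 0 already decides satisfiability.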