I am a postdoctoral researcher working with Bernhard Schölkopf. My research topics include kernel methods, nonparametric hypothesis testing, approximate Bayesian inference, and generative models. I am particularly interested in the problem of comparing two probability distributions on the basis of their samples (i.e., two-sample testing). A computationally efficient distance measure between two distributions has many practical applications beyond distribution comparison itself. For instance, such a distance can be used to construct a dependence measure between two random vectors, which in turn enables the development of algorithms for clustering, feature selection, and dimensionality reduction, among others. See Jitkrittum et al., 2016 (NeurIPS), Jitkrittum et al., 2017 (ICML), and Jitkrittum et al., 2017 (NeurIPS, best paper).
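As a toy illustration of such a distance, the Maximum Mean Discrepancy (MMD) is one well-known kernel-based distance between distributions that can be estimated directly from samples. The sketch below uses a Gaussian kernel; the bandwidth, sample sizes, and toy Gaussian data are illustrative choices of mine, not taken from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased empirical estimate of the squared MMD:
    # || mean embedding of X - mean embedding of Y ||^2 in the kernel's RKHS.
    return (gauss(X, X, sigma).mean() + gauss(Y, Y, sigma).mean()
            - 2 * gauss(X, Y, sigma).mean())

# Samples from the same distribution give a small distance;
# a mean shift gives a clearly larger one.
same_dist = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
diff_dist = mmd2(rng.normal(0, 1, (200, 2)), rng.normal(1, 1, (200, 2)))
```

In a two-sample test, the estimated distance would then be compared against a null threshold (e.g., obtained by permuting the pooled sample) to decide whether the two distributions differ.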
More recently, I have been working on a new method for comparing the relative goodness of fit of two models. Given two models (two generative adversarial networks, for instance) and a reference sample, the goal is to determine which of the two models fits the sample better. This problem can be formulated as a hypothesis test. See the paper Jitkrittum et al., 2018 (NeurIPS). Several further extensions of this setting exist. Preliminary work on comparing two latent variable models is available here. When there are more than two candidate models, there is no unique way to address the problem. We studied two approaches in our recent paper (to appear at NeurIPS 2019): one based on post-selection inference, and one based on multiple testing correction.
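The flavor of relative model comparison can be conveyed with a simplified stand-in for the actual test statistic in the paper: estimate a kernel distance (here, MMD with a Gaussian kernel) from each candidate model's samples to the reference sample, and prefer the model with the smaller discrepancy. The toy "models" below are just shifted Gaussians of my own choosing.

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss(X, Y, sigma=1.0):
    # Pairwise Gaussian kernel matrix.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Biased empirical estimate of the squared Maximum Mean Discrepancy.
    return (gauss(X, X, sigma).mean() + gauss(Y, Y, sigma).mean()
            - 2 * gauss(X, Y, sigma).mean())

# Toy stand-ins: a reference sample, and samples from two candidate "models".
ref     = rng.normal(0.0, 1.0, size=(300, 2))
model_p = rng.normal(0.2, 1.0, size=(300, 2))   # slightly misspecified
model_q = rng.normal(2.0, 1.0, size=(300, 2))   # badly misspecified

d_p = mmd2(model_p, ref)
d_q = mmd2(model_q, ref)
better = "P" if d_p < d_q else "Q"              # prefer the smaller discrepancy
```

A proper test additionally accounts for the estimation noise in the difference d_p - d_q before declaring one model better; the sketch above only compares point estimates.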
In the direction of generative modelling, a problem that has not received much attention is the task of generating images that are similar to an input set of images, a task we call content-addressable image generation. A key challenge here is that the input is a set of arbitrary size, and that there is no order over the input images. In our ICML 2019 paper, we proposed a procedure (based on kernel mean matching) that allows a pre-trained generative model to perform content-addressable generation without retraining.
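The core idea of kernel mean matching can be sketched in a toy setting: a set of points (standing in for generator outputs; the actual method optimizes in the generator's latent space) is moved by gradient descent so that its empirical kernel mean embedding matches that of the input set, i.e., the squared MMD between the two sets is minimized. All constants below are illustrative choices, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
SIGMA = 1.0  # Gaussian kernel bandwidth (illustrative choice)

def gauss(X, Y):
    # Pairwise Gaussian kernel matrix k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * SIGMA**2))

def mmd2(X, Y):
    # Biased (V-statistic) estimate of the squared MMD.
    return gauss(X, X).mean() + gauss(Y, Y).mean() - 2 * gauss(X, Y).mean()

def grad_mmd2(X, Y):
    # Gradient of the biased MMD^2 estimate with respect to each row of X.
    n, m = len(X), len(Y)
    Kxx, Kxy = gauss(X, X), gauss(X, Y)
    dxx = X[:, None, :] - X[None, :, :]   # (n, n, d) pairwise differences
    dxy = X[:, None, :] - Y[None, :, :]   # (n, m, d)
    g_xx = -(2.0 / (n * n * SIGMA**2)) * (Kxx[:, :, None] * dxx).sum(axis=1)
    g_xy = (2.0 / (n * m * SIGMA**2)) * (Kxy[:, :, None] * dxy).sum(axis=1)
    return g_xx + g_xy

Y = rng.normal(2.0, 0.5, size=(60, 2))   # the "input set" to match
X = rng.normal(0.0, 0.5, size=(30, 2))   # stand-in for generator outputs
before = mmd2(X, Y)
for _ in range(300):                     # plain gradient descent on MMD^2
    X -= 0.5 * grad_mmd2(X, Y)
after = mmd2(X, Y)
```

In the paper's setting the descent is performed over latent codes of a pre-trained generator rather than over the points themselves, which is what allows content-addressable generation without retraining.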
Proceedings of the 36th International Conference on Machine Learning (ICML), Proceedings of Machine Learning Research, vol. 97, pages 3140-3151 (Editors: Kamalika Chaudhuri and Ruslan Salakhutdinov), PMLR, June 2019. (*equal contribution)
Jitkrittum, W., Kanagawa, H., Sangkloy, P., Hays, J., Schölkopf, B., Gretton, A. Informative Features for Model Comparison. Advances in Neural Information Processing Systems 31, pages 816-827 (Editors: S. Bengio, H. Wallach, H. Larochelle, K. Grauman, N. Cesa-Bianchi, R. Garnett), Curran Associates, Inc., 32nd Annual Conference on Neural Information Processing Systems, December 2018.