Preprints
GeNet: Deep Representations for Metagenomics
Mateo Rojas-Carulla, Ilya Tolstikhin, Guillermo Luque, Nicholas Youngblut, Ruth Ley, Bernhard Schölkopf
On the Latent Space of Wasserstein Auto-Encoders
Paul K. Rubenstein, Bernhard Schoelkopf, Ilya Tolstikhin.
From optimal transport to generative modeling: the VEGAN cookbook
Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Carl-Johann Simon-Gabriel, Bernhard Schoelkopf.
Probabilistic Active Learning of Functions in Structural Causal Models
Paul K. Rubenstein, Ilya Tolstikhin, Philipp Hennig, Bernhard Schoelkopf.
Minimax Lower Bounds for Realizable Transductive Classification
Ilya Tolstikhin, David Lopez-Paz. (Note: the first inequality in each of (4) and (5) is incorrect.)
Conference papers (chronologically ordered)
Differentially Private Database Release via Kernel Mean Embeddings
Matej Balog, Ilya Tolstikhin, Bernhard Schoelkopf.
International Conference on Machine Learning (ICML), 2018.
Wasserstein Auto-Encoders, [GitHub]
Ilya Tolstikhin, Olivier Bousquet, Sylvain Gelly, Bernhard Schoelkopf.
ICLR 2018 (full oral).
AdaGAN: Boosting Generative Models, [GitHub]
Ilya Tolstikhin, Sylvain Gelly, Olivier Bousquet, Carl-Johann Simon-Gabriel, Bernhard Schoelkopf.
NIPS 2017.
Consistent Kernel Mean Estimation for Functions of Random Variables
Adam Scibior, Carl-Johann Simon-Gabriel, Ilya Tolstikhin, Bernhard Schoelkopf.
NIPS 2016.
Minimax Estimation of Maximum Mean Discrepancy with Radial Kernels
Ilya Tolstikhin, Bharath Sriperumbudur, Bernhard Schoelkopf.
NIPS 2016.
Permutational Rademacher Complexity: a New Complexity Measure for Transductive Learning
Ilya Tolstikhin, Nikita Zhivotovskiy, and Gilles Blanchard.
Algorithmic Learning Theory (ALT), 2015.
Towards a Learning Theory of Cause-Effect Inference
David Lopez-Paz, Krikamol Muandet, Bernhard Schölkopf, Ilya Tolstikhin.
International Conference on Machine Learning (ICML), 2015.
Localized Complexities for Transductive Learning
Ilya Tolstikhin, Gilles Blanchard, and Marius Kloft.
Conference on Learning Theory (COLT), 2014. (Full oral presentation)
Note: there was a minor mistake in the assumptions of Corollary 15.
PAC-Bayes-Empirical-Bernstein Inequality
Ilya Tolstikhin, Yevgeny Seldin.
Advances in Neural Information Processing Systems (NIPS), 2013. (Spotlight presentation / acceptance ratio = 5%)
Localized excess risk bounds in combinatorial theory of overfitting, (in Russian)
Ilya Tolstikhin.
9th International Conference on Intelligent Information Processing (IIP), 2012.
Ilya Tolstikhin.
8th International Conference on Intelligent Information Processing (IIP), 2010.
Exact generalization error bound for one particular model of classifiers, (in Russian)
Ilya Tolstikhin.
17th International student, postgraduate and young scientist conference "Lomonosov", 2010.
Journal papers (chronologically ordered)
Minimax Estimation of Kernel Mean Embeddings
Ilya Tolstikhin, Bharath Sriperumbudur, Krikamol Muandet.
Journal of Machine Learning Research (JMLR), to appear 2017.
On two approaches to concentration for sampling without replacement, (in Russian)
Ilya Tolstikhin.
Combinatorial bounds on probability of overfitting based on clustering and coverage of classifiers, (in Russian)
Alexander Frey, Ilya Tolstikhin.
Machine Learning and Data Analysis (JMLDA), 2013.
Others
Zivkovic I., Tolstikhin I., Schölkopf B., Scheffler K.
33rd Annual Scientific Meeting of the European Society for Magnetic Resonance in Medicine and Biology (ESMRMB), 2016.
Неравенства концентрации вероятностной меры в трансдуктивном обучении и PAC-Байесовском анализе
(Concentration inequalities applied to transductive learning and PAC-Bayesian analysis).
[Text],
[Synopsis]
(in Russian, translation to English not in progress...)
PhD dissertation, Computing Centre of the Russian Academy of Sciences, 2014.
Abstract: The dissertation examines the role of concentration inequalities in efforts to improve performance bounds for supervised learning algorithms. The motivation for obtaining tight generalization error and excess risk bounds in statistical learning theory comes from the belief that a deep understanding of the learning process may lead us to new useful ideas and more accurate algorithms. The first part of the work studies concentration inequalities for one particular setting of dependent random variables: sampling without replacement from a given finite population. We provide two novel Bernstein-style concentration inequalities for suprema of empirical processes under sampling without replacement. While these new inequalities may potentially have broad applications, we exemplify their significance in the second part of the work by studying the transductive setting of statistical learning theory, for which we provide an excess risk bound based on the localized complexity of the hypothesis class that holds under very mild assumptions. Finally, the third part of the work studies PAC-Bayesian analysis, a general tool for data-dependent analysis in machine learning. We derive a new PAC-Bayes-Empirical-Bernstein inequality, a powerful Bernstein-style concentration inequality that depends only on empirical quantities, and show that in a number of interesting situations this new bound can be significantly tighter than the state-of-the-art results.
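For orientation only (this is not a result of the dissertation): the "Bernstein-style" bounds mentioned above are variance-sensitive analogues of the classical i.i.d. Bernstein inequality, whose standard textbook form is sketched below.

% Classical Bernstein inequality, shown purely as a reference point for the
% Bernstein-style bounds mentioned in the abstract.
% Assumptions: X_1, ..., X_n are i.i.d., |X_i - E[X_i]| <= b almost surely, Var(X_i) = sigma^2.
\[
  \mathbb{P}\!\left( \frac{1}{n}\sum_{i=1}^{n} \bigl( X_i - \mathbb{E}[X_i] \bigr) \ge t \right)
  \;\le\;
  \exp\!\left( - \frac{n\, t^{2}}{2\sigma^{2} + \tfrac{2}{3}\, b\, t} \right),
  \qquad t > 0.
\]

The dissertation's results keep this variance-sensitive shape while relaxing the assumptions: the first part handles the dependence induced by sampling without replacement, and the PAC-Bayes-Empirical-Bernstein inequality replaces the unknown variance by an empirical estimate, so that the bound depends only on observed quantities.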
Talks
Wasserstein Auto-Encoders: from optimal transport to generative modeling and beyond, [Slides], [Talk]
Implicit generative models: dual vs. primal approaches, [Slides]
Machine Learning Summer School, 2017, Tübingen.
Workshop on Stochastic Processes and Probabilistic Models in Machine Learning, 2017, Moscow.
Statistical Causal Learning
Bocconi Summer School on Advanced Statistics and Probability. July 10-22, 2017, Como, Italy.
Together with David Lopez-Paz and Bernhard Schoelkopf.
Consistent Kernel Mean Estimation for Functions of Random Variables
Dagstuhl workshop, "New Directions for Learning with Kernels and Gaussian Processes", Schloss Dagstuhl, Germany, 2016.
On some properties of MMD and its relation to other distances
Dagstuhl workshop, "Foundations of Unsupervised Learning", Schloss Dagstuhl, Germany, 2016.
Minimax Estimation of Kernel Mean Embeddings, [Poster]
Spring School "Structural Inference", Brodten, Germany, 2016.
Global and Local Complexity Measures for Transductive Learning, [Talk], [Poster], [Slides]
Yandex School of Data Analysis Conference, “Machine Learning: Prospects and Applications”, Berlin, Germany, 2015.
Sampling without replacement: reduction to i.i.d. vs. direct approach, [Slides]
Dagstuhl workshop, “Machine Learning with Interdependent and Non-Identically Distributed Data”, Schloss Dagstuhl, Germany, 2015.
Localized Complexities for Transductive Learning, COLT 2014, [Slides], [Poster], [Talk (videolectures.net)].
Ilya Tolstikhin, Gilles Blanchard, Marius Kloft.
New Concentration Inequalities for Sampling without Replacement and an Application to Transductive Learning, [Slides]
MPI for Intelligent Systems, Tübingen, 2014.
PAC-Bayes-Empirical-Bernstein Inequality, NIPS 2013, [Spotlight], [Poster], [Talk (videolectures.net)].
Ilya Tolstikhin, Yevgeny Seldin.
PAC-Bayesian Inequalities for Martingales, GRAAL, Laval University, 2013, [Slides].
PAC-Bayes-Empirical-Bernstein Inequality, GRAAL, Laval University, 2013, [Slides].
Talks in Russian
Dissertation defense, Computing Centre of the Russian Academy of Sciences, October 16th, 2014, [Slides], [Video in Russian]
Localized Complexities and Fast Rates in Statistical Learning Theory, Joint MIPT/IUM seminar on Stochastic Analysis, 2014, [Slides], [Video in Russian]
PAC-Bayesian Inequalities, Joint MIPT/IUM seminar on Stochastic Analysis, 2013, [Slides]
Concentration Inequalities for Sampling without Replacement, Joint MIPT/IUM seminar on Stochastic Analysis, 2013, [Slides]
Teaching
Instructor for the course “Machine Learning Theory” (together with Ruth Urner), 2016 - 2017
Teaching Assistant for the course “Machine Learning”, 2013
Tutorials for the course “Machine Learning”, 2012 - 2013
Tutorials for the course “Machine Learning”, 2011 - 2012