Preprints
Minimax Estimation of Kernel Mean Embeddings
Ilya Tolstikhin, Bharath Sriperumbudur, Krikamol Muandet.
Minimax Lower Bounds for Realizable Transductive Classification
Ilya Tolstikhin, David Lopez-Paz.
Conference papers (chronologically ordered)
Consistent Kernel Mean Estimation for Functions of Random Variables
Adam Scibior, Carl-Johann Simon-Gabriel, Ilya Tolstikhin, Bernhard Schoelkopf.
Accepted, NIPS 2016.
Minimax Estimation of Maximum Mean Discrepancy with Radial Kernels
Ilya Tolstikhin, Bharath Sriperumbudur, Bernhard Schoelkopf.
Accepted, NIPS 2016.
Permutational Rademacher Complexity: a New Complexity Measure for Transductive Learning
Ilya Tolstikhin, Nikita Zhivotovskiy, and Gilles Blanchard.
Algorithmic Learning Theory (ALT), 2015.
Towards a Learning Theory of Cause-Effect Inference
David Lopez-Paz, Krikamol Muandet, Bernhard Schölkopf, Ilya Tolstikhin.
International Conference on Machine Learning (ICML), 2015.
Localized Complexities for Transductive Learning
Ilya Tolstikhin, Gilles Blanchard, and Marius Kloft.
Conference on Learning Theory (COLT), 2014. (Full oral presentation)
Note: There was a minor mistake in the assumptions of Corollary 15.
PAC-Bayes-Empirical-Bernstein Inequality
Ilya Tolstikhin, Yevgeny Seldin.
Advances in Neural Information Processing Systems (NIPS), 2013. (Spotlight presentation / acceptance ratio = 5%)
Localized excess risk bounds in combinatorial theory of overfitting, (in Russian)
Ilya Tolstikhin.
9th International Conference on Intelligent Information Processing (IIP), 2012.
Ilya Tolstikhin.
8th International Conference on Intelligent Information Processing (IIP), 2010.
Exact generalization error bound for one particular model of classifiers, (in Russian)
Ilya Tolstikhin.
17th International student, postgraduate and young scientist conference "Lomonosov", 2010.
Journal papers (chronologically ordered)
Combinatorial bounds on probability of overfitting based on clustering and coverage of classifiers, (in Russian)
Alexander Frey, Ilya Tolstikhin.
Machine Learning and Data Analysis (JMLDA), 2013.
PhD Thesis
Неравенства концентрации вероятностной меры в трансдуктивном обучении и PAC-Байесовском анализе
(Concentration inequalities applied to transductive learning and PAC-Bayesian analysis).
[Text],
[Synopsis]
(in Russian; an English translation is not in progress)
Computing Centre of Russian Academy of Sciences, 2014.
Abstract: The dissertation examines the role of concentration inequalities in efforts to improve performance bounds for supervised learning algorithms. The motivation for obtaining tight generalization error and excess risk bounds in statistical learning theory comes from the belief that a deep understanding of the learning process may lead to new useful ideas and more accurate algorithms. The first part of the work studies concentration inequalities for one particular setting of dependent random variables: variables sampled without replacement from a given finite population. We provide two novel Bernstein-style concentration inequalities for suprema of empirical processes under sampling without replacement. While these new inequalities may potentially have broad applications, we exemplify their significance in the second part of the work by studying the transductive setting of statistical learning theory, for which we provide an excess risk bound based on the localized complexity of the hypothesis class that holds under very mild assumptions. Finally, the third part of the work studies PAC-Bayesian analysis, a general tool for data-dependent analysis in machine learning. We derive a new PAC-Bayes-Empirical-Bernstein inequality, a powerful Bernstein-style concentration inequality that depends only on empirical quantities, and show that in a number of interesting situations this new bound can be significantly tighter than state-of-the-art results.
Minimax Estimation of Kernel Mean Embeddings [Poster]
Spring School "Structural Inference", Brodten, Germany, 2016.
Global and Local Complexity Measures for Transductive Learning [Talk], [Poster], [Slides]
Yandex School of Data Analysis Conference, “Machine Learning: Prospects and Applications”, Berlin, Germany, 2015.
Sampling without replacement: reduction to i.i.d. vs. direct approach [Slides]
Dagstuhl workshop, “Machine Learning with Interdependent and Non-Identically Distributed Data”, Schloss Dagstuhl, Germany, 2015.
Localized Complexities for Transductive Learning, COLT 2014, [Slides], [Poster], [Talk (videolectures.net)].
Ilya Tolstikhin, Gilles Blanchard, Marius Kloft.
New Concentration Inequalities for Sampling without Replacement and an Application to Transductive Learning [Slides]
MPI for Intelligent Systems, Tübingen, 2014.
PAC-Bayes-Empirical-Bernstein Inequality, NIPS 2013, [Spotlight], [Poster], [Talk (videolectures.net)].
Ilya Tolstikhin, Yevgeny Seldin.
PAC-Bayesian Inequalities for Martingales, GRAAL, Laval University, 2013, [Slides].
PAC-Bayes-Empirical-Bernstein Inequality, GRAAL, Laval University, 2013, [Slides].
Talks in Russian
Dissertation defense, Computing Centre of Russian Academy of Sciences, October 16th, 2014, [Slides], [Video in Russian]
Localized Complexities and Fast Rates in Statistical Learning Theory, Joint MIPT/IUM seminar on Stochastic Analysis, 2014, [Slides], [Video in Russian]
PAC-Bayesian Inequalities, Joint MIPT/IUM seminar on Stochastic Analysis, 2013, [Slides]
Concentration Inequalities for Sampling without Replacement, Joint MIPT/IUM seminar on Stochastic Analysis, 2013, [Slides]
Teaching Fellow (Seminar on Machine Learning), Moscow Institute of Physics and Technology, Department of Innovation and High Technology | 2011 - 2012
Teaching Fellow (Seminar on Machine Learning), Lomonosov Moscow State University, Department of Computational Mathematics and Cybernetics (seminar webpage available in Russian) | 2012 - 2013
Teaching Assistant (Machine Learning) | 2013