Fundamental Machine Learning Theorems
The goal of the course is to learn how to rigorously formulate machine learning problems and to demonstrate the role of the mathematical approach. The skills acquired in this course form the basis for writing Bachelor's, Master's, and Ph.D. theses in machine learning and applied mathematics. The course includes discussions of axiomatic systems and of theorems and their proofs that are relevant to machine learning, along with their influence on practical applications.
- Gauss-Markov theorem
- Singular Value Decomposition theorem
- Principal Component Analysis and Karhunen-Loève Decomposition
- Karush-Kuhn-Tucker theorem and optimization
- Kolmogorov-Arnold representation theorem, Cybenko's universal approximation theorem, theorems on deep neural networks
- No free lunch theorem in machine learning (Wolpert)
- Metric spaces: RKHS (Aronszajn), Mercer's theorem
- Schema theorem (Holland)
- Convolution theorem, Parseval’s theorem
- PAC-learning, compression implies learning
- Representer theorem
- Theorems about boosting, bootstrap
- Variational approximation
- Takens’ embedding theorem
- Green, Stokes, and de Rham theorems in geometric learning
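Although the course is theorem-oriented, the practical impact of the listed results is easy to see numerically. As an informal sketch (using NumPy; not part of the course materials), the following illustrates the Singular Value Decomposition theorem and its connection to Principal Component Analysis:

```python
import numpy as np

# SVD theorem: any real matrix A factors as A = U @ diag(s) @ Vt,
# where U and V have orthonormal columns and the singular values s
# are non-negative and returned in descending order.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3))
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# The factorization reconstructs A exactly (up to rounding error).
A_rec = U @ np.diag(s) @ Vt
assert np.allclose(A, A_rec)

# PCA as an application: the principal directions of centered data X
# are the right singular vectors of X, and the squared singular values
# (divided by n - 1) give the variance explained by each component.
X = rng.standard_normal((100, 3)) @ np.diag([3.0, 1.0, 0.1])
Xc = X - X.mean(axis=0)
_, sX, VtX = np.linalg.svd(Xc, full_matrices=False)
explained_variance = sX**2 / (X.shape[0] - 1)
```

Here `explained_variance` is sorted in descending order, so truncating to the top components yields the best low-rank approximation of the centered data, which is the usual route from the SVD theorem to PCA in practice.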
Students present the formulations and proofs of the theorems, methods of their application, and their significance.
Assessment is based on the quality of each student's presentation and on the quality of the questions they pose to the speaker.
Prerequisites: third-year MIPT courses in algebra and analysis.