Machine learning, neural networks, and deep learning

P. Gallinari - B. Wilbertz

Programme

  1. Machine learning

  2. Introduction to supervised learning (problem formulation, bias-variance tradeoff, evaluation metrics, cross-validation, bootstrapping, data pre-processing; cross-validation is sketched in the code section after the programme)

  3. Linear and non-linear regression models (least squares, partial least squares, lasso, PART, k-nearest neighbors, SVM; see the lasso sketch after the programme)

  4. Decision-tree-based models (CART, random forests, gradient boosting, in particular XGBoost, CatBoost, and LightGBM; the residual-fitting core of boosting is sketched after the programme)

  5. Model ensembling (stacking, blending, …; see the stacking sketch after the programme)

  6. Model interpretability (partial dependence plots, ICE plots, LIME, and SHAP; partial dependence is sketched after the programme)

  7. Neural networks

  8. The birth of neural networks: the Perceptron and Adaline models. Basic statistical predictive models: linear regression and logistic regression. This session gives an intuitive introduction to the problems of data analysis and to the general field of machine learning, and defines the formal framework used throughout the course. The main concepts of the neural-network domain are introduced via simple algorithms developed in the sixties, which makes it possible to present the notions of neural network, adaptive algorithm, and generalization with easily understandable concepts. The second part of the session is dedicated to the basic predictive models used in statistics for regression and classification (the perceptron learning rule is sketched after the programme).

  9. Optimization basics: gradient and stochastic gradient methods. Optimization methods are at the core of the learning algorithms for neural networks. We introduce the general ideas of gradient methods and then focus on stochastic optimization via stochastic gradient methods, together with several heuristics currently used for training large deep architectures (a mini-batch SGD sketch with momentum follows the programme).

  10. Multilayer Perceptrons. Generalization properties and complexity control.

  11. Deep learning. This is an introduction to the most famous nonlinear NN models developed in the 1990s and to the problems of generalization and complexity control, which are essential for deep learning architectures.

  12. Introduction to deep learning: auto-associators and Convolutional Neural Networks. An introduction to the concepts of deep learning via two examples of deep neural networks (a minimal auto-associator is sketched after the programme).

  13. Dealing with sequences: Recurrent Neural Networks. Recurrent neural networks are today a key technology in domains such as speech recognition, language processing, and translation, and more generally in sequence processing. We introduce the main concepts of these methods through several variants of this family of models (the forward pass of a plain recurrent cell is sketched after the programme).

  14. Unsupervised learning: generative models, Generative Adversarial Networks and Variational Auto-Encoders. This is a more advanced course on the problem of non-parametric density estimation via deep neural networks (the two terms of the VAE objective are sketched after the programme).

  15. Unsupervised learning: generative models (follow-up of the previous session). Applications in the domains of vision, natural language processing, and complex signal analysis. We illustrate the algorithms introduced so far with a series of application examples from different domains.
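
Code sketches

The short Python sketches below illustrate some of the techniques listed in the programme. Each one is a minimal, self-contained teaching example written under simplifying assumptions; all function names, data, and hyper-parameters are invented for illustration, and none of them is a reference implementation of the course material. The from-scratch examples assume only numpy; a few others assume scikit-learn or PyTorch.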
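
Session 2, cross-validation. A minimal from-scratch k-fold cross-validation loop, assuming numpy and a model given as a pair of hypothetical fit/predict functions: the indices are shuffled once, split into k folds, and each fold serves once as the held-out test set.

    import numpy as np

    def k_fold_cv(fit, predict, X, y, k=5, seed=0):
        # Shuffle once, then split the indices into k folds.
        idx = np.random.default_rng(seed).permutation(len(y))
        folds = np.array_split(idx, k)
        errors = []
        for i in range(k):
            test = folds[i]
            train = np.concatenate([folds[j] for j in range(k) if j != i])
            params = fit(X[train], y[train])              # fit on k-1 folds
            pred = predict(params, X[test])               # score on the held-out fold
            errors.append(np.mean((pred - y[test]) ** 2))
        return float(np.mean(errors))

    # Example with ordinary least squares as the model.
    ols_fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
    ols_predict = lambda w, X: X @ w

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
    print(k_fold_cv(ols_fit, ols_predict, X, y))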
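
Session 3, lasso. A small example assuming scikit-learn, showing the defining behavior of the lasso: the L1 penalty drives the coefficients of irrelevant features to exactly zero. The data are synthetic, with only two informative features.

    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 10))
    # Only features 0 and 1 carry signal; the other eight are noise.
    y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.normal(size=200)

    model = Lasso(alpha=0.1).fit(X, y)
    print(model.coef_)   # most of the ten coefficients come out exactly zero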
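
Session 4, gradient boosting. A sketch of the residual-fitting loop at the heart of gradient boosting for squared error, assuming scikit-learn's decision trees as base learners: each new shallow tree is fit to the residuals of the current ensemble, which are the negative gradient of the squared loss. Libraries such as XGBoost, CatBoost, and LightGBM refine exactly this loop.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def boost_fit(X, y, n_trees=100, lr=0.1, depth=2):
        pred = np.full(len(y), y.mean())          # start from a constant model
        trees = []
        for _ in range(n_trees):
            residual = y - pred                   # negative gradient of squared error
            tree = DecisionTreeRegressor(max_depth=depth).fit(X, residual)
            pred += lr * tree.predict(X)          # small step toward the residuals
            trees.append(tree)
        return y.mean(), trees

    def boost_predict(base, trees, X, lr=0.1):
        out = np.full(len(X), base)
        for tree in trees:
            out += lr * tree.predict(X)
        return out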
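
Session 5, stacking. A minimal stacking sketch, assuming scikit-learn: the out-of-fold predictions of the base models, obtained with cross_val_predict so that no model is scored on its own training fit, become the input features of a linear meta-model.

    import numpy as np
    from sklearn.model_selection import cross_val_predict
    from sklearn.linear_model import LinearRegression, Ridge
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))
    y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=300)

    base_models = [Ridge(alpha=1.0), DecisionTreeRegressor(max_depth=4)]
    # One column of out-of-fold predictions per base model.
    Z = np.column_stack([cross_val_predict(m, X, y, cv=5) for m in base_models])
    meta = LinearRegression().fit(Z, y)
    print(meta.coef_)    # the weight the meta-model assigns to each base model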
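
Session 6, partial dependence. Partial dependence computed by hand for any black-box model, assuming numpy: feature j is clamped to each grid value in turn and the model's predictions are averaged over the empirical distribution of the remaining features. The predict argument is any function mapping a data matrix to predictions, for instance the boost_predict sketch above wrapped in a lambda.

    import numpy as np

    def partial_dependence(predict, X, j, grid):
        pd = []
        for v in grid:
            Xv = X.copy()
            Xv[:, j] = v                    # clamp feature j to the grid value
            pd.append(predict(Xv).mean())   # average over the rest of the data
        return np.array(pd)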
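
Session 8, the perceptron. The Rosenblatt perceptron learning rule from the sixties, in numpy: the weight vector is updated only on misclassified examples, moving the separating hyperplane toward positive examples and away from negative ones. Labels are assumed to be in {-1, +1}.

    import numpy as np

    def perceptron(X, y, epochs=20, lr=1.0):
        w, b = np.zeros(X.shape[1]), 0.0
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (xi @ w + b) <= 0:   # misclassified (or on the boundary)
                    w += lr * yi * xi        # move the hyperplane toward xi
                    b += lr * yi
        return w, b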
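
Session 9, stochastic gradient descent. Mini-batch SGD on least-squares linear regression, with momentum as one example of the heuristics used for training large architectures; numpy only, and all step sizes are illustrative.

    import numpy as np

    def sgd(X, y, lr=0.01, momentum=0.9, epochs=20, batch=16, seed=0):
        rng = np.random.default_rng(seed)
        w = np.zeros(X.shape[1])
        v = np.zeros_like(w)                    # momentum buffer
        for _ in range(epochs):
            idx = rng.permutation(len(y))       # fresh shuffle each epoch
            for start in range(0, len(y), batch):
                b = idx[start:start + batch]
                # Gradient of the mean squared error on the mini-batch.
                grad = 2.0 * X[b].T @ (X[b] @ w - y[b]) / len(b)
                v = momentum * v - lr * grad
                w += v
        return w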
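
Session 12, auto-associator. A minimal auto-associator (autoencoder), assuming PyTorch: the network is trained to reproduce its input through a narrow bottleneck, so the 32-dimensional code must capture the structure of the 784-dimensional input. Random data stand in for a batch of images.

    import torch
    import torch.nn as nn

    encoder = nn.Sequential(nn.Linear(784, 32), nn.ReLU())
    decoder = nn.Sequential(nn.Linear(32, 784), nn.Sigmoid())
    model = nn.Sequential(encoder, decoder)

    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    x = torch.rand(64, 784)                          # stand-in for a batch of images
    for _ in range(100):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x), x)   # reconstruct the input itself
        loss.backward()
        opt.step()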
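
Session 13, recurrent networks. The forward pass of a plain (Elman) recurrent cell in numpy: one shared set of weights is applied at every time step, and the hidden state h carries information along the sequence. Dimensions are invented for illustration.

    import numpy as np

    def rnn_forward(xs, W_xh, W_hh, b):
        h = np.zeros(W_hh.shape[0])
        states = []
        for x in xs:                               # xs: sequence of input vectors
            h = np.tanh(W_xh @ x + W_hh @ h + b)   # same weights at every step
            states.append(h)
        return np.array(states)

    rng = np.random.default_rng(0)
    xs = rng.normal(size=(10, 4))                  # a length-10 sequence of 4-d inputs
    states = rnn_forward(xs, 0.1 * rng.normal(size=(8, 4)),
                         0.1 * rng.normal(size=(8, 8)), np.zeros(8))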
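
Session 14, variational auto-encoders. The two terms of the VAE objective, assuming a Gaussian encoder q(z|x) = N(mu, diag(exp(log_var))) and a standard normal prior: a reconstruction error plus the closed-form KL divergence from the encoder's distribution to the prior. Only the loss computation is sketched; the encoder and decoder networks are omitted.

    import numpy as np

    def vae_loss_terms(x, x_recon, mu, log_var):
        recon = np.sum((x - x_recon) ** 2)       # reconstruction error
        # KL( N(mu, diag(exp(log_var))) || N(0, I) ), in closed form.
        kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
        return recon, kl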

References

  • Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron: Deep Learning. Cambridge, MA: MIT Press, 2016.

  • Hastie, Trevor; Tibshirani, Robert; Friedman, Jerome: The Elements of Statistical Learning. 2nd edition. New York: Springer, 2009.

  • Kuhn, Max; Johnson, Kjell: Applied Predictive Modeling. New York: Springer, 2013.