Partial Differential Equations Seminar
organized by the Modélisation et contrôle team
-
Ivan Dokmanić
A spring-block theory of feature learning in deep neural networks
3 December 2024 - 14:00 - IRMA conference room
A central question in deep learning is how deep neural networks (DNNs) learn features. DNN layers progressively collapse data into a regular low-dimensional geometry. This collective effect of nonlinearity, noise, learning rate, width, depth, and numerous other parameters has eluded first-principles theories built from microscopic neuronal dynamics. We discovered a noise-nonlinearity phase diagram that highlights where shallow or deep layers learn features more effectively. I will describe a macroscopic mechanical theory of feature learning that accurately reproduces this phase diagram, offering a clear intuition for why and how some DNNs are "lazy" and some are "active", and relating the distribution of feature learning over layers to test accuracy. Joint work with Cheng Shi and Liming Pan. -
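The abstract above invokes a spring-block picture of feature learning. As a purely illustrative toy (not the speakers' actual model; all parameter names and the analogy layers ~ blocks, per-layer feature learning ~ spring stretch, noise ~ friction are assumptions), one can sketch a quasi-static 1D spring-block chain: blocks connected by springs are pulled at one end, and friction determines whether the stretch stays concentrated near the pulled end or spreads evenly across the chain.

```python
import numpy as np

def relax_chain(n_blocks=8, total_stretch=1.0, friction=0.05, k=1.0, n_iter=10000):
    """Toy quasi-static spring-block chain (illustrative sketch only).

    Blocks sit at positions x[0..n-1]; the endpoints are pinned at 0 and
    total_stretch. An interior block moves to the midpoint of its neighbors
    only when the net spring force on it exceeds the friction threshold.
    Returns the final per-spring stretch (one value per gap).
    """
    x = np.zeros(n_blocks)
    x[-1] = total_stretch  # all stretch initially at the pulled end
    for _ in range(n_iter):
        moved = False
        for i in range(1, n_blocks - 1):
            # Net spring force from the two neighboring springs.
            f = k * (x[i + 1] - x[i]) - k * (x[i] - x[i - 1])
            if abs(f) > friction:
                x[i] += 0.5 * f / k  # relax to the neighbors' midpoint
                moved = True
        if not moved:  # frozen: every block is below its friction threshold
            break
    return np.diff(x)
```

With zero friction the stretch equilibrates uniformly over the chain; with finite friction the relaxation freezes early and the stretch stays localized near the pulled end, a crude cartoon of how a noise-like parameter can shift where "learning" concentrates across layers.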
Yvonne Alama Bronsard
TBA
16 January 2025 - 14:00 - IRMA conference room
TBA -
Elise Grosjean
TBA
4 February 2025 - 14:00 - IRMA conference room
-
Simon Schneider
TBA
25 February 2025 - 14:00 - To be confirmed