Deep learning has become a prominent method for many applications, for instance computer vision and natural language processing. The mathematical understanding of these methods remains incomplete. A recent approach has been to view a neural network as a discretized version of an ordinary differential equation. I will start by providing an overview of this emerging field, then discuss new results regarding Recurrent Neural Networks, a common class of neural networks for time series. Joint work with Adeline Fermanian (Sorbonne University), Pierre Marion (Sorbonne University) and Jean-Philippe Vert (Google Research).
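To illustrate the ODE viewpoint mentioned above: a residual network update h_{k+1} = h_k + (1/L) f(h_k) can be read as one explicit Euler step of size 1/L for the ODE dh/dt = f(h(t)). The sketch below uses a toy vector field f (a tanh of a fixed linear map, shared across layers), which is an assumption for illustration, not the architecture from the talk.

```python
import numpy as np

# Toy vector field f(h) = tanh(W h); W is fixed and shared across
# layers purely for illustration (an assumption, not the talk's model).
rng = np.random.default_rng(0)
d = 4
W = rng.normal(scale=0.5, size=(d, d))

def f(h):
    return np.tanh(W @ h)

def resnet_forward(h0, num_layers):
    """Residual network: num_layers Euler steps of size 1/num_layers
    for the ODE dh/dt = f(h) on the time interval [0, 1]."""
    h = h0.copy()
    step = 1.0 / num_layers
    for _ in range(num_layers):
        h = h + step * f(h)
    return h

h0 = rng.normal(size=d)
coarse = resnet_forward(h0, 10)     # shallow network: coarse discretization
fine = resnet_forward(h0, 1000)    # deep network: fine discretization
# As depth grows, the discrete iterates approach the continuous ODE flow,
# so successive refinements get closer to each other.
print(np.linalg.norm(coarse - fine))
```

Increasing the depth corresponds to refining the time discretization, which is the correspondence the ODE perspective builds on.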