Nonlinear Science Webinar
October 20, 2021 - 3:00pm to 4:00pm
Dept. of Chemical and Biological Engineering, Univ. of Wisconsin-Madison
1. Speaker Alec Linot: Modeling chaotic spatiotemporal dynamics with a minimal representation using Neural ODEs
Solutions to dissipative partial differential equations that exhibit chaotic dynamics often evolve to attractors that lie on finite-dimensional manifolds. We describe a data-driven reduced-order modeling (ROM) method that finds coordinates on this manifold and learns an ordinary differential equation (ODE) in those coordinates. The manifold coordinates are found by reducing the system dimension via an undercomplete autoencoder (a neural network that reduces, then expands, dimension), and an ODE is learned in this coordinate system with a Neural ODE. Learning an ODE, instead of a discrete-time map, allows us to evolve trajectories arbitrarily far forward and to train on data that are unevenly and/or widely spaced in time. We test on the Kuramoto-Sivashinsky equation for domain sizes that exhibit spatiotemporal chaos, and find the ROM gives accurate short- and long-time statistics with training data separated by up to 0.7 Lyapunov times.
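The encode-evolve-decode pipeline the abstract describes can be sketched in a few lines. This is a minimal illustration, not the speaker's implementation: the network sizes are hypothetical, the weights are untrained random values, and a fixed-step RK4 integrator stands in for whatever ODE solver the actual work uses.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: full state (e.g. a KSE grid) and manifold coordinates.
N_FULL, N_LATENT, N_HIDDEN = 64, 8, 32

def mlp_params(n_in, n_hid, n_out):
    """Random weights for a one-hidden-layer MLP (untrained; illustration only)."""
    return (rng.normal(0, 0.1, (n_hid, n_in)), np.zeros(n_hid),
            rng.normal(0, 0.1, (n_out, n_hid)), np.zeros(n_out))

def mlp(params, x):
    W1, b1, W2, b2 = params
    return W2 @ np.tanh(W1 @ x + b1) + b2

encoder = mlp_params(N_FULL, N_HIDDEN, N_LATENT)    # reduces dimension
decoder = mlp_params(N_LATENT, N_HIDDEN, N_FULL)    # expands it back
vfield  = mlp_params(N_LATENT, N_HIDDEN, N_LATENT)  # learned ODE dh/dt = f(h)

def rk4_step(f, h, dt):
    """One classical 4th-order Runge-Kutta step of the latent ODE."""
    k1 = f(h); k2 = f(h + 0.5*dt*k1); k3 = f(h + 0.5*dt*k2); k4 = f(h + dt*k3)
    return h + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

u = rng.normal(size=N_FULL)   # a snapshot of the full state
h = mlp(encoder, u)           # map to manifold coordinates
for _ in range(10):           # evolve arbitrarily far forward in the latent ODE
    h = rk4_step(lambda z: mlp(vfield, z), h, dt=0.1)
u_pred = mlp(decoder, h)      # decode back to the full state space
```

Because the latent dynamics are a continuous-time ODE rather than a fixed-step map, the same model can be evaluated at arbitrary times, which is what allows training on unevenly spaced snapshots.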
2. Speaker Kevin Zeng: Deep Reinforcement Learning Using Data-Driven Reduced-Order Models Discovers and Stabilizes Low Dissipation Equilibria
Deep reinforcement learning (RL), a data-driven method capable of discovering complex control strategies for high-dimensional systems, requires substantial interaction with the target system, making it costly when that system is computationally or experimentally expensive (e.g. flow control). We mitigate this challenge by combining dimension reduction via an autoencoder with a neural ODE framework to learn a low-dimensional dynamical model, which we substitute for the true system during RL training to efficiently estimate the control policy. We apply our method to data from the Kuramoto-Sivashinsky equation. With the goal of minimizing dissipation, we extract control policies from the model using RL and show that the model-based strategies perform well on the full dynamical system. Notably, the RL agent discovers and stabilizes a forced equilibrium solution, despite never having been given explicit information about this state's existence.