title

Deep Learning to Discover Coordinates for Dynamics: Autoencoders & Physics Informed Machine Learning

description

Joint work with Nathan Kutz: https://www.youtube.com/channel/UCoUOaSVYkTV6W4uLvxvgiFA
Discovering physical laws and governing dynamical systems is often enabled by first learning a new coordinate system in which the dynamics become simple. This was true for the heliocentric Copernican system, which enabled Kepler's laws and Newton's F=ma; for the Fourier transform, which diagonalizes the heat equation; and for many other key advances. In this video, we discuss how deep learning is being used to discover effective coordinate systems in which simple dynamical system models may be discovered.
Citable link for this video at: https://doi.org/10.52843/cassyni.4zpjhl
@eigensteve on Twitter
eigensteve.com
databookuw.com
Some useful papers:
https://www.pnas.org/content/116/45/22445 [SINDy + Autoencoders]
https://www.nature.com/articles/s41467-018-07210-0 [Koopman + Autoencoders]
https://arxiv.org/abs/2102.12086 [Koopman Review Paper]
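Since the description compresses a whole pipeline into one sentence, here is a minimal, self-contained NumPy sketch of the idea: a linear autoencoder (PCA via the SVD) stands in for the coordinate discovery, and sequentially thresholded least squares, the sparse regression at the heart of SINDy, recovers a simple latent model z_dot = f(z). The toy oscillator data, the 50-dimensional random embedding, the polynomial library, and the 0.1 threshold are all assumptions of this sketch, not details from the papers above, which use deep nonlinear encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: a 2D oscillator z = (cos t, sin t), embedded linearly into
# 50 measurement dimensions via a random orthonormal map (an assumption
# of this sketch -- real data would come from video or simulation).
dt, n = 0.01, 2000
t = np.arange(n) * dt
z_true = np.stack([np.cos(t), np.sin(t)], axis=1)           # (n, 2) latent trajectory
basis, _ = np.linalg.qr(rng.standard_normal((50, 2)))
X = z_true @ basis.T                                        # (n, 50) snapshots

# "Linear autoencoder": the rank-2 SVD gives the encoder/decoder pair,
# i.e. PCA coordinates for the latent space.
Xm = X.mean(axis=0)
U, s, Vt = np.linalg.svd(X - Xm, full_matrices=False)
Z = (X - Xm) @ Vt[:2].T                                     # encode
X_rec = Z @ Vt[:2] + Xm                                     # decode

# SINDy-style step: regress dZ/dt onto a small library of candidate
# terms and prune coefficients by sequentially thresholded least squares.
Z_dot = np.gradient(Z, dt, axis=0)
library = np.column_stack([np.ones(n), Z,                   # constant, linear
                           Z[:, 0]**2, Z[:, 1]**2,          # quadratic terms
                           Z[:, 0] * Z[:, 1]])

def stlsq(theta, dz, threshold=0.1, iters=10):
    """Sequentially thresholded least squares (the SINDy regression)."""
    xi = np.linalg.lstsq(theta, dz, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(dz.shape[1]):                        # refit surviving terms
            active = ~small[:, k]
            if active.any():
                xi[active, k] = np.linalg.lstsq(theta[:, active], dz[:, k],
                                                rcond=None)[0]
    return xi

xi = stlsq(library, Z_dot)
print("relative reconstruction error:",
      np.linalg.norm(X - X_rec) / np.linalg.norm(X))
print("nonzero terms per latent equation:", (xi != 0).sum(axis=0))
```

On this linear toy problem the SVD step recovers the two latent coordinates essentially exactly, and the thresholding should prune the constant and quadratic terms, leaving one linear term per latent equation, i.e. a rotation, which is the expected parsimonious model for an oscillator.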

detail

{'title': 'Deep Learning to Discover Coordinates for Dynamics: Autoencoders & Physics Informed Machine Learning', 'heatmap': [{'end': 761.44, 'start': 709.684, 'weight': 0.781}], 'summary': 'Delves into utilizing deep learning for coordinate systems and dynamics, featuring discussions on challenges in modeling dynamical systems, advances in autoencoder networks, and applying neural networks in nonlinear dynamics with potential applications in various fields.', 'chapters': [{'end': 121.979, 'segs': [{'end': 39.81, 'src': 'embed', 'start': 9.008, 'weight': 0, 'content': [{'end': 9.508, 'text': 'Welcome back.', 'start': 9.008, 'duration': 0.5}, {'end': 22.891, 'text': "So today I'm really excited to tell you about a new kind of emerging field of machine learning where we're essentially learning coordinate systems and dynamics at the same time for complex systems.", 'start': 10.188, 'duration': 12.703}, {'end': 34.674, 'text': "So especially I'm gonna talk about deep learning to discover coordinate systems where you can get simple or parsimonious representations of the dynamics in those coordinates.", 'start': 23.871, 'duration': 10.803}, {'end': 39.81, 'text': "So in particular, I'm going to mostly be talking about autoencoder networks.", 'start': 36.105, 'duration': 3.705}], 'summary': 'Emerging field of machine learning: learning coordinate systems and dynamics simultaneously for complex systems, focusing on autoencoder networks.', 'duration': 30.802, 'max_score': 9.008, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp09008.jpg'}, {'end': 106.573, 'src': 'embed', 'start': 55.83, 'weight': 1, 'content': [{'end': 70.351, 'text': "we're going to essentially discover a coordinate embedding Phi and a decoder Psi so that you can map into a latent space Z that has the minimal essential information that you need to describe your system.", 'start': 55.83, 'duration': 14.521}, {'end': 72.373, 'text': 'And 
specifically,', 'start': 71.372, 'duration': 1.001}, {'end': 81.839, 'text': "we're going to try to learn these autoencoder networks so that we can represent the dynamics in that latent space very efficiently or simply.", 'start': 72.373, 'duration': 9.466}, {'end': 90.885, 'text': "So in this latent space z, again, that's like the minimal description of your system through this autoencoder network.", 'start': 83, 'duration': 7.885}, {'end': 97.729, 'text': "We're looking for a dynamical system z dot equals f of z, where f is as simple as possible.", 'start': 91.205, 'duration': 6.524}, {'end': 101.011, 'text': 'And so this is really an important dual problem.', 'start': 98.67, 'duration': 2.341}, {'end': 106.573, 'text': 'So on the one hand, you have the coordinate system discovery, the phi and psi coordinates.', 'start': 101.091, 'duration': 5.482}], 'summary': 'Discover coordinate embedding and decoder to efficiently represent system dynamics in a minimal latent space.', 'duration': 50.743, 'max_score': 55.83, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp055830.jpg'}], 'start': 9.008, 'title': 'Machine learning for coordinate systems and dynamics', 'summary': 'Explores using deep learning to simultaneously learn coordinate systems and dynamics, with emphasis on utilizing autoencoder networks to efficiently represent system dynamics in a minimal latent space.', 'chapters': [{'end': 121.979, 'start': 9.008, 'title': 'Machine learning for coordinate systems and dynamics', 'summary': 'Discusses using deep learning to learn coordinate systems and dynamics simultaneously, particularly focusing on autoencoder networks to efficiently represent system dynamics in a minimal latent space.', 'duration': 112.971, 'highlights': ['Using deep learning to discover coordinate systems and dynamics simultaneously for complex systems, particularly focusing on autoencoder networks to efficiently represent system dynamics in a 
minimal latent space.', 'Learning coordinate embedding Phi and a decoder Psi to map a high-dimensional state X into a minimal latent space Z that describes the system efficiently.', 'Seeking to find a dynamical system z dot equals f of z that is as simple as possible to accurately describe how the system evolves in time through the discovered coordinate system.']}], 'duration': 112.971, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp09008.jpg', 'highlights': ['Using deep learning to discover coordinate systems and dynamics simultaneously for complex systems, particularly focusing on autoencoder networks to efficiently represent system dynamics in a minimal latent space.', 'Learning coordinate embedding Phi and a decoder Psi to map a high-dimensional state X into a minimal latent space Z that describes the system efficiently.', 'Seeking to find a dynamical system z dot equals f of z that is as simple as possible to accurately describe how the system evolves in time through the discovered coordinate system.']}, {'end': 339.62, 'segs': [{'end': 213.963, 'src': 'embed', 'start': 144.766, 'weight': 0, 'content': [{'end': 151.887, 'text': "And what you would like to be able to do is, from this movie or this megapixel image that's evolving in time,", 'start': 144.766, 'duration': 7.121}, {'end': 160.37, 'text': 'you would like to be able to discover automatically, without a human expert in the loop, that there is a key variable, theta,', 'start': 151.887, 'duration': 8.483}, {'end': 161.551, 'text': 'that describes the system.', 'start': 160.37, 'duration': 1.181}, {'end': 166.674, 'text': 'the kind of minimal descriptive variable for the motion of this pendulum is the angle theta.', 'start': 161.551, 'duration': 5.123}, {'end': 175.521, 'text': "And you'd also like to discover that the dynamical system governing the motion of this pendulum, the evolution of theta,", 'start': 167.495, 'duration': 8.026}, {'end': 
178.503, 'text': 'is given by a simple differential equation here.', 'start': 175.521, 'duration': 2.982}, {'end': 182.273, 'text': "So that's kind of at a very high level what we'd like to be able to do.", 'start': 179.471, 'duration': 2.802}, {'end': 186.995, 'text': 'My colleague Nathan Kutz calls this GoPro physics sometimes.', 'start': 182.293, 'duration': 4.702}, {'end': 190.897, 'text': "So you'd like to be able to have just a movie watching the world.", 'start': 187.015, 'duration': 3.882}, {'end': 196.26, 'text': 'learning the physics, learning how clouds are evolving, learning how pendula swing, things like that.', 'start': 190.897, 'duration': 5.363}, {'end': 198.861, 'text': 'learning how balls drop according to F equals ma.', 'start': 196.26, 'duration': 2.601}, {'end': 201.903, 'text': "OK So that's kind of what we want to do.", 'start': 199.922, 'duration': 1.981}, {'end': 206.505, 'text': 'But this is a difficult task to learn these coordinates and to learn these dynamics.', 'start': 202.363, 'duration': 4.142}, {'end': 213.963, 'text': "Good, And so you know, I'm gonna zoom out a little bit and talk about just kind of general challenges we have.", 'start': 207.246, 'duration': 6.717}], 'summary': 'Automatically discover key variable theta in evolving images and learn dynamical system governing motion.', 'duration': 69.197, 'max_score': 144.766, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp0144766.jpg'}, {'end': 274.859, 'src': 'embed', 'start': 233.863, 'weight': 2, 'content': [{'end': 242.651, 'text': 'These are all dynamical systems that change in time according to some rules on the right-hand side, some physics, f of x.', 'start': 233.863, 'duration': 8.788}, {'end': 247.595, 'text': "And so the big challenges that we have, one of them is often we don't know this model f of x.", 'start': 242.651, 'duration': 4.944}, {'end': 251.818, 'text': "We don't have a good model for the dynamics, 
so we need to discover that model.", 'start': 247.595, 'duration': 4.223}, {'end': 254.541, 'text': "In the case of the brain, that's a good example.", 'start': 252.099, 'duration': 2.442}, {'end': 257.695, 'text': 'Nonlinearity is another key challenge.', 'start': 255.754, 'duration': 1.941}, {'end': 263.916, 'text': 'So even a small amount of nonlinearity really hampers our ability to simulate, estimate,', 'start': 257.774, 'duration': 6.142}, {'end': 267.377, 'text': 'predict and control these systems in the real world from limited measurements.', 'start': 263.916, 'duration': 3.461}, {'end': 274.859, 'text': 'And again, this is somewhere where finding good coordinate transformations can make a big difference in simplifying nonlinear dynamics.', 'start': 268.437, 'duration': 6.422}], 'summary': 'Challenges include discovering models and handling nonlinearity in dynamical systems.', 'duration': 40.996, 'max_score': 233.863, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp0233863.jpg'}, {'end': 320.325, 'src': 'embed', 'start': 294.631, 'weight': 5, 'content': [{'end': 301.496, 'text': 'And so we want to essentially leverage the fact that patterns exist in that data to find those low dimensional representations,', 'start': 294.631, 'duration': 6.865}, {'end': 305.739, 'text': 'those kind of auto encoder coordinates I showed you, so that we can discover the dynamics.', 'start': 301.496, 'duration': 4.243}, {'end': 309.662, 'text': 'And maybe even handle some of the non linearity.', 'start': 306.72, 'duration': 2.942}, {'end': 315.406, 'text': "okay. 
so that's kind of a very high level overview of some of the things we want to do with these auto encoder networks.", 'start': 309.662, 'duration': 5.744}, {'end': 320.325, 'text': "Okay, and again, I'm gonna use the example of fluids.", 'start': 317.242, 'duration': 3.083}], 'summary': 'Leverage data patterns to find low-dimensional representations and handle nonlinearity for discovering dynamics using auto encoder networks.', 'duration': 25.694, 'max_score': 294.631, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp0294631.jpg'}], 'start': 121.979, 'title': 'Modelling dynamical systems', 'summary': 'Discusses challenges in automatically discovering dynamical systems from evolving images and modeling dynamical systems, including difficulties in determining the model f(x), impact of nonlinearity on control, and handling high dimensionality in real-world data.', 'chapters': [{'end': 213.963, 'start': 121.979, 'title': 'Discovering dynamical systems from video', 'summary': 'Discusses the challenge of automatically discovering key variables and dynamical systems from evolving images, using the example of a pendulum motion, aiming to enable automatic learning of physical laws from visual data without human intervention.', 'duration': 91.984, 'highlights': ['Automatically discovering the minimal descriptive variable (angle theta) and the governing differential equation for the motion of a pendulum from evolving images, without human intervention, is a primary goal (quantifiable: automated discovery of key variables and dynamics).', "The overarching aim is to enable automatic learning of physical laws from visual data, such as understanding the evolution of clouds, pendulum motion, and the application of laws like F equals ma, with the term 'GoPro physics' being used to describe this concept (quantifiable: automatic learning of physical laws from visual data without human intervention).", 'Discussion on the challenges 
of learning these coordinates and dynamics from visual data, which is a difficult task (quantifiable: highlighting the difficulty of learning coordinates and dynamics from visual data).']}, {'end': 339.62, 'start': 213.963, 'title': 'Challenges in modelling dynamical systems', 'summary': 'Discusses the challenges of modeling dynamical systems, including the difficulties in determining the model f(x), the impact of nonlinearity on simulation and control, and the need to handle high dimensionality in real-world data, such as fluid flows and brain dynamics.', 'duration': 125.657, 'highlights': ["The challenge of not knowing the model f(x) for dynamical systems, requiring the discovery of the model. One of the challenges is often we don't know this model f of x. We don't have a good model for the dynamics, so we need to discover that model.", 'The impact of nonlinearity on the ability to simulate, estimate, predict, and control dynamical systems, hindering these processes from limited measurements. Nonlinearity hampers our ability to simulate, estimate, predict and control these systems in the real world from limited measurements.', 'The challenge of high dimensionality in real-world data, necessitating the discovery of low dimensional representations to find the dynamics. We very, very frequently face in the real world is high dimensionality... 
so we want to essentially leverage the fact that patterns exist in that data to find those low dimensional representations, those kind of auto encoder coordinates I showed you, so that we can discover the dynamics.']}], 'duration': 217.641, 'thumbnail': '', 'highlights': ['Automatically discovering the minimal descriptive variable (angle theta) and the governing differential equation for the motion of a pendulum from evolving images, without human intervention, is a primary goal (quantifiable: automated discovery of key variables and dynamics).', "The overarching aim is to enable automatic learning of physical laws from visual data, such as understanding the evolution of clouds, pendulum motion, and the application of laws like F equals ma, with the term 'GoPro physics' being used to describe this concept (quantifiable: automatic learning of physical laws from visual data without human intervention).", "The challenge of not knowing the model f(x) for dynamical systems, requiring the discovery of the model. One of the challenges is often we don't know this model f of x. We don't have a good model for the dynamics, so we need to discover that model.", 'The impact of nonlinearity on the ability to simulate, estimate, predict, and control dynamical systems, hindering these processes from limited measurements. Nonlinearity hampers our ability to simulate, estimate, predict and control these systems in the real world from limited measurements.', 'Discussion on the challenges of learning these coordinates and dynamics from visual data, which is a difficult task (quantifiable: highlighting the difficulty of learning coordinates and dynamics from visual data).', 'The challenge of high dimensionality in real-world data, necessitating the discovery of low dimensional representations to find the dynamics. We very, very frequently face in the real world is high dimensionality... 
so we want to essentially leverage the fact that patterns exist in that data to find those low dimensional representations, those kind of auto encoder coordinates I showed you, so that we can discover the dynamics.']}, {'end': 712.849, 'segs': [{'end': 398.281, 'src': 'embed', 'start': 372.622, 'weight': 0, 'content': [{'end': 389.798, 'text': 'you can essentially compute the singular value decomposition of a data matrix and you can decompose this movie into a linear combination of a very small number of modes or kind of eigenstates that capture most of the energy or variance of this system.', 'start': 372.622, 'duration': 17.176}, {'end': 398.281, 'text': 'And the way I like to think about this, this is a data-driven generalization of the Fourier transform.', 'start': 391.551, 'duration': 6.73}], 'summary': 'Singular value decomposition captures movie data energy with a small number of modes.', 'duration': 25.659, 'max_score': 372.622, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp0372622.jpg'}, {'end': 498.351, 'src': 'embed', 'start': 471.608, 'weight': 1, 'content': [{'end': 477.133, 'text': "And so what we're going to do is generalize this linear coordinate embedding.", 'start': 471.608, 'duration': 5.525}, {'end': 479.935, 'text': 'And again, this is a coordinate system to represent your dynamics.', 'start': 477.333, 'duration': 2.602}, {'end': 486.06, 'text': "We're going to generalize this coordinate system by making it now a deep neural network.", 'start': 480.436, 'duration': 5.624}, {'end': 493.086, 'text': "So instead of one latent space layer, now we're going to have many, many hidden layers for the encoder, many hidden layers for the decoder.", 'start': 486.08, 'duration': 7.006}, {'end': 498.351, 'text': 'And our activation units, our nodes, are going to have nonlinear activation functions.', 'start': 494.047, 'duration': 4.304}], 'summary': 'Generalizing linear coordinate embedding to deep 
neural network with multiple hidden layers and nonlinear activation functions.', 'duration': 26.743, 'max_score': 471.608, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp0471608.jpg'}, {'end': 727.837, 'src': 'embed', 'start': 696.576, 'weight': 2, 'content': [{'end': 700.838, 'text': 'So getting the right coordinate systems is often essential to learning the right dynamics.', 'start': 696.576, 'duration': 4.262}, {'end': 705.761, 'text': "And that's been born true over and over throughout the history of physics.", 'start': 701.078, 'duration': 4.683}, {'end': 709.664, 'text': "Good, so that's what we're trying to do here.", 'start': 707.881, 'duration': 1.783}, {'end': 712.849, 'text': "We're gonna learn the coordinate system that simplifies the dynamics.", 'start': 709.684, 'duration': 3.165}, {'end': 719.521, 'text': "And now I'm gonna walk you through a few examples of how we've been doing this and what some of the challenges and opportunities are.", 'start': 713.11, 'duration': 6.411}, {'end': 727.837, 'text': 'So one of my favorite networks here is one that was developed by Kathleen Champion when she was a PhD student with Nathan and me.', 'start': 721.109, 'duration': 6.728}], 'summary': 'Choosing the right coordinate system is essential for simplifying dynamics in physics.', 'duration': 31.261, 'max_score': 696.576, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp0696576.jpg'}], 'start': 339.62, 'title': 'Deep learning for coordinate systems', 'summary': 'Explores singular value decomposition (svd) and its role in motivating deep autoencoder networks, generalizing linear coordinate embedding to a deep neural network, and simplifying dynamics through coordinate systems, aiding in discovering basic laws of physics.', 'chapters': [{'end': 470.688, 'start': 339.62, 'title': 'Singular value decomposition and deep autoencoder networks', 'summary': 
'Explores the concept of singular value decomposition (svd) and its application as a data-driven generalization of the fourier transform, highlighting its role in motivating deep autoencoder networks and its comparison to principal components analysis as a shallow linear autoencoder network.', 'duration': 131.068, 'highlights': ['Singular value decomposition (SVD) is a data-driven generalization of the Fourier transform, tailored to specific problems and data, allowing the computation of a small number of modes capturing most of the energy or variance of the system. The SVD enables the computation of a small number of modes capturing most of the energy or variance of the system, serving as a data-driven generalization of the Fourier transform.', 'Principal components analysis (PCA) can be viewed as a shallow linear autoencoder network, aiming to find a latent representation Z to compress and recover as much information about the high dimensional state as possible. PCA serves as a shallow linear autoencoder network to find a latent representation Z for compressing and recovering information about the high-dimensional state.', 'The SVD can be abstracted as a very simple neural network, although it is more efficient to compute using QR factorization or classical numerical linear algebra techniques. 
The SVD can be abstracted as a very simple neural network, but it is recommended to compute it using QR factorization or classical numerical linear algebra techniques for efficiency.']}, {'end': 606.654, 'start': 471.608, 'title': 'Generalizing linear coordinate embedding', 'summary': 'Discusses generalizing linear coordinate embedding to a deep neural network with nonlinear activation functions, allowing a massive reduction in degrees of freedom needed to describe the latent space z, and training autoencoder networks to efficiently represent data and predict the dynamics in the latent space.', 'duration': 135.046, 'highlights': ['Training autoencoder networks to efficiently represent data and predict the dynamics in the latent space The chapter emphasizes training autoencoder networks not only to efficiently represent data in x but also to predict the dynamics in the latent space accurately and efficiently forward in time.', 'Generalizing linear coordinate embedding to a deep neural network with nonlinear activation functions The chapter discusses generalizing linear coordinate embedding to a deep neural network with many hidden layers for the encoder and decoder, as well as nonlinear activation functions, allowing a massive reduction in degrees of freedom needed to describe the latent space Z.', 'Learning a nonlinear manifold parameterized by coordinates z for efficient data representation The chapter explains that by making the coordinate system a deep neural network with nonlinear activation functions, it allows learning a nonlinear manifold parameterized by coordinates z, enabling efficient data representation.']}, {'end': 712.849, 'start': 606.654, 'title': 'Learning simplified dynamics through coordinate systems', 'summary': 'Highlights the importance of choosing the right coordinate system to simplify the dynamics of complex systems, using the example of the geocentric and heliocentric views of the solar system to illustrate how a change in coordinates can 
lead to much simpler dynamical systems and aid in discovering basic laws of physics.', 'duration': 106.195, 'highlights': ['Choosing the right coordinate system simplifies complex dynamics, as demonstrated by the shift from geocentric to heliocentric views of the solar system, making the dynamical system much simpler and amenable to discovering basic laws of physics.', 'The importance of finding the right coordinate system has been consistently true throughout the history of physics, emphasizing the essential role of coordinate systems in learning the right dynamics.', 'Understanding the physics of high dimensional inputs evolving in time is facilitated by the choice of the appropriate coordinate system, which can lead to simpler and more interpretable differential equations for describing dynamical systems.']}], 'duration': 373.229, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp0339620.jpg', 'highlights': ['Singular value decomposition (SVD) is a data-driven generalization of the Fourier transform, tailored to specific problems and data, allowing the computation of a small number of modes capturing most of the energy or variance of the system.', 'Generalizing linear coordinate embedding to a deep neural network with nonlinear activation functions allows a massive reduction in degrees of freedom needed to describe the latent space Z.', 'Choosing the right coordinate system simplifies complex dynamics, as demonstrated by the shift from geocentric to heliocentric views of the solar system, making the dynamical system much simpler and amenable to discovering basic laws of physics.']}, {'end': 1362.851, 'segs': [{'end': 753.774, 'src': 'embed', 'start': 728.618, 'weight': 0, 'content': [{'end': 738.671, 'text': 'And this is essentially combining SINDy, or the sparse identification of nonlinear dynamics, to learn a dynamical system in the latent space of an autoencoder.', 'start': 728.618, 'duration': 10.053}, 
{'end': 742.004, 'text': 'And I think this is a really clever network design.', 'start': 739.622, 'duration': 2.382}, {'end': 746.708, 'text': "So essentially what Kathleen is doing is she's learning these encoder and decoder networks.", 'start': 742.084, 'duration': 4.624}, {'end': 748.309, 'text': 'These are big deep neural networks.', 'start': 746.728, 'duration': 1.581}, {'end': 753.774, 'text': "And there's additional loss functions that we use in our neural network training,", 'start': 749.39, 'duration': 4.384}], 'summary': 'Combining SINDy to learn a dynamical system in the latent space of an autoencoder, using deep neural networks and additional loss functions.', 'duration': 25.156, 'max_score': 728.618, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp0728618.jpg'}, {'end': 869.384, 'src': 'embed', 'start': 837.213, 'weight': 4, 'content': [{'end': 842.378, 'text': 'So, essentially, you can compute chain rules on different sub networks and make sure that the dynamics,', 'start': 837.213, 'duration': 5.165}, {'end': 846.562, 'text': 'the time derivatives across those different pieces of the network are consistent.', 'start': 842.378, 'duration': 4.184}, {'end': 854.352, 'text': "That essentially makes sure that you don't get into weird issues where you just try to shrink the Z as small as possible,", 'start': 847.202, 'duration': 7.15}, {'end': 857.235, 'text': 'which would in principle also make C small.', 'start': 854.352, 'duration': 2.883}, {'end': 861.341, 'text': 'So this is a really cool network that Kathleen developed.', 'start': 858.379, 'duration': 2.962}, {'end': 869.384, 'text': 'She was able to learn lots of very, very sparse parsimonious dynamical systems in these latent spaces for complex systems.', 'start': 861.741, 'duration': 7.643}], 'summary': 'Compute chain rules on sub networks to ensure consistent dynamics and avoid issues in shrinking parameters. 
Kathleen developed a network to learn sparse parsimonious dynamical systems in latent spaces.', 'duration': 32.171, 'max_score': 837.213, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp0837213.jpg'}, {'end': 1106.962, 'src': 'embed', 'start': 1075.127, 'weight': 3, 'content': [{'end': 1080.069, 'text': "what we're essentially doing is learning a nonlinear analog of the dynamic mode decomposition.", 'start': 1075.127, 'duration': 4.942}, {'end': 1085.511, 'text': "We're learning a nonlinear coordinate system that'll take an original nonlinear system and make it look linear.", 'start': 1080.469, 'duration': 5.042}, {'end': 1093.412, 'text': "And so that's exactly what many, many groups in the community have done in the last few years.", 'start': 1087.225, 'duration': 6.187}, {'end': 1097.796, 'text': "So I'm showing you work by Bethany Lusch, who was a postdoc with Nathan Kutz and myself.", 'start': 1093.572, 'duration': 4.224}, {'end': 1106.962, 'text': 'This is her deep Koopman autoencoder where she takes her input and essentially through some hidden layers learns a coordinate system,', 'start': 1098.217, 'duration': 8.745}], 'summary': 'Learning a nonlinear coordinate system to make nonlinear systems look linear, based on work by Bethany Lusch.', 'duration': 31.835, 'max_score': 1075.127, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp01075127.jpg'}, {'end': 1151.255, 'src': 'embed', 'start': 1119.225, 'weight': 1, 'content': [{'end': 1126.327, 'text': 'So she needed to kind of add a few innovations to this network to parametrize these linear dynamics by the frequency of the system.', 'start': 1119.225, 'duration': 7.102}, {'end': 1135.315, 'text': 'There have been many, many other groups that have developed similar deep Koopman autoencoder networks, that essentially learn these coordinates,', 'start': 1127.628, 'duration': 7.687}, {'end': 1139.018, 
'text': 'where your dynamics can be simply or linearly represented.', 'start': 1135.315, 'duration': 3.703}, {'end': 1151.255, 'text': 'And I want to point out that those different approaches to these kind of Koopman networks or these non-linear analogs of dynamic mode decomposition come in many shapes and sizes.', 'start': 1140.752, 'duration': 10.503}], 'summary': 'Innovations added to network for linear dynamics; various groups developed similar networks for learning coordinates.', 'duration': 32.03, 'max_score': 1119.225, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp01119225.jpg'}, {'end': 1204.67, 'src': 'embed', 'start': 1158.357, 'weight': 5, 'content': [{'end': 1164.859, 'text': "It's about 100 pages, and it's all about how you find these coordinate systems with neural networks and with other approaches.", 'start': 1158.357, 'duration': 6.502}, {'end': 1170.3, 'text': "There's two big philosophies that the community has arrived at.", 'start': 1166.399, 'duration': 3.901}, {'end': 1175.982, 'text': 'One of them is the constrictive autoencoder that I showed you before.', 'start': 1170.94, 'duration': 5.042}, {'end': 1181.343, 'text': 'where you have a high dimensional state, you choke it down to a lower dimensional latent space,', 'start': 1175.982, 'duration': 5.361}, {'end': 1188.685, 'text': 'where you can model the dynamics linearly and then you can decode that latent space to your original state x.', 'start': 1181.343, 'duration': 7.342}, {'end': 1199.608, 'text': 'But what many people in the community have also done is take your system state and actually lift it to a higher dimensional latency variable z.', 'start': 1189.825, 'duration': 9.783}, {'end': 1204.67, 'text': 'So you can take, even if you have a high dimensional state x, you can lift it to an even higher dimensional state.', 'start': 1199.608, 'duration': 5.062}], 'summary': 'Neural networks establish coordinate systems, using both 
lower and higher dimensional latent spaces for system states.', 'duration': 46.313, 'max_score': 1158.357, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp01158357.jpg'}, {'end': 1325.877, 'src': 'embed', 'start': 1293.084, 'weight': 7, 'content': [{'end': 1299.509, 'text': 'So the basic idea of Koopman is that if you have some original dynamical system like this Duffing oscillator with three fixed points,', 'start': 1293.084, 'duration': 6.425}, {'end': 1304.891, 'text': 'you can essentially find new coordinate systems or coordinate transformations.', 'start': 1300.35, 'duration': 4.541}, {'end': 1314.734, 'text': 'where you expand the region of linear validity, you kind of expand where linearization is valid through a non-linear coordinate system.', 'start': 1304.891, 'duration': 9.843}, {'end': 1315.614, 'text': 'so here, for example,', 'start': 1314.734, 'duration': 0.88}, {'end': 1325.877, 'text': 'i might take my local linearization around this third fixed point and i might find a local coordinate system where all of the dynamics in this pink region are approximately linear.', 'start': 1315.614, 'duration': 10.263}], 'summary': 'Koopman theory expands linearization validity through nonlinear coordinate systems for dynamical systems like the duffing oscillator.', 'duration': 32.793, 'max_score': 1293.084, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp01293084.jpg'}], 'start': 713.11, 'title': 'Advances in autoencoder networks', 'summary': 'Explores the development of cindi autoencoder for sparse dynamics, deep koopman autoencoder networks, and coordinate systems in neural networks. 
These advancements enable the linear representation of nonlinear dynamical systems, with implications for various fields, and result in sparse and interpretable dynamical systems.', 'chapters': [{'end': 857.235, 'start': 713.11, 'title': 'SINDy autoencoder for sparse dynamics', 'summary': 'The chapter discusses the development of a SINDy autoencoder by Kathleen Champion, which uses sparse identification of nonlinear dynamics to learn a dynamical system in the latent space of an autoencoder, resulting in a sparse and interpretable dynamical system.', 'duration': 144.125, 'highlights': ['Kathleen Champion developed a SINDy autoencoder that combines sparse identification of nonlinear dynamics to learn a dynamical system in the latent space of an autoencoder.', 'The autoencoder aims to find the sparsest combination of terms that describe the dynamics in the latent space, resulting in an interpretable and simple dynamical system.', 'Additional loss functions, including SINDy losses and SINDy regularization, were introduced to ensure the convergence of the SINDy autoencoder to the right solution and make the dynamical system as sparse as possible.', 'The network incorporates different loss functions to compute chain rules on different subnetworks, ensuring consistent time derivatives across the network.', 'The goal is to make the dynamical system interpretable and simple, akin to F equals ma, while ensuring consistent time derivatives across different parts of the network.']}, {'end': 1158.317, 'start': 858.379, 'title': 'Deep Koopman autoencoder', 'summary': 'The chapter discusses the development of deep Koopman autoencoder networks to learn coordinate systems, enabling the linear representation of nonlinear dynamical systems, with implications for fluid mechanics and other fields.', 'duration': 299.938, 'highlights': ['Deep Koopman autoencoder networks are developed to learn coordinate systems and achieve linear representation of nonlinear dynamical systems. 
This approach allows for the linearization of dynamics, which enables the use of textbook methods for linear estimators and controllers.', 'The encoder and decoder networks undergo significant strain when aiming to remove all nonlinear terms and achieve truly linear systems, which poses challenges in training and makes the process much more expensive and difficult.', 'The use of deep neural networks with nonlinear activation functions facilitates the learning of a nonlinear coordinate system analogous to the dynamic mode decomposition, providing a means to make originally nonlinear systems look linear.']}, {'end': 1362.851, 'start': 1158.357, 'title': 'Coordinate systems in neural networks', 'summary': 'Discusses two philosophies in finding coordinate systems with neural networks: constrictive autoencoders and lifting system states to higher dimensional latent variables. 
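The deep Koopman autoencoders summarized here are trained on three coupled objectives: reconstruction (decode what you encode), linearity (the latent state advances by a single matrix K), and prediction (the linearly advanced latent state decodes to the next snapshot). A minimal numpy sketch of those loss terms; the identity encoder/decoder and the already-linear toy system are stand-ins for trained networks, not the architecture from the paper:

```python
import numpy as np

def koopman_losses(phi, psi, K, X, Xp):
    """Mean-squared versions of the three losses used to train a
    Koopman autoencoder on snapshot pairs (x_k, x_{k+1})."""
    Z, Zp = phi(X), phi(Xp)
    recon = np.mean((psi(Z) - X) ** 2)        # autoencoder reconstruction
    linear = np.mean((Z @ K.T - Zp) ** 2)     # linearity in the latent space
    pred = np.mean((psi(Z @ K.T) - Xp) ** 2)  # prediction back in state space
    return recon, linear, pred

# Sanity check: for an already-linear system, an identity encoder/decoder
# and K equal to the true dynamics drive all three losses to zero.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
Xp = X @ A.T                      # exact linear snapshot pairs
identity = lambda v: v
losses = koopman_losses(identity, identity, A, X, Xp)
```

In practice phi and psi are deep networks and K is learned jointly by gradient descent; the zero losses in this toy check just confirm that the three objectives are mutually consistent.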
It explores how these approaches provide linear representations of dynamics and relates them to existing machine learning literature and physical intuition.', 'duration': 204.494, 'highlights': ['Two philosophies in finding coordinate systems with neural networks: constrictive autoencoders and lifting system states to higher dimensional latent variables.', 'The constrictive autoencoder approach involves choking down a high dimensional state to a lower dimensional latent space, allowing for linear modeling of dynamics and decoding to the original state x.', 'The lifting system states approach involves lifting a high dimensional state to an even higher dimensional state, modeling the evolution with linear dynamics and mapping back to x.', 'Operating in a high dimensional space can make nonlinear processes look more linear, consistent with existing machine learning literature.', 'Constrictive autoencoders are more consistent with physical intuition, as they are interpretable and can help tease out relationships in a low dimensional latent space.', 'The Koopman review paper explores finding new coordinate systems or coordinate transformations to expand the region of linear validity in dynamical systems.', 'Koopman introduces the concept of finding local coordinate systems where dynamics are approximately linear and global coordinates that appear linear, like the Hamiltonian energy of the system.', 'The Hamiltonian energy of the system is itself one of the eigenfunctions and behaves linearly in time, representing linear dynamics.', 'The chapter is still exploring a fourth perspective related to coordinate systems in dynamical systems.']}], 'duration': 649.741, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp0713110.jpg', 'highlights': ['Kathleen Champion developed a SINDy autoencoder that combines sparse identification of nonlinear dynamics to learn a dynamical system in the latent space of an autoencoder.', 'Deep 
Koopman autoencoder networks are developed to learn coordinate systems and achieve linear representation of nonlinear dynamical systems.', 'The autoencoder aims to find the sparsest combination of terms that describe the dynamics in the latent space, resulting in an interpretable and simple dynamical system.', 'The use of deep neural networks with nonlinear activation functions facilitates the learning of a nonlinear coordinate system analogous to the dynamic mode decomposition.', 'The network incorporates different loss functions to compute chain rules on different subnetworks, ensuring consistent time derivatives across the network.', 'The constrictive autoencoder approach involves choking down a high dimensional state to a lower dimensional latent space, allowing for linear modeling of dynamics and decoding to the original state X.', 'The lifting system states approach involves lifting a high dimensional state to an even higher dimensional state, modeling the evolution with linear dynamics and mapping back to X.', 'The Koopman review paper explores finding new coordinate systems or coordinate transformations to expand the region of linear validity in dynamical systems.']}, {'end': 1614.296, 'segs': [{'end': 1419.793, 'src': 'embed', 'start': 1376.677, 'weight': 0, 'content': [{'end': 1392.522, 'text': 'And that is something that Henning Lange has recently looked into: essentially building deep neural networks that can rescale time and space to make your nonlinear oscillators look like a single linear oscillator.', 'start': 1376.677, 'duration': 15.845}, {'end': 1397.464, 'text': 'And in very complex examples like this shear layer fluid flow,', 'start': 1393.362, 'duration': 4.102}, {'end': 1405.246, 'text': "he's been able to find really simple low dimensional representations that very accurately represent this nonlinear oscillator system.", 'start': 1397.464, 'duration': 7.782}, {'end': 1409.067, 'text': 'Okay, a couple of other examples 
before I conclude.', 'start': 1406.565, 'duration': 2.502}, {'end': 1415.451, 'text': 'So those deep Koopman neural networks I was telling you about for ordinary differential equations.', 'start': 1409.967, 'duration': 5.484}, {'end': 1419.793, 'text': 'Craig Gin and Bethany Lusch recently extended that to partial differential equations.', 'start': 1415.451, 'duration': 4.342}], 'summary': "Henning Lange's deep neural networks simplify complex systems like shear layer fluid flow, achieving low-dimensional representations.", 'duration': 43.116, 'max_score': 1376.677, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp01376677.jpg'}, {'end': 1476.709, 'src': 'embed', 'start': 1448.161, 'weight': 2, 'content': [{'end': 1456.228, 'text': 'And with our Koopman network applied to partial differential equations, we can essentially learn that linearizing transform in an automated way,', 'start': 1448.161, 'duration': 8.067}, {'end': 1462.192, 'text': 'without knowing any kind of first-principles physics or even having access to the governing equations.', 'start': 1456.228, 'duration': 5.964}, {'end': 1467.817, 'text': 'So just from data, we can learn these linearizing transforms in PDEs.', 'start': 1462.413, 'duration': 5.404}, {'end': 1476.709, 'text': 'We also have a method, with Craig Gin and Dan Shea, essentially to find these deep embeddings,', 'start': 1469.383, 'duration': 7.326}], 'summary': 'Koopman network learns linearizing transforms in PDEs from data', 'duration': 28.548, 'max_score': 1448.161, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp01448161.jpg'}, {'end': 1532.52, 'src': 'embed', 'start': 1488.018, 'weight': 3, 'content': [{'end': 1496.346, 'text': "and essentially what we can do is through a very similar non-linear autoencoder network we can learn these non-linear analogs of Green's functions,", 'start': 1488.018, 'duration': 
8.328}, {'end': 1500.79, 'text': 'which would have lots of applications in, like, non-linear beam theory or, you know,', 'start': 1496.346, 'duration': 4.444}, {'end': 1505.895, 'text': 'aircraft wings that can deform massively past where the linear approximation is valid.', 'start': 1500.79, 'duration': 5.105}, {'end': 1510.781, 'text': 'Okay, so I want to just kind of tie this up and conclude here.', 'start': 1507.358, 'duration': 3.423}, {'end': 1518.268, 'text': "So we've talked about how there is this joint problem of learning coordinates and dynamics.", 'start': 1511.282, 'duration': 6.986}, {'end': 1527.157, 'text': 'This is one of the most exciting areas of machine learning research for physics-informed machine learning or for physics discovery with machine learning,', 'start': 1518.729, 'duration': 8.428}, {'end': 1532.52, 'text': "where oftentimes I don't know what the right coordinate system is to measure my system.", 'start': 1527.857, 'duration': 4.663}], 'summary': "Non-linear autoencoder learns analogs of Green's functions for physics applications.", 'duration': 44.502, 'max_score': 1488.018, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp01488018.jpg'}, {'end': 1614.296, 'src': 'embed', 'start': 1592.23, 'weight': 5, 'content': [{'end': 1597.354, 'text': 'Or you have some conservation law that you know is going to be satisfied, like conservation of momentum or energy.', 'start': 1592.23, 'duration': 5.124}, {'end': 1602.1, 'text': 'Colleagues are already building these networks that have those baked in.', 'start': 1598.956, 'duration': 3.144}, {'end': 1610.831, 'text': "And so there's tons of opportunity to put partially known physics into your system and to learn entirely new system", 'start': 1602.361, 'duration': 8.47}, {'end': 1612.593, 'text': "physics that we didn't know before.", 'start': 1610.831, 'duration': 1.762}, {'end': 1614.296, 'text': 'All right, thank you very 
much.', 'start': 1613.314, 'duration': 0.982}], 'summary': 'Machine learning networks can incorporate known physics for new insights.', 'duration': 22.066, 'max_score': 1592.23, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp01592230.jpg'}], 'start': 1363.791, 'title': 'Applying neural networks in nonlinear dynamics', 'summary': "Explores using deep neural networks to rescale nonlinear oscillators, discover non-linear analogs of Green's functions, and apply physics-informed machine learning for physics discovery, with potential applications in various fields including partial differential equations, beam theory, and aircraft wing deformation.", 'chapters': [{'end': 1467.817, 'start': 1363.791, 'title': 'Rescaling nonlinear oscillators with neural networks', 'summary': "Discusses the use of deep neural networks to rescale time and space, making nonlinear oscillators appear as linear oscillators, with examples including the transformation of the nonlinear Burgers' equation into the linear heat equation and the application of Koopman networks to learn linearizing transforms in partial differential equations from data.", 'duration': 104.026, 'highlights': ['Henning Lange has built deep neural networks to rescale time and space, making nonlinear oscillators look like a single linear oscillator, with simple low dimensional representations found for complex examples like the shear layer fluid flow.', "Craig Gin and Bethany Lusch extended deep Koopman neural networks to partial differential equations, revealing coordinate systems that make spatiotemporally evolving systems look approximately linear, enabling the transformation of the nonlinear Burgers' equation into the linear heat equation.", 'Koopman networks can learn linearizing transforms in partial differential equations in an automated way from data, without requiring knowledge of first-principles physics or access to governing equations.']}, {'end': 1510.781, 
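The Burgers-to-heat example in these highlights is the classical Cole–Hopf transformation, the analytic linearizing change of variables that the PDE Koopman networks rediscover from data:

```latex
\underbrace{u_t + u\,u_x = \nu\,u_{xx}}_{\text{Burgers' equation (nonlinear)}}
\;\xrightarrow{\;u \,=\, -2\nu\,\partial_x \ln\varphi \,=\, -2\nu\,\varphi_x/\varphi\;}\;
\underbrace{\varphi_t = \nu\,\varphi_{xx}}_{\text{heat equation (linear)}}
```

Substituting u = -2ν φ_x/φ into Burgers' equation and integrating once in x reduces it exactly to the heat equation for φ, so every solution of the linear problem generates a solution of the nonlinear one.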
'start': 1469.383, 'title': "Non-linear analogs of Green's functions", 'summary': "The chapter discusses the discovery of non-linear analogs of Green's functions through a method involving a non-linear autoencoder network, with potential applications in non-linear beam theory and aircraft wing deformation beyond the linear approximation.", 'duration': 41.398, 'highlights': ["By using a non-linear autoencoder network, they can discover non-linear analogs of Green's functions, which have applications in non-linear beam theory and aircraft wing deformation beyond the linear approximation.", "Green's functions are very useful for linear boundary value problems in partial differential equations.", "The method aims to find deep embeddings to discover nonlinear analogs of Green's functions, with Craig Gin and Dan Shea."]}, {'end': 1614.296, 'start': 1511.282, 'title': 'Physics-informed machine learning', 'summary': 'The chapter discusses the joint problem of learning coordinates and dynamics for physics discovery with machine learning, exploring the dual optimization problem of learning a coordinate embedding and finding a latent space with simple dynamics, with widely explored approaches to sparsity and linearity in the dynamics.', 'duration': 103.014, 'highlights': ['The dual optimization problem of learning a coordinate embedding phi and finding a latent space z with simple dynamics is a key focus in machine learning research for physics discovery, with widely explored approaches to sparsity and linearity in the dynamics.', 'Networks are being developed to learn embeddings for complex systems, with examples including sparse and nonlinear dynamics using the SINDy approach, and finding coordinate embeddings where dynamics are linear, which are both widely explored in the literature.', 'Opportunities exist to incorporate known physics constraints, such as symmetries and conservation laws, into the learning process, leading to the discovery of entirely new physics and systems.', 'There is significant 
potential to integrate partially known physics into machine learning systems and to learn entirely new physics that was previously unknown, indicating a broad scope for exploration and development in the field.']}], 'duration': 250.505, 'thumbnail': 'https://coursnap.oss-ap-southeast-1.aliyuncs.com/video-capture/KmQkDgu-Qp0/pics/KmQkDgu-Qp01363791.jpg', 'highlights': ['Henning Lange built deep neural networks to rescale time and space, making nonlinear oscillators look like a single linear oscillator.', 'Craig Gin and Bethany Lusch extended deep Koopman neural networks to partial differential equations, revealing coordinate systems that make spatiotemporally evolving systems look approximately linear.', 'Koopman networks can learn linearizing transforms in partial differential equations in an automated way from data.', "Using a non-linear autoencoder network, they can discover non-linear analogs of Green's functions, which have applications in non-linear beam theory and aircraft wing deformation beyond the linear approximation.", 'The dual optimization problem of learning a coordinate embedding phi and finding a latent space z with simple dynamics is a key focus in machine learning research for physics discovery.', 'Opportunities exist to incorporate known physics constraints, such as symmetries and conservation laws, into the learning process, leading to the discovery of entirely new physics and systems.']}], 'highlights': ['Using deep learning to discover coordinate systems and dynamics simultaneously for complex systems, particularly focusing on autoencoder networks to efficiently represent system dynamics in a minimal latent space.', 'Automatically discovering the minimal descriptive variable (angle theta) and the governing differential equation for the motion of a pendulum from evolving images, without human intervention, is a primary goal (quantifiable: automated discovery of key variables and dynamics).', 'Singular value decomposition (SVD) is a data-driven 
generalization of the Fourier transform, tailored to specific problems and data, allowing the computation of a small number of modes capturing most of the energy or variance of the system.', 'Kathleen Champion developed a SINDy autoencoder that combines sparse identification of nonlinear dynamics to learn a dynamical system in the latent space of an autoencoder.', 'Henning Lange built deep neural networks to rescale time and space, making nonlinear oscillators look like a single linear oscillator.']}
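The sparse regression at the heart of the SINDy autoencoder highlighted above is sequentially thresholded least squares over a library of candidate terms. A self-contained sketch on a hypothetical toy system; the linear-oscillator data and the quadratic library are assumptions for the example, not the latent dynamics from the paper:

```python
import numpy as np

def stlsq(theta, dxdt, lam=0.1, iters=10):
    """Sequentially thresholded least squares: the sparse
    regression step used by SINDy."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < lam   # coefficients to prune
        xi[small] = 0.0
        for k in range(dxdt.shape[1]):
            big = ~small[:, k]
            if big.any():          # refit the surviving terms
                xi[big, k] = np.linalg.lstsq(theta[:, big], dxdt[:, k], rcond=None)[0]
    return xi

# Toy data: linear oscillator x1' = -2*x2, x2' = x1 (derivatives known exactly)
rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(200, 2))
dX = np.column_stack([-2.0 * X[:, 1], X[:, 0]])

# Candidate library of terms: [1, x1, x2, x1^2, x1*x2, x2^2]
x1, x2 = X[:, 0], X[:, 1]
theta = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

Xi = stlsq(theta, dX)  # only the two true terms should survive
```

The thresholding loop zeroes small coefficients and refits the survivors, leaving only the terms that actually generate the data; in the autoencoder the same regression runs on the latent variables z, with an additional loss promoting sparsity of Xi.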