### Tue 28 May 15:00: Spectral inclusions and approximations of finite and infinite banded matrices

We derive inclusion sets and approximations to the spectrum and pseudospectrum of banded, in general non-normal, matrices of finite or infinite size. In the infinite case (bi- or semi-infinite), the matrix acts as a bounded linear operator on the corresponding l^2 space, and we moreover bound and approximate its essential spectrum.

Our inclusion sets come as unions of pseudospectra of certain submatrices of chosen size. Via this choice, we can balance accuracy against numerical cost. The philosophy is to split one global spectral problem into several local problems of moderate size.
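The local-to-global philosophy can be illustrated with a toy sketch (this is an illustration of the idea only, not the precise inclusion sets from the talk; the function name and window scheme are assumptions): test whether a point z lies in an eps-pseudospectrum by taking the smallest singular value of (A - zI) restricted to sliding k-by-k windows, so the cost is controlled by the window size k rather than the full matrix size.

```python
import numpy as np

def window_smin(A, z, k):
    """Smallest singular value of (A_W - z*I) over all k-by-k principal
    windows A_W of A: a cheap local surrogate for the resolvent-norm test
    deciding whether z lies in an eps-pseudospectrum."""
    n = A.shape[0]
    vals = []
    for i in range(n - k + 1):
        W = A[i:i + k, i:i + k] - z * np.eye(k)
        vals.append(np.linalg.svd(W, compute_uv=False)[-1])
    return min(vals)

# toy banded (tridiagonal, non-normal) matrix
n, k = 40, 8
A = np.diag(np.ones(n - 1), 1) + 0.5 * np.diag(np.ones(n - 1), -1)
# z is flagged as lying in the local eps-pseudospectrum when the
# indicator drops below eps; the window size k trades accuracy for cost
print(window_smin(A, 0.0, k))
```

Scanning z over a grid and thresholding the indicator at eps yields a pseudospectrum picture from many small SVDs instead of one large one.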

- Speaker: Marko Lindner
- Tuesday 28 May 2024, 15:00-16:00
- Venue: Centre for Mathematical Sciences, MR19.
- Series: Applied and Computational Analysis; organiser: Matthew Colbrook.

### Thu 13 Jun 15:00: Finite Element Exterior Calculus for Hamiltonian PDEs

We consider the application of finite element exterior calculus (FEEC) methods to a class of canonical Hamiltonian PDE systems involving differential forms. Solutions to these systems satisfy a local multisymplectic conservation law, which generalizes the more familiar symplectic conservation law for Hamiltonian systems of ODEs, and which is connected with physically important reciprocity phenomena, such as Lorentz reciprocity in electromagnetics. We characterize hybrid FEEC methods whose numerical traces satisfy a version of the multisymplectic conservation law, and we apply this characterization to several specific classes of FEEC methods, including conforming Arnold–Falk–Winther-type methods and various hybridizable discontinuous Galerkin (HDG) methods. Interestingly, the HDG-type and other nonconforming methods are shown, in general, to be multisymplectic in a stronger sense than the conforming FEEC methods. This substantially generalizes previous work of McLachlan and Stern [Found. Comput. Math., 20 (2020), pp. 35–69] on the more restricted class of canonical Hamiltonian PDEs in the de Donder–Weyl grad-div form.
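For orientation, a schematic version of the grad-div form mentioned above can be written as follows (sign conventions vary between references; this sketch only illustrates the structure, with a scalar field u, a vector-valued momentum σ, and a Hamiltonian density H(u, σ)):

```latex
% Canonical Hamiltonian PDE in de Donder--Weyl ``grad-div'' form (schematic):
\nabla u = \frac{\partial H}{\partial \sigma},
\qquad
\operatorname{div}\sigma = -\,\frac{\partial H}{\partial u}.
% Any two first variations (v_1,\tau_1), (v_2,\tau_2) of a solution then
% satisfy a local multisymplectic conservation law
\operatorname{div}\bigl(v_1\,\tau_2 - v_2\,\tau_1\bigr) = 0,
% which follows from the symmetry of the Hessian of H.
```

Expanding the divergence and substituting the linearized equations, the mixed-Hessian terms cancel in pairs, which is the calculation behind the conservation law.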

- Speaker: Ari Stern (Washington University in St. Louis)
- Thursday 13 June 2024, 15:00-16:00
- Venue: Centre for Mathematical Sciences, MR14.
- Series: Applied and Computational Analysis; organiser: Matthew Colbrook.

### Thu 06 Jun 15:00: Singular flows, zeroth order pseudodifferential operators and spectra

The propagation of internal gravity waves in stratified media (such as those found in ocean basins and lakes) leads to the development of attractors. These structures accumulate much of the wave energy and can make the fluid flow highly singular. These questions have been the subject of fascinating recent analytical developments by Colin de Verdière & Saint-Raymond and by Zworski and co-workers, who examine a simplified model which retains many of the important features; this model is related to a certain zeroth-order pseudodifferential operator.

In this talk, we first review the physical phenomenon and the (highly simplified) model evolution problem. We next describe a high-accuracy computational method to solve the evolution problem, whose long-term behaviour is known to be non-square-integrable. Then, we use similar tools to discretize the corresponding eigenvalue problem. Since the eigenvalues are embedded in a continuous spectrum, their computation is based on viscous approximations. We also study the long-term evolution of the dynamics of the system. This is joint work with Javier Almonacid.

- Speaker: Nilima Nigam (Simon Fraser)
- Thursday 06 June 2024, 15:00-16:00
- Venue: Centre for Mathematical Sciences, MR12.
- Series: Applied and Computational Analysis; organiser: Matthew Colbrook.
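The viscous-approximation idea in the abstract above can be illustrated with a small matrix toy model (purely illustrative: the diagonal matrix standing in for continuous spectrum, the discrete-Laplacian viscosity term, and all parameters are assumptions, not the operator from the talk). Adding a small viscosity moves the spectrum off the real axis into the lower half-plane, regularizing computations, and one then tracks eigenvalues as the viscosity is sent to zero.

```python
import numpy as np

n = 200
# "continuous spectrum" stand-in: dense band of real eigenvalues on [-1, 1]
A = np.diag(np.linspace(-1.0, 1.0, n))
# discrete Laplacian as a positive-semidefinite viscosity term
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

for nu in (1e-1, 1e-2, 1e-3):
    ev = np.linalg.eigvals(A - 1j * nu * L)
    # since L is PSD, every eigenvalue satisfies Im(ev) <= 0
    print(nu, ev.imag.max())
```

Candidates for embedded eigenvalues of the unregularized problem are the limit points of these viscous eigenvalues as nu decreases.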

### Thu 16 May 16:00: Efficient Computation through Tuned Approximation

Numerical software is being reconstructed to provide opportunities to dynamically tune the accuracy of computation to the requirements of the application, resulting in savings of memory, time, and energy. Floating-point computation in science and engineering has a history of “oversolving” relative to requirements or worthiness for many models. So often are real datatypes defaulted to double precision that GPUs did not gain wide acceptance in simulation environments until they provided in hardware operations not required in their original domain of graphics. However, driven by performance or energy incentives, much of computational science is now reverting to lower-precision arithmetic where possible. Many matrix operations considered at a blockwise level allow for lower precision and, in addition, many blocks can be approximated by low-rank near-equivalents. This leads to a smaller memory footprint, which implies higher residency in memory hierarchies, leading in turn to less time and energy spent on data copying, which may even dwarf the savings from fewer and cheaper flops. We provide examples from several application domains, including a look at campaigns in geospatial statistics and seismic processing that earned Gordon Bell Prize finalist status in 2022 and 2023, respectively.
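The blockwise idea can be sketched in a few lines (an illustration of the general technique, not the speaker's software; the kernel block and tolerance are hypothetical): compress one matrix block by a truncated SVD and store the factors in single precision, so lower rank and lower precision combine to shrink the memory footprint.

```python
import numpy as np

def compress_block(B, tol):
    """Truncated SVD of one block: drop singular values below tol * s_max
    and store the factors in float32 (lower rank AND lower precision)."""
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    r = max(1, int(np.sum(s > tol * s[0])))
    return (U[:, :r] * s[:r]).astype(np.float32), Vt[:r].astype(np.float32)

# a smooth kernel block is numerically low-rank (hypothetical example)
x = np.linspace(0.0, 1.0, 256)
B = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)

Uf, Vf = compress_block(B, 1e-6)
err = np.linalg.norm(Uf @ Vf - B) / np.linalg.norm(B)
bytes_saved = B.nbytes - (Uf.nbytes + Vf.nbytes)
print(Uf.shape[1], err, bytes_saved)
```

The rank is chosen per block from a tolerance, which is exactly the accuracy-versus-resources dial the abstract describes.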

- Speaker: David Keyes (KAUST)
- Thursday 16 May 2024, 16:00-17:00
- Venue: Centre for Mathematical Sciences, MR14.
- Series: Applied and Computational Analysis; organiser: Hamza Fawzi.

### Thu 23 May 15:00: A Converging Discrete Geometric Calculus on the Space of Curves

The talk will consider the space of curves as a Riemannian manifold with a metric measuring the squared $L^2$ norm of arc-length derivatives of curve variations. Based on a suitable time discretization, it will be described how to interpolate pairs of curves, smoothly extrapolate paths in this space, and approximate the associated covariant derivative as well as the curvature tensor. The convergence of the discrete calculus to the corresponding continuous calculus will be demonstrated.
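A minimal sketch of the time-discrete setup, under a strong simplification: with the plain (flat) $L^2$ metric instead of the talk's metric, the discrete path energy between curves $c_0$ and $c_1$ is $K \sum_k \|c_k - c_{k-1}\|^2$, and the discrete geodesic is just the pointwise linear interpolation. All names here are illustrative.

```python
import numpy as np

def path_energy(path):
    """Discrete path energy K * sum_k ||c_k - c_{k-1}||^2 for a K-step
    path (rows = time steps, flattened curve coordinates as columns)."""
    K = path.shape[0] - 1
    return K * np.sum(np.diff(path, axis=0) ** 2)

# two polygonal curves with matching vertices: circle and ellipse
t = np.linspace(0.0, 2 * np.pi, 50, endpoint=False)
c0 = np.c_[np.cos(t), np.sin(t)].ravel()
c1 = np.c_[1.5 * np.cos(t), 0.7 * np.sin(t)].ravel()

K = 8
w = np.linspace(0.0, 1.0, K + 1)[:, None]
linear = (1 - w) * c0 + w * c1   # discrete geodesic for the flat metric

# any other path with the same endpoints has strictly larger energy
perturbed = linear.copy()
perturbed[1:-1] += 0.05
print(path_energy(linear), path_energy(perturbed))
```

For the flat metric the minimal energy equals $\|c_1 - c_0\|^2$; the talk's metric replaces the squared step norms by a curve-dependent quadratic form, and the same variational time discretization then yields nontrivial geodesics.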

- Speaker: Martin Rumpf (Universität Bonn)
- Thursday 23 May 2024, 15:00-16:00
- Venue: Centre for Mathematical Sciences, MR14.
- Series: Applied and Computational Analysis; organiser: Carola-Bibiane Schoenlieb.

### Thu 02 May 15:00: Greedy-LASSO, Greedy-Net: Generalization and unrolling of greedy sparse recovery algorithms

Sparse recovery generally aims at reconstructing a sparse vector, given linear measurements performed via a mixture (or sensing) matrix, typically underdetermined. Greedy (and thresholding) sparse recovery algorithms are known to serve as a suitable alternative to convex optimization techniques, in particular in low-sparsity regimes. In this talk, I take orthogonal matching pursuit (OMP) as an example and establish a connection between OMP and convex optimization decoders on one side and neural networks on the other. To achieve the former, we adopt a loss-function-based perspective and propose a framework based on OMP that leads to greedy algorithms for a large class of loss functions, including the well-known (weighted) LASSO family, with explicit formulas for the choice of the “greedy selection criterion”. We show numerically that these greedy algorithms inherit properties of their ancestor convex decoders. In the second part of the talk, we leverage “softsorting” to resolve the non-differentiability of OMP due to (arg)sorting, in order to derive a differentiable version of OMP that we call “Soft-OMP”, which we demonstrate, numerically and theoretically, to be a good approximation of OMP. We then unroll iterations of OMP onto layers of a neural network whose weights serve as semantic trainable parameters capturing the structure within the data. In doing so, we also connect our approach to learning weights in weighted sparse recovery. I will conclude the talk by presenting implications of our framework for other greedy algorithms, such as CoSaMP and IHT, and highlight some open problems. This is joint work with Simone Brugiapaglia (Concordia University) and Matthew Colbrook (University of Cambridge).
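For readers unfamiliar with OMP, here is the textbook baseline the talk builds on (this sketch is standard OMP, not the speaker's Greedy-LASSO or Soft-OMP variants): greedily pick the column most correlated with the residual, then re-fit the coefficients on the selected support by least squares.

```python
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily select s columns of A,
    re-solving a least-squares problem on the support at each step."""
    support, r = [], y.astype(float).copy()
    x = np.zeros(A.shape[1])
    for _ in range(s):
        j = int(np.argmax(np.abs(A.T @ r)))      # greedy selection criterion
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        r = y - A[:, support] @ coef             # updated residual
    x[support] = coef
    return x

# sanity check on an orthonormal sensing matrix: OMP recovers y exactly
A = np.eye(6)
y = np.array([0.0, 5.0, 0.0, 0.0, -3.0, 0.0])
print(omp(A, y, 2))
```

The argmax step is the non-differentiable (arg)sorting operation the abstract refers to; softsorting replaces it with a smooth surrogate so the whole iteration can be unrolled into trainable network layers.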

- Speaker: Sina Mohammadtaheri
- Thursday 02 May 2024, 15:00-16:00
- Venue: Centre for Mathematical Sciences, MR12.
- Series: Applied and Computational Analysis; organiser: Matthew Colbrook.

### Thu 25 Apr 15:00: Machine Learning and Dynamical Systems Meet in Reproducing Kernel Hilbert Spaces with Insights from Algorithmic Information Theory

Since its inception in the 19th century, through the efforts of Poincaré and Lyapunov, the theory of dynamical systems has addressed the qualitative behavior of systems as understood from models. From this perspective, modeling dynamical processes in applications demands a detailed understanding of the processes to be analyzed. This understanding leads to a model, which approximates observed reality and is often expressed by a system of ordinary/partial, underdetermined (control), deterministic/stochastic differential or difference equations. While these models are very precise for many processes, for some of the most challenging applications of dynamical systems, such as climate dynamics, brain dynamics, biological systems, or financial markets, developing such models is notably difficult. On the other hand, the field of machine learning is concerned with algorithms designed to accomplish specific tasks, whose performance improves with more data input. Applications of machine learning methods include computer vision, stock market analysis, speech recognition, recommender systems, and sentiment analysis in social media. The machine learning approach is invaluable in settings where no explicit model is formulated, but measurement data are available. This is often the case in many systems of interest, and the development of data-driven technologies is increasingly important in many applications. The intersection of the fields of dynamical systems and machine learning is largely unexplored, and the objective of this talk is to show that working in reproducing kernel Hilbert spaces offers tools for a data-based theory of nonlinear dynamical systems.

In the first part of the talk, we introduce simple methods to learn surrogate models for complex systems. We present variants of the method of Kernel Flows as simple approaches for learning the kernels that appear in the emulators we use in our work. First, we will discuss the method of parametric and nonparametric kernel flows for learning chaotic dynamical systems. We’ll also explore learning dynamical systems from irregularly sampled time series and from partial observations. We will introduce the methods of Sparse Kernel Flows and Hausdorff-metric-based Kernel Flows (HMKFs) and apply them to learn 132 chaotic dynamical systems. We draw parallels between Minimum Description Length (MDL) and Regularization in Machine Learning (RML), showcasing that the method of Sparse Kernel Flows offers a natural approach to kernel learning. By considering code lengths and complexities rooted in Algorithmic Information Theory (AIT), we demonstrate that data-adaptive kernel learning can be achieved through the MDL principle, bypassing the need for cross-validation as a statistical method. Finally, we extend the method of Kernel Mode Decomposition to design kernels with a view to detecting critical transitions in some fast-slow random dynamical systems.

Then, we introduce a data-based approach to estimating key quantities which arise in the study of nonlinear autonomous, control, and random dynamical systems. Our approach hinges on the observation that much of the existing linear theory may be readily extended to nonlinear systems – with a reasonable expectation of success – once the nonlinear system has been mapped into a high- or infinite-dimensional Reproducing Kernel Hilbert Space. We develop computable, non-parametric estimators approximating controllability and observability energies for nonlinear systems. We apply this approach to the problem of model reduction of nonlinear control systems. It is also shown that the controllability energy estimator provides a key means for approximating the invariant measure of an ergodic, stochastically forced nonlinear system. Finally, we show how kernel methods can be used to approximate center manifolds, propose a data-based version of the center manifold theorem, and construct Lyapunov functions for nonlinear ODEs.

- Speaker: Boumediene Hamzi
- Thursday 25 April 2024, 15:00-16:00
- Venue: Centre for Mathematical Sciences, MR2.
- Series: Applied and Computational Analysis; organiser: Matthew Colbrook.
