
IMAGES

a network for developers and users of imaging and analysis tools
 
A personal list of talks.

Mon 19 Feb 14:00: SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning

Mon, 19/02/2024 - 21:12
SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning

Deep Reinforcement Learning (DRL) has shown significant promise for uncovering sophisticated control policies in environments with complicated dynamics, such as stabilizing the magnetohydrodynamics of a tokamak reactor or minimizing the drag force exerted on an object in a fluid flow. However, these algorithms require many training examples and can become prohibitively expensive for many applications. In addition, the reliance on deep neural networks results in an uninterpretable, black-box policy that may be too computationally challenging to use with certain embedded systems. Recent advances in sparse dictionary learning, such as the Sparse Identification of Nonlinear Dynamics (SINDy), have proven to be a promising method for creating efficient and interpretable data-driven models in the low-data regime. In this work, we extend ideas from the SINDy literature to introduce a unifying framework for combining sparse dictionary learning and DRL to create efficient, interpretable, and trustworthy representations of the dynamics model, reward function, and control policy. We demonstrate the effectiveness of our approach on benchmark control environments and challenging fluids problems, achieving comparable performance to state-of-the-art DRL algorithms using significantly fewer interactions with the environment, with an interpretable control policy orders of magnitude smaller than a deep neural network policy.
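The sparse regression at the heart of SINDy is compact enough to sketch in a few lines. The following is a minimal illustration of the standard sequentially thresholded least-squares (STLSQ) algorithm on a toy linear system, assuming only NumPy; it is not the speakers' SINDy-RL implementation, and the candidate library and threshold are illustrative choices.

```python
import numpy as np

def stlsq(theta, dxdt, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: repeatedly solve a least-squares
    problem, then zero out small coefficients to promote sparsity."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for k in range(xi.shape[1]):          # refit each state on its surviving terms
            big = ~small[:, k]
            if big.any():
                xi[big, k] = np.linalg.lstsq(theta[:, big], dxdt[:, k], rcond=None)[0]
    return xi

# Toy system: dx/dt = -2x, dy/dt = 3y, with exact derivative data
t = np.linspace(0, 1, 200)
x, y = np.exp(-2 * t), np.exp(3 * t)
dxdt = np.column_stack([-2 * x, 3 * y])
theta = np.column_stack([np.ones_like(t), x, y, x * y])   # library [1, x, y, xy]
xi = stlsq(theta, dxdt)
# xi is sparse: only the x-coefficient of dx/dt and the y-coefficient of dy/dt survive
```

With noise-free data the regression recovers the two true coefficients (-2 and 3) and zeros out the rest, which is exactly the interpretability argument the abstract makes: the learned model is a short symbolic expression rather than a network.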



Thu 22 Feb 15:00: Computing lower eigenvalues on rough domains

Tue, 06/02/2024 - 22:54
Computing lower eigenvalues on rough domains

In this talk I will describe a strategy for finding sharp upper and lower numerical bounds of the Poincaré constant on a class of planar domains with piecewise self-similar boundary. The approach is developed in [A] and consists of four main blocks: 1) tight inner-outer shape interpolation, 2) conformal mapping of the approximate polygonal regions, 3) grad-div system formulation of the spectral problem, and 4) computation of the eigenvalue bounds. After describing the method, justifying its validity, and reporting on general convergence estimates, I will show concrete evidence of its effectiveness on the Koch snowflake. I will conclude the talk by discussing potential applications to other linear operators on rough regions. This research has been conducted jointly with Lehel Banjai (Heriot-Watt University).

[A] J. Fractal Geometry 8 (2021) No. 2, pp. 153-188
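For context on the underlying computation, here is a toy sketch of the baseline problem: computing the lowest Dirichlet eigenvalue of the Laplacian on a simple domain. This is a plain finite-difference discretisation of the unit square, not the shape-interpolation and conformal-mapping method of the talk, which targets rough (fractal) boundaries where such a naive grid approach fails.

```python
import numpy as np

n = 40                                  # interior grid points per direction
h = 1.0 / (n + 1)
# 1D second-difference matrix for -d^2/dx^2 with homogeneous Dirichlet conditions
T = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
I = np.eye(n)
L = np.kron(T, I) + np.kron(I, T)       # 2D Laplacian as a Kronecker sum
lam1 = np.linalg.eigvalsh(L)[0]         # smallest eigenvalue of the discrete operator
exact = 2 * np.pi**2                    # first Dirichlet eigenvalue of the unit square
```

On this smooth domain the discrete eigenvalue approaches 2*pi^2 from below with O(h^2) error; the point of the talk's machinery is to produce certified two-sided bounds in settings (like the Koch snowflake) where the boundary geometry makes this kind of direct discretisation unreliable.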


Thu 15 Feb 15:00: Adaptive Intrusive Methods for Forward UQ in PDEs

Sun, 28/01/2024 - 17:54
Adaptive Intrusive Methods for Forward UQ in PDEs

In this talk we discuss a so-called intrusive approach for the forward propagation of uncertainty in PDEs with uncertain coefficients. Specifically, we focus on stochastic Galerkin finite element methods (SGFEMs). Multilevel variants of such methods provide polynomial-based surrogates with spatial coefficients that reside in potentially different finite element spaces. For elliptic PDEs with diffusion coefficients represented as affine functions of countably infinitely many parameters, well-established theoretical results state that such methods can achieve rates of convergence independent of the number of input parameters, thereby breaking the curse of dimensionality. Moreover, for sufficiently nice test problems, it is even possible to prove the convergence rates afforded to the chosen finite element method for the associated deterministic PDE. However, achieving these rates in practice using automated computational algorithms remains highly challenging, and non-intrusive multilevel sampling methods are often preferred for their ease of use. We discuss an adaptive framework driven by a classical hierarchical a posteriori error estimation strategy, modified for the more challenging parametric PDE setting, and present numerical results.
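The "polynomial-based surrogate" idea at the core of stochastic Galerkin methods can be illustrated in miniature. The sketch below (my own toy example with NumPy, not an SGFEM and not code from the talk) builds a Legendre polynomial surrogate of a scalar quantity of interest u(xi) = 1/(2 + xi), with xi uniform on [-1, 1], by Galerkin projection computed via Gauss-Legendre quadrature.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

deg = 8
nodes, weights = leggauss(deg + 4)           # Gauss-Legendre rule on [-1, 1]
u = 1.0 / (2.0 + nodes)                      # quantity of interest at the nodes

# Project u onto Legendre polynomials P_0, ..., P_deg
coeffs = np.zeros(deg + 1)
for k in range(deg + 1):
    pk = legval(nodes, [0.0] * k + [1.0])    # P_k evaluated at the quadrature nodes
    coeffs[k] = (weights * pk * u).sum() / (2.0 / (2 * k + 1))  # divide by ||P_k||^2

xi = 0.3                                     # evaluate the surrogate at a test parameter
approx = legval(xi, coeffs)
exact = 1.0 / (2.0 + xi)
```

Because u is analytic in the parameter, the Legendre coefficients decay geometrically, so a degree-8 surrogate is already accurate to several digits; in a genuine SGFEM each coefficient is a finite element function in space rather than a scalar, and the adaptive question is which spatial and parametric degrees of freedom to refine.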


Thu 01 Feb 15:00: What happens when you chop an equation?

Sat, 27/01/2024 - 11:05
What happens when you chop an equation?

This talk will discuss a tricky business: truncating a differential equation to produce finite solutions. A truncation scheme is often built directly into the steps needed to create a numerical system. For example, finite differences replace exact differential operators with more manageable shadows, sweeping the exact approach off the stage.

In contrast, this talk will discuss the “tau method”, which adds an explicit parameterised perturbation to the original equation. By design, the correction calls into existence an exact (finite polynomial) solution to the updated analytic system. The hope is that the correction turns out to be minuscule when compared with a hypothetical exact solution. The tau method has worked splendidly in practice, starting with Lanczos’s original 1938 paper outlining the philosophy. However, why the scheme works so well (and when it fails) remains comparatively obscure. While addressing the theory behind the tau method, this talk will answer at least one conceptual question: where does an infinite amount of spectrum go when transitioning from a continuous differential equation to an exact finite matrix representation?
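To make the tau idea concrete, here is a toy sketch (my own illustration using NumPy's Chebyshev module, not code from the talk) for u' + u = 0 with u(-1) = 1, whose exact solution is exp(-(x+1)). We seek a degree-n Chebyshev polynomial and drop the highest residual equation in favour of the boundary condition, so the computed polynomial solves the perturbed equation u' + u = tau * T_n(x) exactly.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 10
A = np.zeros((n + 1, n + 1))
for k in range(n + 1):
    e = np.zeros(n + 1); e[k] = 1.0
    d = C.chebder(e)                  # Chebyshev coefficients of T_k'
    col = e.copy()
    col[:len(d)] += d                 # column of the operator d/dx + 1 in the T_k basis
    A[:, k] = col
# Rows 0..n-1 enforce that residual coefficients of T_0..T_{n-1} vanish;
# row n is replaced by the boundary condition u(-1) = sum_k a_k * (-1)^k = 1.
A[n, :] = [(-1.0) ** k for k in range(n + 1)]
rhs = np.zeros(n + 1); rhs[n] = 1.0
a = np.linalg.solve(A, rhs)

tau = a[n]                            # the residual is exactly tau * T_n(x)
x = 0.5
err = abs(C.chebval(x, a) - np.exp(-(x + 1)))
```

For this smooth solution both tau and the pointwise error are tiny at degree 10, matching the abstract's claim that the perturbation "comes out minuscule" in practice; the boundary condition is satisfied exactly by construction.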
