IMAGES

a network for developers and users of imaging and analysis tools

Wed 24 Apr 14:00: Title to be confirmed

Other events - Wed, 13/03/2024 - 11:37
Title to be confirmed

Abstract not available

Mon 11 Mar 18:00: Using organoids to reveal what sets the human brain apart

Talks - Fri, 08/03/2024 - 16:05
Using organoids to reveal what sets the human brain apart

The human brain sets us apart as a species, yet how it develops and functions differently from the brains of other mammals is still largely unclear. This also makes it difficult to understand how disorders of the brain arise, and therefore how to treat them. To understand such a complex organ, we have developed cerebral organoids, or brain organoids: 3D brain tissues made from stem cells that mimic the fetal brain. Such organoids are allowing us to tackle questions previously impossible to address with more traditional approaches. Indeed, our recent findings provide insight into various factors that influence the developing brain, and into how the human brain becomes so uniquely large, enabling our special cognitive abilities.

Check website for latest updates and booking information http://www.cambridgephilosophicalsociety.org

Wed 24 Apr 15:00: Title to be confirmed

Talks - Fri, 08/03/2024 - 11:37
Title to be confirmed

Abstract not available

Thu 14 Mar 15:00: ν-Tangent Kernels

Talks - Sun, 03/03/2024 - 15:49
ν-Tangent Kernels

Machine learning (ML) has been profitably leveraged across a wide variety of problems in recent years. Empirical observations show that ML models from suitable functional spaces are capable of adequately efficient learning across a wide variety of disciplines. In this work (the first in a planned sequence of three), we build the foundations for a generic perspective on ML model optimization and generalization dynamics. Specifically, we prove that under variants of gradient descent, “well-initialized” models solve sufficiently well-posed problems at a priori or in situ determinable rates. Notably, these results are obtained for a wider class of problems, loss functions, and models than the standard mean squared error and large-width regime that is the focus of conventional Neural Tangent Kernel (NTK) analysis. The ν-Tangent Kernel (νTK), a functional analytic object reminiscent of the NTK, emerges naturally as a key object in our analysis, and its properties function as the control for learning. We exemplify the power of our proposed perspective by showing that it applies to diverse practical problems solved using real ML models, such as classification tasks, data/regression fitting, differential equations, shape observable analysis, etc. We end with a brief discussion of the numerical evidence, and of the role νTKs may play in characterizing the search phase of optimization, which leads to the “well-initialized” models that are the crux of this work.
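
The νTK itself is the functional-analytic object introduced in the work above, but the empirical neural tangent kernel it is reminiscent of is easy to compute for a toy model. The sketch below is a rough numpy illustration, not the speaker's construction: it forms the Gram matrix K(x, x') = ∇θ f(x) · ∇θ f(x') for a one-hidden-layer tanh network with hand-derived parameter gradients, and all sizes and names are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d, h = 3, 16                              # toy input dimension and hidden width
W1 = rng.normal(size=(h, d)) / np.sqrt(d)
b1 = rng.normal(size=h)
w2 = rng.normal(size=h) / np.sqrt(h)

def model(x):
    """Scalar output of a one-hidden-layer tanh network."""
    return w2 @ np.tanh(W1 @ x + b1)

def param_grad(x):
    """Gradient of the scalar output with respect to all parameters, flattened."""
    a = np.tanh(W1 @ x + b1)
    da = 1.0 - a**2                       # tanh'
    g_w2 = a                              # d f / d w2
    g_b1 = w2 * da                        # d f / d b1
    g_W1 = np.outer(w2 * da, x)           # d f / d W1
    return np.concatenate([g_W1.ravel(), g_b1, g_w2])

X = rng.normal(size=(5, d))               # a handful of inputs
J = np.stack([param_grad(x) for x in X])  # parameter Jacobian, one row per input
K = J @ J.T                               # empirical tangent kernel Gram matrix

print("outputs:", np.round([model(x) for x in X], 3))
print("tangent kernel:\n", np.round(K, 3))
```

In NTK-style arguments it is the spectrum of this Gram matrix that controls how quickly gradient descent fits the sampled points; the talk's νTK plays an analogous role for a much broader class of models and losses.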

Thu 07 Mar 15:00: Hamiltonian simulation and optimal control

Talks - Tue, 27/02/2024 - 09:51
Hamiltonian simulation and optimal control

Hamiltonian simulation on quantum computers is one of the primary candidates for a demonstration of quantum advantage. A central tool in Hamiltonian simulation is the matrix exponential. While uniform polynomial approximations (Chebyshev), best polynomial approximations, and unitary but asymptotic rational approximations (Padé) are well known and extensively used in computational quantum mechanics, there was an important gap, which has now been filled by the development of the theory and algorithms for unitary rational best approximations. This class of approximants leads to geometric numerical integrators with excellent approximation properties. In the second part of the talk I will discuss time-dependent Hamiltonians for many-body two-level systems, including a quantum algorithm for their simulation and some (classical) optimal control algorithms for quantum gate design.
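
The unitary rational best approximations are the new ingredient of the talk and are not reproduced here; the numpy/scipy sketch below only shows the classical baseline they improve upon, namely the diagonal (1,1) Padé approximant of the exponential (the Cayley transform), which is exactly unitary for skew-Hermitian arguments but accurate only for small time steps. The Hamiltonian and step size are made up for the example.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
H = (A + A.conj().T) / 2                  # random Hermitian "Hamiltonian"
t = 0.1

U_ref = expm(-1j * t * H)                 # scipy's Pade-based scaling-and-squaring

# Diagonal (1,1) Pade approximant of exp(X): (I - X/2)^{-1} (I + X/2).
# For X = -i t H (skew-Hermitian) this Cayley transform is exactly unitary.
X = -1j * t * H
I = np.eye(n)
U_pade = np.linalg.solve(I - X / 2, I + X / 2)

print("unitarity error:", np.linalg.norm(U_pade.conj().T @ U_pade - I))
print("error vs expm:  ", np.linalg.norm(U_pade - U_ref))
```

Higher-order diagonal Padé approximants, combined with scaling and squaring as in scipy.linalg.expm, trade extra linear solves for accuracy while remaining unitary on skew-Hermitian input.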

Thu 23 May 15:00: TBA

Talks - Thu, 22/02/2024 - 15:06
TBA

Abstract not available

Fri 22 Mar 09:00: SCIENCE AND THE FUTURES OF MEDICINE One Day Meeting

Talks - Thu, 22/02/2024 - 10:42
SCIENCE AND THE FUTURES OF MEDICINE One Day Meeting

Recent advances in the sciences underpinning medicine, and their translation to clinical impact, are transforming our ability to understand and treat human diseases. This one-day meeting will explore emerging areas in which the convergence of fundamental science and translational opportunities promises to shape the futures of medicine.

Programme

09.00-09.15 Introduction to meeting

09.15-10.15 Serena Nik-Zainal, Professor of Genomic Medicine and Bioinformatics, Department of Medical Genetics, School of Clinical Medicine, University of Cambridge – Genomes, genome engineering and personalised medicine

10.15-11.15 Shyni Varghese, Professor of Biomedical Engineering, Mechanical Engineering and Materials Science and Orthopaedics, Duke University, US – Tissue engineering

11.15-11.45 Morning Coffee

11.45-12.45 Jan Hoeijmakers, Department of Molecular Genetics, Erasmus University Rotterdam, and Princess Maxima Center for Pediatric Oncology, Oncode, Utrecht, both in the Netherlands, and CECAD, Cologne, Germany – DNA damage, cancer and aging: the unexpected impact of nutrition on medicine

12.45-13.45 Lunch

13.45-14.45 Paul Workman, Professor of Pharmacology and Therapeutics, Centre for Cancer Drug Discovery, The Institute of Cancer Research, London – Transforming small molecule cancer drug discovery for precision medicine

14.45-15.45 Iain Buchan, W.H. Duncan Chair in Public Health Systems, Associate Pro Vice Chancellor for Innovation, Public Health, Policy & Systems, University of Liverpool – How might artificial intelligence augment population health?

15.45-16.15 Afternoon Tea

16.15-17.15 Alessio Ciulli, School of Life Sciences, University of Dundee – New approaches in drug discovery

17.15 Closing remarks

Check website for latest updates and booking information http://www.cambridgephilosophicalsociety.org

Thu 29 Feb 15:00: Efficient frequency-dependent numerical simulation of wave scattering problems

Talks - Wed, 21/02/2024 - 14:26
Efficient frequency-dependent numerical simulation of wave scattering problems

Wave propagation in homogeneous media is often modelled using integral equation methods. The boundary element method (BEM) is for integral equations what the finite element method is for partial differential equations. One difference is that BEM typically leads to dense discretization matrices. A major focus in the field has been the development of fast solvers for linear systems involving such dense matrices. Developments include the fast multipole method (FMM) and more algebraic methods based on the so-called H-matrix format. Yet, for time-harmonic wave propagation, these methods solve the original problem only for a single frequency. In this talk we focus on the frequency-sweeping problem: we aim to solve the scattering problem for a range of frequencies. We exploit the wavenumber-dependence of the dense discretization matrix for the 3D Helmholtz equation and demonstrate a memory-compact representation of all integral operators involved which is valid for a continuous range of frequencies, yet comes at the cost of only a small number of single-frequency simulations. This is joint work at KU Leuven with Simon Dirckx, Kobe Bruyninckx and Karl Meerbergen.
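
As a caricature of the wavenumber-dependence being exploited, the numpy sketch below assembles a dense 3D Helmholtz kernel matrix at a handful of Chebyshev wavenumbers and then interpolates each entry in k, so that any frequency inside the band costs an interpolant evaluation rather than a fresh assembly. The point set, band and degree are invented for the illustration; this is not the memory-compact operator representation developed at KU Leuven, where the entries are singular boundary integrals rather than point evaluations.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

rng = np.random.default_rng(2)
pts = rng.uniform(size=(60, 3))                   # toy "boundary" point cloud
r = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
np.fill_diagonal(r, 1.0)                          # dodge the singular diagonal in this toy

def assemble(k):
    """Dense 3D Helmholtz kernel matrix exp(i k r) / (4 pi r) at wavenumber k."""
    return np.exp(1j * k * r) / (4 * np.pi * r)

kmin, kmax, m = 5.0, 15.0, 24                     # frequency band and number of samples
nodes = np.cos((2 * np.arange(m) + 1) * np.pi / (2 * m))   # Chebyshev points in [-1, 1]
ks = 0.5 * (kmax + kmin) + 0.5 * (kmax - kmin) * nodes
samples = np.stack([assemble(k) for k in ks])     # the only assemblies we pay for

# Entrywise Chebyshev interpolation in the wavenumber (real and imaginary parts).
flat = samples.reshape(m, -1)
coeffs = C.chebfit(nodes, flat.real, m - 1) + 1j * C.chebfit(nodes, flat.imag, m - 1)

def evaluate(k):
    t = (2 * k - (kmax + kmin)) / (kmax - kmin)   # map k back to [-1, 1]
    return C.chebval(t, coeffs).reshape(r.shape)

k_test = 11.3
err = np.linalg.norm(evaluate(k_test) - assemble(k_test)) / np.linalg.norm(assemble(k_test))
print("relative error of the sweep surrogate at k =", k_test, ":", err)
```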

Mon 19 Feb 14:00: SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning

Talks - Mon, 19/02/2024 - 21:12
SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning

Deep Reinforcement Learning (DRL) has shown significant promise for uncovering sophisticated control policies that interact in environments with complicated dynamics, such as stabilizing the magnetohydrodynamics of a tokamak reactor and minimizing the drag force exerted on an object in a fluid flow. However, these algorithms require many training examples and can become prohibitively expensive for many applications. In addition, the reliance on deep neural networks results in an uninterpretable, black-box policy that may be too computationally challenging to use with certain embedded systems. Recent advances in sparse dictionary learning, such as the Sparse Identification of Nonlinear Dynamics (SINDy), have been shown to be a promising way to create efficient and interpretable data-driven models in the low-data regime. In this work, we extend ideas from the SINDy literature to introduce a unifying framework for combining sparse dictionary learning and DRL to create efficient, interpretable, and trustworthy representations of the dynamics model, reward function, and control policy. We demonstrate the effectiveness of our approaches on benchmark control environments and challenging fluids problems, achieving comparable performance to state-of-the-art DRL algorithms using significantly fewer interactions in the environment and an interpretable control policy orders of magnitude smaller than a deep neural network policy.
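
For readers who have not met SINDy before, the core regression is short enough to show. The sketch below is plain numpy on a made-up two-dimensional linear system: it runs sequentially thresholded least squares over a small polynomial library, which is the basic SINDy step that SINDy-RL wraps around the dynamics, reward, and policy inside the reinforcement learning loop.

```python
import numpy as np

# Toy data: damped oscillator  x' = -0.1 x + 2 y,  y' = -2 x - 0.1 y
dt, T = 0.01, 20.0
steps = int(T / dt)
A_true = np.array([[-0.1, 2.0], [-2.0, -0.1]])
X = np.zeros((steps, 2))
X[0] = [2.0, 0.0]
for i in range(steps - 1):                        # simple forward-Euler trajectory
    X[i + 1] = X[i] + dt * (A_true @ X[i])
dX = np.gradient(X, dt, axis=0)                   # numerical time derivatives

# Candidate library: [1, x, y, x^2, x*y, y^2]
x, y = X[:, 0], X[:, 1]
Theta = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

def stlsq(Theta, dX, threshold=0.05, iters=10):
    """Sequentially thresholded least squares, the basic SINDy regression."""
    Xi = np.linalg.lstsq(Theta, dX, rcond=None)[0]
    for _ in range(iters):
        Xi[np.abs(Xi) < threshold] = 0.0
        for k in range(dX.shape[1]):              # refit the surviving terms per state
            big = np.abs(Xi[:, k]) >= threshold
            if big.any():
                Xi[big, k] = np.linalg.lstsq(Theta[:, big], dX[:, k], rcond=None)[0]
    return Xi

print(np.round(stlsq(Theta, dX), 3))              # sparse coefficients, one row per library term
```

The nonzero entries of the returned coefficient matrix name exactly which library terms appear in the identified dynamics, which is what makes the resulting model, and a policy built on it, interpretable.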

Wed 14 Feb 14:00: SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning

Talks - Wed, 14/02/2024 - 13:35
SINDy-RL: Interpretable and Efficient Model-Based Reinforcement Learning

Deep Reinforcement Learning (DRL) has shown significant promise for uncovering sophisticated control policies that interact in environments with complicated dynamics, such as stabilizing the magnetohydrodynamics of a tokamak reactor and minimizing the drag force exerted on an object in a fluid flow. However, these algorithms require many training examples and can become prohibitively expensive for many applications. In addition, the reliance on deep neural networks results in an uninterpretable, black-box policy that may be too computationally challenging to use with certain embedded systems. Recent advances in sparse dictionary learning, such as the Sparse Identification of Nonlinear Dynamics (SINDy), have been shown to be a promising way to create efficient and interpretable data-driven models in the low-data regime. In this work, we extend ideas from the SINDy literature to introduce a unifying framework for combining sparse dictionary learning and DRL to create efficient, interpretable, and trustworthy representations of the dynamics model, reward function, and control policy. We demonstrate the effectiveness of our approaches on benchmark control environments and challenging fluids problems, achieving comparable performance to state-of-the-art DRL algorithms using significantly fewer interactions in the environment and an interpretable control policy orders of magnitude smaller than a deep neural network policy.

Thu 22 Feb 15:00: Computing lower eigenvalues on rough domains

Talks - Tue, 06/02/2024 - 22:54
Computing lower eigenvalues on rough domains

In this talk I will describe a strategy for finding sharp upper and lower numerical bounds of the Poincaré constant on a class of planar domains with piecewise self-similar boundary. The approach is developed in [A] and it consists of four main blocks: 1) tight inner-outer shape interpolation, 2) conformal mapping of the approximate polygonal regions, 3) grad-div system formulation of the spectral problem and 4) computation of the eigenvalue bounds. After describing the method, justifying its validity and reporting on general convergence estimates, I will show concrete evidence of its effectiveness on the Koch snowflake. I will conclude the talk by discussing potential applications to other linear operators on rough regions. This research has been conducted jointly with Lehel Banjai (Heriot-Watt University).

[A] J. Fractal Geometry 8 (2021), no. 2, pp. 153–188.
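
For orientation only, and nothing like the certified inner-outer strategy of the talk: on a domain as simple as the unit square, the Poincaré constant equals 1/√λ₁, where λ₁ is the first nonzero eigenvalue of the Neumann Laplacian, and a plain finite-difference approximation (with no guaranteed bounds) already recovers it. A toy numpy sketch with made-up grid sizes:

```python
import numpy as np

# Unit square, cell-centred grid: the Poincare constant is 1/sqrt(lambda_1),
# with lambda_1 the first nonzero Neumann Laplacian eigenvalue (exactly pi^2 here).
N = 40                                    # cells per direction
h = 1.0 / N

main = np.full(N, 2.0)
main[0] = main[-1] = 1.0                  # Neumann (no-flux) ends
D = (np.diag(main) - np.diag(np.ones(N - 1), 1) - np.diag(np.ones(N - 1), -1)) / h**2

I = np.eye(N)
L = np.kron(D, I) + np.kron(I, D)         # 2D Neumann Laplacian as a Kronecker sum

lam = np.sort(np.linalg.eigvalsh(L))
lam1 = lam[1]                             # lam[0] ~ 0 belongs to the constant mode
print("lambda_1 ~", lam1, "  exact:", np.pi**2)
print("Poincare constant ~", 1.0 / np.sqrt(lam1), "  exact:", 1.0 / np.pi)
```

On the Koch snowflake and similar rough domains this naive approach gives neither guaranteed upper nor lower bounds, which is the sort of gap the inner-outer interpolation, conformal mapping and grad-div reformulation of [A] address.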

Thu 15 Feb 15:00: Adaptive Intrusive Methods for Forward UQ in PDEs

Talks - Sun, 28/01/2024 - 17:54
Adaptive Intrusive Methods for Forward UQ in PDEs

In this talk we discuss a so-called intrusive approach for the forward propagation of uncertainty in PDEs with uncertain coefficients. Specifically, we focus on stochastic Galerkin finite element methods (SGFEMs). Multilevel variants of such methods provide polynomial-based surrogates with spatial coefficients that reside in potentially different finite element spaces. For elliptic PDEs with diffusion coefficients represented as affine functions of countably infinitely many parameters, well-established theoretical results state that such methods can achieve rates of convergence independent of the number of input parameters, thereby breaking the curse of dimensionality. Moreover, for nice enough test problems, it is even possible to prove the convergence rates afforded to the chosen finite element method for the associated deterministic PDE. However, achieving these rates in practice using automated computational algorithms remains highly challenging, and non-intrusive multilevel sampling methods are often preferred for their ease of use. We discuss an adaptive framework that is driven by a classical hierarchical a posteriori error estimation strategy, modified for the more challenging parametric PDE setting, and present numerical results.
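
To make "intrusive" concrete, here is a zero-dimensional caricature in numpy: no finite elements and a single uniform parameter, just the model a(y) u(y) = 1 with a(y) = 2 + y. Projecting onto Legendre polynomials in y already produces the coupled coefficient system that distinguishes stochastic Galerkin methods from non-intrusive sampling. Names, degrees and the test points are invented for the sketch.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

P = 8                                     # polynomial degree of the surrogate
q, w = leggauss(32)                       # Gauss-Legendre nodes/weights on [-1, 1]
w = w / 2.0                               # normalise to the uniform density on [-1, 1]

def legendre_basis(y):
    """Values of P_0 .. P_P at the points y, shape (P + 1, len(y))."""
    return np.stack([legval(y, np.eye(P + 1)[k]) for k in range(P + 1)])

Phi = legendre_basis(q)                   # basis values at the quadrature nodes
a = 2.0 + q                               # uncertain "coefficient" a(y) = 2 + y

G = (Phi * a * w) @ Phi.T                 # Galerkin matrix E[a P_j P_k]; couples all modes
b = (Phi * w) @ np.ones_like(q)           # right-hand side E[1 * P_j]
c = np.linalg.solve(G, b)                 # the intrusive part: one coupled solve

y_test = np.array([-0.9, 0.0, 0.7])
print("surrogate:", c @ legendre_basis(y_test))
print("exact    :", 1.0 / (2.0 + y_test))
```

In the SGFEM setting each scalar coefficient c_k becomes a finite element function, and the adaptive algorithm decides which parametric modes deserve which spatial resolution, guided by the hierarchical error estimator described above.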
