Scientific Computing Seminar


Scientific computing is the science and art of using solid mathematics and efficient computational models to solve important problems in science.

This seminar provides a framework for researchers and practitioners in scientific computing to present ongoing work and discuss new ideas. The subjects of the seminars cover scientific computing in its broadest sense, and the list of speakers includes experts from both academia and industry.

The seminars usually take place on Wednesdays at 13:00 - 14:00 in room 136 in building 303B (Matematicum), and everyone is welcome to attend the talks. Since members of the audience often have very different backgrounds, presentations are usually held at an introductory level.

The seminars are organized by the section for scientific computing at DTU Compute. We conduct research in the areas of ordinary and partial differential equations, functional analysis, optimization and control, inverse problems, reconstruction methods in imaging and tomography, high-performance/heterogeneous computing, and biomathematics.

If you would like to know more about the seminars, please contact Yiqiu Dong or Kim Knudsen.


The table below shows all speakers in all semesters. For the current semester only, please see the current list.

Spring 2013

Time & Place | Speaker | Title

Feb. 19, 305/205 | Paul Fischer (DTU Compute - AlgoLog) | Some NP-hard problems aren't and "Not all, statisticians believe in, is true"

March 12, 305/205 | Mark Hoffmann (MAN Diesel) | Mathematical modeling and scientific computing

April 16, 305/205 | Toke Koldborg Jensen (Rambøll) | Safe sailing in Femern Bælt and risk analysis

May 16, 305/205 | Harald Siegfried Waschl (Johannes Kepler University) | Application Examples of Online MPC in Combustion Engine Control
The aim of this talk is to present different online MPC applications and to give a short overview of the facilities and the MPC workflow used at JKU. Both case studies share a linear MPC framework combined with a real-time-capable QP solver. The first application example is the control of a Diesel engine air system in view of stringent emission legislation and real sensors with limited accuracy. The second example focuses on the application of online MPC to integral engines used at compressor stations in the US pipeline network. In this case, a numerically motivated self-tuning strategy is also presented, which reduces the tuning effort on site and improves the numerical conditioning of the problem.

May 21, 305/205 | Nils Olsen (DTU Space) | Measuring Earth's magnetic field

May 23, 421/73 | Simon R. Arridge (University College London) | Computational Methods in PhotoAcoustic Tomography

May 23, 421/73 | Emil Sidky (University of Chicago) | Objective Assessment in CT

June 4, 305/205 | Ole Sigmund (DTU Mechanical Engineering) | Topology Optimization

June 18, 321/033 | Antoine Laurain (TU Berlin) | Shape and topology optimization methods for inverse problems
We propose a general shape optimization approach for the resolution of inverse problems in tomography. For instance, in the case of Electrical Impedance Tomography (EIT) we reconstruct the electrical conductivity, while in the case of Fluorescence Diffuse Optical Tomography (FDOT) the unknown is a fluorophore concentration. In the two cases the underlying partial differential equations are different, but the reconstruction method remains essentially the same. These problems are in general severely ill-posed, and a standard remedy is to make additional assumptions on the unknowns to regularize the problem. Our approach consists of assuming that the functions to be reconstructed are piecewise constant.

June 25, 305/205 | Joachim Dahl (MOSEK) | Conic modeling and optimization in MOSEK

Fall 2013

Time & Place | Speaker | Title

Sep. 3, 303B/136 | Peter Røgen (DTU Compute - SC) | Geometry and "Knot Theory" for proteins and does topology optimization apply to protein structure prediction

Sep. 4, 101/S16 | Samuli Siltanen (University of Helsinki) | Regularization for electrical impedance tomography using nonlinear Fourier transform
The aim of electrical impedance tomography (EIT) is to reconstruct the inner structure of an unknown body from voltage-to-current measurements performed at the boundary of the body. EIT has applications in medical imaging, nondestructive testing, underground prospecting and process monitoring. The imaging task of EIT is a nonlinear and ill-posed inverse problem. A non-iterative EIT imaging algorithm is presented, based on the use of a nonlinear Fourier transform. Regularization of the method is provided by nonlinear low-pass filtering, where the cutoff frequency is explicitly determined from the noise amplitude in the measured data. Numerical examples are presented, suggesting that the method can be used for imaging the heart and lungs of a living patient.
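The regularization principle described here -- truncating a transform of the data at a cutoff frequency set by the noise level -- can be illustrated with an ordinary linear Fourier low-pass filter. This is only a sketch of the filtering idea with an invented signal and noise level; the actual method uses a nonlinear Fourier transform.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256
t = np.linspace(0.0, 1.0, n, endpoint=False)
# Smooth "true" signal plus white measurement noise.
signal = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 7 * t)
noisy = signal + 0.3 * rng.standard_normal(n)

# Low-pass filtering: keep Fourier modes below a cutoff frequency
# (in the talk's setting, chosen from the noise amplitude), zero the rest.
cutoff = 10
coeffs = np.fft.fft(noisy)
freqs = np.fft.fftfreq(n, d=1.0 / n)
coeffs[np.abs(freqs) > cutoff] = 0.0
filtered = np.fft.ifft(coeffs).real

err_noisy = np.linalg.norm(noisy - signal)
err_filtered = np.linalg.norm(filtered - signal)
```

Discarding the modes above the cutoff removes most of the noise energy while keeping the (low-frequency) signal, so the filtered reconstruction is closer to the truth than the raw data.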

Sep. 10, 303B/136 | Per Christian Hansen (DTU Compute - SC) | Regularizing Iterations with Krylov Subspaces
Krylov subspaces are fascinating mathematical objects with many important applications in scientific computing, e.g., for solving large systems of linear equations, for computing eigenvalues, and for determining controllability in a control system. They are also important tools for regularization of large-scale discretizations of inverse problems, which is the topic of this talk.
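The regularizing effect of Krylov iterations can be sketched numerically: on a noisy ill-posed problem the iteration error first decreases and then grows again ("semi-convergence"), so the iteration number itself acts as a regularization parameter. The matrix, noise level and iteration count below are invented for illustration, with CGLS standing in for the Krylov methods discussed.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 32
# Synthetic ill-posed problem: singular values decaying over 8 orders of magnitude.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)
A = U @ np.diag(s) @ V.T
x_true = V @ np.logspace(0, -2, n)               # smooth solution
b = A @ x_true + 1e-4 * rng.standard_normal(n)   # noisy data

def cgls(A, b, iters):
    """Conjugate gradients on the normal equations; returns all iterates."""
    x = np.zeros(A.shape[1])
    r = b.copy()
    d = A.T @ r
    gamma = d @ d
    iterates = []
    for _ in range(iters):
        Ad = A @ d
        alpha = gamma / (Ad @ Ad)
        x = x + alpha * d
        r = r - alpha * Ad
        g = A.T @ r
        gamma_new = g @ g
        d = g + (gamma_new / gamma) * d
        gamma = gamma_new
        iterates.append(x.copy())
    return iterates

errors = [np.linalg.norm(xk - x_true) for xk in cgls(A, b, 30)]
# Semi-convergence: the error decreases at first, then grows as the
# iterates start fitting the noise.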

Sep. 17, 303B/136 | Peter Nørtoft (DTU Compute - SC) | Isogeometric Analysis and Shape Optimization in Fluid Mechanics
Isogeometry represents a unification of finite element analysis and computer-aided design. This makes it a very potent tool for engineers designing shapes that are controlled by partial differential equations. A well-known example is the design of a wing of an airplane. In this talk, I outline some of the advantages and some of the challenges that the isogeometric method faces when applied to such problems within fluid mechanics.

Sep. 24, 303B/136 | Kim Knudsen (DTU Compute - SC) | Elements of Impedance Tomography

Oct. 1, 303B/136 | Anton Evgrafov (DTU Compute - SC) | Newton's method for nothing (and convergence checks for free)

Oct. 8, 303B/136 | Allan Peter Engsig-Karup (DTU Compute - SC) | On Lovelace's Path to Modern Scientific Computing

Oct. 18, 101/S03 | Lieven Vandenberghe (UCLA) | Decomposition by primal-dual splitting and applications in image deblurring
In optimization the term decomposition usually refers to iterative techniques for solving large problems that become block-separable after fixing certain coupling variables or removing coupling constraints. By extension, the same techniques apply to other types of structure than block-diagonal, for example problems with network or convolution structure. The most common approach is dual decomposition, often via the alternating direction method of multipliers (ADMM) or the split Bregman method. In the dual approach, coupling variables are handled by replicating the variables and enforcing equality through consistency constraints. In this talk we will discuss alternative decomposition schemes based on a direct splitting of the primal-dual optimality conditions. The techniques will be illustrated with examples from image deblurring.
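The dual-decomposition idea mentioned here -- replicate the coupling variable and enforce agreement through consistency constraints -- can be sketched with consensus ADMM on a toy problem with two scalar quadratic terms (everything in this example is invented for illustration; the talk's primal-dual splitting schemes are an alternative to this classical approach):

```python
import numpy as np

# Minimize f1(x) + f2(x) with f_i(x) = (x - a_i)^2 by giving each term its
# own copy x_i, coupled through the consistency constraint x_i = z.
a = np.array([1.0, 3.0])   # the minimizer of the sum is x = 2
rho = 1.0                  # ADMM penalty parameter
x = np.zeros(2)            # replicated local copies
u = np.zeros(2)            # scaled dual variables for x_i = z
z = 0.0

for _ in range(300):
    # x_i-update: argmin (x - a_i)^2 + (rho/2)(x - z + u_i)^2, in closed form.
    x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
    # z-update: averaging enforces consensus among the copies.
    z = np.mean(x + u)
    # Dual ascent on the consistency constraints.
    u = u + x - z
```

Each x_i-update only sees its own term f_i, which is exactly what makes the scheme decompose for block-separable problems.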

Oct. 18, 101/S03 | Jacek Gondzio (Edinburgh University) | Inexact search directions in very large-scale optimization
In this talk we are concerned with second-order methods for optimization (which naturally include interior point algorithms). Many large-scale problems cannot be solved with methods that rely on exact directions obtained by factorizing matrices. For such problems, the search directions have to be computed using iterative methods. We address the question of how much inexactness is allowed without noticeably slowing down the convergence compared with the exact second-order method. We argue that (except for some very special problems) matrix-free approaches have to be applied to successfully tackle truly large-scale problems. We provide new theoretical insights and back them up with computational experience.

Oct. 18, 101/S03 | Yinyu Ye (Stanford University) | Complexity Analysis beyond Convex Optimization
A powerful approach to solving difficult optimization problems is convex relaxation. In one application, the problem is first formulated as a cardinality-constrained linear program (LP) or rank-constrained semidefinite program (SDP), where the cardinality or rank corresponds to the target support size or dimension. Then, the non-convex cardinality or rank constraint is either dropped or replaced by a convex surrogate, thus resulting in a convex optimization problem. In this talk, we explore the use of a non-convex surrogate of the cardinality or rank function, namely the so-called Schatten quasi-norm. Although the resulting optimization problem is non-convex, we show that, for many cases, a first- and second-order KKT or critical point can be approximated to arbitrary accuracy in polynomial time. We also summarize a few complexity analysis results for more general non-convex optimization, which has recently become a popular research topic and will hopefully lead to more effective non-convex optimization solvers.
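For reference, the Schatten p-quasi-norm of a matrix X is ||X||_p = (sum_i sigma_i^p)^(1/p) with 0 < p < 1; as p approaches 0, the sum sum_i sigma_i^p approaches the number of nonzero singular values, i.e. rank(X), which is what makes it a tighter (but non-convex) rank surrogate than the nuclear norm (p = 1). A small numerical check with an invented matrix:

```python
import numpy as np

def schatten_p_sum(X, p):
    """sum_i sigma_i(X)^p -- the quantity minimized as a rank surrogate."""
    s = np.linalg.svd(X, compute_uv=False)
    return float(np.sum(s ** p))

X = np.diag([1.0, 0.1, 0.01])       # rank 3, but with fast-decaying spectrum

nuclear = schatten_p_sum(X, 1.0)    # p = 1: nuclear norm (convex), about 1.11
quasi = schatten_p_sum(X, 0.01)     # p << 1: close to rank(X) = 3
```

The nuclear norm barely "sees" the two small singular values, whereas the quasi-norm with small p counts all three almost equally.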

Oct. 22, 303B/136 | Yiqiu Dong (DTU Compute - SC) | Total-Variation-Based Variational Methods in Image Restoration

Oct. 29, 303B/136 | Martin Skovgaard Andersen (DTU Compute - SC) | Sparse Semidefinite Optimization

Oct. 30, 303B/026 | Sarah Hamilton (Helsinki University) | Direct D-bar Methods for 2D Electrical Impedance Tomography
Electrical Impedance Tomography (EIT) is a non-invasive imaging modality that aims to recover the internal conductivity/permittivity of a body via current and voltage measurements taken at its surface. Images are then formed from the reconstructed conductivity/permittivity distribution(s) for diagnostic and evaluative purposes. In the 2D geometry, EIT is clinically useful for chest and brain imaging, and applications extend to nondestructive evaluation and underground prospecting. The reconstruction task is a highly ill-posed nonlinear inverse problem, which is very sensitive to noise, and requires the use of regularized solution methods such as D-bar methods. D-bar methods, based on tailor-made scattering transforms, regularize EIT through nonlinear low-pass filters. Key tools for the methods involve complex geometrical optics solutions, the Faddeev Green's function, scattering transforms, and D-bar equations. Advantages of D-bar algorithms include parallelization, low-frequency limits in the scattering parameter, and, as the methods are non-iterative, freedom from concerns about convergence to local minima. In this introductory talk, a conceptual overview of D-bar methods for the EIT problem is given. Reconstructions of complex admittivities (conductivities and permittivities) from experimental EIT data are presented, and the partial data case is explored.

Nov. 5, 303B/136 | Jakob Sauer Jørgensen (DTU Compute - SC) | Computed tomography with sparsity priors

Dec. 5, 101/MR2 | Luke Olson (University of Illinois at Urbana-Champaign) | Advanced Multigrid Solvers

Dec. 5, 101/MR2 | Xing Cai (Simula) | Adopting heterogeneous hardware platforms for scientific computing

Dec. 18, 321/033 | Joost Batenburg (University of Antwerp) | An Integral Approach to X-Ray Tomography
X-ray tomography has developed into an advanced field of experimental research, utilizing not just the absorption contrast, but also phase, chemical and directional information to characterize the interior structure of the scanned object. Achieving the best possible results is becoming more and more an interdisciplinary effort, combining state-of-the-art experimental hardware, careful experiment design, mathematical modeling, customized algorithms and high performance computing. In the Computational Imaging group at CWI Amsterdam and the ASTRA group in Antwerp, we are developing theory, tools and methodology that can be used to study these steps in an integral framework. As it turns out, even the basic question “what would you like to know about the object?” is already very challenging and difficult to answer in a generic way. In this lecture I will give an overview of the interplay between the many disciplines involved in this field, and the tension between the goals of the different communities that need to work together. I will then give a brief overview of our activities in integrating key steps of the tomographic pipeline, illustrated by a series of concrete examples.

Spring 2014

Time & Place | Speaker | Title

Apr. 23, 13:00, 303B/136 | Jens Hugger (University of Copenhagen) | Numerical pricing of Financial options with simple Finite Difference Methods

Apr. 30, 13:00, 303B/136 | Ole Lindberg (FORCE Technology) | Real Time Simulation of Ship-Ship Interaction with GPUs
This talk considers real-time calculation of ship-to-ship interaction forces in a maritime simulator. Maritime simulators are used for marine engineering and training of naval officers. In the simulator, it is possible to navigate realistic ship models in realistic marine environments. The more accurate the physical modeling of the environment, the higher the quality of the training and engineering. The motion of ships is determined by the forces on them from the environment. The forces on ships come from interaction with other ships, fixed or floating structures, the seabed, wind, waves, viscous friction, etc. The main challenge is that all forces have to be calculated in real time, and that the calculations of the solutions need to be fast and scalable. This talk considers calculation of ship-ship interaction forces based on an existing potential flow finite difference (FD) model for large-scale ocean wave modeling, extended to include a new weighted least squares (WLS) approximation of the body boundary condition (BC) on the ship's hull. The model is solved using an efficient multigrid preconditioned defect correction solver (PDC) and implemented in parallel on high-throughput many-core graphics processing units (GPUs) using the CUDA API. The new WLS approximation of the body BC is similar to immersed boundary methods (IBM) for elliptic equations, in particular the inhomogeneous Neumann IBM. This development is a result of an ongoing effort to develop more accurate, efficient and flexible models of wave forces on ships. This ship-ship interaction model is developed in cooperation between the Technical University of Denmark and FORCE Technology and is currently being integrated into the maritime simulator SimFlex4 at FORCE Technology.

May 7, 13:00, 303B/136 | Boyan Stefanov Lazarov (DTU Mechanical Engineering) | Multiscale design of structures and materials using topology optimization
The focus of the presentation is on the applicability of multiscale computations in topology optimization of structures and materials. Topology optimization is an iterative process which finds a material distribution in a given design domain by minimizing an objective and fulfilling a set of predefined constraints. The material distribution is represented by a density field which takes the value one if the material point is occupied with material, and zero if the material point is void. In order to utilize gradient-based optimization, the problem is relaxed so that the density can take intermediate values between zero and one. The optimization algorithm consists of alternating finite element analyses, gradient evaluations, regularization steps and math programming updates. Most of the computational effort is spent on solving the discretized physical problem, and wider industrial adoption requires reduction of the solution time. A promising direction is the development of new scalable algorithms and codes for proper utilization of modern parallel machines. Here, the Multiscale FEM (MsFEM) with local spectral basis functions is adopted for achieving this goal. The method constructs suitable coarse spaces which are capable of representing the important features of the solution and provide a good approximation of the system response. Initial optimization studies utilizing MsFEM reveal several challenges. The optimization process exploits every weakness in the discretization, and the obtained optimized topologies are heavily influenced by the selection of the coarse spaces. Hence, good solutions are obtained only for a relatively large number of coarse basis functions, which leads to unacceptable computational cost. A computationally effective alternative can be obtained by utilizing MsFEM as a preconditioner for Krylov iterative solvers.
The proposed approach, without any significant modifications, is directly applicable in the design of optimal topologies under uncertainties, where it provides an effective scheme with a computational time of the same order as that necessary for optimizing a single deterministic problem.
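The density relaxation described above is commonly implemented with the SIMP interpolation (stiffness proportional to density^p). SIMP is a standard choice in the field, not necessarily the exact scheme used in this talk; the penalization exponent p > 1 makes intermediate densities structurally inefficient, driving optimized designs toward crisp 0/1 material distributions:

```python
def simp_stiffness(rho, p=3.0, E0=1.0, Emin=1e-9):
    """SIMP interpolation: material stiffness as a function of density rho in [0, 1].

    Emin > 0 keeps the stiffness matrix nonsingular for void elements.
    """
    return Emin + rho ** p * (E0 - Emin)

# With p = 3, half density yields only about 1/8 of the full stiffness at
# half the material cost, so intermediate densities are "uneconomical".
half = simp_stiffness(0.5)
full = simp_stiffness(1.0)
```

All parameter values here (p = 3, unit Young's modulus) are illustrative defaults often seen in the topology optimization literature.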

May 14, 13:00, 303B/136 | Jesper Sandvig Mariegaard (DHI) | Wave modelling at DHI and the development of a new phase-resolving wave model
DHI is developing a new phase-resolving wave model to replace the existing Enhanced Boussinesq model MIKE21BW. The model will be a 3D non-hydrostatic Navier-Stokes solver on an unstructured mesh. This talk will introduce some basic wave modelling concepts and different types of wave models, and provide details on the design of the new wave model. The talk will also give a short overview of DHI's activities related to numerical wave modelling, with special focus on current research projects.

May 21, 13:00, 303B/136 | Mads Nielsen (University of Copenhagen) | Quantification of appearance in imaging population studies
Population studies include hundreds, thousands or even hundreds of thousands of 2D or 3D medical images. The Alzheimer's Disease Neuroimaging Initiative includes 5,000+ T1-weighted 3D MRI scans. The Copenhagen Study on Breast Density includes 150,000+ mammograms. The Danish Lung Cancer Screening Trial includes 8,000+ 3D CT scans. The OsteoArthritis Initiative includes 25,000+ 3D knee MRIs. In these images the visual appearance (as opposed to the geometrical structure) carries information on the health of the subjects. We present methodologies to derive quantitative imaging biomarkers based on visual appearance and their applications in studies of Alzheimer's, Breast Cancer, Smoker's Lungs, and Osteoarthritis.

May 28, 13:00, 303B/136 | Achim Schroll (University of Southern Denmark) | Computational Modeling of Fluorescence Loss in Photobleaching
A quantitative analysis of intracellular transport processes is essential for the diagnosis and improved treatment of diseases like Alzheimer's, Parkinson's, lysosomal storage disorders and arteriosclerosis. Fluorescence loss in photobleaching (FLIP) is a modern microscopy method for visualization of transport processes in living cells. Although FLIP is widespread, an automated reliable analysis of image data is still lacking. This talk presents a well-posed computational model based on spatially resolved diffusion constants as well as molecular binding quotas. Based on this model, FLIP images are simulated and thus molecular transport in living cells is reliably quantified.
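A minimal sketch of the kind of model involved -- diffusion with a localized bleaching sink on a 1D grid, solved with an explicit finite-difference scheme. All parameters (diffusion constant, bleaching rate, geometry) are invented for illustration and are far simpler than the spatially resolved model of the talk:

```python
import numpy as np

# Grid and (made-up) model parameters.
n, dx = 50, 0.1
D = 1.0            # diffusion constant
k = 5.0            # bleaching rate inside the illuminated region
dt = 0.004         # explicit scheme: dt < dx^2 / (2 D) for stability
bleach = np.zeros(n)
bleach[:10] = 1.0  # indicator of the bleached (illuminated) region

u = np.ones(n)     # initial fluorophore concentration
mass = [u.sum()]
for _ in range(500):
    # Discrete Laplacian with reflecting (no-flux) boundaries.
    lap = np.empty_like(u)
    lap[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2]
    lap[0] = u[1] - u[0]
    lap[-1] = u[-2] - u[-1]
    # du/dt = D * u_xx - k * chi_bleach * u
    u = u + dt * (D * lap / dx ** 2 - k * bleach * u)
    mass.append(u.sum())
# Total fluorescence decays as molecules diffuse into the bleached
# region and are destroyed there -- the signal measured in FLIP.
```

The decay curve of the total fluorescence is the quantity that FLIP experiments record; fitting such a model to image data is what allows transport parameters to be quantified.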

Jun. 4, 13:00, 303B/136 | Mirza Karamehmedovic (DTU Compute) | Modelling and Computation for Total Crystallography
Using examples from materials science, I shall motivate "Total Crystallography". Then, after presenting some of the computational challenges involved here, I shall briefly describe what we are doing to achieve a next generation of fast and accurate numerical solvers.

Jun. 11, 13:00, 303B/136 | Per Berg and Jacob Weismann Poulsen (Danish Meteorological Institute) | Recent Developments of the HBM Ocean Circulation Model
We will give an introduction to HBM, the HIROMB-BOOS Model, an ocean circulation model code that is developed and applied at DMI in different setups, from the operational forecast model for the official Danish storm surge warning system to research projects such as climate and bio-geo-chemical modelling on a Pan-European scale. The model code is being developed for present and future architectures, and therefore has mature support for shared and distributed memory systems using OpenMP and MPI, respectively. More recently, our attention has turned to many-core architectures, and we would like to share some details of the re-factoring work we did for Intel's Xeon Phi co-processor.

Jun. 25, 13:00, 303B/136 | Tieyong Zeng (Hong Kong Baptist University) | Total Variation Dictionary Model and Dictionary Learning for Image Restoration
Image restoration plays an important role in image processing, and numerous approaches have been proposed to tackle this problem. This talk presents a modified model for image restoration that is based on a combination of Total Variation (TV) and dictionary approaches. Since the well-known TV regularization is non-differentiable, the proposed method utilizes its dual formulation instead of its approximation in order to exactly preserve its properties. The data-fidelity term combines the one commonly used in image restoration with a wavelet-thresholding-based term. The resulting optimization problem is then solved via a first-order primal-dual algorithm. Numerical experiments demonstrate the good performance of the proposed model. Moreover, we replace the classical TV by the nonlocal TV regularization, which results in a much higher quality of restoration. We then turn to the dictionary learning problem for image recovery. Various numerical results on non-Gaussian noises and image decompression illustrate the superior performance of our approach.

Fall 2014

Time & Place | Speaker | Title

Aug. 21, 10:00, 303B/136 | Marta Betcke (UCL) | Fabry-Perot single pixel camera: imaging using data sparsity
We present a new compressed sensing photoacoustic scanner based on an optically addressed Fabry-Perot interferometer. Instead of slow raster acquisition, the new scanner interrogates the whole sensor with a series of independent illumination patterns, each individual measurement being a scalar product of the illumination pattern and the acoustic field on the sensor. We discuss various aspects of compressed data acquisition and image reconstruction for this novel device on both simulated and real data.

Aug. 21, 11:00, 303B/136 | Nuutti Hyvönen (Aalto University) | Reconstruction of outer boundary shape and edge-enhancing regularization in electrical impedance tomography
Electrical impedance tomography (EIT) is an imaging modality for extracting information about the conductivity distribution inside a physical body from boundary measurements of current and voltage. This talk considers two computational tasks related to practical EIT: (i) The simultaneous reconstruction of the exterior shape and the internal conductivity of the examined body. (ii) The reconstruction of embedded inhomogeneities in an approximately constant background by employing edge-preferring regularization. The reconstruction methods are built in the framework of the complete electrode model, which is the most accurate model for real-life EIT. The functionality of the proposed algorithms is evaluated by experimental data.

Sep. 9, 13:00, 303B/136 | Angelos Mantzaflaris (RICAM) | Integration by Interpolation and lookup for Isogeometric analysis
In the IGA context, the unstructured FEM mesh is replaced by a structured parametric quadrilateral mesh, which is typically uniform. The global geometry map can then be used to map it to a "free-form" physical mesh. Element-wise assembly is computationally expensive in this setting, due to both the elevated degree and the extended supports of the basis functions, which span multiple elements. Driven by these observations, we propose a new, quadrature-free approach based on interpolation and fast look-up operations for values of uniform B-spline integrals. The method can be regarded as a projection of the integrands onto the space of B-spline tri-products, followed by exact integration in that space. We obtain theoretical error estimates for the so-called consistency error, which are supported by the observed convergence rates in the experiments.

Oct. 8, 13:00, 303B/136 | Martin Hanke-Bourgeois (Johannes Gutenberg-Universität Mainz) | Multi-frequency MUSIC for impedance imaging
We investigate a multi-frequency impedance imaging technique that has been suggested by Scholz in combination with the TransScan device, and which reappeared recently in work by Ammari, Boulier, and Garnier in a model for the active electrolocation of weakly electric fish. While for the classical MUSIC algorithm (in the impedance imaging context) different input currents with fixed frequency are used to generate data, the multi-frequency MUSIC scheme is restricted to one single current pattern with different driving frequencies. In either case the goal is to determine the locations of small irritating obstacles within a homogeneous background and, if possible, their shape, from measured voltages. In this talk we will outline limitations and potentials of the multi-frequency MUSIC technique.

Nov. 12, 13:00, 303B/136 | Lars Kai Hansen (DTU Compute) | Bayesian approaches to Sparsity
Ill-posed inverse problems are found in all areas of science and technology. In many application domains, sparsity of the solution is a key ingredient in stabilizing the solution. In Bayesian analysis, sparsity can be promoted in a number of ways, including mechanisms like "automatic relevance determination", "spike and slab" and the "variational garrote". In the talk I will first discuss some general ideas relating to Bayesian optimality. I'll introduce the three mentioned sparsity mechanisms, and discuss our experiences with them in hard ill-posed problems from the field of brain imaging. This application domain also suggests various generalizations of the basic linear inverse problem, leading us to consider both "multiple measurement vectors" and uncertainty of the forward model.
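A minimal sketch of one of these mechanisms -- automatic relevance determination for a linear model, using the classical MacKay fixed-point updates for per-weight prior precisions. The data, noise level, iteration count and precision cap are all invented; the point is that precisions of irrelevant weights are driven to (capped) infinity, which prunes those weights and yields a sparse solution:

```python
import numpy as np

rng = np.random.default_rng(2)
n, d = 100, 10
X = rng.standard_normal((n, d))
w_true = np.zeros(d)
w_true[:2] = [2.0, -3.0]                 # only two relevant features
y = X @ w_true + 0.1 * rng.standard_normal(n)

beta = 1.0 / 0.1 ** 2                    # noise precision (assumed known)
alpha = np.ones(d)                       # per-weight prior precisions
for _ in range(50):
    # Gaussian posterior over the weights for the current alpha.
    S = np.linalg.inv(beta * X.T @ X + np.diag(alpha))
    m = beta * S @ X.T @ y
    # MacKay fixed-point update; a large alpha_i prunes weight i.
    gamma = 1.0 - alpha * np.diag(S)
    alpha = np.minimum(gamma / np.maximum(m ** 2, 1e-20), 1e10)
```

After convergence, the posterior mean `m` is close to the true weights on the relevant features and essentially zero elsewhere, without any explicit sparsity threshold being set by hand.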

Spring 2015

Time & Place | Speaker | Title

Feb. 18, 13:00, 303B/136 | Mario Ricchiuto | Uncertainty quantification and robust code-to-code comparison for long wave run-up
In this talk we aim at comparing, on a very simple case of long wave run-up over a constant slope, the full set of statistics obtained from an uncertainty quantification study based on two independent shallow water codes. This robust code-to-code comparison allows us both to enhance the detail of the benchmarking process and to give reliable information on some physical aspects related to the UQ outputs.

Feb. 26, 10:00, 303B/136 | Maarten Gulliksson (University of Örebro) | Sparse Regularization and Damped Dynamical Systems for Solving Equations

Feb. 27, 11:00, 303B/136 | Tobias Lindstrøm Jensen (Aalborg University) | Fast Algorithms for High-Order Sparse Linear Prediction with Applications to Speech Processing

Mar. 4, 13:00, 303B/136 | Axel Thielscher (DTU Elektro) | Forward modeling in brain stimulation
Non-invasive brain stimulation methods (NIBS) modulate brain activity by means of injected currents. As counterparts to imaging modalities such as magnetic resonance imaging (MRI), they have an important role in systems neuroscience research. They are also tested as a treatment in a variety of clinical applications such as depression treatment, stroke rehabilitation or tinnitus treatment. A major shortcoming of current NIBS methods is their low spatial specificity, partly caused by a high uncertainty regarding the field distribution injected in the brain. In the last few years, forward modeling of the electric fields using increasingly accurate models of the human head and numerical methods such as finite element methods (FEM) has helped to shed light on the spatial characteristics of NIBS. After a short introduction to NIBS methods, I will review recent results from the forward modeling to highlight what we have learned so far, but also discuss some shortcomings which still exist.

Mar. 10, 11:00, 303B/136 | Jürgen Frikel (TUM) | Incomplete Data Tomography

May 8, 10:00, 303B/136 | Jan Modersitzki (Fraunhofer MEVIS, Lübeck) | Image Registration, Data Fusion, Motion Correction
Image registration is a fascinating, important and challenging problem in image processing and particularly in medical imaging. Given are two or more images taken at different times, from different devices or perspectives. The goal of image registration is to establish correspondences of objects within the images. In this obviously ill-posed problem the objective is to automatically determine a reasonable transformation, such that a transformed version of one of the images becomes similar to the other one. In this tutorial-type talk, we give an introduction to this problem and present typical areas of medical application. We outline a state-of-the-art mathematical variational approach that provides the necessary flexibility for a huge range of applications. The backbone of this approach is a well-designed objective function that is based on problem-specific data-fitting terms and regularizers. Constraints can be used to incorporate additional information such as point-to-point correspondences, local rigidity or volume preservation of the sought transformation. The mathematics is motivated by a number of examples including data fusion and motion correction.
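The variational formulation can be illustrated with a toy 1D example: an SSD (sum of squared differences) data-fitting term plus a regularizer on the transformation, here restricted to integer translations so the minimization reduces to a grid search. The signals and weights are invented; real registration uses far richer transformation spaces and optimizers:

```python
import numpy as np

x = np.arange(100)
# Template image and a reference that is the template shifted by 5 samples.
template = np.exp(-0.5 * ((x - 40) / 5.0) ** 2)
reference = np.exp(-0.5 * ((x - 45) / 5.0) ** 2)

def objective(shift, lam=1e-4):
    # Data-fitting term: SSD between the transformed template and the reference.
    data = np.sum((np.roll(template, shift) - reference) ** 2)
    # Regularizer: penalize large (implausible) transformations.
    return data + lam * shift ** 2

best = min(range(-20, 21), key=objective)
# The registration recovers the 5-sample translation.
```

The regularization term plays the role described in the abstract: among transformations that fit the data comparably well, it prefers the most plausible one.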

May 21, 13:00, 303B/136 | James G. Nagy (Emory University) | Mathematical Modeling and Computational Methods for Digital Tomographic Breast Imaging
The standard approach for breast imaging is mammography, which produces a two-dimensional radiograph of the three-dimensional breast. This results in tissue superposition, which can have negative consequences when attempting to detect cancer. To avoid this issue, digital breast tomosynthesis (DBT) and dedicated breast computed tomography (BCT) have been recently developed. DBT is a limited angle tomography technique that produces a quasi-3D image, while BCT results in a fully 3D reconstructed image. There are tradeoffs (e.g., resolution of the reconstructed image vs radiation dose to the patient) when using each of these techniques. But in either case, development of efficient algorithms is important. In this talk we consider algorithms that exploit poly-energetic assumptions of the x-ray source, and we consider dual spectrum single pass breast imaging to reduce beam hardening artifacts and with an aim to achieve material quantification in the imaged object. We will describe our current efforts, discuss open problems, and provide results on real data.

Fall 2015

Time & Place | Speaker | Title

Aug. 4, 13:00, 303B/136 (unusual time) | Bernadette Hahn (Saarland University) | Image reconstruction from motion-corrupted CT data
In computerized tomography, the data acquisition takes a considerable amount of time, since the radiation source rotates around the investigated object. Temporal changes of the specimen during this period result in inconsistent data sets. Hence, the application of standard reconstruction algorithms causes motion artefacts in the images, which can severely impede the diagnostic analysis. To reduce the artefacts, the reconstruction method has to take the dynamic behavior of the specimen into account. To obtain an adequate reconstruction, a priori information about the motion is required, which has to be extracted from the measured data. This information is then included in specially designed algorithms which compensate for the object's motion within the reconstruction step. Both challenges are addressed in this talk and illustrated with numerical results.

Aug. 26, 13:00, 303B/136 | Michael Vogelius (Rutgers University) | Cloaking by mapping and related issues

Sept. 2, 13:00, 101/S10 (unusual room) | Mila Nikolova (CMLA, ENS Cachan, CNRS, France) | Combining models is an open problem: case studies and applications
Many imaging tasks amount to solving inverse problems. They are typically solved by minimizing an objective that accounts for the models of the recording device and the sought-after image. The common approach is to take a weighted combination; however, it appears that the solution then deviates from both models. Our talk focuses on the ways in which these models can be used jointly so that all available information is used more efficiently. We present two such models as well as applications.

Sept. 9, 13:00, 303B/136 Andreas Noack, Computer Science and Artificial Intelligence Laboratory, MIT Fast and flexible linear algebra in Julia
Applied scientists often develop computer programs exploratively, where data examination, manipulation, visualization and code development are tightly coupled. The traditional programming languages supporting this workflow are relatively slow and, in consequence, performance-critical computations are delegated to library code written in faster languages that are infeasible for interactive development. In this talk, I introduce the Julia programming language and briefly describe its core design. I shall argue that the language is well suited for computational linear algebra: Julia provides the features needed for exploratory program development, yet can be almost as fast as C and Fortran. Furthermore, Julia's rich type system makes it possible to extend linear algebra functions with user-defined element types, such as finite fields or exotic algebras. I will show examples of Julia programs that are relatively simple, yet fast and flexible.

Sept. 16, 13:00, 303B/136 Chen Keasar, Department of Computer Science and Department of Life Sciences, Ben-Gurion University, Israel Chemistry by reverse engineering - multibody solvation energy for proteins
Proteins are linear, flexible molecules that serve as the major building blocks and engines of all life on earth. Under physiological conditions, solvation phenomena, namely the hydrophobic effect (i.e., oil-drop-like behavior) and the screening of electrostatic interactions, stabilize proteins in complex 3D structures. These structures enable the diverse protein functions and are thus the focus of much research. Chemical insight offers powerful approaches to computational studies of solvation phenomena and the way they dictate protein structures. However, the theoretically robust methods tend to be computationally demanding, often impractical. An alternative approach trades theoretical robustness for computational efficiency, and learns solvation phenomena by examining their outcome, namely, known protein structures. In these "reverse engineering" studies, biased distributions of inter-atomic distances emerge as a hallmark of solvation effects: hydrophobic atoms manifest their tendency to aggregate by being close to one another, and electrostatic screening is manifested by the rarity of close contacts between charged groups. These observations have given rise to quite a few knowledge-based (a.k.a. mean-force) energy terms, which have turned out to be very useful in protein research. Unfortunately, biased pairwise distributions cannot represent some solvation aspects, which are inherently related to properties of atom ensembles. For example, interactions of charged atoms do (rarely) occur in protein structures, but only when shielded from the solvent by layers of hydrophobic atoms. Such ensemble effects prove resistant to statistics-based formulation. In my lecture I will present our attempt to cope with this challenge by a new multibody, differentiable, and computationally efficient solvation term.
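The "reverse engineering" of a pairwise energy from biased distance distributions is commonly done via an inverse-Boltzmann relation, E(r) = -kT ln(P_obs(r)/P_ref(r)). The sketch below uses synthetic "observed" and reference distance samples as stand-ins for real structural statistics:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for real structural statistics: "observed" distances for
# a hydrophobic pair are biased toward short range; the reference is uniform.
observed = rng.gamma(shape=2.0, scale=1.5, size=10_000)   # assumed, peaks near 3 A
reference = rng.uniform(0.0, 12.0, size=10_000)           # assumed background

bins = np.linspace(0.0, 12.0, 25)
p_obs, _ = np.histogram(observed, bins=bins, density=True)
p_ref, _ = np.histogram(reference, bins=bins, density=True)

# Inverse-Boltzmann ("knowledge-based") energy per distance bin, in units of
# kT; the small epsilon guards against empty bins.
eps = 1e-9
energy = -np.log((p_obs + eps) / (p_ref + eps))
```

Short distances come out favorable (negative energy) and long ones unfavorable. By construction, such a pairwise term cannot capture the ensemble effects (e.g., shielding by layers of hydrophobic atoms) that the proposed multibody term targets.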

Oct. 30, 11:00, 303B/136 Anton Evgrafov, NTNU Topology optimization of fluid domains using high order methods
The desire to create objects whose shapes have specific properties when immersed in a moving liquid or gas has an extremely long history, dating back at least to the appearance of longboats, canoes, and later sailing boats some thousands of years ago. Nowadays, computer tools for the automatic computation of shapes with desired fluid functionality find applications in the automobile and aerospace industries, lab-on-a-chip microfluidic systems, and fuel cells, among others. Mathematically, one is interested in selecting a domain for a PDE system modelling a given flow situation (often the Navier--Stokes equations) out of some family of shapes, in order to minimize (or at least reduce) the value of a given performance functional. Several aspects make this problem challenging. Analytically, we have to deal with the fact that domain families do not naturally form a vector space. Computationally, we would like to reduce the number of times we have to solve the governing PDEs. We will start by looking at some applications of shape/topology optimization in fluid mechanics. We will then briefly outline a few different approaches to these PDE-constrained optimization problems. Finally, we will focus on topology optimization (a.k.a. shape optimization through homogenization) and derive an optimization algorithm with very fast local convergence.

Nov. 4, 13:00, 303B/130 Tianshi Chen, Linköping University On kernel structures for regularized LTI system identification
A key issue in system identification is dealing with the bias-variance tradeoff. For the classical maximum likelihood/prediction error method, this issue becomes how to find a parametric model structure with suitable model complexity, which is often handled by model validation techniques. Regularization is another way to deal with this issue and has long been known to be beneficial for general inverse problems, of which system identification is an example. However, the use of regularization was not investigated rigorously in system identification until very recently, with the appearance of kernel-based regularization methods. With carefully designed kernel structures that embed the available prior knowledge, together with well-tuned regularization, promising results have been reported for both dynamic model estimation and structure detection problems. In this talk, I will mainly focus on kernel structures and discuss how to make use of prior knowledge in the kernel structure design from a machine learning perspective and/or a system theory perspective, depending on the type of prior knowledge.
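A minimal sketch of kernel-based regularization for FIR impulse-response estimation, assuming a TC-type ("tuned/correlated") kernel, one common choice in this literature. The system, data sizes, and hyperparameters below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative data: FIR system of order n with an exponentially decaying
# impulse response, white-noise input u, and noisy output y.
n, N = 30, 200
g_true = 0.8 ** np.arange(n)
u = rng.standard_normal(N)
Phi = np.array([[u[t - k] if t - k >= 0 else 0.0 for k in range(n)]
                for t in range(N)])
y = Phi @ g_true + 0.1 * rng.standard_normal(N)

# TC kernel: K[i, j] = c * a**max(i, j) encodes the prior knowledge that the
# impulse response is smooth and decays exponentially.
c, a, sigma2 = 1.0, 0.9, 0.01
i = np.arange(n)
K = c * a ** np.maximum(i[:, None], i[None, :])

# Regularized estimate (posterior mean under a Gaussian prior with
# covariance K and noise variance sigma2):
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(N), y)
```

The kernel plays the role of the model structure: its hyperparameters (c, a) control complexity continuously, in contrast to the discrete order selection of the classical parametric approach.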

Nov. 10, 13:00, 303B/136 (unusual time - together with tomography seminar) Francois Lauze, DIKU, Copenhagen University

Nov. 11, 13:00, 324/170 (unusual room) Bijan Mohammed, Université de Montpellier UQ in cascade
We present an original framework for uncertainty quantification (UQ) in optimization. It is based on a cascade of ingredients with growing computational complexity for both forward and reverse uncertainty propagation. The approach is essentially geometric. It starts with a complexity-based splitting of the independent variables and the definition of a parametric optimization problem. Geometric characterization of the global sensitivity spaces, through their dimensions and relative positions given by the principal angles between global search subspaces, brings a first set of information on the impact of uncertainty in the functioning parameters on the optimal solution. Combining the multi-point descent direction with quantiles on the optimization parameters makes it possible to define the notion of Directional Extreme Scenarios (DES) without sampling large-dimensional design spaces. One goes beyond DES with Ensemble Kalman Filters (EnKF) once the multi-point optimization algorithm is cast into an ensemble simulation environment. This formulation accounts for variability in large dimension. The UQ cascade ends with the joint application of EnKF and DES, leading to the concept of Ensemble Directional Extreme Scenarios (EDES), which provides a more exhaustive set of possible extreme scenarios given the probability density function of the optimization parameters. The different ingredients are illustrated on several problems in aircraft shape design and on an example of reservoir history matching, in the presence of operational and/or geometric uncertainties.
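The EnKF ingredient of the cascade can be sketched as a generic stochastic analysis step on a toy Gaussian example; this is the textbook perturbed-observation update, not the speaker's EDES framework:

```python
import numpy as np

rng = np.random.default_rng(3)

def enkf_update(ensemble, H, y, R):
    """One stochastic EnKF analysis step.
    ensemble: (n_state, n_members) forecast, H: (n_obs, n_state) observation
    operator, y: (n_obs,) data, R: (n_obs, n_obs) observation covariance."""
    n_obs, n_mem = H.shape[0], ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)
    P = X @ X.T / (n_mem - 1)                          # sample covariance
    K = P @ H.T @ np.linalg.solve(H @ P @ H.T + R, np.eye(n_obs))
    # Perturbed observations keep the analysis spread statistically consistent.
    perturbed = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_mem).T
    return ensemble + K @ (perturbed - H @ ensemble)

# Toy example: a 2D state with a standard-normal prior, observing only the
# first component (value 2.0, observation variance 0.1).
ens = rng.multivariate_normal([0.0, 0.0], np.eye(2), 500).T
H = np.array([[1.0, 0.0]])
analysis = enkf_update(ens, H, y=np.array([2.0]), R=np.array([[0.1]]))
```

The analysis mean of the observed component moves most of the way toward the observation (the exact Gaussian posterior mean here is 2/1.1 ≈ 1.82), while unobserved components are corrected only through sample correlations; this is what lets the ensemble account for variability in large dimension.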

Nov. 25, 13:00, 324/170 (unusual room) Zeinab Mahmoudi, DTU Compute Performance Enhancement of Continuous Glucose Monitoring in a Dual-Hormone Artificial Pancreas for Type 1 Diabetes
The control of blood glucose (BG) concentration by means of a portable artificial pancreas (AP) would substantially increase the quality of life for type 1 diabetes patients, by reducing the burden of meticulous manual adjustment of insulin dosage and timing, and by reducing late diabetes complications through intensive insulin therapy without severe hypoglycemia. However, AP technology faces several challenges, chief among them patient safety. One of the main factors contributing to insufficient patient safety is faults associated with the continuous glucose monitoring (CGM) sensor; sensor faults can have a major impact on the performance of the AP and are a limiting factor in achieving sufficiently reliable closed-loop control of BG. The sources of CGM faults and artifacts include BG-to-interstitial glucose (IG) kinetics, random noise and spikes, and problems caused by the biochemistry of glucose sensors, such as signal drift and sensor sensitivity variations, miscalibration, signal dropout caused by communication loss, and pressure-induced sensor attenuation (PISA). Faults and anomalies can significantly reduce the accuracy of sensor measurements; consequently, the CGM readings may deviate substantially from the actual BG concentrations, which can cause critical situations in an AP that relies on the CGM output. Therefore, although a reliable control algorithm may be able to keep the CGM measurements in the target glycemic range, the deviation of the CGM readings from the actual BG concentration may leave the controller unable to maintain the actual BG levels within the target range. The first aim of this project is to develop a CGM accuracy enhancement module that detects and corrects CGM faults and anomalies, and can reduce the deviations of the CGM readings from the actual BG levels in a dual-hormone AP.
This is planned to be achieved by using novel mathematical algorithms and signal processing methods based on stochastic differential equations, nonlinear modeling and filtering, in combination with sensor redundancy. The second aim of the project is the clinical evaluation of the module. To fulfill this goal, we will pursue two approaches: 1) the performance of the CGM accuracy enhancement module will be investigated in the presence of different AP disturbances such as meals, exercise, and stress; 2) the effect of CGM improvement on CGM-based clinical decision making for diabetes treatment will be evaluated in a follow-up study, outside the closed-loop approach.

Dec. 9, 13:00, 306 aud 36 (unusual room) Roland Herzog, TU Chemnitz Function space aspects of optimal control problems
In this presentation we consider some prototypical optimal control problems for partial differential equations. We will emphasize the function space aspects of such problems both in terms of analysis as well as solution methods, and illustrate them with numerical experiments. We shall also address current and potential future research directions in the field.

Dec. 16, 13:00, 324/170 (unusual room) Dimitri Boiroux, DTU Compute The artificial pancreas for people with type 1 diabetes
For more than 50 years, the automated or semi-automated (i.e., including meal and/or physical activity announcements) administration of insulin, also known as the artificial pancreas (AP), has had the ambition to improve glucose regulation, to reduce the risk of diabetes-related complications and to ease the life of people with type 1 diabetes. Current prototypes comprise a continuous glucose monitor (CGM), a control algorithm implemented on a mobile device, and a pump. In most cases, Model Predictive Control (MPC) algorithms are used as the control algorithm for the AP. Recently, glucagon analogues that are stable in liquid form have been considered as a possible improvement to the AP. An AP using both insulin and glucagon is referred to as a dual-hormone AP. In this talk, I will present some of our key results using a single-hormone AP. Then, I will discuss the benefits and challenges of our dual-hormone AP.
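The receding-horizon idea behind MPC can be sketched with a made-up scalar glucose-deviation model (an illustration only, not the speakers' controller): at each step a finite-horizon quadratic cost is minimized over future doses, and only the first dose is applied.

```python
import numpy as np

# Made-up scalar model: glucose deviation g from target decays with factor a
# and responds to an insulin dose u with gain b < 0.
a, b = 0.9, -0.5
horizon = 10

def mpc_dose(g0, weight_u=0.1):
    """Unconstrained MPC: minimize sum_k g_k^2 + weight_u * u_k^2 over the
    horizon, then apply only the first dose (receding horizon)."""
    # Stack predictions g = F * g0 + G @ u, where row k of (F, G) gives
    # g_{k+1} = a^(k+1) g0 + sum_{j<=k} a^(k-j) b u_j.
    F = a ** np.arange(1, horizon + 1)
    G = np.zeros((horizon, horizon))
    for k in range(horizon):
        for j in range(k + 1):
            G[k, j] = a ** (k - j) * b
    u = np.linalg.solve(G.T @ G + weight_u * np.eye(horizon), -G.T @ (F * g0))
    return u[0]

# Closed loop: an elevated deviation is driven back toward the target.
g = 5.0
for _ in range(20):
    g = a * g + b * mpc_dose(g)
```

Real AP controllers add the elements this sketch omits: input constraints (doses cannot be negative), meal announcements as disturbances, and, in the dual-hormone case, glucagon as a second input acting with opposite sign.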

Jan. 20, 13:00, ?? Michael Pedersen, DTU Compute The Hilbert Uniqueness Method for Optimal Boundary Control

Spring 2016

Time & Place | Speaker | Title

March 9, 13:00, 303B/136 Johan Rønby, DHI The art of moving a surface

Jan. 20, 13:00, 324/170 Michael Pedersen, DTU Compute Control of PDEs - The HUM method of J.L. Lions