
Fall 2015 

Time & Place  Speaker  Title 

Aug. 4, 13:00, 303B/136 (unusual time)  Bernadette Hahn, Saarland University  Image reconstruction from motion corrupted CT data 
In computerized tomography, the data acquisition takes a considerable amount of time, since the radiation source rotates around the investigated object. Temporal changes of the specimen during this period result in inconsistent data sets. Hence, the application of standard reconstruction algorithms causes motion artefacts in the images, which can severely impede the diagnostic analysis. To reduce the artefacts, the reconstruction method has to take the dynamic behavior of the specimen into account. To obtain an adequate reconstruction, a priori information about the motion is required, which has to be extracted from the measured data. This information is then included in specially designed algorithms which compensate for the object's motion within the reconstruction step. Both challenges are addressed in this talk and illustrated with numerical results. 

Aug. 26, 13:00, 303B/136  Michael Vogelius, Rutgers University  Cloaking by mapping and related issues 


Sept. 2, 13:00, 101/s10 (unusual room)  Mila Nikolova, CMLA, ENS Cachan, CNRS, France  Combining models is an open problem: case studies and applications 
Many imaging tasks amount to solving inverse problems. They are typically solved by minimizing an objective that accounts for models of both the recording device and the sought-after image. The common approach is to take a weighted combination of the two; however, the solution then deviates from both models. Our talk focuses on ways in which these models can be used jointly so that all available information is exploited more efficiently. We present two such models as well as applications. 
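As a concrete illustration of the weighted-combination approach the talk starts from (a generic sketch, not the speaker's joint models), the following Python snippet combines a linear forward model with a finite-difference image prior via a weight lam; all names, sizes, and values are illustrative:

```python
import numpy as np

# Generic weighted combination of a data model and an image prior:
#   minimize ||A x - b||^2 + lam * ||L x||^2
# (an illustrative sketch, not the specific models from the talk)

rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)     # forward model of the device
x_true = np.sign(np.sin(np.linspace(0, 6, n)))   # piecewise-constant "image"
b = A @ x_true + 0.05 * rng.standard_normal(n)   # noisy measurements

L = np.eye(n) - np.eye(n, k=1)                   # finite-difference prior
lam = 0.1

# Closed-form minimizer of the combined objective (normal equations)
x_hat = np.linalg.solve(A.T @ A + lam * L.T @ L, A.T @ b)

# The reconstruction fits neither term exactly: the weighted combination
# trades off data fidelity against the prior.
print(np.linalg.norm(A @ x_hat - b))    # residual of the data model
print(np.linalg.norm(L @ x_hat))        # residual of the prior model
```

Both residuals are strictly positive, which is the deviation-from-both-models effect the abstract refers to.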

Sept. 9, 13:00, 303B/136  Andreas Noack, Computer Science and Artificial Intelligence Laboratory, MIT  Fast and flexible linear algebra in Julia 
Applied scientists often develop computer programs exploratively, where data examination, manipulation, visualization and code development are tightly coupled. The traditional programming languages supporting this workflow are relatively slow and, in consequence, performance-critical computations are delegated to library code written in faster languages that are unsuitable for interactive development. In this talk, I introduce the Julia programming language and briefly describe its core design. I shall argue that the language is well suited for computational linear algebra. Julia provides features for exploratory program development, but the language itself can be almost as fast as C and Fortran. Furthermore, Julia's rich type system makes it possible to extend linear algebra functions with user-defined element types, such as finite fields or exotic algebras. I will show examples of Julia programs that are relatively simple, yet fast and flexible. 

Sept. 16, 13:00, 303B/136  Chen Keasar, Department of Computer Science and Department of Life Sciences, Ben-Gurion University, Israel  Chemistry by reverse engineering: multibody solvation energy for proteins 
Proteins are linear and flexible molecules that serve as the major building blocks and engines of all life on earth. Under physiological conditions, solvation phenomena, namely the hydrophobic effect (i.e. oil-drop-like behavior) and the screening of electrostatic interactions, stabilize proteins in complex 3D structures. These structures enable the diverse protein functions and are thus the focus of much research. Chemical insight offers powerful approaches to computational studies of solvation phenomena and the way they dictate protein structures. However, the theoretically robust methods tend to be computationally demanding, often impractically so. An alternative approach trades theoretical robustness for computational efficiency, and learns solvation phenomena by examining their outcome, namely, known protein structures. In these "reverse engineering" studies, biased distributions of interatomic distances emerge as a hallmark of solvation effects. Hydrophobic atoms manifest their tendency to aggregate by being close to one another, and electrostatic screening is manifested by the rarity of close contacts between charged groups. These observations have given rise to quite a few knowledge-based (a.k.a. mean-force) energy terms, which have turned out to be very useful in protein research. Unfortunately, biased pairwise distributions cannot represent some solvation aspects, which are inherently related to properties of atom ensembles. For example, interactions of charged atoms do (rarely) occur in protein structures, but only when shielded from the solvent by layers of hydrophobic atoms. Such ensemble effects prove resistant to statistics-based formulation. In my lecture I will present our attempt to cope with this challenge via a new, multibody, derivable, and computationally efficient solvation term. 
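The inverse-Boltzmann construction behind many knowledge-based pair potentials can be sketched as follows; the distance samples here are synthetic stand-ins for a database of known structures, and kT and the bin layout are illustrative assumptions:

```python
import numpy as np

# Sketch of a knowledge-based ("reverse engineered") pair potential:
# biased interatomic distance distributions observed in known structures
# are converted to an energy via the inverse Boltzmann relation
#   E(r) = -kT * ln( P_observed(r) / P_reference(r) )

rng = np.random.default_rng(1)
kT = 0.6  # roughly kcal/mol at ~300 K (illustrative)

# Hydrophobic pairs cluster at short distances; the reference is uniform.
observed = rng.normal(loc=4.5, scale=0.8, size=100_000)   # favored contacts
reference = rng.uniform(3.0, 10.0, size=100_000)          # background

bins = np.linspace(3.0, 10.0, 36)
p_obs, _ = np.histogram(observed, bins=bins, density=True)
p_ref, _ = np.histogram(reference, bins=bins, density=True)

eps = 1e-9  # avoid log of zero in sparsely populated bins
energy = -kT * np.log((p_obs + eps) / (p_ref + eps))

# Distances over-represented in the data come out as favorable (negative).
centers = 0.5 * (bins[:-1] + bins[1:])
print(centers[np.argmin(energy)])  # minimum near the favored contact distance
```

The abstract's point is precisely that this pairwise construction cannot capture ensemble effects such as buried charge pairs, which motivates the multibody term of the talk.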

Oct. 30, 11:00, 303B/136  Anton Evgrafov, NTNU  Topology optimization of fluid domains using high order methods 
The desire to create objects whose shapes have specific properties when immersed in a moving liquid or gas has an extremely long history, dating back at least to the appearance of longboats, canoes, and later sailing boats some thousands of years ago. Nowadays, computer tools for the automatic computation of shapes with desired fluid functionality find applications in the automobile and aerospace industries, lab-on-a-chip microfluidic systems, and fuel cells, among others. Mathematically, one is interested in selecting a domain for a PDE system modelling a given flow situation (often the Navier-Stokes equations) out of some family of shapes, in order to minimize (or at least reduce) the value of a given performance functional. There are several aspects that make this problem challenging. Analytically, we have to deal with the fact that domain families do not naturally form a vector space. Computationally, we would like to reduce the number of times we have to solve the governing PDEs. We will start by looking at some applications of shape/topology optimization in fluid mechanics. We will then briefly outline a few different approaches to these PDE-constrained optimization problems. Finally, we will focus on topology optimization (a.k.a. shape optimization through homogenization) and derive an optimization algorithm with very fast local convergence. 

Nov. 4, 13:00, 303B/130  Tianshi Chen, Linköping University  On kernel structures for regularized LTI system identification 
A key issue in system identification is dealing with the bias-variance tradeoff. For the classical maximum likelihood/prediction error method, this issue becomes how to find a parametric model structure with suitable model complexity, which is often handled by model validation techniques. Regularization is another way to deal with this issue and has long been known to be beneficial for general inverse problems, of which system identification is an example. However, the use of regularization was not investigated rigorously in system identification until very recently, with the appearance of kernel-based regularization methods. With carefully designed kernel structures that embed available prior knowledge, as well as well-tuned regularization, promising results have been reported for both dynamic model estimation and structure detection problems. In this talk, I will mainly focus on kernel structures and discuss how to make use of prior knowledge in the kernel structure design from a machine learning perspective and/or a system theory perspective, depending on the type of prior knowledge. 
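A minimal sketch of such a kernel-based method, assuming an FIR model and the "TC" kernel structure K(i,j) = c*lam**max(i,j) from this literature; the hyperparameters are fixed by hand here rather than tuned (e.g. by marginal likelihood), and all values are illustrative:

```python
import numpy as np

# Kernel-based regularized FIR identification sketch: estimate an impulse
# response g from input/output data by ridge regression with a kernel that
# encodes prior knowledge (exponential decay, i.e. stability).

rng = np.random.default_rng(2)
n, N = 30, 200                                   # FIR order, data length
g_true = 0.8 ** np.arange(n)                     # stable true impulse response
u = rng.standard_normal(N + n)                   # input signal (with padding)
Phi = np.column_stack([u[n - k : N + n - k] for k in range(n)])  # regressors
y = Phi @ g_true + 0.1 * rng.standard_normal(N)  # noisy output

# TC kernel as prior covariance of the impulse response coefficients
c, lam, sigma2 = 1.0, 0.8, 0.01
i = np.arange(1, n + 1)
K = c * lam ** np.maximum.outer(i, i)

# Regularized least squares (posterior mean under the Gaussian prior)
g_hat = K @ Phi.T @ np.linalg.solve(Phi @ K @ Phi.T + sigma2 * np.eye(N), y)

print(np.linalg.norm(g_hat - g_true) / np.linalg.norm(g_true))  # relative error
```

The kernel plays the role of the "carefully designed structure embedding prior knowledge": its exponential decay encodes stability of the system being identified.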

Nov. 10, 13:00, 303B/136 (unusual time, together with tomography seminar)  Francois Lauze, DIKU, Copenhagen University  


Nov. 11, 13:00, 324/170 (unusual room)  Bijan Mohammed, Université de Montpellier  UQ in cascade 
We present an original framework for uncertainty quantification (UQ) in optimization. It is based on a cascade of ingredients with growing computational complexity for both forward and reverse uncertainty propagation. The approach is essentially geometric. It starts with a complexity-based splitting of the independent variables and the definition of a parametric optimization problem. Geometric characterization of the global sensitivity spaces, through their dimensions and their relative positions given by the principal angles between global search subspaces, brings a first set of information on the impact of uncertainty in the functioning parameters on the optimal solution. Combining the multipoint descent direction with quantiles on the optimization parameters makes it possible to define the notion of Directional Extreme Scenarios (DES) without sampling large-dimensional design spaces. One goes beyond DES with Ensemble Kalman Filters (EnKF) once the multipoint optimization algorithm is cast into an ensemble simulation environment. This formulation accounts for variability in large dimension. The UQ cascade ends with the joint application of the EnKF and DES, leading to the concept of Ensemble Directional Extreme Scenarios (EDES), which provides a more exhaustive set of possible extreme scenarios given the probability density function of the optimization parameters. The different ingredients are illustrated on several problems in aircraft shape design and on an example of reservoir history matching, in the presence of operational and/or geometric uncertainties. 
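For readers unfamiliar with the EnKF building block used in the cascade, here is a minimal sketch of one analysis step (a generic, perturbed-observations variant, not the talk's specific ensemble environment; the operator H, the data y, and the noise levels are illustrative):

```python
import numpy as np

# One Ensemble Kalman Filter (EnKF) analysis step: update a forecast
# ensemble of states toward observed data, with uncertainty represented
# by the ensemble spread rather than an explicit covariance model.

rng = np.random.default_rng(3)
n_state, n_obs, n_ens = 5, 2, 50

H = rng.standard_normal((n_obs, n_state))        # linear observation operator
R = 0.1 * np.eye(n_obs)                          # observation-noise covariance
X = rng.standard_normal((n_state, n_ens))        # forecast ensemble (columns)
y = np.array([1.0, -0.5])                        # observed data

# Ensemble statistics
X_mean = X.mean(axis=1, keepdims=True)
A = X - X_mean                                   # anomalies
P = A @ A.T / (n_ens - 1)                        # sample covariance

# Kalman gain and stochastic ("perturbed observations") update
Kg = P @ H.T @ np.linalg.solve(H @ P @ H.T + R, np.eye(n_obs))
Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
X_a = X + Kg @ (Y - H @ X)                       # analysis ensemble

# Misfit to the data in observation space, before and after the update
print(np.linalg.norm(H @ X_mean.ravel() - y),
      np.linalg.norm(H @ X_a.mean(axis=1) - y))
```

The analysis ensemble mean moves toward the data while the ensemble spread carries the remaining uncertainty, which is what makes the EnKF usable in large-dimensional settings without forming covariances explicitly.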

Nov. 25, 13:00, 324/170 (unusual room)  Zeinab Mahmoudi, DTU Compute  Performance Enhancement of Continuous Glucose Monitoring in a Dual-Hormone Artificial Pancreas for Type 1 Diabetes 
The control of blood glucose (BG) concentration by means of a portable artificial pancreas (AP) would substantially increase the quality of life of type 1 diabetes patients, by reducing the burden of meticulous manual adjustment of insulin dosage and timing, and by reducing late diabetes complications through intensive insulin therapy without severe hypoglycemia. However, AP technology faces several challenges, chief among them patient safety. One of the main factors contributing to insufficient patient safety is faults in the continuous glucose monitoring (CGM) sensor; such sensor faults can have a major impact on the performance of the AP and are a limiting factor in achieving sufficiently reliable closed-loop control of BG. The sources of CGM faults and artifacts include BG-to-interstitial-glucose (IG) kinetics, random noise and spikes, and problems caused by the biochemistry of glucose sensors, such as signal drift and sensor sensitivity variations, miscalibration, signal dropout caused by communication loss, and pressure-induced sensor attenuation (PISA). Faults and anomalies can significantly reduce the accuracy of sensor measurements; consequently, the CGM readings may deviate substantially from the actual BG concentrations, which can cause critical situations in an AP that relies on the CGM output. Therefore, although a reliable control algorithm may be able to keep the CGM measurements in the target glycemic range, the controller may still be unable to maintain the actual BG levels within the target range, owing to the deviation of the CGM readings from the actual BG concentration. The first aim of this project is to develop a CGM accuracy enhancement module that detects and corrects CGM faults and anomalies, and can reduce the deviations of the CGM readings from the actual BG levels in a dual-hormone AP. 
This is planned to be achieved by using novel mathematical algorithms and signal processing methods based on stochastic differential equations, nonlinear modeling and filtering, in combination with sensor redundancy. The second aim of the project is the clinical evaluation of the module. To fulfill this goal, we will pursue two approaches: 1) the performance of the CGM accuracy enhancement module will be investigated in the presence of different AP disturbances such as meals, exercise, and stress; 2) the effect of CGM improvement on CGM-based clinical decision making for diabetes treatment will be evaluated in a follow-up study, outside the closed-loop approach. 
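A toy illustration of one ingredient of such a module, innovation-based fault detection with a simple filter, might look as follows; this is a generic sketch under a random-walk glucose model with invented noise levels, not the project's actual algorithm:

```python
import numpy as np

# Detect CGM spike faults by filtering the signal with a random-walk
# Kalman filter and flagging samples whose normalized innovation exceeds
# a threshold; flagged samples are replaced by the model prediction.

rng = np.random.default_rng(4)
t = np.arange(200)
bg = 6.0 + 2.0 * np.sin(t / 30.0)                # smooth "true" glucose (mmol/L)
cgm = bg + 0.1 * rng.standard_normal(t.size)     # sensor readings
cgm[[60, 120]] += np.array([5.0, -4.0])          # injected spike faults

q, r, thresh = 0.01, 0.04, 3.0                   # process/measurement noise, gate
x, p = cgm[0], 1.0                               # filter state and variance
cleaned, flags = [], []
for z in cgm:
    p += q                                       # predict (random-walk model)
    innov = z - x
    s = p + r                                    # innovation variance
    if abs(innov) / np.sqrt(s) > thresh:         # normalized innovation test
        flags.append(True)                       # fault: ignore the measurement
        cleaned.append(x)
        continue
    flags.append(False)
    k = p / s                                    # Kalman gain
    x += k * innov
    p *= (1 - k)
    cleaned.append(x)

print(sum(flags))  # number of samples flagged as faulty
```

The project's actual methods (stochastic differential equations, nonlinear filtering, sensor redundancy) are considerably richer; the sketch only shows the innovation-gating idea common to fault detection in filtered sensor streams.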

Dec. 9., 13:00, 306 aud 36 (unusual room)  Roland Herzog, TU Chemnitz  Function space aspects of optimal control problems 
In this presentation we consider some prototypical optimal control problems for partial differential equations. We will emphasize the function space aspects of such problems both in terms of analysis as well as solution methods, and illustrate them with numerical experiments. We shall also address current and potential future research directions in the field. 

Dec. 16., 13:00, 324/170 (unusual room)  Dimitri Boiroux, DTU Compute  The artificial pancreas for people with type 1 diabetes 
For more than 50 years, automated or semi-automated (i.e. including meal and/or physical activity announcements) administration of insulin, also known as the artificial pancreas (AP), has had the ambition to improve glucose regulation, to reduce the risk of diabetes-related complications and to ease the life of people with type 1 diabetes. Current prototypes comprise a continuous glucose monitor (CGM), a control algorithm implemented on a mobile device, and a pump. In most cases, Model Predictive Control (MPC) algorithms are used as the control algorithm for the AP. Recently, glucagon analogues that are stable in liquid form have been considered as a possible improvement of the AP. An AP using both insulin and glucagon is referred to as a dual-hormone AP. In this talk, I will present some of our key results using a single-hormone AP. Then, I will discuss the benefits and challenges of our dual-hormone AP. 

Jan. 20., 13:00, ??  Michael Pedersen, DTU Compute  The Hilbert Uniqueness Method for Optimal Boundary Control 


