# 2014/2015

Todd ARBOGAST, University of Texas at Austin: Approximation of a Linear Degenerate Elliptic Equation Arising from a Two-Phase Mixture

We consider the linear but degenerate elliptic system of two first-order equations u = -phi grad p and div(phi u) + phi p = phi f, where the porosity phi >= 0 may vanish on a set of positive measure. The model equation exhibits a degeneracy similar to the one arising in the equations describing the mechanical system modeling the dynamics of partially melted materials, e.g., in the Earth’s mantle, and the flow of ice sheets in the polar ice caps and glaciers. In the context of mixture theory, phi represents the phase variable separating the solid one-phase (phi = 0) and fluid-solid two-phase (phi > 0) regions. Two main problems arise. First, as phi vanishes, one equation is lost. Second, after extracting stability or energy bounds for the solution, we see that the pressure p is not controlled outside the support of phi. After an appropriate scaling of the pressure, we can show existence and uniqueness of a solution over the entire domain. We then develop a stable mixed finite element method for the problem and show some numerical results.
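As a reading aid (this elimination step is implied by, not stated in, the abstract), substituting the first equation into the second gives the second-order form and makes the degeneracy explicit:

```latex
u = -\varphi\,\nabla p, \qquad
\nabla\!\cdot(\varphi\, u) + \varphi\, p = \varphi f
\quad\Longrightarrow\quad
-\nabla\!\cdot\big(\varphi^{2}\,\nabla p\big) + \varphi\, p = \varphi f .
```

Where \(\varphi = 0\) every term on the left vanishes, so the equation carries no information on p there; this is the loss of control on the pressure outside the support of phi described above.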

Wednesday, September 17, 10:00 a.m. to 11:00 a.m., Turing lecture hall, building 1, Paris-Rocquencourt center. Coffee from 9:45 a.m.

Ludovic CHAMOIN, Ecole Normale Supérieure de Cachan: Goal-oriented approach for verification, reduction, and updating of mechanical models

In order to perform reliable numerical simulations, it is crucial to control the various models and methods which are used. This scientific issue, known as model Verification & Validation (V&V), nevertheless remains too rarely performed in practice due to its complexity (control of numerous model and discretization parameters, collection of a large amount of experimental data,…). However, the main goal of numerical simulations is usually not the global solution itself, but simply some local features (quantities of interest) that scientists and engineers need for decision-making. It is thus relevant and appealing to set up a partial (goal-oriented) V&V approach, dedicated to the accurate prediction of quantities of interest alone and associated with lighter computational procedures.

During the talk, we will illustrate the goal-oriented V&V approach focusing on three subjects: (i) control of the discretization error in FEM simulations; (ii) control of the reduction error for multi-parameter models reduced by the Proper Generalized Decomposition (PGD) technique; (iii) control of the modeling error when updating constitutive models by means of experimental data. These subjects will be addressed within a common framework built from adjoint-based methods and the Constitutive Relation Error (CRE) concept.

REFERENCES

L. Chamoin, P. Ladevèze, A non-intrusive method for the calculation of strict and efficient bounds of calculated outputs of interest in linear viscoelasticity problems, Computer Methods in Applied Mechanics and Engineering, 197(9-12):994–1014 (2008)

P. Ladevèze, L. Chamoin, Calculation of strict error bounds for finite element approximations of nonlinear point-wise quantities of interest, International Journal for Numerical Methods in Engineering, 84:1638–1664 (2010)

P. Ladevèze, L. Chamoin, On the verification of model reduction methods based on the Proper Generalized Decomposition, Computer Methods in Applied Mechanics and Engineering, 200:2032–2047 (2011)

L. Chamoin, P. Ladevèze, J. Waeytens, Goal-oriented updating of mechanical models using the adjoint framework, Computational Mechanics (to appear, DOI: 10.1007/s00466-014-1066-5)

Video from the talk

Christophe DENIS, EDF – R&D: Improving the performance and numerical quality of industrial numerical software

Improving the performance of industrial numerical software is essential to be able to simulate phenomena ever closer to reality. It is, however, at least as important to study the numerical quality of the results. We first present a summary of our work on the efficient parallel solution of large sparse linear systems in a distributed computing-grid environment. A natural follow-up to this work would be to use the parallel solution method to increase the performance of a structural mechanics code on a conventional parallel machine. The development of the main numerical codes at EDF R&D relies on a continuous-integration process for the sources, which makes it possible to fix defects as early as possible in the development cycle and during operational maintenance. Unfortunately, this continuous-integration process does not take into account the specifics of numerical computation in floating-point arithmetic. We have therefore proposed to take numerical accuracy into account from the writing of the software’s functional specifications onward, and to derive tests on numerical accuracy from them. For this it is essential to measure the numerical accuracy of the results produced by the software. We then present our work on numerical verification, carried out first in an academic and then in an industrial context.
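As a minimal illustration of why floating-point arithmetic complicates such continuous-integration testing (a generic textbook example, not taken from the EDF codes), the result of a sum can depend on the evaluation order, so naive bitwise comparison of results against a reference is too brittle a test:

```python
import math

# The same mathematical sum evaluated in two orders gives different
# floating-point results: 1.0 is absorbed when added to 1e16 first.
a = (1.0 + 1e16) + (-1e16)   # 1.0 + 1e16 rounds to 1e16, so a == 0.0
b = 1.0 + (1e16 + (-1e16))   # the large terms cancel first, so b == 1.0
print(a, b)  # 0.0 1.0

# math.fsum tracks the lost low-order bits and returns the exact sum.
print(math.fsum([1.0, 1e16, -1e16]))  # 1.0
```

Effects like this, amplified by parallel reduction orders, are exactly what tests on numerical precision must be designed to tolerate and measure.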

Video from the talk

Tuesday, November 4, 10:00 a.m. to 11:30 a.m., Turing lecture hall, building 1, Paris-Rocquencourt center. Coffee from 9:45 a.m.

Nicolas MOES, Ecole Centrale de Nantes: Strain localization and transition to fracture with the Thick Level Set model

Keywords: damage, fracture, localization, non-local model, level set

When a material model exhibits softening (a negative slope of the stress-strain curve), a length scale and a localization limiter need to be introduced to avoid a zero-thickness dissipation process. The Thick Level Set (TLS) model is a new, level-set-based localization limiter.

A nice advantage is that an iso-contour of the level set eventually locates the crack created by the localization process. This allows the extended finite element method (X-FEM) to be used efficiently to introduce a displacement discontinuity along the crack path.

Two- and three-dimensional examples will demonstrate the capability of the TLS to simulate complex cracking patterns in quasi-brittle solids. Advantages of the TLS over other types of localization limiters will be discussed.

REFERENCES

[1] N. Moës, C. Stolz, P.-E. Bernard, N. Chevaugeon, A level set based model for damage growth: the thick level set approach. International Journal For Numerical Methods in Engineering, 86, 358–380, 2011.
[2] C. Stolz, N. Moës, A new model of damage: a moving thick layer approach. International Journal of Fracture, 174, 49–60, 2012.
[3] P.-E. Bernard, N. Moës, N. Chevaugeon, Damage growth modeling using the Thick Level Set (TLS) approach: Efficient discretization for quasi-static loadings. Computer Methods in Applied Mechanics and Engineering, 233-236, 11–27, 2012.

Video from the talk

Patrick MASSIN, EDF – R&D: 3D defect propagation with the extended finite element method

Since 2003, LaMSID and GeM have joined forces to advance the field of 3D defect propagation. Defects are located by means of level sets, and the discontinuity at the interfaces is handled with the extended finite element method (X-FEM). The advantage of this approach is that the mesh does not have to conform geometrically to the defect. With these two ingredients, we have developed defect-propagation algorithms, initially in linear elastic fracture mechanics, which we are currently extending to more complex situations. We will present the overall methodology as well as the difficulties we encountered (conditioning, approximation spaces, etc.) while deploying this approach in EDF R&D’s industrial code Code_Aster.

Video from the talk

Tuesday, December 2, 10:00 a.m. to 11:30 a.m., Turing lecture hall, building 1, Paris-Rocquencourt center. Coffee from 9:45 a.m.

Yvon MADAY, Laboratoire Jacques-Louis Lions, University Paris 6: Reduced basis methods for approximating the solution of parameterized PDEs and for data assimilation

Reduced basis methods are methods for approximating solutions of partial differential equations (PDEs) that depend on parameters. Generically, they rest on a two-stage process. First, during the “offline stage”, certain parameter values are selected and the PDE is solved by a classical method for these few values. The values are chosen recursively, so as to improve the approximation properties of the reduced basis method. During this stage a number of additional computations are performed, whose complexity does not exceed that of solving the problem under consideration. In the second stage, the “online stage”, for other parameter values an approximation of the solution is computed by a Galerkin approach in the vector space spanned by the solutions computed in the first stage. In many applications the number of reduced basis functions is very small, which leads to real-time “online” computations; these simulations can even run on tablets.

The fundamental reason for the success of these techniques lies in a property of the set of all solutions of the parameterized PDE as the parameter varies: this set has small Kolmogorov width. We will present the foundations of the method, the reasons for its success, and algorithmic details, in particular on the handling of the “offline” and “online” stages, and we will give some current research directions, including data assimilation.

Video from the talk

Dimitri KOMATITSCH, University of Aix-Marseille: Modeling and imaging with acoustic waves: computational challenges and the use of complex meshes

The propagation of acoustic waves, and imaging with these waves (tomography and the solution of inverse problems), are crucial tools in, for example, geophysics, underwater acoustics, and non-destructive testing of materials. In this talk we will illustrate the fact that, at very high resolution, this raises computational challenges (high-performance computing, the solution of large inverse problems) and leads to the use of large and complex meshes. We will illustrate this with a few examples computed with our open-source software package SPECFEM3D.

Video from the talk

Tuesday, January 6, 10:00 a.m. to 11:30 a.m., Turing lecture hall, building 1, Paris-Rocquencourt center. Coffee from 9:45 a.m.

Marc HOFFMANN, Université Paris-Dauphine: Statistical inference for some transport-fragmentation-type models

We will review some results about statistical inference of the branching rate of certain piecewise deterministic Markov models. Whereas their abstract statistical structure is relatively well understood from a parametric point of view, some recent applications (arising for instance from cell division models in biology) have renewed interest in such statistical models, in particular from a nonparametric and testing point of view. In that context, new difficulties emerge, in particular when it comes to implementing the procedures. We will present some generic inference results (including a real-data study on Escherichia coli) and explain how fragile the information is with respect to the observation scheme (namely, observing data in a stationary regime, at branching times, or over the whole genealogy up to a given fixed time), a point that is sometimes overlooked by practitioners.

Video from the talk

Lydia ROBERT, INRA & Université Paris 6: Division control in bacteria

Many organisms couple progression through the cell cycle to cellular growth via “size control” mechanisms: cells must reach a critical size to trigger some cell cycle event. Bacterial division is often assumed to be under such control. Deciding whether division control relies on a “timer” or a “sizer” mechanism requires quantitative comparisons between models and data. The “timer” and “sizer” hypotheses find a natural translation in models based on partial differential equations. We confronted these models with recent data on Escherichia coli single-cell growth. We demonstrated that a size-independent “timer” mechanism for division control is quantitatively incompatible with the data and extremely sensitive to slight variations in the growth law. In contrast, a “sizer” model is robust and fits both the cell size and cell age distributions. The observed correlation between size at birth and size at division suggests a revision of the critical-size paradigm: cells do not divide when they reach a critical size, but when they have added a constant size increment to their size at birth.
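The final observation — division after a roughly constant size increment — has a simple statistical signature that can be checked in a toy Monte-Carlo model (all parameter values below are illustrative, not fitted to the E. coli data): regressing size at division on size at birth gives a slope near 1, whereas a strict critical-size “sizer” would give a slope near 0.

```python
import numpy as np

rng = np.random.default_rng(0)

n_cells = 20000
delta = 1.0     # mean size increment per cycle (illustrative units)
noise = 0.1     # fluctuation in the added size

# Birth sizes spread around delta (daughters inherit half the division size).
s_birth = rng.normal(loc=delta, scale=0.15, size=n_cells)
# Incremental rule: divide after adding delta (plus noise) to the birth size.
s_div = s_birth + delta + rng.normal(scale=noise, size=n_cells)

# Regression slope of division size on birth size.
slope = np.cov(s_birth, s_div)[0, 1] / np.var(s_birth)
print(round(slope, 2))  # close to 1 for this rule; a strict "sizer" gives ~0
```

The slope is the quantity that discriminates between the two paradigms: under a sizer, size at division is independent of size at birth, killing the correlation.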

Video from the talk

Tuesday, February 3, 10:00 a.m. to 11:30 a.m., Turing lecture hall, building 1, Paris-Rocquencourt center. Coffee from 9:45 a.m.

Iuliu Sorin POP, Eindhoven University of Technology: Numerical methods for reactive porous media flows

We discuss the numerical discretization of reactive porous media flow models. To guarantee mass conservation, the models are written in mixed form. In the first part, the focus is on unsaturated one phase flow models (the Richards equation) and two phase flow models, for which we analyze the convergence of a mixed finite element scheme. Then we consider the coupled reactive transport and unsaturated flow, and show how the flow computation impacts the accuracy for the transport.
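For reference, the Richards equation mentioned above reads as follows in the mixed (pressure head/flux) form the talk considers, in standard notation with water content \(\theta\), pressure head \(\psi\), hydraulic conductivity \(K\), and vertical coordinate \(z\):

```latex
\partial_t\,\theta(\psi) + \nabla\!\cdot u = 0, \qquad
u = -K\big(\theta(\psi)\big)\,\nabla(\psi + z).
```

Keeping the flux \(u\) as an unknown is what makes the mixed discretization locally mass-conservative; the equation is degenerate because \(\theta'(\psi)\) and \(K\) may vanish.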

Talk

Isabelle FAILLE, IFPEN Rueil-Malmaison: Sedimentary basin modeling

Sedimentary basin simulation aims to improve our knowledge of the subsurface by retracing its history. Its principle is to model the evolution of the sedimentary layers from the beginning of their formation to the present day, in order to obtain qualitative and quantitative information on the fluids occupying the pore space of sedimentary rocks. Recent developments concern the 3D modeling of basins in a brittle tectonic context, for which it is necessary to account for displacements along faults and the resulting flows. We will discuss some of the difficulties encountered and the solutions currently implemented, in particular the use of evolving 3D meshes and of a double-interface model for simulating flows along fault zones, for which we will present several finite-volume discretizations.

Talk

Tuesday, March 3, 10:00 a.m. to 11:30 a.m., Turing lecture hall, building 1, Paris-Rocquencourt center. Coffee from 9:45 a.m.

Miguel FERNANDEZ, INRIA Paris-Rocquencourt: Unfitted mesh methods and coupling schemes for incompressible fluid-structure interaction

Fictitious domain/immersed boundary methods for the numerical simulation of fluid-structure interaction problems involving large interface deflections have recently seen a surge of interest. Most of the existing approaches are known to be inaccurate in space, either because the fluid equations are integrated in a non-physical (fictitious) domain or because the discrete approximations are not able to reproduce the weak and strong discontinuities of the physical solution. In this talk we propose alternative unfitted formulations which circumvent these accuracy issues. The kinematic/kinetic fluid-solid coupling is enforced consistently using a variant of Nitsche’s method involving cut elements. Robustness with respect to arbitrary interface/element intersections is guaranteed through suitable stabilization. Whenever present, weak and strong discontinuities across the interface are allowed via suitable XFEM enrichment. Several coupling schemes, with different degrees of fluid-solid splitting (implicit, semi-implicit, and explicit), will be presented. The stability and convergence properties of the different methods will be discussed in a representative linear setting, and their performance will be illustrated in several numerical examples involving static and moving interfaces.
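As background (this is the textbook symmetric Nitsche method for a Poisson model problem, not the speaker’s fluid-structure variant), Nitsche’s method imposes a boundary or interface condition \(u = g\) on \(\Gamma\) weakly by adding consistency, symmetry, and penalty terms to the weak form:

```latex
\int_\Omega \nabla u\cdot\nabla v
\;-\; \int_\Gamma (\partial_n u)\, v
\;-\; \int_\Gamma (\partial_n v)\,(u - g)
\;+\; \frac{\gamma}{h}\int_\Gamma (u - g)\, v
\;=\; \int_\Omega f\, v \qquad \forall v,
```

with a penalty parameter \(\gamma > 0\) chosen large enough for stability. On cut elements, where \(\Gamma\) intersects the mesh arbitrarily, this choice degenerates, which is why the stabilization mentioned above is needed.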

Video from the talk

Hervé TURLIER, European Molecular Biology Laboratory, Germany: Active cell surface deformations

The shape of animal cells is controlled primarily by the actin-myosin cortex, which lies beneath the plasma membrane. The cortex is a viscoelastic thin layer of cross-linked polymers (actin), to which molecular motors (myosin) provide contractile properties by continuously converting chemical energy into active stresses within the layer. Cells control this contractile activity locally and in time to perform fundamental functions such as cell division, cell polarization, or cell migration. Deformations of the cortical layer are well described by hydrodynamic active gel equations (Kruse 2005). We present here a Lagrangian formulation for the dynamics of an active viscous surface in axisymmetric geometry. Based on scaling arguments, we neglect, at first order, interactions of the cell surface with the surrounding fluid. We show that for cell division this approximation is well justified, and that we can numerically compute very convincing cell shapes and division dynamics (Turlier 2014). We present further applications of our model in biology (Bun 2014) and suggest possible extensions of our modeling approach.

Abstract

Video from the talk

Tuesday, April 7, 10:00 a.m. to 11:30 a.m., Turing lecture hall, building 1, Paris-Rocquencourt center. Coffee from 9:45 a.m.

Franck LEDOUX, CEA DAM Île-de-France: Hexahedral meshing – Towards an automatic and reliable solution?

For some numerical simulation codes, hexahedral meshes are preferred to tetrahedral meshes. Depending on whom you ask, the usual reasons given are that fewer hexahedral than tetrahedral elements are needed to discretize a geometrical domain with the same accuracy, that hexahedral elements are less rigid than tetrahedral ones, that the layered structure of hexahedral meshes can fit specific physical alignments (shock waves, flows), legacy codes, etc. All of these reasons can be debated, especially since there are also very good reasons to prefer tetrahedral elements (for instance, reliable meshing algorithms and mesh adaptation techniques). But the fact remains that industry strongly demands robust and efficient hexahedral meshing algorithms, and until now there has been no automatic solution that gives engineers the hexahedral mesh they expect.

The main aim of this talk is to explain why hexahedral meshes are so difficult to generate and which research directions look promising. In this context, I will first present the usual geometric features that hexahedral meshes have to fulfil and describe their topological structure. Then I will give an overview of the main existing approaches for generating hexahedral meshes automatically, with the benefits and limitations of each. Finally, I will sketch the main trends that I believe are promising for the future.

Video from the talk

Olivier ALLAIN, LEMMA: Application of mesh adaptation to industrial problems

Engineering problems require continuous progress in the simulation of systems coupling structures and fluids with interfaces. The main difficulties of these problems come from their multi-scale aspect (for rheological phenomena or sloshing problems) or from large deformations of the mesh (fluid-structure interaction, rotating movement, …).

The improvement in capturing small-scale details through local refinement can be decisive for the final overall accuracy. Indeed, local numerical errors can have a paramount impact on the accuracy of predictions in multi-fluid and free-surface flows. A typical example of a physical phenomenon where improving the accuracy matters is the slamming of an obstacle on a liquid surface: the contact of a body with the liquid is a complex event with high pressure variations. These remarks motivate the use of mesh adaptation.

A metric-based mesh adaptation is then presented. Two approaches are proposed for prescribing the mesh size. The first one uses simple geometric criteria: the mesh is refined close to the body and around the initial interface area. The second employs advanced error estimates and a mesh adaptation algorithm dedicated to time-dependent problems: the mesh is adapted to compute the dynamics of the flow and capture the interface position accurately. The mesh adaptation algorithm enables us to follow the evolution of the phenomenon and automatically refine all the regions of interest.

The presentation will be illustrated with different industrial cases (offshore, space, …).
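A one-dimensional caricature of the metric-based idea (illustrative code, not LEMMA’s algorithm): build a node density from the solution’s second derivative — the standard metric for piecewise-linear interpolation error — and equidistribute it, so nodes cluster where the solution varies fast, e.g. near an interface.

```python
import numpy as np

def adapt_1d(u, x, n_new):
    """Place n_new nodes so the P1 interpolation error of u is roughly
    equidistributed: node density ~ |u''|**0.5 (i.e. h ~ |u''|**-0.5)."""
    d2 = np.gradient(np.gradient(u(x), x), x)   # approximate u''
    density = np.abs(d2) ** 0.5 + 1e-8          # regularized node density
    # cumulative "metric mass" (trapezoidal rule), normalized to [0, 1]
    cdf = np.concatenate([[0.0],
        np.cumsum(0.5 * (density[1:] + density[:-1]) * np.diff(x))])
    cdf /= cdf[-1]
    # invert the cdf: equal metric mass between consecutive nodes
    return np.interp(np.linspace(0.0, 1.0, n_new), cdf, x)

u = lambda x: np.tanh(20 * (x - 0.5))   # steep interface around x = 0.5
x_uniform = np.linspace(0.0, 1.0, 400)
x_adapted = adapt_1d(u, x_uniform, 40)

# Adapted nodes cluster near the interface at x = 0.5.
frac_near = np.mean(np.abs(x_adapted - 0.5) < 0.1)
print(frac_near)  # well above the 0.2 a uniform 40-node mesh would give
```

The 3D anisotropic case described in the talk replaces the scalar density by a metric tensor field, but the equidistribution principle is the same.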

Video from the talk

Tuesday, May 5, 10:00 a.m. to 11:30 a.m., Turing lecture hall, building 1, Paris-Rocquencourt center. Coffee from 9:45 a.m.