PFE

  • PFE - 8-month research internship - INRIA Bordeaux, HIEPACS team - February-September 2009
    Load balancing and data distribution for irregular coupled parallel numerical simulations: application to multiscale simulation for crack propagation.
  • State of the art : [articles]
  • Specifications : [pdf , src]
  • Report : ENSEIRB [pdf , src] Master Bordeaux 1 [pdf , src]
  • Final Report : [pdf , src]
  • Intermediate summaries : [cr1 , src] , [cr2 , src] , [cr3 , src] , [cr4 , src] , [cr5 , src]
  • Slides : ENSEIRB [pdf , src] , Master Bordeaux 1 [pdf , src]
  • Developed Module : [LoadBalancing]
  • Technical notes : [trick]
  • Test cases : [test Argon2D]
  • Existing Module : [LibMultiScale , Doc] [Thesis : Thesis , slides , presentation , article]
  • Acquired skills : High Performance Scientific Computing, modeling problems using graphs and hypergraphs, numerical simulations, meshing techniques (use of GMSH), visualization with Paraview.

Subject

Title

Load balancing and data distribution for irregular coupled parallel numerical simulations: application to multiscale simulation for crack propagation.

Project Team

INRIA-Bordeaux
The team name is Hiepacs.
I worked with Aurélien Esnard, Olivier Coulaud and Jean Roman.

Background research

Simulating the physical and chemical behavior of materials, such as tracking the propagation of cracks or studying the macroscopic properties of hybrid materials, requires modeling the phenomena at different scales of space and time. In these simulations, a macroscopic code is coupled to a microscopic-scale one, which may operate at the atomic scale or even involve electronic couplings. This micro-macro coupling is essential to obtain a good description of the material, a coarser (more macroscopic) model being used for the larger samples. These new simulations are developed by coupling parallel codes, each representing a different physical model.

If one considers crack propagation (in the case of the focusing lenses of the Laser Megajoule), the multiscale method that was developed couples two models: an atomic model (molecular dynamics) and a continuous model (finite element approximation). Information is exchanged between the two models through an overlap zone where both models coexist. For a parallel coupling, the data are split and distributed over the available processors so as to optimize the performance of the coupler. The data of each code are partitioned separately, which gives each code a good load balance on its own, but once the codes are put together the global balance of the computational tasks across processors is poor. Indeed, one must maximize the number of processors in charge of the coupling in order to reduce its computing time, while minimizing the number of regions connected between the models in order to reduce the communication between the underlying parallel codes.
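To make that imbalance concrete, here is a minimal sketch in C with purely hypothetical load figures (not data from the actual simulation): each code is perfectly balanced on its own processors, but the extra coupling work of the overlap zone is concentrated on a few of them, so the global imbalance, measured as maximum load over average load, degrades.

    /* Hypothetical loads: two codes balanced separately, coupling work on one processor. */
    #include <stdio.h>

    #define NPROC 4

    int main(void)
    {
        double md_load[NPROC] = {10.0, 10.0, 10.0, 10.0}; /* molecular dynamics code */
        double fe_load[NPROC] = {10.0, 10.0, 10.0, 10.0}; /* finite element code     */
        double overlap[NPROC] = { 8.0,  0.0,  0.0,  0.0}; /* coupling (overlap zone) */

        double total, sum = 0.0, max = 0.0;
        for (int p = 0; p < NPROC; ++p) {
            total = md_load[p] + fe_load[p] + overlap[p];
            sum += total;
            if (total > max) max = total;
        }
        /* Imbalance = max load / average load; 1.0 means perfect balance. */
        printf("global imbalance: %.2f\n", max / (sum / NPROC));
        return 0;
    }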

One important issue in these multi-physics simulations, and in any simulation involving a complex coupling of codes (meteorology, climatology, ...), is to preserve the scalability of the overall parallel execution of the coupled codes.

Description of the internship activity

The objective of this project is, on the one hand, to propose solutions based on generic graph models of the problem and, on the other hand, to study whether a hypergraph model would describe the coupling more accurately. In the context of crack propagation and using the LibMultiScale library [1], different load-balancing strategies will be explored to improve the performance of the coupler. In the coupling of an atomic medium with a continuous medium, the spatial domain decomposition of the molecular dynamics simulation cannot be changed; essentially, only the partitioning of the finite element mesh of the continuous model can be modified. Strategies will be studied, on the one hand, to balance the elements and nodes while taking the computational tasks into account and, on the other hand, to constrain the partitioning of the mesh by adding constraints for the partitioner. From a software point of view, the graph partitioner SCOTCH [2], the hypergraph partitioner ZOLTAN [3], and/or others will be used to analyze, implement and test the different strategies.
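As an illustration of the first strategy, here is a minimal sketch of weighted graph partitioning with SCOTCH's C API: the vertex weights (velotab) stand for the computational cost of each finite element, heavier for elements lying in the overlap zone. The tiny ring graph and the costs are invented for the example and are not taken from the actual simulation; compile with -lscotch -lscotcherr.

    #include <stdio.h>
    #include <scotch.h>

    int main(void)
    {
        /* Tiny example graph: 4 elements in a ring (each adjacent to two neighbors). */
        SCOTCH_Num verttab[] = {0, 2, 4, 6, 8};          /* adjacency index array     */
        SCOTCH_Num edgetab[] = {1, 3, 0, 2, 1, 3, 2, 0}; /* adjacency list (arcs)     */
        SCOTCH_Num velotab[] = {5, 1, 1, 5};             /* per-element cost (weights) */
        SCOTCH_Num parttab[4];

        SCOTCH_Graph graph;
        SCOTCH_Strat strat;
        SCOTCH_graphInit(&graph);
        SCOTCH_stratInit(&strat);                        /* default partitioning strategy */
        SCOTCH_graphBuild(&graph, 0, 4, verttab, NULL, velotab, NULL,
                          8, edgetab, NULL);
        SCOTCH_graphPart(&graph, 2, &strat, parttab);    /* partition into 2 parts */

        for (int v = 0; v < 4; ++v)
            printf("element %d -> part %d\n", v, (int)parttab[v]);

        SCOTCH_stratExit(&strat);
        SCOTCH_graphExit(&graph);
        return 0;
    }

With the weights above, the partitioner balances total cost rather than element count, which is the behavior needed when overlap-zone elements carry extra coupling work.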

This text is a translation (see the original details in French).

Required skills and abilities

Parallel algorithms, code coupling, graph and hypergraph partitioning.

Useful links

[1] Libmultiscale
[2] SCOTCH
[3] ZOLTAN
[4] Thesis : coupling with Libmultiscale

Bibliography

See the report.