
In Silico SE Blog

Multiphysics simulation: more than just pushing a couple of buttons

March 28th, 2023

By: Alejandro Cano

Multiphysics simulation has become one of the pillars of development in several areas of engineering, such as chemical process design, structural analysis, and electromagnetic analysis, among many others. Its use has reduced costs for many companies and has boosted technological development in a controlled, safe environment.

Currently, there is a wide range of commercial (COMSOL Multiphysics, ANSYS, among others) and open-source (OpenFOAM, Elmer, among others) computational tools for multiphysics simulation, capable of solving a variety of engineering problems. Their use is quite intuitive, allowing users to carry out modeling and simulation tasks in a practical way and in a short time.

Although these tools represent a great advantage in the analysis of many engineering processes, users often do not go beyond the default configuration and the results these programs provide. As a result, the user lacks complete control over the analysis and does not take full advantage of the programs' capabilities to perform an efficient and effective simulation.

Understanding the physics behind our model

A very important factor when building our model is to have an excellent understanding of the expressions used to represent our phenomena of interest, the initial and boundary conditions present, and the constitutive relationships associated with the geometry and materials involved.

This usually leads to the construction of a mathematical model containing partial differential equations in both space and time, linear and nonlinear equations, and other expressions necessary to solve our problem. Additionally, it is possible to propose simplifications to the constructed mathematical model in order to reduce the computational cost without losing important information.

At this point it is necessary to have a broad theoretical grounding in the processes of interest, and to stay constantly up to date on new advances related to them.

Selection of the computational tool

The selection of a simulation tool should be based on a careful evaluation of the project needs and a detailed comparison of the different tools available on the market. This process should take into account different aspects such as the type of simulation to be carried out, the ease of use of the tool, and its accuracy and speed, among others.

However, the steps to follow are almost universal across the available tools. First, a geometry is defined for the system, and the selected physics and their interactions are specified through the operating parameters, process variables, initial and boundary conditions, and materials, among others. Next, the system is discretized in time by defining time steps, and in space by generating a mesh; many simulation programs include algorithms for automatic mesh generation, which greatly simplifies this step. Finally, the software performs the relevant calculations, and the results are post-processed to extract the information needed to analyze the system in question.

Although this procedure yields results for our system of interest, it is very limited when it comes to building a good model and exploiting the full capabilities the program offers. For this reason, it is necessary to go beyond what the black box shows us and understand in greater depth what happens when our models are simulated.

The magic behind the discretization method used

The simulation process usually involves a discretization process in both time and space (mesh generation), depending on the complexity of our model. During discretization, the mathematical model is converted into a system of algebraic equations that represent the variation of the variables of interest in certain regions of space or in certain periods of time. This allows complex problems to be solved using numerical methods on a computer. Proper discretization can lead to shorter simulation times, more stable calculations and more accurate results.

Time discretization methods are usually classified as explicit or implicit. Explicit methods compute the state of the system at time t+1 directly from the state at time t; the most notable include the Runge-Kutta and Adams-Bashforth families. Implicit methods, on the other hand, determine the state of the system at time t+1 using information from the system at both times t and t+1, which makes them more computationally expensive per step but more stable than explicit methods. Among the most notable are the BDF (Backward Differentiation Formula) and Crank-Nicolson methods.
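The stability difference is easy to see on a stiff test problem. Below is a minimal Python sketch comparing forward Euler, the simplest explicit method, with backward Euler, the simplest implicit method, on dy/dt = -λy; the equation, step size, and step count are chosen purely for illustration:

    # Minimal sketch: explicit vs. implicit (backward) Euler on the stiff test
    # equation dy/dt = -lam * y, y(0) = 1. All values are illustrative.
    lam = 50.0    # stiffness parameter (assumed for illustration)
    dt = 0.05     # step size deliberately too large for the explicit method
    y_exp = y_imp = 1.0

    for _ in range(20):
        # Explicit: the new state is computed directly from the current state.
        y_exp = y_exp + dt * (-lam * y_exp)
        # Implicit: the new state satisfies y_new = y_old + dt * (-lam * y_new).
        # Here it can be isolated in closed form; in general a (possibly
        # nonlinear) equation must be solved at every step.
        y_imp = y_imp / (1.0 + dt * lam)

    print(f"explicit Euler: {y_exp:.3e}")  # grows without bound: |1 - dt*lam| > 1
    print(f"implicit Euler: {y_imp:.3e}")  # decays toward 0, like the exact solution

With the same step size, the explicit update diverges while the implicit one remains stable, which is exactly the trade-off described above.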

For spatial discretization, there is a wide range of methods, among which the finite element method (FEM) and the finite volume method (FVM) stand out. 

The finite element method is characterized by dividing the system into small regions called elements and using the integral form, also known as the weak form, of the associated mathematical model. Each element has a certain number of nodes, which are used to compute the solution of the mathematical problem posed. The values of each variable throughout the system are then obtained from shape functions together with the values computed at the nodes of each element. The most commonly used shape functions are linear and quadratic. The choice of shape functions largely determines the accuracy of the results obtained and the associated computational cost.
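As a concrete illustration, here is a minimal sketch of FEM assembly, under a deliberately simple assumed setup: -u'' = 1 on [0, 1] with u(0) = u(1) = 0, using linear "hat" shape functions on a uniform mesh:

    # Minimal FEM sketch (assumed setup): -u'' = 1 on [0, 1], u(0) = u(1) = 0,
    # discretized with linear "hat" shape functions on a uniform mesh.
    import numpy as np

    n_el = 10                             # number of elements
    nodes = np.linspace(0.0, 1.0, n_el + 1)
    h = nodes[1] - nodes[0]               # uniform element length

    K = np.zeros((n_el + 1, n_el + 1))    # global stiffness matrix
    F = np.zeros(n_el + 1)                # global load vector

    for e in range(n_el):
        # Element contributions for linear shape functions: the integral of
        # phi_i' * phi_j' over the element, and of phi_i * f with f = 1.
        ke = (1.0 / h) * np.array([[1.0, -1.0], [-1.0, 1.0]])
        fe = (h / 2.0) * np.array([1.0, 1.0])
        K[e:e + 2, e:e + 2] += ke
        F[e:e + 2] += fe

    # Dirichlet boundary conditions: eliminate the first and last nodes.
    u = np.zeros(n_el + 1)
    u[1:-1] = np.linalg.solve(K[1:-1, 1:-1], F[1:-1])

    # Exact solution x*(1 - x)/2 for comparison; for this 1D problem, linear
    # elements reproduce the exact values at the nodes.
    print(np.max(np.abs(u - nodes * (1.0 - nodes) / 2.0)))

Each element contributes a small stiffness matrix and load vector, and the global system assembled from them is what the solver ultimately sees.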

The finite volume method, like the finite element method, divides the system into small regions, here called control volumes, and applies the conservation principles (mass, momentum, and energy) by integrating the associated transport equations over each control volume. This gives rise to terms representing the transport of mass, momentum, and energy between neighboring volumes, which modifies the value of the variables of interest within each volume.

Image source: https://www.comsol.com/multiphysics/finite-element-method?parent=physics-pdes-numerical-042-62

Image source: https://www.comsol.com/blogs/your-guide-to-meshing-techniques-for-efficient-cfd-modeling/
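To make the flux exchange between control volumes concrete, here is a minimal FVM sketch for the diffusion of a scalar; the uniform 1D grid, constant diffusivity, zero-flux boundaries, and all numeric values are assumptions made for illustration:

    # Minimal FVM sketch (assumed setup): 1D diffusion of a scalar u with
    # constant diffusivity D; each cell changes by the net flux across its faces.
    import numpy as np

    n, D, dx = 50, 1.0e-3, 0.02
    dt = 0.4 * dx ** 2 / D          # respects the explicit stability limit
    u = np.zeros(n)
    u[n // 2] = 1.0                 # initial pulse in the middle cell

    for _ in range(200):
        # Diffusive flux at each interior face: F = -D * (u_right - u_left) / dx
        flux = -D * (u[1:] - u[:-1]) / dx
        # Conservation: each cell gains (flux in) - (flux out); the two
        # boundary faces are treated as zero-flux (insulated) here.
        u[1:-1] += dt / dx * (flux[:-1] - flux[1:])
        u[0] += dt / dx * (-flux[0])
        u[-1] += dt / dx * (flux[-1])

    print(f"total amount of u after diffusion: {u.sum():.6f}")  # stays 1.0

Because every interior flux leaves one cell and enters its neighbor, the scheme conserves the transported quantity exactly, which is the defining property of the method.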

Mesh generation: A tedious but necessary task

If the system exhibits spatial variations, the next step is to generate a mesh representing the spatial discretization. Many users rely at this point on specialized programs for automatic mesh generation, letting the program apply its own pre-established algorithms and parameters. Although these programs aim to simplify the simulation process by automating this task, the generated meshes are often not the most appropriate for the simulation at hand.

At this point, it becomes necessary for users to set their own parameters and define the sequence in which a better quality mesh will be built. This is not an easy task, however: the quality of a mesh depends on several factors, such as the maximum element size, the growth rate, and the element distribution along each boundary, which is why this process is often considered an art.

Despite this, generating a good quality mesh can save us several headaches during the solution of the computational model. The quality of a mesh's elements can be assessed through different criteria, such as skewness, the volume-to-edge-length ratio of each element, and the maximum angle, among others.
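As an example of such a criterion, here is a small sketch of one common triangle-quality measure; the choice of metric is an assumption for illustration, since tools differ in which criteria they expose. It computes the inradius-to-circumradius ratio, normalized so an equilateral element scores 1.0:

    # Sketch of one common element-quality metric (an illustrative choice, not
    # the only one): the normalized inradius-to-circumradius ratio of a
    # triangle, which is 1.0 for an equilateral element and approaches 0 as
    # the element degenerates into a sliver.
    import math

    def triangle_quality(p1, p2, p3):
        a = math.dist(p2, p3)
        b = math.dist(p1, p3)
        c = math.dist(p1, p2)
        s = 0.5 * (a + b + c)                                        # semiperimeter
        area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))  # Heron's formula
        if area == 0.0:
            return 0.0               # fully degenerate (collinear) element
        r = area / s                 # inradius
        R = a * b * c / (4.0 * area) # circumradius
        return 2.0 * r / R           # 1.0 for an equilateral triangle

    print(triangle_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2)))  # ~1.0
    print(triangle_quality((0, 0), (1, 0), (0.5, 0.05)))              # thin sliver, close to 0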

Which solution method is the most appropriate?

At this point, the mathematical model is ready to be solved. During the discretization and mesh-generation process, the mathematical model is reduced to a system of linear or nonlinear equations, which can be written in matrix notation.

      Ax = b

where A is the coefficient matrix, representing the relationships between the different mesh elements according to the established mathematical model; x is the vector of unknown variables to be calculated for each mesh element; and b is the vector of source terms associated with the system. If the coefficients of A or the source terms are nonlinear with respect to any of the unknowns, the system is nonlinear, and A and b must be recomputed in each iteration using the vector of variables calculated in the previous iteration.
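A minimal sketch of that recompute-and-resolve loop, often called Picard or successive-substitution iteration, might look as follows; the 2x2 system is made up purely for illustration:

    # Minimal sketch of Picard (successive-substitution) iteration for a
    # nonlinear system A(x) x = b. The 2x2 system is made up for illustration.
    import numpy as np

    def assemble(x):
        # Coefficients depend on the current solution -> the system is nonlinear.
        A = np.array([[4.0 + x[0] ** 2, 1.0],
                      [1.0, 3.0 + abs(x[1])]])
        b = np.array([5.0, 4.0])
        return A, b

    x = np.zeros(2)                      # initial guess
    for iteration in range(50):
        A, b = assemble(x)               # rebuild A and b from the previous iterate
        x_new = np.linalg.solve(A, b)    # solve the linearized system
        if np.linalg.norm(x_new - x) < 1e-10:
            break                        # converged
        x = x_new

    print(iteration, x_new)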

Due to the large number of variables to be calculated and the complexity of the system of equations, it is necessary to use specialized algorithms for the solution of these systems. Among them are the so-called direct algorithms, which solve the system of equations in a single factorization step; the most relevant include PARDISO, MUMPS, and WSMP, among others.

Image source: https://en.wikipedia.org/wiki/Sparse_matrix

Image source: https://www.comsol.com/blogs/improving-convergence-multiphysics-problems/

On the other hand, there are iterative algorithms, which find an approximate solution to the system of linear equations iteratively, progressively improving the solution until a desired degree of accuracy and convergence is reached. The most relevant iterative algorithms include Gauss-Seidel, GMRES, and BiCGStab, among others.

The choice of the most appropriate algorithm depends on the specific characteristics of the problem, such as the size and structure of the matrix, the number of non-zero elements, the required accuracy, and the available computational resources. Direct algorithms have the advantage of being robust and general purpose; however, their computational cost grows rapidly as the size of the problem and the complexity of the mathematical model increase. Iterative algorithms require fewer computational resources and their cost grows more slowly than that of direct algorithms, but they are less robust, and their convergence suffers on ill-conditioned matrices, which make the solution sensitive to the structure of the matrices. It is therefore advisable to experiment with different algorithms and parameters to find the most efficient and accurate solution.
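Here is a hedged sketch of the two approaches on the same sparse system, using SciPy for illustration; the tridiagonal matrix is made up and deliberately well conditioned, so unpreconditioned GMRES converges quickly, whereas real problems often need preconditioning:

    # Sketch: the same sparse system solved with a direct solver and with an
    # iterative Krylov method (GMRES). The matrix is illustrative only.
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import spsolve, gmres

    n = 1000
    # Made-up tridiagonal, diagonally dominant matrix (well conditioned).
    A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
    b = np.ones(n)

    x_direct = spsolve(A, b)      # direct: factorize once, solve in one step
    x_iter, info = gmres(A, b)    # iterative: refine until tolerance (info == 0 on success)

    print(info, np.linalg.norm(x_direct - x_iter))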

Obtaining truly relevant information from post-processing

Once the calculations necessary to find a solution have been completed, the next step is to process the results in a way that provides useful information for our research. Before starting this process, it is necessary to verify that the results make physical sense and can be contrasted with experimental data. In this way, it can be ensured that the results are reliable for the required analysis.

Many programs perform post-processing automatically, providing information through surface plots, contours, and the like. This is often useful, since the information provided can help us draw conclusions from our research more quickly. However, it is worth taking the time to produce your own graphs, which would not normally be generated automatically, and which provide complementary information that helps achieve the proposed objectives. Sometimes a line graph or a single global quantity conveys more than a surface plot with flashy colors.
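As a small illustration, the sketch below extracts a 1D profile along the mid-line of a 2D field; the "temperature" field here is synthetic, standing in for exported simulation output:

    # Illustrative sketch: instead of relying only on a 2D surface plot,
    # extract a 1D profile along a line of interest. The field is made up.
    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(0.0, 1.0, 101)
    y = np.linspace(0.0, 1.0, 101)
    X, Y = np.meshgrid(x, y)
    T = np.exp(-10.0 * ((X - 0.5) ** 2 + (Y - 0.5) ** 2))  # stand-in for a simulated field

    plt.plot(x, T[50, :])   # row 50 corresponds to y = 0.5 on this grid
    plt.xlabel("x")
    plt.ylabel("T along the mid-line y = 0.5")
    plt.savefig("midline_profile.png")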

Programming our own calculation routines

At this point, we could consider that we have broad control of the simulation and are taking full advantage of the computational tool. While this is true in many cases, we can still go further and add new capabilities to our programs by writing code they can understand. For example, we can add a function describing a property or a physical behavior that is not implemented by default, or add special calculation routines to solve a particular problem and improve the quality of our simulation.
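For example, a minimal sketch of such a user-defined property might be an Arrhenius-type temperature-dependent viscosity; the functional form is standard, but all constants here are illustrative, not real material data:

    # Hypothetical example of a user-defined material property: an
    # Arrhenius-type temperature-dependent viscosity that a built-in material
    # library might lack. All constants are illustrative.
    import math

    def viscosity(T, mu_ref=1.0e-3, T_ref=293.15, E_a=15_000.0, R=8.314):
        """Dynamic viscosity [Pa*s] as a function of temperature T [K]."""
        return mu_ref * math.exp(E_a / R * (1.0 / T - 1.0 / T_ref))

    print(viscosity(293.15))   # returns mu_ref at the reference temperature
    print(viscosity(350.0))    # lower viscosity at a higher temperature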

Each computational tool provides a user guide covering the implementation of such programming routines. Using them allows us to obtain more complete results, consistent with the proposed objectives, and to implement new functionality that may be useful in the future.

Conclusion

As we have seen, a successful simulation implies knowledge of each of the processes involved and how we can make the most of each one of them. Although the different computational tools offer a wide range of options to carry out our studies, experience and knowledge will determine how accurate and efficient they will be.

In Silico SE SAS Solutions

If you believe that simulation can boost one of your projects, or a project you are part of, tell us more; we would love to explore options to help you make it happen.

If you want to learn about simulation, you can access our STEM training courses, where you can start from scratch and acquire skills in this cutting-edge tool.

You can also access customized consulting packages to solve specific doubts with your simulation project, both from academia and industry.

In Silico SE SAS has more than 26 years of experience in systems and process simulation, scientific research, university teaching, and the development of engineering projects; we would love to work with you to contribute to the achievement of the SDGs. Contact us.

About the author: Edgar Alejandro Cano Zapata

I am a chemical engineer with a broad interest in modeling and simulation. I have participated in various projects on the study of bioreactors and electrolytic cells through computational modeling. I firmly believe that in silico analysis will enable great technological advances by allowing us to evaluate scenarios that are normally difficult to reproduce in the laboratory. For this reason, I wish to contribute in this area to promote technological progress at the national and international levels.

Did you like this blog?

Follow us

Contact Us