Commit ec8870aa authored by Guido Giuntoli's avatar Guido Giuntoli

Cleaning a bit

parent f385d263
......@@ -29,9 +29,12 @@ extract_data.py
compile_commands.json
scripts/output*
# Latex
*.blg
*.bbl
*.synctex.gz
*.fls
*.fdb_latexmk
\#*
.#*
......
......@@ -9,7 +9,7 @@ Code to localize strains and homogenize stress in a Representative Volume Elemen
1. Works with 3D structured finite element (FE) problems
2. OpenACC acceleration support for GPUs
3. OpenMP support for multi-core CPUs
4. Solver: Conjugate Gradients with Diagonal Preconditioner (CGPD); a minimal sketch is shown after this list
5. Different varieties of micro-structures and material laws
6. Native instrumentation to measure performance
7. C and Fortran Wrappers
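The sketch referenced in item 4 follows. It is a minimal, illustrative Jacobi-preconditioned Conjugate Gradient loop; the names (`cg_diag_precond`, `Vec`, the `matvec` callback) are placeholders and are not MicroPP's actual API.

```
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Minimal Jacobi-preconditioned CG sketch (illustrative only; these names are
// not MicroPP's actual API). Solves A x = b, where `matvec` computes y = A x
// and `diag` holds the diagonal of A. Returns the number of iterations used.
using Vec = std::vector<double>;

static double dot(const Vec &a, const Vec &b)
{
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i)
        s += a[i] * b[i];
    return s;
}

int cg_diag_precond(const std::function<void(const Vec &, Vec &)> &matvec,
                    const Vec &diag, const Vec &b, Vec &x,
                    int max_its = 1000, double tol = 1.0e-8)
{
    const std::size_t n = b.size();
    Vec r(n), z(n), p(n), Ap(n);

    matvec(x, Ap);                          // r = b - A x0
    for (std::size_t i = 0; i < n; ++i)
        r[i] = b[i] - Ap[i];
    for (std::size_t i = 0; i < n; ++i)     // z = M^{-1} r (Jacobi)
        z[i] = r[i] / diag[i];
    p = z;
    double rz = dot(r, z);

    for (int it = 0; it < max_its; ++it) {
        if (std::sqrt(dot(r, r)) < tol)
            return it;                      // converged
        matvec(p, Ap);
        const double alpha = rz / dot(p, Ap);
        for (std::size_t i = 0; i < n; ++i) {
            x[i] += alpha * p[i];
            r[i] -= alpha * Ap[i];
        }
        for (std::size_t i = 0; i < n; ++i)
            z[i] = r[i] / diag[i];
        const double rz_new = dot(r, z);
        const double beta = rz_new / rz;
        rz = rz_new;
        for (std::size_t i = 0; i < n; ++i)
            p[i] = z[i] + beta * p[i];
    }
    return max_its;                         // not converged within max_its
}
```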
......@@ -20,7 +20,7 @@ Micropp solves the FE problem on heterogeneous RVEs composed with more than one
<img src="./pics/mic_1.png" alt="drawing" width="300"/>
Micropp is designed to be coupled with a macro-scale code in order to simulate multi-scale physical systems, such as a composite aircraft panel:
<img src="./pics/coupling-micropp-macro.png" alt="drawing" width="300"/>
......@@ -28,12 +28,12 @@ Micropp has been coupled with high-performance codes such as [Alya](http://bscca
<img src="./pics/scala.png" alt="drawing" width="350"/>
Micropp has its own ELL matrix format routines, optimized for the structured-grid geometries it has to manage. This allows it to reach very good performance in the matrix assembly stage: the ratio between the assembly time and the solve time can be below 1%, depending on the problem size. The solver for the linear system of equations is a Conjugate Gradient algorithm with a diagonal preconditioner (sketched above).
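As a loose illustration of the ELL idea (the struct and field names below are hypothetical and are not MicroPP's internal data structures): every row stores a fixed number of (value, column) slots, so assembly can write coefficients at fixed offsets and the matrix-vector product reduces to two regular loops that map well onto OpenMP and OpenACC.

```
#include <vector>

// Hypothetical ELL (ELLPACK) layout: every row keeps exactly `max_cols`
// (value, column) slots, padded with column index -1, so assembly writes
// coefficients at fixed offsets and the SpMV is two regular nested loops.
struct EllMatrix {
    int nrows = 0;
    int max_cols = 0;              // fixed number of entries per row
    std::vector<double> vals;      // nrows * max_cols values, row-major
    std::vector<int> cols;         // nrows * max_cols column indices, -1 = padding

    // y = A x  (y must already have nrows entries)
    void matvec(const std::vector<double> &x, std::vector<double> &y) const
    {
        for (int i = 0; i < nrows; ++i) {
            double sum = 0.0;
            for (int j = 0; j < max_cols; ++j) {
                const int c = cols[i * max_cols + j];
                if (c >= 0)
                    sum += vals[i * max_cols + j] * x[c];
            }
            y[i] = sum;
        }
    }
};
```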
Build steps with CMake:
-----------------------
1. Clone the repository
2. `cd` into the cloned directory
3. `mkdir build` (the name can also be `build` plus any suffix, e.g. `build-debug`)
4. `cd build`
......@@ -56,9 +56,8 @@ and the debug version:
cmake -DCMAKE_BUILD_TYPE=Debug ..
```
Other possible options are:
1. `TIMER=[ON|OFF]` activate the native instrumentation for measuring times
2. `OPENACC=[ON|OFF]` compiles with OpenACC (only supported by some compilers such as PGI)
3. `OPENMP=[ON|OFF]` compiles with OpenMP for multi-core CPUs
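For example, a hypothetical invocation enabling the timer and OpenMP on top of a release build (flag availability depends on your tool-chain) would be:

```
cmake -DCMAKE_BUILD_TYPE=Release -DTIMER=ON -DOPENMP=ON ..
```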
all:
pdflatex manual.tex
\section{Benchmarks}
\subsection{\texttt{benchmarks-sol-ass}}
This benchmark is intended to measure the computing times of the assembly of the residual vector ($\times2$) and the
......@@ -129,7 +128,7 @@ product) of the stage is accelerated and it is translated to the whole calculati
x unit=\# Elements,
ylabel=Computing Time,
y unit=s,
x=0.15cm,
ymin=-1,
xtick={30,40,50,60,70,80,90,100},
xticklabels={30\tst,40\tst,50\tst,60\tst,70\tst,80\tst,90\tst,100\tst},
......@@ -138,9 +137,9 @@ product) of the stage is accelerated and it is translated to the whole calculati
legend style={anchor=north west},
legend pos= north west
]
\addplot [fill=blue, ybar, point meta = explicit symbolic, nodes near coords]
table[meta=Speedup,x=interval,y=CPU] {\mydata};
\addplot [fill=green, ybar, point meta = explicit symbolic, nodes near coords]
table[x=interval,y=GPU] {\mydata};
\legend{CPU, GPU};
\end{axis}
......
......@@ -4,12 +4,13 @@
\usepackage{algorithmic}
\usepackage{amsmath}
\usepackage{amssymb}
\usepackage{amsfonts}
\usepackage{siunitx}
\usepackage{standalone}
\usepackage{tikz}
\usetikzlibrary{matrix,backgrounds,calc,shapes,arrows,arrows.meta,fit,positioning}
\usetikzlibrary{chains,shapes.multipart}
\usepackage{pgfplots, pgfplotstable}
\usepgfplotslibrary{units}
\usepackage{xcolor}
......@@ -46,14 +47,23 @@ tabsize=4
\begin{document}
\title{Micropp: Reference Manual}
%\author{\IEEEauthorblockN{
% Guido Giuntoli\IEEEauthorrefmark{1}\IEEEauthorrefmark{3},
% Jimmy Aguilar\IEEEauthorrefmark{1}\IEEEauthorrefmark{4}
% Judica\"el Grasset\IEEEauthorrefmark{2}\IEEEauthorrefmark{5}}
%\IEEEauthorblockA{\IEEEauthorrefmark{1}Barcelona Supercomputing Center, Spain}
%\IEEEauthorblockA{\IEEEauthorrefmark{2}STFC Daresbury Laboratory, UK}
%\IEEEauthorblockA{\IEEEauthorrefmark{3}guido.giuntoli@bsc.es}
%\IEEEauthorblockA{\IEEEauthorrefmark{4}jimmy.aguilar@bsc.es}
%\IEEEauthorblockA{\IEEEauthorrefmark{5}judicael.grasset@stfc.ac.uk}}
\author{
Guido Giuntoli (gagiuntoli@gmail.com) \\
Jimmy Aguilar (spacibba@aol.com) \\
Judica\"el Grasset (judicael.grasset@stfc.ac.uk)
}
\maketitle
......@@ -79,16 +89,10 @@ tabsize=4
\section{Implementation}
\begin{figure}[!htbp]
\centering
\resizebox{5cm}{!}{\input{figures/work_basis.tikz}}
\vspace{0.5cm}
\caption{\label{fig:comp_scheme}}
\end{figure}
The Voigt convention used here is the same as in Ref.~\cite{simo}.
\begin{equation}
\epsilon = \left[\epsilon_{11} \quad \epsilon_{22} \quad \epsilon_{33} \quad \epsilon_{12} \quad \epsilon_{13} \quad \epsilon_{23} \right]^T
\end{equation}
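% Illustrative addition, not taken from the manual: assuming the stress tensor
% uses the same component ordering as the strain above, its Voigt form reads
\begin{equation}
\sigma = \left[\sigma_{11} \quad \sigma_{22} \quad \sigma_{33} \quad \sigma_{12} \quad \sigma_{13} \quad \sigma_{23} \right]^T
\end{equation}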
\section{Geometries}
......@@ -175,7 +179,7 @@ f_{n+1}^{\text{trial}} = || s_{n+1}^{\text{trial}} || - \sqrt{\frac{2}{3}} (\sig
\begin{array}{ll}
\epsilon_{n+1}^{p} = \epsilon_{n}^{p} - \Delta \gamma \mathbf{n}_{n+1} \\[5pt]
\alpha_{n+1} = \alpha_{n} + \sqrt{\frac{2}{3}} \Delta \gamma \\[5pt]
\sigma_{n+1} = k \, \text{tr} (\epsilon_{n+1}) + s_{n+1}^{\text{trial}} - 2 \mu \Delta \gamma \mathbf{n}_{n+1}
\end{array}
\right.
\end{equation}
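% Illustrative addition, not taken from the manual: assuming linear isotropic
% hardening with modulus K, enforcing plastic consistency at step n+1 gives the
% plastic multiplier of the radial return in closed form,
\begin{equation}
\Delta \gamma = \frac{f_{n+1}^{\text{trial}}}{2 \mu + \frac{2}{3} K}
\end{equation}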
......@@ -196,6 +200,7 @@ f_{n+1}^{\text{trial}} = || s_{n+1}^{\text{trial}} || - \sqrt{\frac{2}{3}} (\sig
\end{algorithm}
\input{Sections/compilation.tex}
\input{Sections/coding_style.tex}
\input{Sections/benchmarks.tex}
......@@ -212,14 +217,14 @@ The authors would like to thank to the Barcelona Supercomputing Center for the r
\bibitem{paper1}{
G. Giuntoli, J. Aguilar, M. Vazquez, S. Oller and G. Houzeaux.
``An FE$^2$ multi-scale implementation for modeling composite materials on distributed architectures''.
Coupled Systems Mechanics, 8(2), 2018
}
\bibitem{simo}{
J.C. Simo \& T.J.R. Hughes.
``Computational Inelasticity''.
Springer, 2000.
}
\bibitem{cte-power}{
......@@ -230,7 +235,7 @@ The authors would like to thank to the Barcelona Supercomputing Center for the r
\bibitem{oller}{
S. Oller.
``Numerical Simulation of Mechanical Behavior of Composite Materials''.
Springer, 2014.
}
\end{thebibliography}
......