|
|
|
The Alya Python module (pyAlya for short) is a set of tools developed in Python to interface with the Alya post-process, using the mpio tools to compute post-processing quantities in parallel.
|
|
|
|
|
|
|
|
This tool has been developed with Python 3 in mind and might not be compatible with earlier versions of Python.
|
|
|
|
|
|
|
|
## Usage
|
|
|
|
|
|
|
|
In order for Python to see pyAlya, the tool only needs to be added to the _PYTHONPATH_ environment variable:
|
|
|
|
```bash
|
|
|
|
export PYTHONPATH=$PYTHONPATH:<PATH_TO_THIS_TOOL>/
|
|
|
|
```
|
|
|
|
**CAREFUL**: the *pyAlya* folder itself must be left out of *PATH_TO_THIS_TOOL*, e.g.,
|
|
|
|
```bash
|
|
|
|
export PYTHONPATH=$PYTHONPATH:~/stranger-tools/Alya
|
|
|
|
```
|
|
|
|
assuming the git repository was cloned at the home directory. Then, to load the tool on Python just do
|
|
|
|
```python
|
|
|
|
import pyAlya
|
|
|
|
```
|
|
|
|
or to import a specific module (say the *Field* class)
|
|
|
|
```python
|
|
|
|
import pyAlya.Field
|
|
|
|
```
|
|
|
|
or
|
|
|
|
```python
|
|
|
|
from pyAlya import Field
|
|
|
|
```
|
|
|
|
|
|
|
|
### Compiled modules
|
|
|
|
|
|
|
|
Some modules are also provided as Cython code that needs to be compiled into libraries before runtime. This provides a huge performance boost, especially in modules that depend strongly on gradient or math operations. Moreover, the **MEP** module depends entirely on a C++ library, so for it compilation is mandatory.
|
|
|
|
|
|
|
|
In order to compile the tool, simply use the provided *Makefile*:
|
|
|
|
```bash
|
|
|
|
make python
|
|
|
|
```
|
|
|
|
|
|
|
|
## Modules
|
|
|
|
|
|
|
|
What follows is a description of the Python modules implemented so far.
|
|
|
|
|
|
|
|
### Mesh class
|
|
|
|
|
|
|
|
The Mesh class wraps the nodes, the elements and a number of variables, and relates them so that parallel operations are easier. Mesh needs a number of Alya outputs to function properly; these are:
|
|
|
|
* LNODS, the connectivity matrix.
|
|
|
|
* LTYPE, the element type vector.
|
|
|
|
* LNINV, the global node numbering.
|
|
|
|
* COMMU, the communications matrix (only needed for parallel operations).
|
|
|
|
* MASSM, the lumped mass matrix (only needed for parallel operations and can be recomputed by the class).
|
|
|
|
|
|
|
|
Mesh implements a number of methods to find nodes and elements within the mesh subdomain. It also stores a copy of the Communicator class for internal use and easy access. Parallel FEM operations (such as the gradient or divergence) are wrapped inside the Mesh class for easy use:
|
|
|
|
```python
|
|
|
|
gscaf = mesh.gradient(scaf) # gradient of a scalar field
gvecf = mesh.gradient(vecf) # gradient of a vectorial field
gtenf = mesh.gradient(tenf) # gradient of a tensorial field
|
|
|
|
```
|
|
|
|
|
|
|
|
Moreover, Mesh implements a method to filter the nodes that are on the boundaries when LNINV and COMMU are defined:
|
|
|
|
```python
|
|
|
|
array_filtered = mesh.filter_bc(array)
|
|
|
|
```
|
|
|
|
|
|
|
|
Meshes can be easily defined as:
|
|
|
|
```python
|
|
|
|
# Define mesh parameters
|
|
|
|
xyz = np.array([[0,0,0], # 1
|
|
|
|
[1.2,1,0], # 2
|
|
|
|
[0,0.3,0], # 3
|
|
|
|
[0,1,0.2], # 4
|
|
|
|
[0,1.5,0] # 5
|
|
|
|
])
|
|
|
|
lnods = np.array([[1,2,3,4],[3,2,5,4]])
|
|
|
|
ltype = np.array([30,30])
|
|
|
|
lninv = np.array([1,2,3,4,5])
|
|
|
|
|
|
|
|
# Create the domain mesh
|
|
|
|
mesh = pyAlya.Mesh(xyz,lnods,ltype,lninv,ngauss=4)
|
|
|
|
```
|
|
|
|
or they can be read from the outputs of Alya using the *read* method:
|
|
|
|
```python
|
|
|
|
BASEDIR = './'
|
|
|
|
CASESTR = 'cavtri03'
|
|
|
|
|
|
|
|
mesh = pyAlya.Mesh.read(CASESTR,basedir=BASEDIR,ngauss=4,read_commu=True,read_massm=False)
|
|
|
|
```
|
|
|
|
It is usually recommended, however, to drop the **ngauss** argument so that the tool automatically guesses the number of Gauss points for the input elements. Meshes can be easily saved to and loaded from various formats using the *save* and *load* methods. Unlike Fields, no reduction operation is currently implemented for meshes.
|
|
|
|
|
|
|
|
### Field class
|
|
|
|
|
|
|
|
Field is a class that stores various arrays together with the node positions so that they are both referenced at the same time. Fields can be operated between themselves, i.e., they can be added and subtracted, and multiplied and divided by a constant. When two fields are not equal, a merging process occurs; this is particularly useful to reduce the fields using the *MPI.SUM* operation. This operation is, however, slow, and a faster algorithm, *fieldFastReduce*, is provided for when the node ordering is not important. Reducing will filter out the NaNs created by the master processor (rank 0).
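
The merging idea can be sketched in plain NumPy: partial fields from two subdomains are combined through their global node numbering (LNINV). This is an illustration of the concept only, not the actual pyAlya implementation:

```python
import numpy as np

# Global node numbering (LNINV) and partial fields from two subdomains;
# node 3 is shared between both subdomains (illustrative data, not the pyAlya API).
lninv1, f1 = np.array([1, 2, 3]), np.array([10., 20., 30.])
lninv2, f2 = np.array([3, 4, 5]), np.array([30., 40., 50.])

nglob = 5
fglob = np.zeros(nglob)
count = np.zeros(nglob)

for lninv, f in ((lninv1, f1), (lninv2, f2)):
    np.add.at(fglob, lninv - 1, f)   # sum contributions per global node
    np.add.at(count, lninv - 1, 1.)  # how many subdomains touch each node

fglob /= count  # average out nodes counted more than once
print(fglob)    # [10. 20. 30. 40. 50.]
```

The shared node is summed twice and averaged back, which is why a plain *MPI.SUM* reduction needs the boundary filtering mentioned above.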
|
|
|
|
|
|
|
|
Fields can be easily defined as:
|
|
|
|
```python
|
|
|
|
stats = pyAlya.Field(xyz = pyAlya.truncate(mesh.xyz,6),
|
|
|
|
AVVEL = mesh.newArray(ndim=3), # Vectorial field
|
|
|
|
AVVE2 = mesh.newArray(ndim=3), # Vectorial field
|
|
|
|
RETAU = mesh.newArray() # Scalar field
|
|
|
|
)
|
|
|
|
```
|
|
|
|
or they can be read from the outputs of Alya using the *read* method:
|
|
|
|
```python
|
|
|
|
BASEDIR = './'
|
|
|
|
CASESTR = 'channel'
|
|
|
|
VARLIST = ['VELOC','PRESS']
|
|
|
|
instant = 0
|
|
|
|
|
|
|
|
field, header = pyAlya.Field.read(CASESTR,VARLIST,instant,mesh.xyz,basedir=BASEDIR)
|
|
|
|
```
|
|
|
|
Fields can be easily saved to and loaded from various formats using the *save* and *load* methods.
|
|
|
|
|
|
|
|
Using the **Geom** module, one can select a sub-field of a Field and only work with that part. This is done with the *select* method as:
|
|
|
|
```python
|
|
|
|
field_cut = field.select(pyAlya.Geom.Cube(...))
|
|
|
|
```
|
|
|
|
For more details on how to use **Geom**, please refer to the appropriate section and the examples. When running in parallel, if the partition does not have any node in the vicinity, an empty field is returned (much like on the master).
|
|
|
|
|
|
|
|
### Communicator class
|
|
|
|
|
|
|
|
The Communicator class manages the point-to-point communications between different subdomains in a similar manner to how Alya does. It is meant to be an internal class that is automatically defined inside Mesh and that the user can later access. Communicators are usually defined from the communications matrix outputted by Alya. They implement methods to communicate point-to-point scalar and vectorial fields as well as reduction operations.
|
|
|
|
|
|
|
|
### Checkpoint class
|
|
|
|
|
|
|
|
The Checkpoint class can be used to restart the code at a certain point. It allows saving the checkpoint class, along with variables defined by the user, as one _pickle_ file per processor that can later be recovered. The checkpoint class can be created as:
|
|
|
|
```python
|
|
|
|
checkp = pyAlya.Checkpoint.create(CHECKP_FREQ,START,END,step=DT)
|
|
|
|
```
|
|
|
|
This will either read from disk, if a checkpoint is available, or create a new instance of the class and initialize the checkpoints. Checkpoints include tools to manage loops easily. Saving data can either be done normally or forced:
|
|
|
|
```python
|
|
|
|
checkp.save(flag,instant,'message',var_name=var)
|
|
|
|
checkp.force_save(flag,instant,'message',var_name=var)
|
|
|
|
```
|
|
|
|
*save* will honour the checkpoint frequency defined when creating the class, while *force_save* will dump directly to disk. A **flag** variable can be used to distinguish different parts of the code. These can later be accessed with:
|
|
|
|
```python
|
|
|
|
if checkp.enter_part(flag=flag):
|
|
|
|
```
|
|
|
|
A message can be defined and will be printed by the master processor (usually rank 0). Variables can later be recovered as:
|
|
|
|
```python
|
|
|
|
var = checkp['var_name']
|
|
|
|
```
|
|
|
|
Finally, the checkpoints can be cleaned at the end of the code with the following command:
|
|
|
|
```python
|
|
|
|
checkp.clean()
|
|
|
|
```
|
|
|
|
Please refer to the examples in order to see how to implement the checkpoints.
|
|
|
|
|
|
|
|
### Finite Elements module
|
|
|
|
|
|
|
|
This Python module contains a finite element library with linear elements (at the moment) and implements routines to compute the gradient and the lumped mass matrix. A high-level version of these functions can be accessed through the Mesh class.
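
As an illustration of what the lumped mass matrix amounts to for linear elements, the following self-contained NumPy sketch assembles it on a two-triangle mesh, where each element contributes a third of its area to each of its nodes. It is a sketch of the idea, not the library's code:

```python
import numpy as np

# Lumped mass matrix for linear triangles: each element contributes
# area/3 to each of its three nodes (a sketch, not pyAlya's implementation).
xyz   = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
lnods = np.array([[0, 1, 2], [1, 3, 2]])  # 0-based connectivity

massm = np.zeros(xyz.shape[0])
for el in lnods:
    v1, v2 = xyz[el[1]] - xyz[el[0]], xyz[el[2]] - xyz[el[0]]
    area   = 0.5*abs(v1[0]*v2[1] - v1[1]*v2[0])  # triangle area from the cross product
    massm[el] += area/3.0

print(massm)        # per-node lumped mass
print(massm.sum())  # equals the total mesh area, 1.0
```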
|
|
|
|
|
|
|
|
Two master element classes, *Element2D* and *Element3D*, are implemented to manage 2D and 3D elements respectively. The elements programmed so far are:
|
|
|
|
* LinearTriangle (2D)
|
|
|
|
* LinearTetrahedron (3D)
|
|
|
|
* LinearPyramid (3D)
|
|
|
|
* LinearPrism (3D)
|
|
|
|
* TrilinearBrick (3D)
|
|
|
|
|
|
|
|
A *createElementByType* function helps create the correct element given the Alya element ID. This is especially important on hybrid meshes.
|
|
|
|
|
|
|
|
A compilable Cython version of these tools is provided in order to boost the performance of the tool.
|
|
|
|
|
|
|
|
### Input-Output tools
|
|
|
|
|
|
|
|
This Python module contains the mpio read (and hopefully write) routines from Joan C., as well as the h5 interface for the HiFiTurb database.
|
|
|
|
|
|
|
|
For the h5 output, serial and parallel methods are provided. For the latter, it is recommended to perform the *filter_bc* operation from Mesh first. Note that the parallel output needs to be saved in a contiguous format for performance reasons, so expect your data to be scrambled.
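
Assuming the global node numbering (LNINV) is saved alongside the data, the original ordering can be recovered afterwards; a minimal NumPy sketch:

```python
import numpy as np

# Parallel output is written contiguously, so the node ordering is lost;
# the global numbering (LNINV) recovers it (illustrative data, not the pyAlya API).
lninv = np.array([3, 1, 5, 2, 4])            # global ids as written on disk
data  = np.array([30., 10., 50., 20., 40.])  # scrambled field

order = np.argsort(lninv)  # permutation that restores global ordering
print(data[order])         # [10. 20. 30. 40. 50.]
```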
|
|
|
|
|
|
|
|
### Math
|
|
|
|
|
|
|
|
This Python module contains basic functions to operate on vector and tensor arrays (tensors defined as 3x3 matrices).
|
|
|
|
|
|
|
|
The implemented vectorial operations are:
|
|
|
|
* dot: dot product between two vectorial arrays.
|
|
|
|
* cross: cross product between two vectorial arrays.
|
|
|
|
* outer: outer product between two vectorial arrays.
|
|
|
|
* scaVecProd: scalar array times a vector array.
|
|
|
|
* vecTensProd: vector array times a tensor array.
|
|
|
|
|
|
|
|
The implemented tensor operations are:
|
|
|
|
* transpose: transpose of a tensor array.
|
|
|
|
* trace: trace of a tensor array.
|
|
|
|
* det: determinant of a tensor array.
|
|
|
|
* inverse: inverse of a tensor array.
|
|
|
|
* matmul: matrix multiplication of two tensor arrays.
|
|
|
|
* doubleDot: double dot product of a tensor array.
|
|
|
|
* scaTensProd: scalar array times a tensor array.
|
|
|
|
* tensVecProd: tensor array times a vector array.
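
These functions operate row-wise on arrays of vectors and tensors. The NumPy sketch below illustrates an assumed layout (one vector per row, one flattened 3x3 tensor per row) and computes the equivalents of *dot*, *cross* and *trace* under that assumption; it is not the pyAlya implementation:

```python
import numpy as np

# Row-wise operations on arrays of vectors and tensors (assumed layout:
# one vector per row, one 3x3 tensor flattened to 9 columns per row).
vec1 = np.array([[1., 0., 0.], [0., 2., 0.]])
vec2 = np.array([[0., 1., 0.], [0., 2., 0.]])

dot   = np.einsum('ij,ij->i', vec1, vec2)  # per-row dot product
cross = np.cross(vec1, vec2)               # per-row cross product

tens  = np.tile(np.eye(3).reshape(1, 9), (2, 1))  # two identity tensors
trace = tens[:, 0] + tens[:, 4] + tens[:, 8]      # per-row trace

print(dot)    # [0. 4.]
print(trace)  # [3. 3.]
```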
|
|
|
|
|
|
|
|
A compilable Cython version of these tools is provided in order to boost the performance of the tool.
|
|
|
|
|
|
|
|
### Postprocess
|
|
|
|
|
|
|
|
Postprocess is a set of tools to perform post-processing operations:
|
|
|
|
* vorticity: computes the vorticity given the velocity gradients.
|
|
|
|
* Qcriterion: computes the Q-criterion given the velocity gradients.
|
|
|
|
* Omegacriterion: computes the Omega-criterion given the velocity gradients.
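
For reference, the standard definitions behind these functions can be sketched on a single velocity-gradient tensor: the Q-criterion as Q = ½(‖Ω‖² − ‖S‖²), with S and Ω the symmetric and antisymmetric parts of the gradient, and the vorticity as the curl of the velocity. This is a generic NumPy illustration, not the pyAlya API:

```python
import numpy as np

# Velocity gradient G[i,j] = du_i/dx_j for a solid-body-like rotation.
gradv = np.array([[ 0., 1., 0.],
                  [-1., 0., 0.],
                  [ 0., 0., 0.]])

S     = 0.5*(gradv + gradv.T)  # strain-rate tensor
Omega = 0.5*(gradv - gradv.T)  # rotation tensor
Q     = 0.5*(np.sum(Omega*Omega) - np.sum(S*S))
vort  = np.array([gradv[2, 1] - gradv[1, 2],
                  gradv[0, 2] - gradv[2, 0],
                  gradv[1, 0] - gradv[0, 1]])  # curl components

print(Q)     # 1.0 (rotation-dominated point)
print(vort)  # [ 0.  0. -2.]
```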
|
|
|
|
|
|
|
|
### Statistics
|
|
|
|
|
|
|
|
Statistics is a set of tools to compute statistics of the flow:
|
|
|
|
* addS1: accumulate first order statistics.
|
|
|
|
* addS2: accumulate second order statistics.
|
|
|
|
* addS3: accumulate third order statistics.
|
|
|
|
* tripleCorrelation: compute the triple correlation.
|
|
|
|
* reynoldsStressTensor: compute the Reynolds stress tensor given the fluctuating velocity or AVVEL, AVVE2 and AVVXY.
|
|
|
|
* strainTensor: computes the strain tensor given the velocity gradients.
|
|
|
|
* vorticityTensor: computes the vorticity tensor given the velocity gradients.
|
|
|
|
* TKE: compute the turbulent kinetic energy from AVVEL or Rij.
|
|
|
|
* dissipation: compute the turbulent dissipation.
|
|
|
|
* taylorMicroscale: compute the Taylor microscale.
|
|
|
|
* kolmogorovLengthScale: compute the Kolmogorov length scale.
|
|
|
|
* kolmogorovTimeScale: compute the Kolmogorov time scale.
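
As an example of the underlying relations, the TKE can be obtained from the accumulated statistics since the normal Reynolds stresses satisfy ⟨uᵢ′uᵢ′⟩ = AVVE2 − AVVEL². The sketch below uses plain NumPy and illustrative variable names, not the pyAlya signatures:

```python
import numpy as np

# TKE from accumulated statistics: <u_i'u_i'> = AVVE2 - AVVEL**2 and
# k = 0.5*sum_i <u_i'u_i'> (illustrative sketch on a single node).
avvel = np.array([[1.0, 0.0, 0.0]])  # time-averaged velocity
avve2 = np.array([[1.5, 0.2, 0.1]])  # time-averaged squared velocity

uiui = avve2 - avvel**2          # normal Reynolds stresses
k    = 0.5*np.sum(uiui, axis=1)  # turbulent kinetic energy per node

print(k)  # [0.4]
```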
|
|
|
|
The following functions compute the budgets of the Reynolds stress equations:
|
|
|
|
* convectionBudget
|
|
|
|
* productionBudget
|
|
|
|
* turbulentDiffusion1Budget
|
|
|
|
* turbulentDiffusion2Budget
|
|
|
|
* molecularDiffusionBudget
|
|
|
|
* pressureStrainBudget
|
|
|
|
* dissipationBudget
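
As an illustration of what these budgets compute, the production term P_ij = −(R_ik ∂U_j/∂x_k + R_jk ∂U_i/∂x_k) can be evaluated at a single point with NumPy (names and layout are illustrative, not the pyAlya signatures):

```python
import numpy as np

# Production budget of the Reynolds stresses at one point:
# P_ij = -(R_ik dU_j/dx_k + R_jk dU_i/dx_k).
R     = np.diag([0.5, 0.2, 0.1])  # Reynolds stress tensor
gradU = np.array([[0., 1., 0.],
                  [0., 0., 0.],
                  [0., 0., 0.]])  # dU_i/dx_j, pure shear dU/dy

P = -(np.einsum('ik,jk->ij', R, gradU) + np.einsum('jk,ik->ij', R, gradU))

print(P[0, 1])  # -0.2, shear production of R_xy
```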
|
|
|
|
|
|
|
|
### Utilities
|
|
|
|
|
|
|
|
A number of rather useful and fancy utilities:
|
|
|
|
* pprint: a parallel print utility in which the printing processor can be easily defined.
|
|
|
|
* truncate: truncate an array given a precision (courtesy of Joan C.).
|
|
|
|
* printArray: print fancy stats of an array for debugging purposes.
|
|
|
|
* midlineAvg: a vectorized tool to perform a midline average.
|
|
|
|
|
|
|
|
### cr
|
|
|
|
|
|
|
|
The *cr* module is a timer to measure the performance of functions. It reports how many times a function is called and the minimum, maximum and average times. The timer is started with
|
|
|
|
```python
|
|
|
|
cr_start('channel_name',suffix)
|
|
|
|
```
|
|
|
|
where *suffix* can be a positive number used to distinguish points deep inside a function. To stop a channel from counting use
|
|
|
|
```python
|
|
|
|
cr_stop('channel_name',suffix)
|
|
|
|
```
|
|
|
|
Subsequent calls to the same channel will be accumulated in it. The time can be queried at any moment with
|
|
|
|
```python
|
|
|
|
cr_time('channel_name',suffix)
|
|
|
|
```
|
|
|
|
Finally, the information can be printed on screen using
|
|
|
|
```python
|
|
|
|
cr_info(rank=-1)
|
|
|
|
```
|
|
|
|
By setting *rank > 0* the cr_info of a given MPI process can be printed. A negative rank value will reduce the information of all channels to rank 0 and print it.
|
|
|
|
|
|
|
|
### MEP module
|
|
|
|
|
|
|
|
The MEP module is a Cython port of the [MEPX library](https://www.mepx.org/). The source is provided on [GitHub](https://github.com/mepx/mep-basic-src) and embedded in each source file. The tool is pretty straightforward to use.
|
|
|
|
|
|
|
|
First, load and tune the MEP parameters:
|
|
|
|
```python
|
|
|
|
params = pyAlya.MEP.Parameters()
|
|
|
|
```
|
|
|
|
then seed the random number generator (0 is used here as an example)
|
|
|
|
```python
|
|
|
|
pyAlya.MEP.seed_random(0)
|
|
|
|
```
|
|
|
|
The next step is to read the data; Python's basic IO can be very useful at this point. Finally, the tool can be run with
|
|
|
|
```python
|
|
|
|
best = pyAlya.MEP.run(params,training_data,target_data)
|
|
|
|
```
|
|
|
|
where *best* is an instance of the *Chromosome* class containing the best chromosome of the regression. This chromosome can be printed by simply doing
|
|
|
|
```python
|
|
|
|
print(best)
|
|
|
|
print('Expression = ',best.print_expression(best.best_index)) # Directly prints the best expression
|
|
|
|
```
|
|
|
|
An expression can also be evaluated directly on python by doing
|
|
|
|
```python
|
|
|
|
best.eval_expression(best.best_index,training_data)
|
|
|
|
```
|
|
|
|
where here the best index is used and *training_data* acts as the input. Additionally, a number of statistical tests are provided inside the **MEP** module to verify the soundness of the results.
|
|
|
|
|
|
|
|
### Geometry module
|
|
|
|
|
|
|
|
The geometry module, **Geom**, is dedicated to solving the 2D point-in-polygon problem in order to verify whether a point is inside a geometry. For 3D entities, the points are projected onto the plane of each face of the entity and a 2D problem is solved. This is done by ignoring the largest component of the normal and arranging the other two dimensions in the XY plane.
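
The 2D test at the core of the module can be sketched with the classic ray-crossing algorithm (a generic illustration of point-in-polygon, not pyAlya's exact implementation):

```python
# Point-in-polygon by ray crossing: count how many polygon edges a
# horizontal ray cast from the point crosses; an odd count means inside.
def point_in_polygon(p, poly):
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > p[1]) != (y2 > p[1]):  # edge straddles the ray's height
            xcross = x1 + (p[1] - y1)*(x2 - x1)/(y2 - y1)
            if p[0] < xcross:           # crossing lies to the right of the point
                inside = not inside
    return inside

square = [(0., 0.), (1., 0.), (1., 1.), (0., 1.)]
print(point_in_polygon((0.5, 0.5), square))  # True
print(point_in_polygon((1.5, 0.5), square))  # False
```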
|
|
|
|
|
|
|
|
**Geom** defines the following basic entities:
|
|
|
|
* *Point*
|
|
|
|
* *Vector (internal)*
|
|
|
|
* *Ball (internal)*
|
|
|
|
* Polygon
|
|
|
|
* SimpleRectangle
|
|
|
|
* Rectangle
|
|
|
|
* SimpleCube
|
|
|
|
* Cube
|
|
|
|
|
|
|
|
The entities not in italics are meant to be used with the *select* method of the **Field** class in order to select sub-regions of the domain. *Point* is included as the primitive entity, even though all of these entities can also be defined from numpy arrays, as
|
|
|
|
```python
|
|
|
|
xyz = np.array([[1.,0.,0.],[1.,1.,0.],[1.,1.,1.],[1.,0.,1.],
|
|
|
|
[0.,0.,0.],[0.,1.,0.],[0.,1.,1.],[0.,0.,1.]])
|
|
|
|
|
|
|
|
cube = pyAlya.Geom.Cube.from_array(xyz)
|
|
|
|
```
|
|
|
|
Then, the methods *isinside(Point)* or *areinside(np.array)* can be used to check if a point or a set of points are inside (True) or outside (False) the region.
|
|
|
|
```python
|
|
|
|
p1 = pyAlya.Geom.Point(0.5,0.5,0.5)
|
|
|
|
inside = cube.isinside(p1)
|
|
|
|
inside = cube > p1 # alternative to isinside
|
|
|
|
```
|
|
|
|
and the same works for numpy arrays using *areinside*
|
|
|
|
```python
|
|
|
|
xyzp = np.array([[.5,.5,0.1],[1.,.35,0.6],[0.2,0.5,1.2],[.35,.15,0.5]])
|
|
|
|
inside = cube.areinside(xyzp)
|
|
|
|
inside = cube > xyzp
|
|
|
|
```
|
|
|
|
Note that the operator *>* is more generic and is able to work out whether the input is a Point or a numpy array of points.
|
|
|
|
|
|
|
|
## Examples
|
|
|
|
|
|
|
|
A number of examples are provided as a means to demonstrate the capabilities of the tool:
|
|
|
|
* example_FEM: A little example to show the FEM operations on a silly mesh.
|
|
|
|
* example_FEM_parallel: A little example to show the FEM operations on the cavtri_03 case.
|
|
|
|
* example_COMMU: A little example to read and compute the communications matrix (used for validation).
|
|
|
|
* example_MASSM: A little example to read and compute the mass matrix (used for validation).
|
|
|
|
* example_output: A little example on how to use the output formats of this tool.
|
|
|
|
* example_output_parallel: A little example on how to use the output formats of this tool in parallel.
|
|
|
|
* example_avg_parallel: An example on how to compute temporal averages and reduce with the tool.
|
|
|
|
* example_avgXZ_parallel: An example on how to compute temporal averages, average on the X and Z direction, reduce and compute the BL statistics from a channel flow using VELOC.
|
|
|
|
* example_avgXZ_AVVEL_parallel: An example on how to compute temporal averages, average on the X and Z direction, reduce and compute the BL statistics from a channel flow using AVVEL.
|
|
|
|
* example_dissi_parallel: An example on how to compute the dissipation and the Kolmogorov length and time scales.
|
|
|
|
* example_checkpint_parallel: An example on how to use the checkpoint when computing the dissipation.
|
|
|
|
* example_MEP: An example on how to apply MEP to obtain the regression.
|
|
|
|
* example_GEOM: An example on how to use the Geometry module in 2D.
|
|
|
|
* example_GEOM_3D: An example on how to use the Geometry module in 3D.
|
|
|