
User guide · Changes

Page history
small update authored Nov 20, 2024 by Oriol Vidal Teruel
User-guide.md
View page @ 344b8e06
...@@ -267,7 +267,69 @@ See on the [environment variables page](EAR-environment-variables#ear_trace_plug
Another way to see runtime information with Paraver is to use the open source tool [**ear-job-visualization**](https://github.com/eas4dc/ear-job-visualization), a CLI program written in Python which reads the CSV files generated by the `--ear-user-db` flag and converts their data to the Paraver trace format.
EAR metrics are reported as trace events.
Node information is stored as Paraver task information.
Node GPU data is stored as Paraver thread information.
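As a minimal sketch (the job script, node count, and output prefix below are assumptions, not part of the guide), the CSV files that **ear-job-visualization** consumes could be produced like this:

```bash
#!/bin/bash
#SBATCH --job-name=my_app    # hypothetical job name
#SBATCH --nodes=2

# Ask EAR to dump runtime metrics to CSV files using the given prefix
# (EAR typically writes one file per node based on this prefix).
srun --ear-user-db=my_app_metrics ./my_app
```

The resulting CSV files can then be passed to **ear-job-visualization** to build the Paraver trace; check the tool's README for its exact command-line options.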
## Data visualization with Grafana
EAR data can be visualized with Grafana dashboards in two different ways: using Grafana with SQL queries (depending on your data center configuration), or visualizing data collected with `eacct` and loaded locally.
The second option is explained here, since you may not have access to the EAR Database.
Once you have your own Grafana instance running, you need to install [*csv-datasource*](https://grafana.com/grafana/plugins/marcusolsson-csv-datasource/):
```bash
# You can first check whether the plug-in is already available from the list of available data sources.
bin/grafana-cli plugins install marcusolsson-csv-datasource
```
Enable the CSV plug-in by creating a `custom.ini` file in the conf directory with the following content:
```ini
[plugin.marcusolsson-csv-datasource]
allow_local_mode = true
```
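If the server was already running, restart it so it picks up the plug-in and the `custom.ini` settings. A minimal sketch, assuming a tarball installation started from the Grafana directory (the binary path may differ for package-based installations):

```bash
# Start (or restart) the local Grafana server so the CSV plug-in and custom.ini are loaded.
bin/grafana-server
```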
Once you have a local server running on your PC or laptop, open your web browser and connect to Grafana at the URL: `http://localhost:3000/login`.
The next steps are:
**Create the Data source**
In the left menu, select *Configuration* > *Data sources* > *Add data source*, and select *CSV* from the list of options.
You need to create a new data source for each CSV file you are going to visualize.
For each one, select *Local*. Note that the path must point to a **public directory**.
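For instance (the directory and file names below are hypothetical), you can place the CSV files in a location readable by the Grafana server process and point each data source to them:

```bash
# Hypothetical public directory readable by the Grafana server.
mkdir -p /var/lib/grafana/ear-csv
cp ear_loops.csv ear_apps.csv /var/lib/grafana/ear-csv/
chmod -R a+r /var/lib/grafana/ear-csv
```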
**Import the Dashboard**
Go to the left menu, *Dashboard*, and select the *Import* option.
This option lets you upload or select a JSON file with pre-specified graphs, tables, etc.
Graphs are associated with data sources, so you may need to change the data source name in the JSON file to match the one you created in Grafana.
The JSON file is [here](misc/EAR_job_data_visualization.json), and below you can see the expected data source names.
There is a configuration for two data sources: *EAR_loops* for visualizing CSV files containing EAR loop signatures (e.g., `eacct [-j <job_id>[.<step_id>]] -r -c <filename>`) and *EAR_apps* for visualizing application signatures (e.g., `eacct [-j <job_id>[.<step_id>]] -l -c <filename>`).
```json
{
"__inputs": [
{
"name": "DS_EAR_LOOPS",
"label": "EAR_loops",
"description": "",
"type": "datasource",
"pluginId": "marcusolsson-csv-datasource",
"pluginName": "CSV"
},
{
"name": "DS_EAR_APPS",
"label": "EAR_apps",
"description": "",
"type": "datasource",
"pluginId": "marcusolsson-csv-datasource",
"pluginName": "CSV"
}
],
```
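For example (the job and step IDs below are hypothetical), the two CSV files feeding these data sources could be generated with:

```bash
# Loop signatures of job 123456, step 0 -> EAR_loops data source
eacct -j 123456.0 -r -c ear_loops.csv

# Application signature of the same job and step -> EAR_apps data source
eacct -j 123456.0 -l -c ear_apps.csv
```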
Import the JSON file to create the visualization dashboards and refresh the browser page.
Below is an example of the resulting dashboard.
![EAR Grafana Dashboard example](images/grafana-example.jpg)
# EAR job submission flags
...@@ -518,67 +580,5 @@ The core component of EAR at the user's job level is the EAR Library (EARL).
The Library deals with job monitoring and is the component which implements and applies
optimization policies based on monitored workload.
**We highly recommend you** to read the [EARL](EARL) documentation and also how energy policies work,
in order to better understand what the Library is doing internally. This way you can easily explore all the features EAR offers to the end user (e.g., tuning variables, collecting data), gaining more insight into how many resources your application consumes and how they correlate with its computational characteristics.