# EAR commands

*v5.2 release, authored Oct 23, 2025 by Oriol.*
It provides the following options.

```
-v  displays current EAR version
-u  specifies the user whose applications will be retrieved. Only available to privileged users. [default: all users]
-j  specifies the job id and step id to retrieve with the format [jobid.stepid] or the format [jobid1,jobid2,...,jobid_n]. A user can only retrieve its own jobs unless said user is privileged. [default: all jobs]
-a  specifies the application names that will be retrieved. [default: all app_ids]
-c  specifies the file where the output will be stored in CSV format. [default: no file]
-t  specifies the energy_tag of the jobs that will be retrieved. [default: all tags]
-s  specifies the minimum start time of the jobs that will be retrieved, in YYYY-MM-DD format. [default: no filter]
-e  specifies the maximum end time of the jobs that will be retrieved, in YYYY-MM-DD format. [default: no filter]
-l  shows the information for each node for each job instead of the global statistics for said job.
-x  shows the last EAR events. Nodes, job ids, and step ids can be specified as if it were showing job information.
-m  prints power signatures regardless of whether mpi signatures are available or not.
-r  shows the EAR loop signatures. Nodes, job ids, and step ids can be specified as if it were showing job information.
-o  modifies the -r option to also show the corresponding jobs. Should be used with -j.
-n  specifies the number of jobs to be shown, starting from the most recent one. [default: 20] [to get all jobs use -n all]
-f  specifies the file where the user-database can be found. If this option is used, the information will be read from the file and not the database.
-b  verbose mode for debugging purposes.
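As a quick sketch of how these options combine (the job id, step id, and output file name below are hypothetical examples, not values from this page), typical invocations can be composed like this:

```python
# Sketch: composing typical eacct invocations.
# The job id, step id, and output file name are hypothetical examples.
jobid, stepid = 123456, 0

detail = ["eacct", "-j", f"{jobid}.{stepid}", "-l"]        # per-node detail for one job-step
export = detail + ["-c", "jobdata.csv"]                    # same, exported to CSV
recent = ["eacct", "-n", "10"]                             # last 10 jobs of the current user
loops  = ["eacct", "-j", f"{jobid}.{stepid}", "-r", "-o"]  # loop signatures plus the job summary

# On a system with EAR installed, these could be run with subprocess.run(...).
print(" ".join(export))
```

Each list maps directly to the equivalent shell command line, e.g. `eacct -j 123456.0 -l -c jobdata.csv`.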
The command shows a pre-selected set of columns:

| Column field | Description |
| ------------ | ----------- |
| GBS | CPU main memory bandwidth (GB/second). Hint for CPU/Memory bound classification. |
| CPI | CPU Cycles per Instruction. Hint for CPU/Memory bound classification. |
| ENERGY(J) | Accumulated node energy. Includes all the nodes. In Joules. |
| GFLOPS/W | CPU+GPU GFlops per Watt. Hint for energy efficiency. The metric uses the number of operations, not instructions. |
| IO(MBS) | I/O (read and write) Mega Bytes per second. |
| MPI% | Percentage of MPI time over the total execution time. It’s the average including all the processes and nodes. |
If EAR supports GPU monitoring/optimisation, the following columns are added:

| Column field | Description |
| ---------------- | ----------- |
| G-POW (T/U) | Average GPU power. Accumulated per node and averaged along involved nodes. *T* stands for the total GPU power consumed (even if the job is not using all of the GPUs in a node). *U* stands for the power of the used GPUs only on each node. |
| G-FREQ | Average GPU frequency. Per node and averaged over all the nodes. |
| G-UTIL(G/MEM) | GPU utilization and GPU memory utilization. |
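Since ENERGY(J) and GFLOPS/W come from the same run, the efficiency hint can be cross-checked with simple arithmetic. A minimal sketch, using made-up numbers rather than real eacct output:

```python
# Illustrative arithmetic for the GFLOPS/W column; all numbers are made up.
total_gflop = 1.2e4   # accumulated giga floating point operations
energy_j    = 2.4e5   # ENERGY(J): accumulated node energy, in Joules
time_s      = 600.0   # execution time, in seconds

gflops          = total_gflop / time_s   # sustained GFlop/s
avg_power_w     = energy_j / time_s      # average power, in Watts
gflops_per_watt = gflops / avg_power_w   # the GFLOPS/W efficiency hint

# Dividing GFlop/s by Watts (i.e. J/s) reduces to GFlop per Joule:
print(gflops_per_watt)   # equals total_gflop / energy_j
```

The elapsed time cancels out, which is why GFLOPS/W is a useful efficiency metric across runs of different lengths.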
Both aggregated and detailed accountings are available, as well as filtering.

When requesting long format (i.e., `-l` option) or runtime metrics (i.e., `-r` option)
to be stored in a CSV file (i.e., `-c` option), header names change from the output
shown when you don't request CSV format.
The table below shows the header names of the CSV file storing long-format information about jobs.
**Bold** fields indicate they are only filled when the EAR Library (EARL) is enabled.
The format is the same as used by the [csv report plug-in](https://gitlab.bsc.es/ear_team/ear/-/wikis/Report#csv).

| Field name | Description |
| ---------- | ----------- |
| JOBID | The JobID. |
| STEPID | The StepID. |
| **APPID** | The EARL application ID used to identify the workload. This value is useful to identify different applications executed in a workflow within the same job-step. |
| USERID | The username of the user who executed the job. |
| GROUPID | The group name of the user who executed the job. |
| ACCOUNTID | The account name of the user who executed the job. |
| JOBNAME | Job’s name, or the executable name if a job name is not provided. |
| ENERGY\_TAG | The energy tag used if the user set one for its job step. |
| JOB\_START\_TIME | The Unix Epoch timestamp in seconds when the job-step started. |
| JOB\_END\_TIME | The Unix Epoch timestamp in seconds when the job-step ended. |
| **JOB_EARL_START_TIME** | The Unix Epoch timestamp in seconds when the EARL started monitoring the job-step, i.e., the application start time. |
| **JOB_EARL_END_TIME** | The Unix Epoch timestamp in seconds when the EARL ended monitoring the job-step, i.e., the application end time. |
| POLICY | Energy optimization policy name used by the EARL. **MO** stands for *monitoring-only*, **ME** for *min\_energy*, **MT** for *min\_time*, and **NP** means the job ran without EARL. |
| **POLICY_TH** | The policy threshold used by the optimization policy applied by EARL. |
| **JOB_NPROCS** | The number of processes involved in the application. |
| JOB\_TYPE | The job type. |
| JOB\_DEF\_FREQ | The default frequency at which the job started. |
| EARL\_ENABLED | Indicates whether the job-step ran with the EARL enabled. |
| **EAR_LEARNING** | Whether the application was run in the [learning phase](https://gitlab.bsc.es/ear_team/ear/-/wikis/Learning-phase). |
| NODENAME | The node name the rest of the row information belongs to. |
| **AVG_CPUFREQ_KHZ** | Average CPU frequency of the job step executed in the node, expressed in kHz. This value is computed by the EARL. |
| **AVG_IMCFREQ_KHZ** | Average uncore frequency of the job step executed in the node, expressed in kHz. **Default data fabric frequency on AMD sockets**. This value is computed by the EARL. |
| **DEF_FREQ_KHZ** | The default frequency of the job step executed in the node, expressed in kHz. This value corresponds to the default frequency the EAR Library sets at the beginning, and it has the same value as *JOB\_DEF\_FREQ*. |
| **TIME_SEC** | The time period (in seconds) covered by the application metrics reported by the EARL. |
| **CPI** | CPU Cycles per Instruction. Hint for CPU/Memory bound classification. |
| **TPI** | Memory transactions per Instruction. Hint for CPU/Memory bound classification. |
| **MEM_GBS** | CPU main memory bandwidth (GB/second). Hint for CPU/Memory bound classification. |
| **IO_MBS** | I/O (read and write) Mega Bytes per second. |
| **PERC_MPI** | Percentage of *TIME_SEC* spent in MPI calls. |
| **DC_NODE_POWER_W** | Average node power along the time period, in Watts. This value differs from *NODEMGR_DC_NODE_POWER_W* in that it is computed and reported by the EARL. |
| **DRAM_POWER_W** | Average DRAM power along the time period, in Watts. **Not available on AMD sockets**. This value differs from *NODEMGR_DRAM_POWER_W* in that it is computed and reported by the EARL. |
| **PCK_POWER_W** | Average RAPL package power along the time period, in Watts. This value shows the aggregated power of all sockets in a package. It differs from *NODEMGR_PCK_POWER_W* in that it is computed and reported by the EARL. |
| **CYCLES** | Total number of cycles retrieved along the time period. |
| **INSTRUCTIONS** | Total number of instructions retrieved along the time period. |
| **CPU_GFLOPS** | Total number of giga floating point operations per second along the time period. |
| **CPU_UTIL** | The CPU time of the application. |
| **L1_MISSES** | Total number of L1 cache misses along the time period. |
| **L2_MISSES** | Total number of L2 cache misses along the time period. |
| **L3_MISSES** | Total number of L3/LLC cache misses along the time period. |
| **SPOPS_SINGLE** | Total number of single precision 64 bit floating point operations. |
| **SPOPS_128** | Total number of single precision 128 bit floating point operations. |
| **SPOPS_256** | Total number of single precision 256 bit floating point operations. |
| **SPOPS_512** | Total number of single precision 512 bit floating point operations. |
| **DPOPS_SINGLE** | Total number of double precision 64 bit floating point operations. |
| **DPOPS_128** | Total number of double precision 128 bit floating point operations. |
| **DPOPS_256** | Total number of double precision 256 bit floating point operations. |
| **DPOPS_512** | Total number of double precision 512 bit floating point operations. |
| NODEMGR\_DC\_NODE\_POWER\_W | Average node power along the time period, in Watts. This value differs from *DC_NODE_POWER_W* in that it is computed and reported by the [Node Manager](https://gitlab.bsc.es/ear_team/ear/-/wikis/Architecture#ear-node-manager) (the EARD) independently of whether the EARL was enabled. |
| NODEMGR\_DRAM\_POWER\_W | Average DRAM power along the time period, in Watts. **Not available on AMD sockets**. This value differs from *DRAM_POWER_W* in that it is computed and reported by the [Node Manager](https://gitlab.bsc.es/ear_team/ear/-/wikis/Architecture#ear-node-manager) (the EARD) independently of whether the EARL was enabled. |
| NODEMGR\_PCK\_POWER\_W | Average RAPL package power along the time period, in Watts. This value shows the aggregated power of all sockets in a package. It differs from *PCK_POWER_W* in that it is computed and reported by the [Node Manager](https://gitlab.bsc.es/ear_team/ear/-/wikis/Architecture#ear-node-manager) (the EARD) independently of whether the EARL was enabled. |
| NODEMGR\_MAX\_DC\_POWER\_W | The peak DC node power computed by the Node Manager. |
| NODEMGR\_MIN\_DC\_POWER\_W | The minimum DC node power computed by the Node Manager. |
| NODEMGR\_TIME\_SEC | The time period (in seconds) covered by the job-step metrics reported by the Node Manager. |
| NODEMGR\_AVG\_CPUFREQ\_KHZ | The average CPU frequency computed by the Node Manager during the job-step execution time. |
| NODEMGR\_DEF\_FREQ\_KHZ | The default frequency set by the Node Manager when the job-step began. |

If EARL supports GPU monitoring/optimisation, the following columns are added:
| Field name | Description |
| ---------- | ----------- |
| GPU*x*\_POWER\_W | Average GPU*x* power, in Watts. |
| GPU*x*\_FREQ\_KHZ | Average GPU*x* frequency, in kHz. |
| GPU*x*\_MEM\_FREQ\_KHZ | Average GPU*x* memory frequency, in kHz. |
| GPU*x*\_UTIL\_PERC | Average percentage of GPU*x* utilization. |
| GPU*x*\_MEM\_UTIL\_PERC | Average percentage of GPU*x* memory utilization. |
| GPU*x*\_GFLOPS | GPU*x* GFLOPS. |
| GPU*x*\_TEMP | Average GPU*x* temperature. |
| GPU*x*\_MEMTEMP | Average GPU*x* memory temperature. |
For runtime metrics (i.e., `-r` option), *USERID*, *GROUPID*, *JOBNAME*, *USER_ACC*,
*ENERGY_TAG* (as energy tags disable EARL), *POLICY* and *POLICY_TH* are not stored
in the CSV file.
However, the iteration time (in seconds) is present on each loop as *ITER_TIME_SEC*,
as well as a timestamp (i.e., *TIMESTAMP*) with the elapsed time in seconds since the Unix Epoch.
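As a minimal post-processing sketch, the long-format CSV can be consumed using the column names documented above. The rows below are synthetic sample data (a real file comes from `eacct -l -c <file>`), and a comma separator is assumed; adjust the `delimiter` argument if your installation writes a different one.

```python
import csv
import io

# Synthetic sample mimicking a long-format eacct CSV export (not real data).
sample = io.StringIO(
    "JOBID,STEPID,NODENAME,DC_NODE_POWER_W,TIME_SEC\n"
    "123456,0,node001,400.0,600.0\n"
    "123456,0,node002,380.0,600.0\n"
)

# Approximate per-node energy: average power (W) times the covered period (s).
energy_j = {}
for row in csv.DictReader(sample):
    energy_j[row["NODENAME"]] = float(row["DC_NODE_POWER_W"]) * float(row["TIME_SEC"])

print(energy_j)
```

The same pattern extends to any of the other documented columns, e.g. comparing *DC_NODE_POWER_W* against *NODEMGR_DC_NODE_POWER_W* per node.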
# EAR system energy Report (ereport)
# erun
You can launch `erun` with the `--program` option to specify the application name
and arguments. See the usage below:
```bash
erun --help
This is the list of ERUN parameters:
Usage: ./erun [OPTIONS]
...
```
# ear-info
`ear-info` is a tool created to quickly view useful information about the current EAR installation of the system.
It shows relevant details for both users and administrators, such as configuration defaults, installation paths, etc.
```bash
[user@hostname ~]$ ear-info -h
Usage: ear-info [options]
    --node-conf[=nodename]
...
```
EAR was designed to be installed on heterogeneous systems, so some configuration parameters are applied to a set of nodes identified by different tags.
The `--node-conf` flag can be used to request additional information about a specific node.
Configuration related to EAR's power capping sub-system, default optimization policies configuration and other parameters associated with the requested node are retrieved.
You can read the [EAR configuration section](https://gitlab.bsc.es/ear_team/ear/-/wikis/Configuration) for more details about how EAR uses tags to identify and configure different kinds of nodes on a given heterogeneous system.