# Runtime and performances

Updated Jun 28, 2024 by tchatela
What is unexpected is that the OpenMP/nOS-V version is slower than the OpenMP version.
![thread-state-legend](uploads/4626cbde639c30905d6b003db5f8d6d6/thread-state-legend.png)
The Paraver trace shows that only about half of the threads are working simultaneously. A first way to improve efficiency would be to decrease the number of threads used by the application.
# Time spent in functions
![paraver](uploads/d86531a4e321e31bf262498c9116d3b4/paraver.png)
![legend](uploads/1fbc249856438649a150b86717133a6a/legend.png)
A test was run using OpenMP and Extrae, with 2 threads and 2 CPUs on a single NUMA node (**numactl -N 1 -m 1**), to create a Paraver trace of the execution. The trace shows that most of the runtime is spent in matmul backward. Below is a graph of the time taken per iteration for this same test:
![test-2-openmp-tinyshakespeare-mean](uploads/80003c0cba5f8759a46e445151d8fcd4/test-2-openmp-tinyshakespeare-mean.png)
As we can see, each iteration takes 7 seconds on average. However, in another test, whose runtime is shown just below, the average iteration time with a single CPU is 41 seconds. This is unexpected: with only 2 CPUs the theoretical speedup is at most 2, so the single-CPU runtime should not be higher than about twice the 2-CPU time, i.e. roughly 14 seconds.
![test-5-openmp-tinyshakespeare-mean](uploads/d600df70538473235c9037630724a5a2/test-5-openmp-tinyshakespeare-mean.png)