tchatela / llm.c - GPT2 / Wiki / Distributed Model

Distributed Model · Changes

Page history

Update Distributed Model, authored Aug 08, 2024 by tchatela
Distributed-Model.md @ eda5eb41
@@ -29,13 +29,14 @@ Data transfer is done all at once through one single blocking MPI instruction.
 Dt = 1.0 s
 ![all-at-once-5](uploads/afd3253f7a71005bf376d39c6e3d68c6/all-at-once-5.png)
 ![base-5.code_legend](uploads/56f3e0f7bc8eb219a295bd16106f9008/base-5.code_legend.png)
 - Broadcast takes about 95 ms
 - Reduce takes about 380 ms
+- Forward + backward pass takes about 143 ms
 - Time per iteration is 650 ms
 ![base-5.code_legend](uploads/56f3e0f7bc8eb219a295bd16106f9008/base-5.code_legend.png)
 ## Using 8 workers and 1 server
 **One worker is computing half a token sequence (32 tokens / worker)**
@@ -53,8 +54,10 @@ Data transfer is done all at once through one single blocking MPI instruction.
 Dt = 2.5 s
 ![all-nine](uploads/2d471a1e2cf8dc0bf30f957a4554d9e5/all-nine.png)
 ![base-9-priority.code_legend](uploads/fd7dfb7130bb685aa20d6636fe20d339/base-9-priority.code_legend.png)
-- Broadcast takes about 500 ms
-- Reduce takes about 115 ms
+- Reduce takes about 500 ms
+- Broadcast takes about 115 ms
+- Forward + backward pass takes about 100 ms
+- Time per iteration is about 720 ms
 ![base-9-priority.code_legend](uploads/fd7dfb7130bb685aa20d6636fe20d339/base-9-priority.code_legend.png)
\ No newline at end of file
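As a sanity check on the numbers above, the time per iteration should be roughly the sum of the listed phases (broadcast, reduce, forward + backward). A minimal sketch using the figures from the two configurations (the `overhead` residual is our assumption for whatever the listed phases do not account for, e.g. synchronization; it is not stated on the page):

```python
# Sanity-check the per-iteration timing breakdowns from the diff above.
# All values are in milliseconds, taken from the page content.

def unexplained_overhead(phases_ms, iteration_ms):
    """Time per iteration not accounted for by the listed phases."""
    return iteration_ms - sum(phases_ms.values())

# 5-process configuration (base-5 traces)
five = {"broadcast": 95, "reduce": 380, "fwd_bwd": 143}
print(unexplained_overhead(five, 650))   # 650 - 618 = 32 ms unexplained

# 9-process configuration (base-9 traces, 8 workers + 1 server)
nine = {"reduce": 500, "broadcast": 115, "fwd_bwd": 100}
print(unexplained_overhead(nine, 720))   # 720 - 715 = 5 ms unexplained
```

The listed phases account for nearly all of the iteration time in the 9-process run, while roughly 30 ms per iteration in the 5-process run is spent outside them.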
GPT2 Parallelization and Porting

  • Model Description
  • Runtime and Performances
  • Improvements
  • Traces
  • Fork Join Model
  • Task Based Model
  • Distributed Model