# GPT2 Parallelization and porting

Page history
Update GPT2 Parallelization and porting authored Jun 25, 2024 by tchatela's avatar tchatela
Show whitespace changes
Inline Side-by-side
GPT2-Parallelization-and-porting.md
View page @ 3917189a
This part is dedicated to facilitating the comprehension of the code given in the …
- **B** : Batch size, i.e. the number of token sequences per batch (4)
- **NH** : Number of heads in the attention layer (12)
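
For reference, these dimensions map directly onto the model's configuration struct. Below is a minimal sketch in C, following the field naming used in llm.c; the values in the comments are the GPT-2 124M defaults. Note that B is runtime state (the `batch_size` field of the `GPT2` struct described further down), not part of the config.

```c
// Model hyperparameters; the commented values are the GPT-2 124M defaults.
typedef struct {
    int max_seq_len; // maxT: maximum sequence length, 1024
    int vocab_size;  // V: vocabulary size, 50257
    int num_layers;  // L: number of transformer layers, 12
    int num_heads;   // NH: number of attention heads, 12
    int channels;    // C: embedding dimension, 768
} GPT2Config;
```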
### Model parameters
### Model global data structure _GPT2_

All of the model's state is gathered in a single struct, `GPT2` (a C sketch is given after this list):

* **GPT2Config config** : Contains all the hyperparameters described just above
* **ParameterTensors params** : Contains the parameters described in the "Model forward parameters" section
* **size_t param_sizes\[NUM_PARAMETER_TENSORS\]** : A list of the sizes of each parameter array inside `params`
* **float\* params_memory** : A pointer to the first parameter array of `params`
* **size_t num_parameters** : The total number of parameters in our model (124,439,808 in our case)
* **ParameterTensors grads** : Contains all the gradient arrays for backpropagation
* **float\* grads_memory** : A pointer to the first array of `grads`
* **float\* m_memory** : First-moment buffer used by the AdamW optimizer; an array of size _num_parameters_
* **float\* v_memory** : Same as _m_memory_, but for the second moment
* **ActivationTensors acts** : Contains the activations described in the "Variables for backward propagation" section
* **size_t act_sizes\[NUM_ACTIVATION_TENSORS\]** : A list of the sizes of each activation array inside `acts`
* **float\* acts_memory** : A pointer to the first activation array of `acts`
* **size_t num_activations** : The total number of activation values stored for backward propagation (the sum of _act_sizes_)
* **ActivationTensors grads_acts** : Contains the activation gradients
* **float\* grads_acts_memory** : A pointer to the first array of `grads_acts`
* **int batch_size** : The size of the current batch
* **int seq_len** : The length of the current token sequence
* **int\* inputs** : The token array for the current batch
* **int\* targets** : The target array for the current batch
* **float mean_loss** : The mean loss over the current batch
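
Put together, the struct looks roughly like this. This is a sketch reconstructed from the field descriptions above (comments are mine), not an authoritative copy of the source:

```c
typedef struct {
    GPT2Config config;
    // parameters (weights) of the model and their sizes
    ParameterTensors params;
    size_t param_sizes[NUM_PARAMETER_TENSORS];
    float* params_memory;  // single buffer backing every array in params
    size_t num_parameters; // 124,439,808 here
    // gradients of the parameters, laid out like params
    ParameterTensors grads;
    float* grads_memory;
    // AdamW buffers: first and second moments, num_parameters floats each
    float* m_memory;
    float* v_memory;
    // activations of the forward pass and their sizes
    ActivationTensors acts;
    size_t act_sizes[NUM_ACTIVATION_TENSORS];
    float* acts_memory;
    size_t num_activations;
    // gradients of the activations
    ActivationTensors grads_acts;
    float* grads_acts_memory;
    // state of the current batch
    int batch_size;  // B
    int seq_len;     // T
    int* inputs;     // input tokens, shape (B, T)
    int* targets;    // target tokens, shape (B, T)
    float mean_loss;
} GPT2;
```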
### Model forward parameters
- **wte** : Weights for token embedding, shape (V, C)
- **wpe** : Weights for positional embedding, shape (maxT, C); the sketch below shows how the two tables are combined
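
As an illustration of how these two tables are used, here is the embedding step at the start of the forward pass (in llm.c this is `encoder_forward`; the version below is a sketch assuming the shapes above): each position (b, t) receives the wte row of its token plus the wpe row of its position.

```c
// out: (B, T, C) activations; inp: (B, T) token ids
// wte: (V, C) token embeddings; wpe: (maxT, C) positional embeddings
void encoder_forward(float* out, int* inp,
                     float* wte, float* wpe,
                     int B, int T, int C) {
    for (int b = 0; b < B; b++) {
        for (int t = 0; t < T; t++) {
            float* out_bt = out + b * T * C + t * C; // output row for (b, t)
            int ix = inp[b * T + t];                 // token id at position (b, t)
            float* wte_ix = wte + ix * C;            // embedding row of that token
            float* wpe_t  = wpe + t * C;             // embedding row of position t
            for (int i = 0; i < C; i++) {
                out_bt[i] = wte_ix[i] + wpe_t[i];
            }
        }
    }
}
```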