|
- **B** : Batch size, i.e. the number of token sequences per batch (4)
|
|
- **NH** : Number of heads in the attention layers (12)
|
|
|
|
|
|
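Throughout the code these dimensions are used to index into flat one-dimensional arrays rather than multi-dimensional ones. As a minimal sketch of that convention (the helper name `idx_btc` is hypothetical, and T, the sequence length, and C, the channel count, are assumed here alongside B):

```c
#include <stddef.h>

// Hypothetical helper (not part of the original code): maps a (b, t, c)
// coordinate of a (B, T, C) activation tensor to its offset in the flat,
// row-major float array the code actually stores.
size_t idx_btc(size_t b, size_t t, size_t c, size_t T, size_t C) {
    return (b * T + t) * C + c;
}
```

The same pattern (multiply out the leading dimensions, add the trailing index) recurs in every forward and backward kernel.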
|
|
|
### Model global data structure _GPT2_
|
|
|
|
|
|
|
|
* **GPT2Config config** : Contains all the parameters described just above

* **ParameterTensors params** : Contains the parameters described in the "Model forward parameters" section

* **size_t param_sizes\[NUM_PARAMETER_TENSORS\]** : A list of the sizes of each parameter array inside _params_

* **float\* params_memory** : A pointer to the first parameter array of _params_ (all the parameter arrays live in one contiguous allocation)

* **size_t num_parameters** : The total number of parameters in our model (124,439,808 in our case)

* **ParameterTensors grads** : Contains all the gradient arrays for backpropagation

* **float\* grads_memory** : A pointer to the first array of _grads_

* **float\* m_memory** : The first-moment (momentum) buffer used by the AdamW optimizer; an array of size _num_parameters_

* **float\* v_memory** : The second-moment buffer for AdamW; same size as _m_memory_

* **ActivationTensors acts** : Contains the activations described in the "Variables for backward propagation" section

* **size_t act_sizes\[NUM_ACTIVATION_TENSORS\]** : A list of the sizes of each activation array inside _acts_

* **float\* acts_memory** : A pointer to the first activation array of _acts_

* **size_t num_activations** : The total number of activation values stored for backward propagation (the sum of _act_sizes_)

* **ActivationTensors grads_acts** : Contains the activation gradients

* **float\* grads_acts_memory** : A pointer to the first array of _grads_acts_

* **int batch_size** : The size (B) of the current batch

* **int seq_len** : The length (T) of the current token sequences

* **int\* inputs** : The token array for the current batch

* **int\* targets** : The target token array for the current batch

* **float mean_loss** : The mean loss over the current batch
|
|
|
|
|
|
|
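Collecting the fields above, the struct can be sketched roughly as follows. This is a condensed paraphrase, not the verbatim definition: the tensor counts (16 parameter tensors, 23 activation tensors) are assumed here, and the members of `ParameterTensors` and `ActivationTensors` are abbreviated.

```c
#include <stddef.h>

#define NUM_PARAMETER_TENSORS 16   // assumed count, one entry per weight tensor
#define NUM_ACTIVATION_TENSORS 23  // assumed count, one entry per activation tensor

// Abbreviated: in the real code these hold one float* per tensor (wte, wpe, ...)
typedef struct { float* wte; /* ... */ } ParameterTensors;
typedef struct { float* encoded; /* ... */ } ActivationTensors;
typedef struct { int max_seq_len, vocab_size, num_layers, num_heads, channels; } GPT2Config;

typedef struct {
    GPT2Config config;                          // hyperparameters (maxT, V, L, NH, C)
    ParameterTensors params;                    // the weights
    size_t param_sizes[NUM_PARAMETER_TENSORS];  // size of each weight array
    float* params_memory;                       // backing allocation for params
    size_t num_parameters;                      // sum of param_sizes
    ParameterTensors grads;                     // gradients of the weights
    float* grads_memory;                        // backing allocation for grads
    float* m_memory;                            // AdamW first moment, num_parameters floats
    float* v_memory;                            // AdamW second moment, num_parameters floats
    ActivationTensors acts;                     // forward-pass activations
    size_t act_sizes[NUM_ACTIVATION_TENSORS];   // size of each activation array
    float* acts_memory;                         // backing allocation for acts
    size_t num_activations;                     // sum of act_sizes
    ActivationTensors grads_acts;               // gradients of the activations
    float* grads_acts_memory;                   // backing allocation for grads_acts
    int batch_size;                             // B of the current batch
    int seq_len;                                // T of the current batch
    int* inputs;                                // current input tokens
    int* targets;                               // current target tokens
    float mean_loss;                            // mean loss over the current batch
} GPT2;
```

Keeping one `*_memory` pointer per group of tensors reflects the allocation strategy: each group is one large `malloc`, and the individual tensor pointers are carved out of it.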
|
### Model forward parameters
|
|
|
|
|
|
- **wte** : Weights for the token embedding, shape (V, C)


- **wpe** : Weights for the positional embedding, shape (maxT, C)
|
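These two tables are combined at the very start of the forward pass: each output position receives its token's row of _wte_ plus its position's row of _wpe_. A sketch of that step, modelled on the encoder forward pass (names simplified; V and maxT only bound the valid indices, so they do not appear as arguments):

```c
// out: (B, T, C) activations; inp: (B, T) token ids;
// wte: (V, C) token embeddings; wpe: (maxT, C) positional embeddings.
void encoder_forward(float* out, const int* inp,
                     const float* wte, const float* wpe,
                     int B, int T, int C) {
    for (int b = 0; b < B; b++) {
        for (int t = 0; t < T; t++) {
            float* out_bt = out + (b * T + t) * C;          // output row (b, t)
            const float* wte_ix = wte + inp[b * T + t] * C; // row of the token id
            const float* wpe_t = wpe + t * C;               // row of position t
            for (int c = 0; c < C; c++) {
                out_bt[c] = wte_ix[c] + wpe_t[c];           // elementwise sum
            }
        }
    }
}
```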