|
|
|
|
|
The particularity of this given order is that we don't have to fully complete a step before moving to the next one. In fact, the gradient values can be set to 0 just before their new values are computed in the corresponding backward layer. This way, we can, for example, overlap a backward layer with setting the gradients of the next backward layer to 0. The same idea works for the update of the forward layers' parameters, as we can update the weights of one layer while its previous layer's forward pass is running. However, keep in mind that the backward pass is the forward pass mirrored, so the first backward layers imply data dependencies (the weight updates) towards the last forward layers.
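To make this overlap concrete, here is a minimal sketch of how these dependencies could be expressed with OpenMP task `depend` clauses. Everything in it is illustrative: the layer count, the sentinel arrays and the `zero_grad`/`backward_layer`/`update_weights`/`forward_layer` kernels are hypothetical placeholders standing in for the real computations, not the actual implementation.

```c
#include <omp.h>

#define N_LAYERS 12

/* Dependency sentinels, one slot per layer; the real buffers live elsewhere. */
float grad[N_LAYERS];     /* weight gradients of layer l                    */
float weights[N_LAYERS];  /* parameters of layer l                          */
float act[N_LAYERS + 1];  /* activations flowing forward between layers     */
float dact[N_LAYERS + 1]; /* gradients flowing backward between layers      */

/* Toy stand-ins for the real layer kernels, kept trivial on purpose. */
void zero_grad(int l)      { grad[l] = 0.0f; }
void backward_layer(int l) { grad[l] += dact[l + 1]; dact[l] = dact[l + 1]; }
void update_weights(int l) { weights[l] -= 0.01f * grad[l]; }
void forward_layer(int l)  { act[l + 1] = act[l] * weights[l]; }

void backward_then_next_forward(void)
{
    #pragma omp parallel
    #pragma omp single
    {
        /* Backward pass: last layer first. */
        for (int l = N_LAYERS - 1; l >= 0; l--) {
            /* No input dependency: zeroing grad[l] is free to overlap
             * with the backward computation of the layers after l. */
            #pragma omp task depend(out: grad[l])
            zero_grad(l);

            #pragma omp task depend(in: dact[l + 1]) \
                             depend(inout: grad[l]) depend(out: dact[l])
            backward_layer(l);

            /* Only waits on grad[l], so it overlaps with the backward
             * pass of the remaining (earlier) layers. */
            #pragma omp task depend(in: grad[l]) depend(out: weights[l])
            update_weights(l);
        }

        /* Next iteration's forward pass: layer l waits for its own weight
         * update and the previous layer's output. Since backward ran
         * last-to-first, the last forward layers depend on the first
         * backward layers' updates, as noted above. */
        for (int l = 0; l < N_LAYERS; l++) {
            #pragma omp task depend(in: act[l]) depend(in: weights[l]) \
                             depend(out: act[l + 1])
            forward_layer(l);
        }
    }
}

int main(void)
{
    dact[N_LAYERS] = 1.0f; /* seed gradient entering the last layer   */
    act[0]         = 1.0f; /* seed input for the next forward pass    */
    backward_then_next_forward();
    return 0;
}
```

Because `zero_grad(l)` declares no input dependency, the runtime is free to execute it while `backward_layer(l + 1)` is still running, which is exactly the overlap described above; the same mechanism lets each weight update proceed as soon as its own layer's gradients are ready.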
|
|
|
|
|
|
|
|
With everything that has been stated, we can now draw the following data flow diagram: ![GPT-2_task_based_model](uploads/10d03f7bd64d70881163ab0c15f7749b/GPT-2_task_based_model.png)
|
|
|
|
|
|
![GPT-2_task_based_model-legend](uploads/795fa5f34780b4860c5a2b3a050be977/GPT-2_task_based_model-legend.png)
|
|