|
|
# Why move towards a task-based model
|
|
|
|
|
|
We want to go from a fork-join model to a task-based one.
|
|
|
![backward_transformer_block](uploads/41e0ed558170fe7d13bd7efbf848bb57/backward_transformer_block.png)
|
|
|
![Forward_pass](uploads/4e68b0a08e2153e3e885cd805fd1f8e3/Forward_pass.png)
|
|
|
|
|
|
|
|
|
Basically, we would like to avoid the black regions (sequential portions) visible in these two traces. Creating a task-based model will allow us to overlap the computation of two distinct layers, avoiding CPU idle time.
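To make the distinction concrete, here is a minimal sketch assuming an OpenMP-style tasking runtime (the layer kernels and buffer names are hypothetical stand-ins, not the actual codebase). In the fork-join version, the implicit barrier after each parallel loop is exactly the kind of sequential "black region" seen in the traces; in the task version, each chunk of the second layer can start as soon as its input chunk is ready.

```c
// Minimal sketch (hypothetical kernels): fork-join vs. task-based scheduling.
// Build with: gcc -fopenmp sketch.c
#include <stdio.h>

#define N_CHUNKS 8
static float act_a[N_CHUNKS], act_b[N_CHUNKS];

/* Placeholder stand-ins for two consecutive layer kernels. */
static void layer_a(int c) { act_a[c] = 2.0f * c; }
static void layer_b(int c) { act_b[c] = act_a[c] + 1.0f; }

/* Fork-join: the implicit barrier after each loop is a "black region" --
   no chunk of layer B may start until every chunk of layer A is done. */
void forward_fork_join(void) {
    #pragma omp parallel for
    for (int c = 0; c < N_CHUNKS; c++) layer_a(c);
    /* implicit barrier: all threads idle until the slowest chunk finishes */
    #pragma omp parallel for
    for (int c = 0; c < N_CHUNKS; c++) layer_b(c);
}

/* Task-based: chunk c of layer B depends only on chunk c of layer A,
   so the two layers overlap and idle threads pick up ready tasks. */
void forward_tasks(void) {
    #pragma omp parallel
    #pragma omp single
    for (int c = 0; c < N_CHUNKS; c++) {
        #pragma omp task depend(out: act_a[c])
        layer_a(c);
        #pragma omp task depend(in: act_a[c])
        layer_b(c);
    }
}

int main(void) {
    forward_fork_join();
    forward_tasks();
    printf("act_b[%d] = %.1f\n", N_CHUNKS - 1, act_b[N_CHUNKS - 1]);
    return 0;
}
```

In the task version, no thread ever waits at a barrier: a thread that finishes a chunk of layer A immediately becomes available for any ready task, whether it belongs to layer A or layer B.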
|
|
|
|
|
|
# Analysis of the current state of the fork-join model
|
|
|
|
|
|
First, we need to analyze which parts of the model can be overlapped. To do so, we will inspect each layer and identify its data dependencies. Then, based on these dependencies, we will build a data-flow diagram showing the different tasks and how each one depends on the others.
|
|
|
|
|
|
|
|
|
## Data dependency
|
|
|
|
|
|
From a general point of view, each layer of the forward pass needs its weights and biases to be updated before it can be computed.
|
|
|
|
|
|
As the backward pass generates the gradient values (needed to update the weights) in the reverse order of the forward pass, we have to wait for the end of the backward pass before beginning the next forward pass. Indeed, the last layer computed in the backward pass is the encoder, which is the first layer of the forward pass.
|
|
|
|
|
|
The same logic applies to the backward pass: we also need to wait for the forward pass to finish before beginning the backward pass.
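As a sketch of this coarse-grained dependency chain (same OpenMP-style assumption as above, with hypothetical per-layer kernels), writing one task per layer makes the problem visible in the depend clauses themselves:

```c
// One-task-per-layer sketch of a training step (hypothetical kernels).
#define N_LAYERS 4

static float params[N_LAYERS];      /* per-layer weights (placeholder) */
static float acts[N_LAYERS + 1];    /* activations, acts[0] = input    */
static float dacts[N_LAYERS + 1];   /* gradients w.r.t. activations    */
static float grads[N_LAYERS];       /* gradients w.r.t. weights        */

static void forward_layer(int l)  { acts[l + 1] = acts[l] * params[l]; }
static void loss_backward(void)   { dacts[N_LAYERS] = acts[N_LAYERS]; }
static void backward_layer(int l) { dacts[l] = dacts[l + 1] * params[l];
                                    grads[l] = dacts[l + 1] * acts[l]; }
static void update_layer(int l)   { params[l] -= 0.01f * grads[l]; }

void train_step(void) {
    #pragma omp parallel
    #pragma omp single
    {
        for (int l = 0; l < N_LAYERS; l++) {       /* forward: first -> last */
            #pragma omp task depend(in: params[l], acts[l]) depend(out: acts[l + 1])
            forward_layer(l);
        }
        #pragma omp task depend(in: acts[N_LAYERS]) depend(out: dacts[N_LAYERS])
        loss_backward();
        for (int l = N_LAYERS - 1; l >= 0; l--) {  /* backward: last -> first */
            #pragma omp task depend(in: dacts[l + 1], acts[l], params[l]) \
                             depend(out: dacts[l], grads[l])
            backward_layer(l);
            #pragma omp task depend(in: grads[l]) depend(inout: params[l])
            update_layer(l);
        }
    }
}
```

Following the depend clauses, the forward tasks form one serial chain through `acts`, the backward tasks another through `dacts`, and the two chains are joined end to end by the loss task: apart from the weight updates, which may overlap the tail of the backward pass, each task unblocks exactly one successor, matching the observation above that nothing overlaps at this granularity.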
|
|
|
|
|
|
With this overly broad view of the model, it does not seem like we will be able to create overlapping tasks. Therefore, we should go more in depth to understand the model.
|
|
|
|
|
|
During a forward pass, nearly all of the computations are channel-independent: the attention layer is the only one that uses multiple channels in a single computation. Therefore, we can set up tasks that take k channels as input data, with k a divisor of T in \[1, T\], allowing us to overlap two different layers operating on different sets of k channels. However, the attention layer uses all the tokens of a sentence for its computation, so it is mandatory to wait for all the k-channel sets of a sentence before entering this layer (this is also why we choose k as a divisor of T in \[1, T\]). But since the attention layer only computes one sentence at a time, we can still compute k-channel sets from other sentences while the attention layer is running, as sketched below.
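A sketch of this decomposition (same OpenMP-style assumption; kernel names are hypothetical): each sentence becomes a task that spawns one child task per k-channel set, and a `taskwait` before attention waits only on that sentence's chunks, so chunks of other sentences keep running while one sentence is inside attention.

```c
// Per-sentence chunking sketch (hypothetical kernels): channel-independent
// layers run per k-channel set; attention needs the whole sentence.
#define B 4                    /* sentences in the batch     */
#define T 16                   /* tokens per sentence        */
#define K 4                    /* chunk size, a divisor of T */
#define N_CHUNKS (T / K)

static float pre_attn[B][N_CHUNKS];   /* per-chunk activations (placeholder) */
static float post_attn[B];            /* per-sentence attention output       */

static void channel_independent_chunk(int b, int c) { pre_attn[b][c] = b + 0.1f * c; }

static void attention_sentence(int b) {    /* needs ALL chunks of sentence b */
    float s = 0.0f;
    for (int c = 0; c < N_CHUNKS; c++) s += pre_attn[b][c];
    post_attn[b] = s;
}

void forward_chunked(void) {
    #pragma omp parallel
    #pragma omp single
    for (int b = 0; b < B; b++) {
        #pragma omp task firstprivate(b)   /* one task per sentence */
        {
            for (int c = 0; c < N_CHUNKS; c++) {
                #pragma omp task firstprivate(b, c)
                channel_independent_chunk(b, c);
            }
            /* waits only on THIS sentence's chunk tasks; other sentences'
               chunks keep running while this sentence enters attention */
            #pragma omp taskwait
            attention_sentence(b);
        }
    }
}
```

The `taskwait` is the synchronization point imposed by attention; everything before it is the channel-independent region, where chunks of different sentences, and the attention of one sentence, can all execute concurrently.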
|
|
|
|
|
|
During a backward pass, the principle of k-channels is a bit different, as we are no longer computing tokens, but weights. However, we can still see
|
|