The transformer consists of a stack of N encoders. The output of one encoder is sent as input to the encoder above it, and the final encoder returns the representation of the given source sentence as output. As shown in the following figure, we feed the source sentence as input to the bottom encoder, each encoder sends its output to the encoder above it, and we get the representation of the source sentence as the output of the final encoder:
Note that in the transformer paper Attention Is All You Need, the authors have used N = 6, meaning that they stacked up six encoders one above the other. However, we can try out different values of N. For simplicity and better understanding, let's keep N = 2:
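The idea of stacking N encoders can be sketched in a few lines of code. This is only an illustrative sketch: the `Encoder` class here is a hypothetical placeholder (a real encoder applies self-attention and a feedforward network, which we'll see shortly), and the point is just how the output of one encoder flows into the next:

```python
class Encoder:
    """Hypothetical placeholder for a single transformer encoder."""
    def forward(self, x):
        # A real encoder transforms x with self-attention and a
        # feedforward network; here we simply pass the input through.
        return x

class EncoderStack:
    """A stack of N identical encoders, one above the other."""
    def __init__(self, n):
        self.encoders = [Encoder() for _ in range(n)]

    def forward(self, x):
        # The output of each encoder is fed as input to the encoder
        # above it; the final encoder's output is the representation
        # of the source sentence.
        for encoder in self.encoders:
            x = encoder.forward(x)
        return x

stack = EncoderStack(n=2)  # N = 2, as in our example
representation = stack.forward(["I", "am", "good"])
```

The loop in `forward` captures the whole structure: the representation simply passes upward through the stack, being refined by each encoder in turn.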
Okay, so the question is: how exactly does the encoder work? How does it generate the representation for the given source sentence (input sentence)?