
Commit 1074c4b

Update 6.mdx
spacing between text and math formula
1 parent abb71f2 commit 1074c4b

File tree

1 file changed: +3 -3 lines changed
  • chapters/en/chapter1


chapters/en/chapter1/6.mdx

Lines changed: 3 additions & 3 deletions
@@ -209,10 +209,10 @@ length.
 ### Axial positional encodings
 
 [Reformer](https://huggingface.co/docs/transformers/model_doc/reformer) uses axial positional encodings: in traditional transformer models, the positional encoding
-E is a matrix of size \\(l\\) by \\(d\\), \\(l\\) being the sequence length and \\(d\\) the dimension of the
+E is a matrix of size\ \\(l\\) by\ \\(d\\),\ \\(l\\) being the sequence length and\ \\(d\\) the dimension of the
 hidden state. If you have very long texts, this matrix can be huge and take way too much space on the GPU. To alleviate
 that, axial positional encodings consist of factorizing that big matrix E in two smaller matrices E1 and E2, with
-dimensions \\(l_{1} \times d_{1}\\) and \\(l_{2} \times d_{2}\\), such that \\(l_{1} \times l_{2} = l\\) and
+dimensions\ \\(l_{1} \times d_{1}\\) and \\(l_{2} \times d_{2}\\), such that \\(l_{1} \times l_{2} = l\\) and
 \\(d_{1} + d_{2} = d\\) (with the product for the lengths, this ends up being way smaller). The embedding for time
 step \\(j\\) in E is obtained by concatenating the embeddings for timestep \\(j \% l1\\) in E1 and \\(j // l1\\)
 in E2.
@@ -221,4 +221,4 @@ in E2.
 
 In this section, we've explored the three main Transformer architectures and some specialized attention mechanisms. Understanding these architectural differences is crucial for selecting the right model for your specific NLP task.
 
-As we move forward in the course, you'll get hands-on experience with these different architectures and learn how to fine-tune them for your specific needs. In the next section, we'll look at some of the limitations and biases present in these models that you should be aware of when deploying them.
+As we move forward in the course, you'll get hands-on experience with these different architectures and learn how to fine-tune them for your specific needs. In the next section, we'll look at some of the limitations and biases present in these models that you should be aware of when deploying them.
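
The passage touched by this diff describes how axial positional encodings replace the single \\(l \times d\\) positional matrix E with two smaller matrices E1 and E2. Below is a minimal sketch of that construction; the sizes l1, l2, d1, d2 and the function name are illustrative choices for this example, not values or APIs taken from the Reformer implementation.

```python
# Illustrative sketch of axial positional encodings as described in the text above.
# The sizes l1, l2, d1, d2 are made-up examples, not Reformer's defaults.
import torch

l1, l2 = 64, 16   # sequence length l = l1 * l2 = 1024
d1, d2 = 32, 96   # hidden size d = d1 + d2 = 128

# Two small matrices replace the single l x d positional matrix E.
E1 = torch.randn(l1, d1)
E2 = torch.randn(l2, d2)

def position_embedding(j: int) -> torch.Tensor:
    """Embedding for time step j: concatenation of E1[j % l1] and E2[j // l1]."""
    return torch.cat([E1[j % l1], E2[j // l1]])

# The embedding for any position has size d = d1 + d2.
print(position_embedding(200).shape)  # torch.Size([128])
```

Storing E1 and E2 takes l1*d1 + l2*d2 entries instead of l*d, which is what makes this factorization attractive for very long sequences.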
