
Commit 2b39c82

Merge pull request #862 from huggingface/bump_release
[RELEASE] March 31st 2025
2 parents 5f6c43e + 16e862d · commit 2b39c82

File tree (5 files changed: +13 -8 lines changed)

- chapters/en/chapter12/1.mdx
- chapters/en/chapter12/2.mdx
- chapters/en/chapter12/3.mdx
- chapters/en/chapter12/3a.mdx
- chapters/en/chapter12/6.mdx


chapters/en/chapter12/1.mdx

Lines changed: 1 addition & 1 deletion
@@ -79,7 +79,7 @@ Don't worry if you're missing some of these – we'll explain key concepts as we

<Tip>

-If you don't have all the prerequisites, check out this [course](chapter1/1.mdx) from units 1 to 11.
+If you don't have all the prerequisites, check out this [course](/course/chapter1/1) from units 1 to 11

</Tip>

chapters/en/chapter12/2.mdx

Lines changed: 2 additions & 2 deletions
@@ -68,7 +68,7 @@ Think about learning to ride a bike. You might wobble and fall at first (negativ

Now, why is RL so important for Large Language Models?

-Well, training really good LLMs is tricky. We can train them on massive amounts of text from the internet, and they become very good at predicting the next word in a sentence. This is how they learn to generate fluent and grammatically correct text, as we learned in [chapter 2](/chapters/en/chapter2/1).
+Well, training really good LLMs is tricky. We can train them on massive amounts of text from the internet, and they become very good at predicting the next word in a sentence. This is how they learn to generate fluent and grammatically correct text, as we learned in [chapter 2](/course/chapter2/1).

However, just being fluent isn't enough. We want our LLMs to be more than just good at stringing words together. We want them to be:

@@ -78,7 +78,7 @@ However, just being fluent isn't enough. We want our LLMs to be more than just g

Pre-training LLM methods, which mostly rely on predicting the next word from text data, sometimes fall short on these aspects.

-Whilst supervised training is excellent at producing structured outputs, it can be less effective at producing helpful, harmless, and aligned responses. We explore supervised training in [chapter 11](/chapters/en/chapter11/1).
+Whilst supervised training is excellent at producing structured outputs, it can be less effective at producing helpful, harmless, and aligned responses. We explore supervised training in [chapter 11](/course/chapter11/1).

Fine-tuned models might generate fluent and structured text that is still factually incorrect, biased, or doesn't really answer the user's question in a helpful way.

chapters/en/chapter12/3.mdx

Lines changed: 1 addition & 1 deletion
@@ -12,7 +12,7 @@ The initial goal of the paper was to explore whether pure reinforcement learning

<Tip>

-Up until that point, all the popular LLMs required some supervised fine-tuning, which we explored in [chapter 11](/chapters/en/chapter11/1).
+Up until that point, all the popular LLMs required some supervised fine-tuning, which we explored in [chapter 11](/course/chapter11/1).

</Tip>

chapters/en/chapter12/3a.mdx

Lines changed: 1 addition & 0 deletions
@@ -398,6 +398,7 @@ As you continue exploring GRPO, consider experimenting with different group size
Happy training! 🚀

## References
+
1. [RLHF Book by Nathan Lambert](https://github.com/natolambert/rlhf-book)
2. [DeepSeek-V3 Technical Report](https://huggingface.co/papers/2412.19437)
3. [DeepSeekMath](https://huggingface.co/papers/2402.03300)

chapters/en/chapter12/6.mdx

Lines changed: 8 additions & 4 deletions
@@ -6,13 +6,15 @@

# Practical Exercise: GRPO with Unsloth

-In this exercise, you'll fine-tune a model with GRPO (Group Relative Policy Optimization) using Unsloth, to improve a model's reasoning capabilities. We covered GRPO in [Chapter 3](/en/chapter3/3).
+In this exercise, you'll fine-tune a model with GRPO (Group Relative Policy Optimization) using Unsloth, to improve a model's reasoning capabilities. We covered GRPO in [Chapter 3](/course/chapter3/3).

Unsloth is a library that accelerates LLM fine-tuning, making it possible to train models faster and with less computational resources. Unsloth plugs into TRL, so we'll build on what we learned in the previous sections, and adapt it for Unsloth specifics.


<Tip>
+
This exercise can be run on a free Google Colab T4 GPU. For the best experience, follow along with the notebook linked above and try it out yourself.
+
</Tip>

## Install dependencies
@@ -72,7 +74,7 @@ This code loads the model in 4-bit quantization to save memory and applies LoRA

<Tip>

-We won't cover the details of LoRA in this chapter, but you can learn more in [Chapter 11](/en/chapter11/3).
+We won't cover the details of LoRA in this chapter, but you can learn more in [Chapter 11](/course/chapter11/3).

</Tip>
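
The hunk above sits in the exercise's model-loading step, which the hunk header describes as loading the model in 4-bit quantization and applying LoRA adapters. For readers following only this diff, a minimal sketch of that step with Unsloth might look like the code below; the model name, sequence length, and LoRA settings are illustrative assumptions, not values taken from this commit.

```python
from unsloth import FastLanguageModel

# Load the base model with 4-bit quantization to keep GPU memory low.
# (model_name and max_seq_length are illustrative assumptions.)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="Qwen/Qwen2.5-1.5B-Instruct",
    max_seq_length=1024,
    load_in_4bit=True,
)

# Attach LoRA adapters so only a small set of extra weights is trained.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,                                   # LoRA rank (assumed)
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    use_gradient_checkpointing="unsloth",   # Unsloth's memory-saving checkpointing
)
```

Keeping the frozen base weights in 4-bit and training only the LoRA adapters is what lets the exercise fit on a free Colab T4 GPU, as the tip earlier in the diff notes.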

@@ -146,7 +148,7 @@ The dataset is prepared by extracting the answer from the dataset and formatting

## Defining Reward Functions

-As we discussed in [an earlier page](/en/chapter13/4), GRPO can use reward functions to guide the model's learning based on verifiable criteria like length and formatting.
+As we discussed in [an earlier page](/course/chapter13/4), GRPO can use reward functions to guide the model's learning based on verifiable criteria like length and formatting.

In this exercise, we'll define several reward functions that encourage different aspects of good reasoning. For example, we'll reward the model for providing an integer answer, and for following the strict format.
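
The paragraph above mentions rewarding an integer answer and a strict format. As a rough sketch of what such verifiable reward functions look like in a TRL GRPO setup, where each function returns one score per completion, the code below treats completions as plain strings and uses assumed tags, patterns, and score values; the exercise's actual functions will differ.

```python
import re


def int_answer_reward(completions, **kwargs):
    """Reward completions whose final token parses as an integer (illustrative heuristic)."""
    rewards = []
    for completion in completions:
        tokens = completion.strip().split()
        answer = tokens[-1] if tokens else ""
        rewards.append(1.0 if answer.lstrip("-").isdigit() else 0.0)
    return rewards


def format_reward(completions, **kwargs):
    """Reward completions following an assumed <reasoning>...</reasoning><answer>...</answer> layout."""
    pattern = r"<reasoning>.*?</reasoning>\s*<answer>.*?</answer>"
    return [0.5 if re.search(pattern, completion, re.DOTALL) else 0.0 for completion in completions]
```

GRPO compares these scores within each group of completions sampled for the same prompt, so even simple, verifiable signals like these can steer the model toward the desired answer format.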

@@ -221,7 +223,7 @@ These reward functions serve different purposes:

## Training with GRPO

-Now we'll set up the GRPO trainer with our model, tokenizer, and reward functions. This part follows the same approach as the [previous exercise](/en/chapter12/5).
+Now we'll set up the GRPO trainer with our model, tokenizer, and reward functions. This part follows the same approach as the [previous exercise](/course/chapter12/5).

```python
from trl import GRPOConfig, GRPOTrainer
@@ -278,7 +280,9 @@ trainer.train()
```

<Tip warning={true}>
+
Training may take some time. You might not see rewards increase immediately - it can take 150-200 steps before you start seeing improvements. Be patient!
+
</Tip>

## Testing the Model
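
For the "Training with GRPO" step whose surrounding lines appear in the last two hunks, a minimal sketch of wiring everything into TRL's `GRPOTrainer` could look like this; the hyperparameters are illustrative assumptions, and `model`, `tokenizer`, and the reward functions are assumed to come from the earlier sketches, with `dataset` standing in for the exercise's prepared dataset.

```python
from trl import GRPOConfig, GRPOTrainer

# Illustrative hyperparameters only; the exercise notebook defines its own values.
training_args = GRPOConfig(
    output_dir="outputs",
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    num_generations=8,          # completions sampled per prompt (the "group" in GRPO)
    max_prompt_length=256,
    max_completion_length=256,
    max_steps=250,
    logging_steps=10,
)

trainer = GRPOTrainer(
    model=model,                # Unsloth model with LoRA adapters from the earlier sketch
    processing_class=tokenizer,
    reward_funcs=[int_answer_reward, format_reward],
    args=training_args,
    train_dataset=dataset,      # assumed: the prepared prompt/answer dataset
)
trainer.train()
```

As the tip in the diff notes, rewards may stay flat for the first 150-200 steps before the format and answer signals start to climb.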
