
Commit 5fe58de

wgb14 and csukuangfj authored
GigaSpeech recipe (k2-fsa#120)
* initial commit
* support download, data prep, and fbank
* on-the-fly feature extraction by default
* support BPE based lang
* support HLG for BPE
* small fix
* small fix
* chunked feature extraction by default
* Compute features for GigaSpeech by splitting the manifest.
* Fixes after review.
* Split manifests into 2000 pieces.
* set audio duration mismatch tolerance to 0.01
* small fix
* add conformer training recipe
* Add conformer.py without pre-commit checking
* lazy loading and use SingleCutSampler
* DynamicBucketingSampler
* use KaldifeatFbank to compute fbank for musan
* use pretrained language model and lexicon
* use 3gram to decode, 4gram to rescore
* Add decode.py
* Update .flake8
* Delete compute_fbank_gigaspeech.py
* Use BucketingSampler for valid and test dataloader
* Update params in train.py
* Use bpe_500
* update params in decode.py
* Decrease num_paths while CUDA OOM
* Added README
* Update RESULTS
* black
* Decrease num_paths while CUDA OOM
* Decode with post-processing
* Update results
* Remove lazy_load option
* Use default `storage_type`
* Keep the original tolerance
* Use split-lazy
* black
* Update pretrained model

Co-authored-by: Fangjun Kuang <[email protected]>
1 parent d88e786 commit 5fe58de
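
The commit message above outlines the data pipeline: split the large manifests into many pieces (split-lazy), compute fbank features in chunks with `KaldifeatFbank`, and draw training batches with `DynamicBucketingSampler`. The following is only a minimal sketch of those lhotse building blocks; the paths, chunk size, and configuration values are illustrative assumptions, not the ones used by the recipe's actual scripts (`./prepare.sh` and `conformer_ctc/train.py` in this commit).

```python
# Minimal sketch of the lhotse building blocks named in the commit message.
# All paths and sizes here are illustrative assumptions, not the recipe's values.
from lhotse import KaldifeatFbank, KaldifeatFbankConfig, load_manifest_lazy
from lhotse.dataset import DynamicBucketingSampler, K2SpeechRecognitionDataset
from torch.utils.data import DataLoader

# 1) Split a large CutSet into many lazily readable pieces ("split-lazy").
cuts = load_manifest_lazy("data/manifests/cuts_XL_raw.jsonl.gz")  # hypothetical path
cuts.split_lazy(output_dir="data/manifests/XL_split", chunk_size=50000)

# 2) Compute fbank features for one piece in batches with kaldifeat.
piece = load_manifest_lazy("data/manifests/XL_split/cuts_XL_raw.000000.jsonl.gz")
extractor = KaldifeatFbank(KaldifeatFbankConfig(device="cpu"))
piece = piece.compute_and_store_features_batch(
    extractor=extractor,
    storage_path="data/fbank/XL_split_000000",
    num_workers=4,
)

# 3) Draw variable-size training batches by total duration, bucketed by length.
sampler = DynamicBucketingSampler(piece, max_duration=120, shuffle=True)
train_dl = DataLoader(
    K2SpeechRecognitionDataset(return_cuts=True),
    sampler=sampler,
    batch_size=None,  # the sampler already yields whole batches (CutSets)
    num_workers=1,
)
```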

27 files changed: +5049 −16 lines changed

.flake8

Lines changed: 1 addition & 0 deletions
@@ -7,6 +7,7 @@ per-file-ignores =
 egs/librispeech/ASR/*/conformer.py: E501,
 egs/aishell/ASR/*/conformer.py: E501,
 egs/tedlium3/ASR/*/conformer.py: E501,
+egs/gigaspeech/ASR/*/conformer.py: E501,
 egs/librispeech/ASR/pruned_transducer_stateless2/*.py: E501,

 # invalid escape sequence (cause by tex formular), W605

.gitignore

Lines changed: 2 additions & 0 deletions
@@ -6,6 +6,8 @@ exp
 exp*/
 *.pt
 download
+dask-worker-space
+log
 *.bak
 *-bak
 *bak.py

egs/gigaspeech/ASR/.gitignore

Lines changed: 1 addition & 0 deletions
@@ -0,0 +1 @@
+log-*

egs/gigaspeech/ASR/README.md

Lines changed: 20 additions & 0 deletions
# GigaSpeech

GigaSpeech is an evolving, multi-domain English speech recognition corpus with 10,000 hours of high-quality labeled audio, collected from audiobooks, podcasts, and YouTube, covering both read and spontaneous speaking styles and a variety of topics such as arts, science, and sports. More details can be found at https://github.com/SpeechColab/GigaSpeech

## Download

Apply for the download credentials and download the dataset by following https://github.com/SpeechColab/GigaSpeech#download. Then create a symlink:

```bash
ln -sfv /path/to/GigaSpeech download/GigaSpeech
```

## Performance Record

|     | Dev   | Test  |
|-----|-------|-------|
| WER | 10.47 | 10.58 |

See [RESULTS](/egs/gigaspeech/ASR/RESULTS.md) for details.

egs/gigaspeech/ASR/RESULTS.md

Lines changed: 79 additions & 0 deletions
## Results

### GigaSpeech BPE training results (Conformer-CTC)

#### 2022-04-06

The best WER for GigaSpeech, as of 2022-04-06, is given below.

Results using HLG decoding + n-gram LM rescoring + attention decoder rescoring:

|     | Dev   | Test  |
|-----|-------|-------|
| WER | 10.47 | 10.58 |

Scale values used in n-gram LM rescoring and attention rescoring for the best WERs are:

| ngram_lm_scale | attention_scale |
|----------------|-----------------|
| 0.5            | 1.3             |
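The two scale values control how the acoustic, n-gram LM, and attention-decoder scores are weighted when candidate paths are rescored. The snippet below is only a schematic of that scoring rule as implied by the table above; it is not the implementation in `conformer_ctc/decode.py`, and the function name is made up for illustration.

```python
# Schematic of the rescoring rule implied by the scale values above.
# Illustration only; not the actual code in conformer_ctc/decode.py.
def combined_score(
    am_score: float,        # acoustic (lattice) score of a candidate path
    ngram_lm_score: float,  # n-gram LM score of the path
    attention_score: float, # attention-decoder score of the path
    ngram_lm_scale: float = 0.5,
    attention_scale: float = 1.3,
) -> float:
    return am_score + ngram_lm_scale * ngram_lm_score + attention_scale * attention_score

# Decoding tries a grid of (ngram_lm_scale, attention_scale) pairs and reports
# the WER for each; (0.5, 1.3) gave the best Dev/Test WERs in the table above.
```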
To reproduce the above result, use the following commands for training:

```
cd egs/gigaspeech/ASR
./prepare.sh
export CUDA_VISIBLE_DEVICES="0,1,2,3,4,5,6,7"
./conformer_ctc/train.py \
  --max-duration 120 \
  --num-workers 1 \
  --world-size 8 \
  --exp-dir conformer_ctc/exp_500 \
  --lang-dir data/lang_bpe_500
```

and the following command for decoding:

```
./conformer_ctc/decode.py \
  --epoch 18 \
  --avg 6 \
  --method attention-decoder \
  --num-paths 1000 \
  --exp-dir conformer_ctc/exp_500 \
  --lang-dir data/lang_bpe_500 \
  --max-duration 20 \
  --num-workers 1
```

Results using HLG decoding + whole lattice rescoring:

|     | Dev   | Test  |
|-----|-------|-------|
| WER | 10.51 | 10.62 |

Scale value used in whole-lattice rescoring for the best WERs is:

| lm_scale |
|----------|
| 0.2      |

To reproduce the above result, use the training commands above and the following command for decoding:

```
./conformer_ctc/decode.py \
  --epoch 18 \
  --avg 6 \
  --method whole-lattice-rescoring \
  --num-paths 1000 \
  --exp-dir conformer_ctc/exp_500 \
  --lang-dir data/lang_bpe_500 \
  --max-duration 20 \
  --num-workers 1
```

Note: the `whole-lattice-rescoring` method is about twice as fast as the `attention-decoder` method, with slightly worse WER.

The pretrained model is available at
<https://huggingface.co/wgb14/icefall-asr-gigaspeech-conformer-ctc>

The tensorboard log for training is available at
<https://tensorboard.dev/experiment/rz63cmJXSK2fV9GceJtZXQ/>
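
For reference, one way to fetch the pretrained model is via the `huggingface_hub` Python package; this is just a convenience sketch and assumes nothing about the file layout inside the repository.

```python
# Convenience sketch: download the pretrained model repo listed above.
# Assumes the huggingface_hub package is installed; no file layout is assumed.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="wgb14/icefall-asr-gigaspeech-conformer-ctc")
print("Pretrained model downloaded to:", local_dir)
```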

egs/gigaspeech/ASR/conformer_ctc/__init__.py

Whitespace-only changes.
