# Automatic Speech Recognition examples

## Connectionist Temporal Classification without Language Model (CTC w/o LM)

The script `run_speech_recognition_ctc.py` can be used to fine-tune any pretrained Connectionist Temporal Classification (CTC) model for automatic speech recognition on one of the official speech recognition datasets or a custom dataset.

Speech recognition models that have been pretrained in an unsupervised fashion on audio data alone, e.g. Wav2Vec2, HuBERT, or XLSR-Wav2Vec2, have been shown to require only very little annotated data to yield good performance on automatic speech recognition datasets.

In `run_speech_recognition_ctc.py`, we first create a vocabulary from all unique characters in both the training and evaluation data. Then, we preprocess the speech recognition dataset, which includes correct resampling, normalization, and padding. Finally, the pretrained speech recognition model is fine-tuned on the annotated speech recognition dataset using CTC loss.
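For illustration, the following is a minimal sketch of these two preparation steps, assuming a Common Voice-style dataset with a `sentence` text column and an `audio` column (the dataset name and column names mirror the commands below and are not part of the script's interface):

```python
# Sketch of vocabulary creation and resampling, assuming the Turkish Common Voice
# split and the column names used in the commands below.
from datasets import Audio, load_dataset

dataset = load_dataset("common_voice", "tr", split="train")

# Collect every unique character that appears in the transcriptions.
def extract_all_chars(batch):
    all_text = " ".join(batch["sentence"])
    return {"vocab": [sorted(set(all_text))]}

chars = dataset.map(
    extract_all_chars,
    batched=True,
    batch_size=-1,  # one single batch, so we get one global character set
    remove_columns=dataset.column_names,
)
vocab_dict = {char: idx for idx, char in enumerate(chars["vocab"][0])}

# Resample the audio to the 16 kHz rate expected by Wav2Vec2-style models.
dataset = dataset.cast_column("audio", Audio(sampling_rate=16_000))
```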


**NOTE**

If you encounter problems with data preprocessing when setting `--preprocessing_num_workers` > 1, you might want to set the environment variable `OMP_NUM_THREADS` to 1 as follows:

```bash
OMP_NUM_THREADS=1 python run_speech_recognition_ctc.py ...
```

If the environment variable is not set, the training script might freeze, see pytorch/audio#1021 (comment).
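If you prefer not to prefix the launch command, the same workaround can be applied from inside Python, provided the variable is set before any OpenMP-backed library initializes its thread pool (a sketch, not something the script itself does):

```python
# Sketch: set the variable before importing torch / numpy so the OpenMP runtime
# picks it up; this mirrors the OMP_NUM_THREADS=1 prefix shown above.
import os

os.environ["OMP_NUM_THREADS"] = "1"

import torch  # imported only after the environment variable is set
```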


### Single-GPU

The following command shows how to fine-tune XLSR-Wav2Vec2 on Common Voice using a single GPU in half-precision.

```bash
python run_speech_recognition_ctc.py \
	--dataset_name="common_voice" \
	--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
	--dataset_config_name="tr" \
	--output_dir="./wav2vec2-common_voice-tr-demo" \
	--overwrite_output_dir \
	--num_train_epochs="15" \
	--per_device_train_batch_size="16" \
	--gradient_accumulation_steps="2" \
	--learning_rate="3e-4" \
	--warmup_steps="500" \
	--evaluation_strategy="steps" \
	--text_column_name="sentence" \
	--save_steps="400" \
	--eval_steps="100" \
	--layerdrop="0.0" \
	--save_total_limit="3" \
	--freeze_feature_extractor \
	--gradient_checkpointing \
	--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
	--fp16 \
	--group_by_length \
	--push_to_hub \
	--do_train --do_eval
```

On a single V100 GPU, this script should run in approximately 1 hour and 20 minutes and yield a CTC loss of 0.39 and a word error rate of 0.35.
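As a quick sanity check, the fine-tuned checkpoint produced above can be tried out as follows (a sketch only; the checkpoint path, dataset, and 16 kHz sampling rate are assumptions taken from the command above):

```python
# Sketch: greedy CTC decoding with the checkpoint written to --output_dir above.
import torch
from datasets import Audio, load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

checkpoint = "./wav2vec2-common_voice-tr-demo"  # --output_dir from the command above
processor = Wav2Vec2Processor.from_pretrained(checkpoint)
model = Wav2Vec2ForCTC.from_pretrained(checkpoint)

# Load a single evaluation sample and make sure it is sampled at 16 kHz.
sample = (
    load_dataset("common_voice", "tr", split="test[:1]")
    .cast_column("audio", Audio(sampling_rate=16_000))[0]
)
inputs = processor(sample["audio"]["array"], sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Pick the most likely token per frame; batch_decode collapses repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```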

### Multi-GPU

The following command shows how to fine-tune XLSR-Wav2Vec2 on Common Voice using 8 GPUs in half-precision.

```bash
python -m torch.distributed.launch \
	--nproc_per_node 8 run_speech_recognition_ctc.py \
	--dataset_name="common_voice" \
	--model_name_or_path="facebook/wav2vec2-large-xlsr-53" \
	--dataset_config_name="tr" \
	--output_dir="./wav2vec2-common_voice-tr-demo-dist" \
	--overwrite_output_dir \
	--num_train_epochs="15" \
	--per_device_train_batch_size="4" \
	--learning_rate="3e-4" \
	--warmup_steps="500" \
	--evaluation_strategy="steps" \
	--audio_column_name="path" \
	--text_column_name="sentence" \
	--save_steps="400" \
	--eval_steps="100" \
	--logging_steps="1" \
	--layerdrop="0.0" \
	--save_total_limit="3" \
	--freeze_feature_extractor \
	--gradient_checkpointing \
	--chars_to_ignore , ? . ! - \; \: \" “ % ‘ ” � \
	--fp16 \
	--group_by_length \
	--push_to_hub \
	--do_train --do_eval
```

On 8 V100 GPUs, this script should run in approximately 18 minutes and yield a CTC loss of 0.39 and a word error rate of 0.36.

### Examples

The following tables present a couple of example runs on the most popular speech recognition datasets. The reported performance is by no means optimal, as no hyperparameter tuning was done. Nevertheless, it can serve as a baseline to improve upon.

| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|---------|----------------|------------------|-------------------------|-----------|---------------|-------------------------|----------------------|
| TIMIT | - | wav2vec2-base | 0.21 | 1 GPU TITAN RTX | 32min | here | run.sh |
| TIMIT | - | wav2vec2-base | 0.21 | 1 GPU TITAN RTX | 32min | here | run.sh |
| TIMIT | - | unispeech-large-1500h-cv | 0.22 | 1 GPU TITAN RTX | 35min | here | run.sh |
| TIMIT | - | asapp/sew-mid-100k | 0.30 | 1 GPU TITAN RTX | 28min | here | run.sh |
| TIMIT | - | ntu-spml/distilhubert | 0.68 | 1 GPU TITAN RTX | 26min | here | run.sh |
| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|---------|----------------|------------------|-------------------------|-----------|---------------|-------------------------|----------------------|
| Librispeech | "clean" - "train.100" | facebook/wav2vec2-large-lv60 | 0.042 | 8 GPU V100 | 1h30min | here | run.sh |
| Librispeech | "clean" - "train.100" | facebook/hubert-large-ll60k | 0.088 | 8 GPU V100 | 1h30min | here | run.sh |
| Librispeech | "clean" - "train.100" | asapp/sew-mid-100k | 0.167 | 8 GPU V100 | 54min | here | run.sh |
| Dataset | Dataset Config | Pretrained Model | Word error rate on eval | GPU setup | Training time | Fine-tuned Model & Logs | Command to reproduce |
|---------|----------------|------------------|-------------------------|-----------|---------------|-------------------------|----------------------|
| Common Voice | "tr" | facebook/wav2vec2-large-xlsr-53 | 0.36 | 8 GPU V100 | 18min | here | run.sh |
| Common Voice | "tr" | facebook/wav2vec2-large-xlsr-53 | 0.35 | 1 GPU V100 | 1h20min | here | run.sh |
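The word error rates above can be recomputed from a model's transcriptions, for example with the Hugging Face `evaluate` library (a sketch; the training script may compute the metric through a different code path, and the sentences below are hypothetical):

```python
# Sketch: word error rate = word-level edit distance / number of reference words.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["merhaba dunya"]   # hypothetical model transcription
references = ["merhaba dünya"]    # hypothetical ground-truth sentence

# One of the two reference words is wrong, so the WER is 0.5.
print(wer_metric.compute(predictions=predictions, references=references))
```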