Commit fe3df9d

[Docs] Add language identifiers to fenced code blocks (#28955)
Add language identifiers to code blocks
1 parent: c617f98

66 files changed: +137 -137 lines changed

docs/source/en/chat_templating.md (+1 -1)

@@ -390,7 +390,7 @@ If your model expects those, they won't be added automatically by `apply_chat_te
 text will be tokenized with `add_special_tokens=False`. This is to avoid potential conflicts between the template and
 the `add_special_tokens` logic. If your model expects special tokens, make sure to add them to the template!

-```
+```python
 tokenizer.chat_template = "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
 ```
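
The snippet being retagged sets a ChatML-style template. As a usage note, a minimal sketch of how a template assigned this way is consumed via `apply_chat_template` (the checkpoint and the messages are illustrative, not part of this diff):

```python
# Minimal sketch, assuming a tokenizer whose chat_template was set as in
# the diff above. Checkpoint and messages are illustrative.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # illustrative checkpoint
tokenizer.chat_template = (
    "{% for message in messages %}"
    "{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}"
    "{% endfor %}"
    "{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}"
)

messages = [{"role": "user", "content": "Hi there!"}]
# Render the conversation as text; add_generation_prompt opens the assistant turn.
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(text)
```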

docs/source/en/custom_models.md (+1 -1)

@@ -310,7 +310,7 @@ Use `register_for_auto_class()` if you want the code files to be copied. If you
 you don't need to call it. In cases where there's more than one auto class, you can modify the `config.json` directly using the
 following structure:

-```
+```json
 "auto_map": {
 "AutoConfig": "<your-repo-name>--<config-name>",
 "AutoModel": "<your-repo-name>--<config-name>",

docs/source/en/custom_tools.md (+1 -1)

@@ -405,7 +405,7 @@ Assistant:
 Therefore it is important that the examples of the custom `chat` prompt template also make use of this format.
 You can overwrite the `chat` template at instantiation as follows.

-```
+```python
 template = """ [...] """

 agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template)

docs/source/en/installation.md (+1 -1)

@@ -72,7 +72,7 @@ pip install 'transformers[tf-cpu]'
 M1 / ARM Users

 You will need to install the following before installing TensorFLow 2.0
-```
+```bash
 brew install cmake
 brew install pkg-config
 ```

docs/source/en/model_doc/fastspeech2_conformer.md (+1 -1)

@@ -41,7 +41,7 @@ You can run FastSpeech2Conformer locally with the 🤗 Transformers library.

 1. First install the 🤗 [Transformers library](https://github.com/huggingface/transformers), g2p-en:

-```
+```bash
 pip install --upgrade pip
 pip install --upgrade transformers g2p-en
 ```
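
After this install step the guide moves on to local inference; a hedged sketch of that next step (the checkpoint name `espnet/fastspeech2_conformer` is assumed from the model's Hub page, not taken from this diff):

```python
# Hedged sketch of running FastSpeech2Conformer after the install above;
# the checkpoint name is an assumption, not part of this diff.
from transformers import FastSpeech2ConformerTokenizer, FastSpeech2ConformerModel

tokenizer = FastSpeech2ConformerTokenizer.from_pretrained("espnet/fastspeech2_conformer")
model = FastSpeech2ConformerModel.from_pretrained("espnet/fastspeech2_conformer")

inputs = tokenizer("Hello, my dog is cute.", return_tensors="pt")
outputs = model(inputs["input_ids"], return_dict=True)
spectrogram = outputs["spectrogram"]  # mel-spectrogram; a vocoder turns this into audio
print(spectrogram.shape)
```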

docs/source/en/model_doc/layoutlmv2.md (+1 -1)

@@ -50,7 +50,7 @@ this https URL.*

 LayoutLMv2 depends on `detectron2`, `torchvision` and `tesseract`. Run the
 following to install them:
-```
+```bash
 python -m pip install 'git+https://github.com/facebookresearch/detectron2.git'
 python -m pip install torchvision tesseract
 ```

docs/source/en/model_doc/lilt.md (+1 -1)

@@ -39,7 +39,7 @@ The original code can be found [here](https://github.com/jpwang/lilt).
 - To combine the Language-Independent Layout Transformer with a new RoBERTa checkpoint from the [hub](https://huggingface.co/models?search=roberta), refer to [this guide](https://github.com/jpWang/LiLT#or-generate-your-own-checkpoint-optional).
 The script will result in `config.json` and `pytorch_model.bin` files being stored locally. After doing this, one can do the following (assuming you're logged in with your HuggingFace account):

-```
+```python
 from transformers import LiltModel

 model = LiltModel.from_pretrained("path_to_your_files")
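
The step the guide describes next, pushing the combined checkpoint to the Hub, could look like the following sketch (the repo id is made up; requires a prior `huggingface-cli login`):

```python
# Hedged sketch: load the locally generated checkpoint and push it to the Hub.
from transformers import LiltModel

model = LiltModel.from_pretrained("path_to_your_files")
model.push_to_hub("my-username/lilt-roberta-base")  # illustrative repo id
```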

docs/source/en/model_doc/musicgen.md (+1 -1)

@@ -136,7 +136,7 @@ The same [`MusicgenProcessor`] can be used to pre-process an audio prompt that i
 following example, we load an audio file using the 🤗 Datasets library, which can be pip installed through the command
 below:

-```
+```bash
 pip install --upgrade pip
 pip install datasets[audio]
 ```
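
A hedged sketch of the loading step the surrounding prose describes (the dataset id is illustrative, not taken from this diff):

```python
# Hedged sketch: load one audio example with 🤗 Datasets after the install above.
from datasets import load_dataset

dataset = load_dataset("sanchit-gandhi/gtzan", split="train", streaming=True)  # illustrative dataset id
sample = next(iter(dataset))["audio"]
print(sample["sampling_rate"], sample["array"].shape)
```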

docs/source/en/model_doc/pop2piano.md (+1 -1)

@@ -54,7 +54,7 @@ The original code can be found [here](https://github.com/sweetcocoa/pop2piano).
 ## Usage tips

 * To use Pop2Piano, you will need to install the 🤗 Transformers library, as well as the following third party modules:
-```
+```bash
 pip install pretty-midi==0.2.9 essentia==2.1b6.dev1034 librosa scipy
 ```
 Please note that you may need to restart your runtime after installation.

docs/source/en/perf_hardware.md (+1 -1)

@@ -64,7 +64,7 @@ Next let's have a look at one of the most important aspects when having multiple

 If you use multiple GPUs the way cards are inter-connected can have a huge impact on the total training time. If the GPUs are on the same physical node, you can run:

-```
+```bash
 nvidia-smi topo -m
 ```

docs/source/en/perf_train_cpu.md (+1 -1)

@@ -38,7 +38,7 @@ IPEX release is following PyTorch, to install via pip:
 | 1.12 | 1.12.300+cpu |

 Please run `pip list | grep torch` to get your `pytorch_version`, so you can get the `IPEX version_name`.
-```
+```bash
 pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
 ```
 You can check the latest versions in [ipex-whl-stable-cpu](https://developer.intel.com/ipex-whl-stable-cpu) if needed.
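
Equivalently to `pip list | grep torch`, the installed version can be read from Python; a trivial sketch:

```python
# Print the installed PyTorch version so the matching IPEX wheel can be
# chosen from the compatibility table above.
import torch

print(torch.__version__)  # match the major.minor against the IPEX table
```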

docs/source/en/perf_train_cpu_many.md (+6 -6)

@@ -39,7 +39,7 @@ Wheel files are available for the following Python versions:
 | 1.12.0 | |||||

 Please run `pip list | grep torch` to get your `pytorch_version`.
-```
+```bash
 pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
 ```
 where `{pytorch_version}` should be your PyTorch version, for instance 2.1.0.

@@ -59,13 +59,13 @@ Use this standards-based MPI implementation to deliver flexible, efficient, scal
 oneccl_bindings_for_pytorch is installed along with the MPI tool set. Need to source the environment before using it.

 for Intel® oneCCL >= 1.12.0
-```
+```bash
 oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
 source $oneccl_bindings_for_pytorch_path/env/setvars.sh
 ```

 for Intel® oneCCL whose version < 1.12.0
-```
+```bash
 torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
 source $torch_ccl_path/env/setvars.sh
 ```

@@ -154,7 +154,7 @@ This example assumes that you have:

 The snippet below is an example of a Dockerfile that uses a base image that supports distributed CPU training and then
 extracts a Transformers release to the `/workspace` directory, so that the example scripts are included in the image:
-```
+```dockerfile
 FROM intel/ai-workflows:torch-2.0.1-huggingface-multinode-py3.9

 WORKDIR /workspace

@@ -286,7 +286,7 @@ set the same CPU and memory amounts for both the resource limits and requests.

 After the PyTorchJob spec has been updated with values appropriate for your cluster and training job, it can be deployed
 to the cluster using:
-```
+```bash
 kubectl create -f pytorchjob.yaml
 ```

@@ -304,7 +304,7 @@ transformers-pytorchjob-worker-3 1/1 Running
 ```

 The logs for worker can be viewed using `kubectl logs -n kubeflow <pod name>`. Add `-f` to stream the logs, for example:
-```
+```bash
 kubectl logs -n kubeflow transformers-pytorchjob-worker-0 -f
 ```

docs/source/en/perf_train_gpu_many.md (+3 -3)

@@ -140,7 +140,7 @@ Here is the benchmarking code and outputs:

 **DP**

-```
+```bash
 rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
 python examples/pytorch/language-modeling/run_clm.py \
 --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \

@@ -151,7 +151,7 @@ python examples/pytorch/language-modeling/run_clm.py \

 **DDP w/ NVlink**

-```
+```bash
 rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
 torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
 --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \

@@ -162,7 +162,7 @@ torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \

 **DDP w/o NVlink**

-```
+```bash
 rm -r /tmp/test-clm; NCCL_P2P_DISABLE=1 CUDA_VISIBLE_DEVICES=0,1 \
 torchrun --nproc_per_node 2 examples/pytorch/language-modeling/run_clm.py \
 --model_name_or_path gpt2 --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 \
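
As a quick way to see which regime a node is in before running the benchmarks above, peer-to-peer reachability between the two GPUs can also be queried from PyTorch; a small sketch (not part of the benchmark itself):

```python
# Hedged sketch: check whether GPU 0 and GPU 1 can use direct P2P transfers
# (NVLink or PCIe P2P). NCCL_P2P_DISABLE=1 forces the slower path regardless.
import torch

if torch.cuda.device_count() >= 2:
    print(torch.cuda.can_device_access_peer(0, 1))
```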

docs/source/en/perf_train_gpu_one.md (+1 -1)

@@ -201,7 +201,7 @@ of 23 bits precision it has only 10 bits (same as fp16) and uses only 19 bits in
 you can use the normal fp32 training and/or inference code and by enabling tf32 support you can get up to 3x throughput
 improvement. All you need to do is to add the following to your code:

-```
+```python
 import torch
 torch.backends.cuda.matmul.allow_tf32 = True
 torch.backends.cudnn.allow_tf32 = True
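
A self-contained sketch of the same two flags in use (the matrix sizes are arbitrary):

```python
# The flags from the diff above, followed by a matmul that can execute on
# TF32 tensor cores on Ampere or newer GPUs.
import torch

torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

a = torch.randn(1024, 1024, device="cuda")
b = torch.randn(1024, 1024, device="cuda")
c = a @ b  # runs in TF32 where hardware support exists
```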

docs/source/en/tasks/video_classification.md (+1 -1)

@@ -483,7 +483,7 @@ You can also manually replicate the results of the `pipeline` if you'd like.

 Now, pass your input to the model and return the `logits`:

-```
+```py
 >>> logits = run_inference(trained_model, sample_test_video["video"])
 ```
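
The task guide's next step maps those logits to a class name; a short sketch, assuming the fine-tuned model's config carries `id2label` (as the guide's models do):

```py
# Hedged sketch: decode the logits returned above into a label.
>>> predicted_class = logits.argmax(-1).item()
>>> print(trained_model.config.id2label[predicted_class])
```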

docs/source/fr/installation.md (+1 -1)

@@ -74,7 +74,7 @@ Pour les architectures mac M1 / ARM

 Vous devez installer les outils suivants avant d'installer TensorFLow 2.0

-```
+```bash
 brew install cmake
 brew install pkg-config
 ```

docs/source/it/perf_hardware.md (+1 -1)

@@ -63,7 +63,7 @@ Diamo quindi un'occhiata a uno degli aspetti più importanti quando si hanno pi

 Se utilizzi più GPU, il modo in cui le schede sono interconnesse può avere un enorme impatto sul tempo totale di allenamento. Se le GPU si trovano sullo stesso nodo fisico, puoi eseguire:

-```
+```bash
 nvidia-smi topo -m
 ```

docs/source/ja/chat_templating.md (+1 -1)

@@ -215,7 +215,7 @@ LLM(Language Model)はさまざまな入力形式を処理できるほどス

 If you like this one, here it is in one-liner form, ready to copy into your code:

-```
+```python
 tokenizer.chat_template = "{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}"
 ```

docs/source/ja/custom_tools.md (+1 -1)

@@ -385,7 +385,7 @@ Assistant:

 したがって、カスタム`chat`プロンプトテンプレートの例もこのフォーマットを使用することが重要です。以下のように、インスタンス化時に`chat`テンプレートを上書きできます。

-```
+```python
 template = """ [...] """

 agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template)

docs/source/ja/main_classes/deepspeed.md (+3 -3)

@@ -2202,7 +2202,7 @@ print(f"rank{rank}:\n in={text_in}\n out={text_out}")

 それを`t0.py`として保存して実行しましょう。

-```
+```bash
 $ deepspeed --num_gpus 2 t0.py
 rank0:
 in=Is this review positive or negative? Review: this is the best cast iron skillet you will ever buy

@@ -2226,13 +2226,13 @@ DeepSpeed 統合を含む PR を送信する場合は、CircleCI PR CI セット

 DeepSpeed テストを実行するには、少なくとも以下を実行してください。

-```
+```bash
 RUN_SLOW=1 pytest tests/deepspeed/test_deepspeed.py
 ```

 モデリングまたは pytorch サンプル コードのいずれかを変更した場合は、Model Zoo テストも実行します。以下はすべての DeepSpeed テストを実行します。

-```
+```bash
 RUN_SLOW=1 pytest tests/deepspeed
 ```

docs/source/ja/perf_hardware.md (+1 -1)

@@ -64,7 +64,7 @@ GPUが重要な負荷の下でどのような温度を目指すべきかを正
 複数のGPUを使用する場合、カードの相互接続方法はトータルのトレーニング時間に大きな影響を与える可能性があります。GPUが同じ物理ノードにある場合、次のように実行できます:

-```
+```bash
 nvidia-smi topo -m
 ```

docs/source/ja/perf_torch_compile.md (+1 -1)

@@ -42,7 +42,7 @@ model = AutoModelForImageClassification.from_pretrained(MODEL_ID).to("cuda")

 ### Image Classification with ViT

-```
+```python
 from PIL import Image
 import requests
 import numpy as np
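
The imports above open the guide's ViT example; a hedged end-to-end sketch of that example with `torch.compile` (both `MODEL_ID`, taken here as `google/vit-base-patch16-224`, and the image URL are assumptions, not part of this diff):

```python
# Hedged sketch completing the ViT image-classification example with torch.compile.
import torch
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoModelForImageClassification

MODEL_ID = "google/vit-base-patch16-224"  # assumed checkpoint
url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # illustrative image
image = Image.open(requests.get(url, stream=True).raw)

processor = AutoImageProcessor.from_pretrained(MODEL_ID)
model = AutoModelForImageClassification.from_pretrained(MODEL_ID).to("cuda")
model = torch.compile(model)  # compile the forward pass

inputs = processor(images=image, return_tensors="pt").to("cuda")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(-1).item()])
```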

docs/source/ja/perf_train_cpu.md (+1 -1)

@@ -36,7 +36,7 @@ IPEXのリリースはPyTorchに従っており、pipを使用してインスト
 | 1.11 | 1.11.200+cpu |
 | 1.10 | 1.10.100+cpu |

-```
+```bash
 pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
 ```

docs/source/ja/perf_train_cpu_many.md (+3 -3)

@@ -38,7 +38,7 @@ Wheelファイルは、以下のPythonバージョン用に利用可能です:
 | 1.11.0 | |||||
 | 1.10.0 ||||| |

-```
+```bash
 pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
 ```

@@ -70,13 +70,13 @@ oneccl_bindings_for_pytorchはMPIツールセットと一緒にインストー

 for Intel® oneCCL >= 1.12.0
-```
+```bash
 oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
 source $oneccl_bindings_for_pytorch_path/env/setvars.sh
 ```

 for Intel® oneCCL whose version < 1.12.0
-```
+```bash
 torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
 source $torch_ccl_path/env/setvars.sh
 ```

docs/source/ja/perf_train_gpu_many.md (+1 -1)

@@ -131,7 +131,7 @@ DPとDDPの他にも違いがありますが、この議論には関係ありま
 `NCCL_P2P_DISABLE=1`を使用して、対応するベンチマークでNVLink機能を無効にしました。

-```
+```bash

 # DP
 rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \

docs/source/ja/perf_train_gpu_one.md (+1 -1)

@@ -151,7 +151,7 @@ training_args = TrainingArguments(bf16=True, **default_args)

 アンペアハードウェアは、tf32という特別なデータ型を使用します。これは、fp32と同じ数値範囲(8ビット)を持っていますが、23ビットの精度ではなく、10ビットの精度(fp16と同じ)を持ち、合計で19ビットしか使用しません。これは通常のfp32トレーニングおよび推論コードを使用し、tf32サポートを有効にすることで、最大3倍のスループットの向上が得られる点で「魔法のよう」です。行う必要があるのは、次のコードを追加するだけです:

-```
+```python
 import torch
 torch.backends.cuda.matmul.allow_tf32 = True
 torch.backends.cudnn.allow_tf32 = True

docs/source/ja/tasks/video_classification.md (+1 -1)

@@ -490,7 +490,7 @@ def compute_metrics(eval_pred):

 次に、入力をモデルに渡し、`logits `を返します。

-```
+```py
 >>> logits = run_inference(trained_model, sample_test_video["video"])
 ```

docs/source/ko/custom_tools.md (+1 -1)

@@ -373,7 +373,7 @@ Assistant:
 따라서 사용자 정의 `chat` 프롬프트 템플릿의 예제에서도 이 형식을 사용하는 것이 중요합니다.
 다음과 같이 인스턴스화 할 때 `chat` 템플릿을 덮어쓸 수 있습니다.

-```
+```python
 template = """ [...] """

 agent = HfAgent(url_endpoint=your_endpoint, chat_prompt_template=template)

docs/source/ko/perf_hardware.md (+1 -1)

@@ -64,7 +64,7 @@ GPU가 과열될 때 정확한 적정 온도를 알기 어려우나, 아마도 +

 다중 GPU를 사용하는 경우 GPU 간의 연결 방식은 전체 훈련 시간에 큰 영향을 미칠 수 있습니다. 만약 GPU가 동일한 물리적 노드에 있을 경우, 다음과 같이 확인할 수 있습니다:

-```
+```bash
 nvidia-smi topo -m
 ```

docs/source/ko/perf_train_cpu.md (+1 -1)

@@ -36,7 +36,7 @@ IPEX 릴리스는 PyTorch를 따라갑니다. pip를 통해 설치하려면:
 | 1.11 | 1.11.200+cpu |
 | 1.10 | 1.10.100+cpu |

-```
+```bash
 pip install intel_extension_for_pytorch==<version_name> -f https://developer.intel.com/ipex-whl-stable-cpu
 ```

docs/source/ko/perf_train_cpu_many.md (+3 -3)

@@ -37,7 +37,7 @@ rendered properly in your Markdown viewer.
 | 1.11.0 | |||||
 | 1.10.0 ||||| |

-```
+```bash
 pip install oneccl_bind_pt=={pytorch_version} -f https://developer.intel.com/ipex-whl-stable-cpu
 ```
 `{pytorch_version}`은 1.13.0과 같이 PyTorch 버전을 나타냅니다.

@@ -57,13 +57,13 @@ PyTorch 1.12.1은 oneccl_bindings_for_pytorch 1.12.10 버전과 함께 사용해
 oneccl_bindings_for_pytorch는 MPI 도구 세트와 함께 설치됩니다. 사용하기 전에 환경을 소스로 지정해야 합니다.

 Intel® oneCCL 버전 1.12.0 이상인 경우
-```
+```bash
 oneccl_bindings_for_pytorch_path=$(python -c "from oneccl_bindings_for_pytorch import cwd; print(cwd)")
 source $oneccl_bindings_for_pytorch_path/env/setvars.sh
 ```

 Intel® oneCCL 버전이 1.12.0 미만인 경우
-```
+```bash
 torch_ccl_path=$(python -c "import torch; import torch_ccl; import os; print(os.path.abspath(os.path.dirname(torch_ccl.__file__)))")
 source $torch_ccl_path/env/setvars.sh
 ```

docs/source/ko/perf_train_gpu_many.md (+1 -1)

@@ -133,7 +133,7 @@ DP와 DDP 사이에는 다른 차이점이 있지만, 이 토론과는 관련이

 해당 벤치마크에서 `NCCL_P2P_DISABLE=1`을 사용하여 NVLink 기능을 비활성화했습니다.

-```
+```bash

 # DP
 rm -r /tmp/test-clm; CUDA_VISIBLE_DEVICES=0,1 \
