This repository was archived by the owner on Mar 30, 2022. It is now read-only.

Commit a57982b

rxwei authored and dan-zheng committed

Properly indent nested text and code blocks. (#138)

1 parent 8690171 commit a57982b
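For background on the change: in CommonMark, text and fenced code blocks nested under an ordered-list item must be indented to the item's content column (three spaces for a `1. ` marker). An unindented fence after a list item is parsed as a top-level block that ends the list, so the `+` lines in the diff below add leading indentation to keep the nested paragraphs and code blocks inside their list items.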

1 file changed: +20 −20 lines changed

docs/ParameterOptimization.md (+20 −20)
@@ -35,30 +35,30 @@ Here are some additional rules about models and parameters:
 
 1. Not all properties of a model are required to be parameters: a model may have properties which aren't meant to be trainable (e.g. configuration flags or state-caching variables). This requires a clear way to distinguish parameters from other properties.
 
-```swift
-struct MyMLModel {
-    // These are parameters.
-    var weight: Tensor<Float>
-    var bias: Tensor<Float>
-
-    // Need to distinguish from non-parameter stored properties.
-    let useBias: Bool
-    var previousWeight: Tensor<Float>
-}
-```
+   ```swift
+   struct MyMLModel {
+       // These are parameters.
+       var weight: Tensor<Float>
+       var bias: Tensor<Float>
+
+       // Need to distinguish from non-parameter stored properties.
+       let useBias: Bool
+       var previousWeight: Tensor<Float>
+   }
+   ```
 
 2. There must exist some mechanism to update all parameters of a model given their gradients.
 
-The ability to jointly iterate over parameters and gradients is crucial for writing simple, generic code that works with all models. Without this ability to perform "generic parameter update", users must duplicate code for each parameter, with no potential for generalization:
+   The ability to jointly iterate over parameters and gradients is crucial for writing simple, generic code that works with all models. Without this ability to perform "generic parameter update", users must duplicate code for each parameter, with no potential for generalization:
 
-```swift
-// w1, w2, b1, b2: Tensor<Float>
-w1 -= learningRate * dw1
-w2 -= learningRate * dw2
-b1 -= learningRate * db1
-b2 -= learningRate * db2
-...
-```
+   ```swift
+   // w1, w2, b1, b2: Tensor<Float>
+   w1 -= learningRate * dw1
+   w2 -= learningRate * dw2
+   b1 -= learningRate * db1
+   b2 -= learningRate * db2
+   ...
+   ```
 
 ### Existing approaches
 

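The second snippet in the diff motivates a "generic parameter update". As a rough sketch of what jointly iterating over parameters and gradients could look like, here is a minimal, self-contained Swift example; the `Vector` stand-in for `Tensor<Float>`, the `parameterKeyPaths` list, and the `update(withGradients:learningRate:)` method are illustrative assumptions, not APIs from the document.

```swift
// Illustrative sketch only: `Vector` stands in for `Tensor<Float>` so the
// example runs without dependencies, and `parameterKeyPaths` is a
// hypothetical way to mark which stored properties are parameters.
struct Vector {
    var scalars: [Float]

    // Elementwise in-place subtraction, assuming equal lengths.
    static func -= (lhs: inout Vector, rhs: Vector) {
        for i in lhs.scalars.indices { lhs.scalars[i] -= rhs.scalars[i] }
    }

    // Scalar multiplication.
    static func * (scale: Float, vector: Vector) -> Vector {
        Vector(scalars: vector.scalars.map { scale * $0 })
    }
}

struct MyMLModel {
    // Parameters.
    var weight: Vector
    var bias: Vector

    // Non-parameter stored properties, excluded from updates.
    let useBias: Bool = true
    var previousWeight: Vector = Vector(scalars: [])

    // Hypothetical: explicitly enumerated key paths let generic code visit
    // every parameter without naming each one at the call site.
    static let parameterKeyPaths: [WritableKeyPath<MyMLModel, Vector>] = [
        \MyMLModel.weight, \MyMLModel.bias,
    ]

    // One loop replaces the per-parameter `w -= learningRate * dw` lines;
    // gradients are expected in the same order as `parameterKeyPaths`.
    mutating func update(withGradients gradients: [Vector], learningRate: Float) {
        for (keyPath, gradient) in zip(MyMLModel.parameterKeyPaths, gradients) {
            self[keyPath: keyPath] -= learningRate * gradient
        }
    }
}

// Usage: one generic call updates weight and bias together.
var model = MyMLModel(weight: Vector(scalars: [1, 2]), bias: Vector(scalars: [0]))
let gradients = [Vector(scalars: [0.1, 0.2]), Vector(scalars: [0.05])]
model.update(withGradients: gradients, learningRate: 0.01)
```

Pairing parameters with gradients through a shared ordering is only one possible design among those the document goes on to discuss.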