
Commit 4be5f6d

Minor fixes.
1 parent: 02e9e14

2 files changed: +2 -3 lines


README.md (+1 -1)

```diff
@@ -139,7 +139,7 @@ out the following documentation.
 - [Automatic Differentiation Whitepaper](docs/AutomaticDifferentiation.md)
 - [Automatic Differentiation Manifesto](https://gist.github.com/rxwei/30ba75ce092ab3b0dce4bde1fc2c9f1d)
 - [Dynamic Property Iteration using Key Paths](docs/DynamicPropertyIteration.md)
-- [Hierarchical parameter iteration and optimization](docs/ParameterOptimization.md)
+- [Hierarchical Parameter Iteration and Optimization](docs/ParameterOptimization.md)
 - [Python Interoperability](docs/PythonInteroperability.md)
 - [Graph Program Extraction](docs/GraphProgramExtraction.md)
```

docs/ParameterOptimization.md (+1 -2)

````diff
@@ -103,10 +103,9 @@ for (inout θ, dθ) in zip(parameters, gradients) {
 We don't want to actually lower the for-loop or zip operation to TensorFlow (lowering wouldn't be straightforward, and the lowered representation wouldn't be efficient). Instead, we want to fully unroll the loop into individual straight-line statements:
 
 ```swift
-// w1, w2: Tensor<Float>
+// w1, w2, b1, b2: Tensor<Float>
 w1 -= learningRate * dw1
 w2 -= learningRate * dw2
-// b1, b2: Float
 b1 -= learningRate * db1
 b2 -= learningRate * db2
 ```
````
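For context, the comment this commit fixes sits in the document's fully unrolled optimizer example: after the change, all four parameters (`w1`, `w2`, `b1`, `b2`) are `Tensor<Float>`, so a single annotation covers them. Below is a minimal, self-contained sketch of that unrolled form. The tiny `Tensor` struct is a hypothetical stand-in for TensorFlow's real `Tensor<Float>` type (not its actual API), included only so the snippet compiles on its own.

```swift
// Hypothetical stand-in for TensorFlow's `Tensor<Float>`: just enough
// structure to make the unrolled updates below compile and run.
struct Tensor<Scalar: Numeric> {
    var scalars: [Scalar]

    // Elementwise in-place subtraction, mirroring `-=` on real tensors.
    static func -= (lhs: inout Tensor, rhs: Tensor) {
        lhs.scalars = zip(lhs.scalars, rhs.scalars).map(-)
    }

    // Scalar-tensor multiplication, mirroring `learningRate * gradient`.
    static func * (lhs: Scalar, rhs: Tensor) -> Tensor {
        Tensor(scalars: rhs.scalars.map { lhs * $0 })
    }
}

let learningRate: Float = 0.01

// After this commit, all four parameters are `Tensor<Float>`.
var w1 = Tensor<Float>(scalars: [1, 2])
var w2 = Tensor<Float>(scalars: [3, 4])
var b1 = Tensor<Float>(scalars: [0])
var b2 = Tensor<Float>(scalars: [0])
let dw1 = Tensor<Float>(scalars: [0.1, 0.2])
let dw2 = Tensor<Float>(scalars: [0.3, 0.4])
let db1 = Tensor<Float>(scalars: [0.05])
let db2 = Tensor<Float>(scalars: [0.05])

// The fully unrolled, straight-line form from the document: one update
// statement per parameter, with no loop left to lower to TensorFlow.
w1 -= learningRate * dw1
w2 -= learningRate * dw2
b1 -= learningRate * db1
b2 -= learningRate * db2

print(w1.scalars)  // ≈ [0.999, 1.998]
```

The hunk header shows the rolled-up form this example replaces, in the document's proposed (not yet valid) syntax: `for (inout θ, dθ) in zip(parameters, gradients) { ... }`, where the body is presumably a single `θ -= learningRate * dθ` step. Unrolling it yields exactly one straight-line statement per parameter, which is the shape the prose says can be lowered efficiently.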
