This repository was archived by the owner on Jul 1, 2023. It is now read-only.

Commit 3830c7a

[README] Use top-level differential operators. (#803)
Update README to use top-level differential operators. Method-style differential operators were removed in 0.8.
1 parent: 4814609

1 file changed: README.md (+6 −5)
@@ -41,17 +41,18 @@ struct Model: Layer {
 var classifier = Model()
 let optimizer = SGD(for: classifier, learningRate: 0.02)
 Context.local.learningPhase = .training
-let x: Tensor<Float> = ...
-let y: Tensor<Int32> = ...
+// Dummy data.
+let x: Tensor<Float> = Tensor(randomNormal: [100, 4])
+let y: Tensor<Int32> = Tensor(randomUniform: [100])
 ```
 
 #### Run a training loop
 
-One way to define a training epoch is to use the [`Differentiable.gradient(in:)`](https://github.com/apple/swift/blob/652523f49581a42986ef2b6b04a593ed47496122/stdlib/public/core/AutoDiff.swift#L214) method.
+One way to define a training epoch is to use the [`gradient(at:in:)`](https://www.tensorflow.org/swift/api_docs/Functions#/s:10TensorFlow8gradient2at2in13TangentVectorQzx_AA0A0Vyq_GxXEtAA14DifferentiableRzAA0aB13FloatingPointR_r0_lF) function.
 
 ```swift
 for _ in 0..<1000 {
-    let 𝛁model = classifier.gradient { classifier -> Tensor<Float> in
+    let 𝛁model = gradient(at: classifier) { classifier -> Tensor<Float> in
         let ŷ = classifier(x)
         let loss = softmaxCrossEntropy(logits: ŷ, labels: y)
         print("Loss: \(loss)")
@@ -66,7 +67,7 @@ Another way is to make use of methods on `Differentiable` or `Layer` that produc
 ```swift
 for _ in 0..<1000 {
     let (ŷ, backprop) = classifier.appliedForBackpropagation(to: x)
-    let (loss, 𝛁ŷ) = ŷ.valueWithGradient { ŷ in softmaxCrossEntropy(logits: ŷ, labels: y) }
+    let (loss, 𝛁ŷ) = valueWithGradient(at: ŷ) { ŷ in softmaxCrossEntropy(logits: ŷ, labels: y) }
     print("Model output: \(ŷ), Loss: \(loss)")
     let (𝛁model, _) = backprop(𝛁ŷ)
     optimizer.update(&classifier, along: 𝛁model)

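For readers skimming the diff, here is a small self-contained sketch of the migration described in the commit message and shown above. It is illustrative only (the tensor `w` and the closures are made up for this example), and it assumes a Swift for TensorFlow toolchain at 0.8 or later, where the method-style operators are gone and the top-level `gradient(at:in:)` and `valueWithGradient(at:in:)` functions are used instead.

```swift
import TensorFlow

// A made-up scalar-valued function of a tensor parameter, used only to
// show the top-level differential operators that replace the removed
// method-style ones (assumes Swift for TensorFlow 0.8+).
let w = Tensor<Float>([1, 2, 3])

// Top-level replacement for the removed method-style `w.gradient { ... }`.
let 𝛁w = gradient(at: w) { w in (w * w).sum() }

// Top-level replacement for the removed method-style `w.valueWithGradient { ... }`.
let (value, 𝛁wAgain) = valueWithGradient(at: w) { w in (w * w).sum() }

print(𝛁w)              // 2 * w, i.e. [2.0, 4.0, 6.0]
print(value, 𝛁wAgain)  // 14.0 and [2.0, 4.0, 6.0]
```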