Hybrid quantum-classical network written in PyTorch does not get the correct gradients using automatic differentiation #230
zivchen9993 asked this question in Q&A (unanswered)
Hi all,
I'm trying to implement a hybrid quantum-classical network and train it on a GPU. It is written in PyTorch and TensorCircuit, and I use automatic differentiation (AD) to compute the gradients of the cost function.
I test it on a simple toy problem: learning the sine function on [0, pi] by solving the ordinary differential equation (ODE) du/dt = cos(t) with initial condition u(0) = 0. The training loss is the ODE residual, du/dt - cos(t), plus an initial-condition loss enforcing u(0) = 0.

The QLayer:
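(A minimal sketch of such a layer, assuming TensorCircuit's torch_interface with a hardware-efficient ansatz of rx input encodings, ry rotations, and CNOT entanglers; the backend choice and the values n_qubits = 4, n_layers = 2 are placeholders, not the actual code from the post:)

```python
import tensorcircuit as tc
import torch

# Assumed: an AD-capable TensorCircuit backend behind the torch interface.
K = tc.set_backend("tensorflow")

n_qubits, n_layers = 4, 2  # placeholder sizes

def qpred(x, weights):
    # Encode the classical input as single-qubit rx rotations, then apply
    # an assumed ansatz of CNOT entanglers and trainable ry rotations.
    c = tc.Circuit(n_qubits)
    for i in range(n_qubits):
        c.rx(i, theta=x[i])
    for j in range(n_layers):
        for i in range(n_qubits - 1):
            c.cnot(i, i + 1)
        for i in range(n_qubits):
            c.ry(i, theta=weights[j, i])
    # Output: the <Z_i> expectation value of every qubit.
    return K.stack([K.real(c.expectation_ps(z=[i])) for i in range(n_qubits)])

# Wrap the circuit so PyTorch autograd can differentiate through it.
qpred_torch = tc.interfaces.torch_interface(qpred, jit=True)

class QLayer(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.weights = torch.nn.Parameter(0.1 * torch.randn(n_layers, n_qubits))

    def forward(self, x):
        # x: a single sample of shape (n_qubits,).
        return qpred_torch(x, self.weights)
```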
The network:
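(Continuing the sketch above: a hybrid net, here called HybridNet, that encodes the scalar time t into n_qubits features, passes them through the QLayer, and decodes back to a scalar u(t); the actual architecture in the post may differ:)

```python
class HybridNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.encode = torch.nn.Linear(1, n_qubits)   # t -> qubit angles
        self.qlayer = QLayer()
        self.decode = torch.nn.Linear(n_qubits, 1)   # <Z> values -> u(t)

    def forward(self, t):
        # t: (batch, 1). The quantum layer is applied per sample here;
        # vectorizing it over the batch is a separate concern.
        h = torch.tanh(self.encode(t))
        q = torch.stack([self.qlayer(h_i) for h_i in h])
        return self.decode(q)
```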
The setting and optimization loop:
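(A sketch of the PINN-style loop the loss above implies; the collocation points, optimizer, and hyperparameters are guesses:)

```python
import math

model = HybridNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

# Assumed: 64 uniformly spaced collocation points on [0, pi].
t = torch.linspace(0.0, math.pi, 64).reshape(-1, 1).requires_grad_(True)
t0 = torch.zeros(1, 1)  # initial-condition point

for step in range(2000):
    opt.zero_grad()
    u = model(t)
    # du/dt via autograd; create_graph=True keeps the derivative itself
    # differentiable so the residual loss can be backpropagated.
    du_dt = torch.autograd.grad(u, t, grad_outputs=torch.ones_like(u),
                                create_graph=True)[0]
    loss = ((du_dt - torch.cos(t)) ** 2).mean() + (model(t0) ** 2).mean()
    loss.backward()
    opt.step()
```

Note that backpropagating this loss requires differentiating du_dt again with respect to the network parameters (hence create_graph=True), so a second-order derivative must flow through the quantum layer as well.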
The model does not converge to the correct answer, even though the loss keeps getting smaller. I suspect the gradients are being computed incorrectly. How can I solve this? I need the GPU capabilities to scale to larger networks and circuits.
The expected output (the one I get if I swap the QLayer for nn.Linear(n_qubits, n_qubits)) and the output I actually receive are shown in the attached plots; in both cases everything else stays the same, only that layer changes.
Thanks in advance!