docs/source/tutorials/tfim_vqe.ipynb (+23 -16)
@@ -1,5 +1,12 @@
 {
  "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "##### Copyright 2021 The TensorCircuit Authors."
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -13,8 +20,8 @@
    "source": [
     "## Overview\n",
     "\n",
-    "The main aim of this tutorial is not about the physics perspective of VQE, instead we demonstrate\n",
-    "the main ingredients of tensorcircuit by this simple VQE toy model."
+    "The main aim of this tutorial is not about the physics perspective of VQE, instead, we demonstrate\n",
+    "the main ingredients of TensorCircuit by this simple VQE toy model."
    ]
   },
   {
@@ -23,7 +30,7 @@
    "source": [
     "## Background\n",
     "\n",
-    "Baiscally, we train a parameterized quantum circuit with repetions of $e^{i\\theta} ZZ$ and $e^{i\\theta X}$ layers as $U(\\rm{\\theta})$. And the objective to be minimized is this task is $\\mathcal{L}(\\rm{\\theta})=\\langle 0^n\\vert U(\\theta)^\\dagger H U(\\theta)\\vert 0^n\\rangle$. The Hamiltonian is from TFIM as $H = \\sum_{i} Z_iZ_{i+1} -\\sum_i X_i$."
+    "Basically, we train a parameterized quantum circuit with repetitions of $e^{i\\theta ZZ}$ and $e^{i\\theta X}$ layers as $U(\\rm{\\theta})$. And the objective to be minimized in this task is $\\mathcal{L}(\\rm{\\theta})=\\langle 0^n\\vert U(\\theta)^\\dagger H U(\\theta)\\vert 0^n\\rangle$. The Hamiltonian is from TFIM as $H = \\sum_{i} Z_iZ_{i+1} -\\sum_i X_i$."
    ]
   },
   {
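For orientation while reading the later hunks, here is a minimal sketch of the `vqe_tfim` energy function this notebook is built around, assuming TensorCircuit's public `Circuit` API (`exp1`, `rx`, `expectation`); the notebook's actual cell may differ in details:

```python
import tensorcircuit as tc

def vqe_tfim(params, n, nlayers):
    # build U(theta): alternating e^{i theta ZZ} and e^{i theta X} layers on |+>^n
    c = tc.Circuit(n)
    for i in range(n):
        c.H(i)
    for j in range(nlayers):
        for i in range(n - 1):
            c.exp1(i, i + 1, theta=params[2 * j, i], unitary=tc.gates._zz_matrix)
        for i in range(n):
            c.rx(i, theta=params[2 * j + 1, i])
    # TFIM energy <0^n| U(theta)^dagger H U(theta) |0^n>, H = sum Z_i Z_{i+1} - sum X_i
    e = 0.0
    for i in range(n - 1):
        e += tc.backend.real(c.expectation((tc.gates.z(), [i]), (tc.gates.z(), [i + 1])))
    for i in range(n):
        e -= tc.backend.real(c.expectation((tc.gates.x(), [i])))
    return e
```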
@@ -55,7 +62,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "To enable automatical differentiation support, we should set the tensorcircuit backend beyond the default one \"numpy\".\n",
+    "To enable automatic differentiation support, we should set the TensorCircuit backend to one other than the default \"NumPy\".\n",
     "And we can also set the high precision complex128 for the simulation."
    ]
   },
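As a quick illustration of the setup this cell describes (a sketch, not a quote of the notebook; "tensorflow" is just one AD-capable choice, "jax" works the same way):

```python
import tensorcircuit as tc

tc.set_backend("tensorflow")  # replace the default "numpy" backend
tc.set_dtype("complex128")    # high-precision simulation
```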
@@ -121,7 +128,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## higher level API"
+    "## Higher-level API"
    ]
   },
   {
@@ -183,7 +190,7 @@
    "metadata": {},
    "source": [
     "To train the parameterized circuit, we should utilize the gradient information $\\frac{\\partial \\mathcal{L}}{\\partial \\rm{\\theta}}$ with gradient descent.\n",
-    "We also use ``jit`` to wrap the value and grad function for a substantial speed up. Note how (1, 2) args of ``vqe_tfim`` is labelled as static since they are just integers for qubit number and layer number instead of tensors."
+    "We also use ``jit`` to wrap the value and grad function for a substantial speedup. Note how args (1, 2) of ``vqe_tfim`` are labeled as static since they are just integers for qubit number and layer number instead of tensors."
    ]
   },
   {
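The wrapping described here typically looks like the following sketch (building on the `vqe_tfim` sketch above; `static_argnums` follows the JAX-style convention exposed through `tc.backend.jit`):

```python
# value-and-grad of the energy w.r.t. params, jitted once and reused;
# args 1 and 2 (n, nlayers) are plain Python ints, hence marked static
vqe_tfim_vag = tc.backend.jit(
    tc.backend.value_and_grad(vqe_tfim, argnums=0),
    static_argnums=(1, 2),
)
energy, grads = vqe_tfim_vag(params, n, nlayers)
```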
@@ -252,9 +259,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### batched VQE example\n",
+    "### Batched VQE Example\n",
     "\n",
-    "We can even run a batched version of VQE optimization, namely, we simutaneously optimize parameterized circuit for different random initializations, so that we can try best to avoid local minimum be locate the best of the converged energies."
+    "We can even run a batched version of VQE optimization, namely, we simultaneously optimize parameterized circuits for different random initializations, so that we can try our best to avoid local minima and locate the best of the converged energies."
    ]
   },
   {
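A hedged sketch of how such batching can be done with TensorCircuit's vectorized value-and-grad (`tc.backend.vvag`), assuming the active backend supports vectorization; `batch`, `n`, and `nlayers` are illustrative values:

```python
import numpy as np

batch, n, nlayers = 16, 6, 3
# independent random initializations stacked along a leading batch axis
batched_params = tc.backend.convert_to_tensor(
    np.random.normal(0, 0.1, size=[batch, 2 * nlayers, n])
)
vqe_tfim_vvag = tc.backend.jit(
    tc.backend.vvag(vqe_tfim, argnums=0, vectorized_argnums=0),
    static_argnums=(1, 2),
)
energies, grads = vqe_tfim_vvag(batched_params, n, nlayers)
# after optimizing, keep the best converged energy across the batch
best_energy = float(tc.backend.numpy(energies).min())
```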
@@ -357,12 +364,12 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Different backends\n",
+    "### Different Backends\n",
     "\n",
     "We can change the backends at runtime without even changing one line of the code!\n",
     "\n",
-    "However, in normal user cases, we strongly recommend the users stick to one backend in one jupyter or python scripts.\n",
-    "One can enjoy the facility provided by other backends by changing the ``set_backend`` line and running the same script again. This approach is much safer than using multiple backends in the same file unless you know the lowerlevel details of tensorcircuit enough."
+    "However, in normal use cases, we strongly recommend the users stick to one backend in one Jupyter or Python script.\n",
+    "One can enjoy the facility provided by other backends by changing the ``set_backend`` line and running the same script again. This approach is much safer than using multiple backends in the same file unless you know the lower-level details of TensorCircuit well enough."
    ]
   },
   {
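In practice the runtime switch is a single line; a minimal sketch (per the recommendation above, prefer one backend per script in real use):

```python
import tensorcircuit as tc

for bk in ["numpy", "tensorflow", "jax"]:
    K = tc.set_backend(bk)   # set_backend returns the backend object
    print(bk, K.ones([2]))   # the same call now runs on a different engine
```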
@@ -469,11 +476,11 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## lower level API\n",
+    "## Lower-level API\n",
     "\n",
-    "The higherlevel API under the namespace of ``tensorcircuit`` provides a unified framework to do linear algebra and automatic differentiation which is backend agnostic.\n",
+    "The higher-level API under the namespace of ``tensorcircuit`` provides a unified framework to do linear algebra and automatic differentiation which is backend agnostic.\n",
     "\n",
-    "One may also use the related APIs (ops, ADrelated, jitrelated) directly provided by tensorflow or jax, as long as one is ok to stick with one fixed backend. See tensorflow backend example below.\n"
+    "One may also use the related APIs (ops, AD-related, jit-related) directly provided by TensorFlow or JAX, as long as one is OK with sticking to one fixed backend. See the TensorFlow backend example below.\n"
docs/source/tutorials/tfim_vqe_cn.ipynb (+26 -28)
@@ -11,26 +11,26 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Overview\n",
+    "## 概述\n",
     "\n",
-    "The main aim of this tutorial is not about the physics perspective of VQE, instead we demonstrate\n",
-    "the main ingredients of tensorcircuit by this simple VQE toy model. "
+    "本教程的主要目的不是关于 VQE 物理层面的讨论,而是我们通过演示\n",
+    "这个简单的 VQE 玩具模型来了解张量电路的主要技术组件和用法。"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Background\n",
+    "## 背景\n",
     "\n",
-    "Baiscally, we train a parameterized quantum circuit with repetions of $e^{i\\theta} ZZ$ and $e^{i\\theta X}$ layers as $U(\\rm{\\theta})$. And the objective to be minimized is this task is $\\mathcal{L}(\\rm{\\theta})=\\langle 0^n\\vert U(\\theta)^\\dagger H U(\\theta)\\vert 0^n\\rangle$. The Hamiltonian is from TFIM as $H = \\sum_{i} Z_iZ_{i+1} -\\sum_i X_i$."
" ) # We assume the input param with dtype float64\n",
169
+
" ) # 我们假设输入参数的 dtype 为 float64\n",
170
170
" for i in range(n):\n",
171
171
" c.H(i)\n",
172
172
" for j in range(nlayers):\n",
@@ -182,8 +182,8 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "To train the parameterized circuit, we should utilize the gradient information $\\frac{\\partial \\mathcal{L}}{\\partial \\rm{\\theta}}$ with gradient descent.\n",
-    "We also use ``jit`` to wrap the value and grad function for a substantial speed up. Note how (1, 2) args of ``vqe_tfim`` is labelled as static since they are just integers for qubit number and layer number instead of tensors."
+    "我们还使用 ``jit`` 来包装 value 和 grad 函数以显著加快速度。 注意 ``vqe_tfim`` 的 (1, 2) args 是如何被标记为静态的,因为它们只是量子比特数和层数的整数,而不是张量。"
    ]
   },
   {
@@ -252,9 +252,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### batched VQE example\n",
+    "### 批处理 VQE 示例\n",
     "\n",
-    "We can even run a batched version of VQE optimization, namely, we simutaneously optimize parameterized circuit for different random initializations, so that we can try best to avoid local minimum be locate the best of the converged energies."
     "We can change the backends at runtime without even changing one line of the code!\n",
     "\n",
-    "However, in normal user cases, we strongly recommend the users stick to one backend in one jupyter or python scripts.\n",
-    "One can enjoy the facility provided by other backends by changing the ``set_backend`` line and running the same script again. This approach is much safer than using multiple backends in the same file unless you know the lower level details of tensorcircuit enough."
"The higher level API under the namespace of ``tensorcircuit`` provides a unified framework to do linear algebra and automatic differentiation which is backend agnostic.\n",
471
+
"### 更低层的 API\n",
475
472
"\n",
476
-
"One may also use the related APIs (ops, AD related, jit related) directly provided by tensorflow or jax, as long as one is ok to stick with one fixed backend. See tensorflow backend example below.\n"
473
+
"`TensorCircuit` 命名空间下的更高级别 API 提供了一个统一的框架来进行线性代数和自动微分,这与后端无关。\n",