
Commit 75cf4e1

Merge branch 'master' of github.com:SexyCarrots/tensorcircuit-dev into pr31
2 parents 5e10f41 + 6ceb310 commit 75cf4e1

4 files changed: +602 -54 lines

docs/source/tutorials/tfim_vqe.ipynb

+23 -16
@@ -1,5 +1,12 @@
 {
  "cells": [
+  {
+   "cell_type": "markdown",
+   "metadata": {},
+   "source": [
+    "##### Copyright 2021 The TensorCircuit Authors."
+   ]
+  },
   {
    "cell_type": "markdown",
    "metadata": {},
@@ -13,8 +20,8 @@
    "source": [
     "## Overview\n",
     "\n",
-    "The main aim of this tutorial is not about the physics perspective of VQE, instead we demonstrate\n",
-    "the main ingredients of tensorcircuit by this simple VQE toy model. "
+    "The main aim of this tutorial is not about the physics perspective of VQE; instead, we demonstrate\n",
+    "the main ingredients of TensorCircuit by this simple VQE toy model."
    ]
   },
   {
@@ -23,7 +30,7 @@
    "source": [
     "## Background\n",
     "\n",
-    "Baiscally, we train a parameterized quantum circuit with repetions of $e^{i\\theta} ZZ$ and $e^{i\\theta X}$ layers as $U(\\rm{\\theta})$. And the objective to be minimized is this task is $\\mathcal{L}(\\rm{\\theta})=\\langle 0^n\\vert U(\\theta)^\\dagger H U(\\theta)\\vert 0^n\\rangle$. The Hamiltonian is from TFIM as $H = \\sum_{i} Z_iZ_{i+1} -\\sum_i X_i$."
+    "Basically, we train a parameterized quantum circuit with repetitions of $e^{i\\theta ZZ}$ and $e^{i\\theta X}$ layers as $U(\\rm{\\theta})$. And the objective to be minimized in this task is $\\mathcal{L}(\\rm{\\theta})=\\langle 0^n\\vert U(\\theta)^\\dagger H U(\\theta)\\vert 0^n\\rangle$. The Hamiltonian is from TFIM as $H = \\sum_{i} Z_iZ_{i+1} -\\sum_i X_i$."
    ]
   },
   {
@@ -55,7 +62,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "To enable automatical differentiation support, we should set the tensorcircuit backend beyond the default one \"numpy\".\n",
+    "To enable automatic differentiation support, we should set the TensorCircuit backend beyond the default one \"NumPy\".\n",
     "And we can also set the high precision complex128 for the simulation."
    ]
   },
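For reference, the setup this cell describes comes down to two calls. A minimal sketch (binding the returned backend to K is our own convention, not part of this commit):

    import tensorcircuit as tc

    # Pick an AD-capable backend; "tensorflow" or "jax" both work here.
    K = tc.set_backend("tensorflow")
    # Run the simulation with high-precision complex numbers.
    tc.set_dtype("complex128")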
@@ -121,7 +128,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## higher level API"
+    "## Higher-level API"
    ]
   },
   {
@@ -183,7 +190,7 @@
    "metadata": {},
    "source": [
     "To train the parameterized circuit, we should utilize the gradient information $\\frac{\\partial \\mathcal{L}}{\\partial \\rm{\\theta}}$ with gradient descent.\n",
-    "We also use ``jit`` to wrap the value and grad function for a substantial speed up. Note how (1, 2) args of ``vqe_tfim`` is labelled as static since they are just integers for qubit number and layer number instead of tensors."
+    "We also use ``jit`` to wrap the value and grad function for a substantial speedup. Note how (1, 2) args of ``vqe_tfim`` are labeled as static since they are just integers for qubit number and layer number instead of tensors."
    ]
   },
   {
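The wrapping this cell describes looks roughly like the sketch below, assuming the tutorial's vqe_tfim(param, n, nlayers) signature:

    # n and nlayers (positional args 1 and 2) are plain Python ints rather
    # than tensors, so they are marked static for the jit compiler.
    vqe_tfim_vag = tc.backend.jit(
        tc.backend.value_and_grad(vqe_tfim, argnums=0),
        static_argnums=(1, 2),
    )
    energy, grads = vqe_tfim_vag(param, n, nlayers)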
@@ -252,9 +259,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### batched VQE example\n",
+    "### Batched VQE Example\n",
     "\n",
-    "We can even run a batched version of VQE optimization, namely, we simutaneously optimize parameterized circuit for different random initializations, so that we can try best to avoid local minimum be locate the best of the converged energies."
+    "We can even run a batched version of VQE optimization, namely, we simultaneously optimize parameterized circuits for different random initializations, so that we can try our best to avoid local minima and locate the best of the converged energies."
    ]
   },
   {
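A hedged sketch of the batched run: TensorCircuit's vectorized value-and-grad (tc.backend.vvag) maps the loss over a leading batch axis of the parameters, so several random initializations are optimized in parallel (ntrials and the parameter shape are our assumptions):

    import numpy as np

    vqe_tfim_vvag = tc.backend.jit(
        tc.backend.vvag(vqe_tfim, argnums=0, vectorized_argnums=0),
        static_argnums=(1, 2),
    )
    # One random initialization per batch entry; shape [nlayers * 2, n] assumed.
    batched_param = tc.backend.convert_to_tensor(
        np.random.normal(0, 0.1, size=[ntrials, nlayers * 2, n])
    )
    energies, grads = vqe_tfim_vvag(batched_param, n, nlayers)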
@@ -357,12 +364,12 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Different backends\n",
+    "### Different Backends\n",
     "\n",
     "We can change the backends at runtime without even changing one line of the code!\n",
     "\n",
-    "However, in normal user cases, we strongly recommend the users stick to one backend in one jupyter or python scripts.\n",
-    "One can enjoy the facility provided by other backends by changing the ``set_backend`` line and running the same script again. This approach is much safer than using multiple backends in the same file unless you know the lower level details of tensorcircuit enough."
+    "However, in normal user cases, we strongly recommend the users stick to one backend in one Jupyter or Python script.\n",
+    "One can enjoy the facility provided by other backends by changing the ``set_backend`` line and running the same script again. This approach is much safer than using multiple backends in the same file unless you know the lower-level details of TensorCircuit well enough."
    ]
   },
   {
@@ -469,11 +476,11 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## lower level API\n",
+    "## Lower-level API\n",
     "\n",
-    "The higher level API under the namespace of ``tensorcircuit`` provides a unified framework to do linear algebra and automatic differentiation which is backend agnostic.\n",
+    "The higher-level API under the ``tensorcircuit`` namespace provides a unified framework to do linear algebra and automatic differentiation which is backend agnostic.\n",
     "\n",
-    "One may also use the related APIs (ops, AD related, jit related) directly provided by tensorflow or jax, as long as one is ok to stick with one fixed backend. See tensorflow backend example below.\n"
+    "One may also use the related APIs (ops, AD-related, jit-related) directly provided by TensorFlow or JAX, as long as one is OK to stick with one fixed backend. See the TensorFlow backend example below.\n"
    ]
   },
   {
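The TensorFlow-only route mentioned here looks roughly like this sketch, assuming the tutorial's vqe_tfim energy function and the tensorflow backend:

    import tensorflow as tf

    @tf.function
    def vqe_tfim_vag_tf(param, n, nlayers):
        # Use TensorFlow's own AD and jit machinery directly.
        with tf.GradientTape() as tape:
            tape.watch(param)
            energy = vqe_tfim(param, n, nlayers)
        return energy, tape.gradient(energy, param)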
@@ -582,9 +589,9 @@
    "name": "python",
    "nbconvert_exporter": "python",
    "pygments_lexer": "ipython3",
-    "version": "3.8.0"
+    "version": "3.7.0"
   }
  },
  "nbformat": 4,
  "nbformat_minor": 2
-}
+}

docs/source/tutorials/tfim_vqe_cn.ipynb

+26 -28
@@ -11,26 +11,26 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Overview\n",
+    "## 概述\n",
     "\n",
-    "The main aim of this tutorial is not about the physics perspective of VQE, instead we demonstrate\n",
-    "the main ingredients of tensorcircuit by this simple VQE toy model. "
+    "本教程的主要目的不是关于 VQE 物理层面的讨论,而是我们通过演示\n",
+    "这个简单的 VQE 玩具模型来了解张量电路的主要技术组件和用法。"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Background\n",
+    "## 背景\n",
     "\n",
-    "Baiscally, we train a parameterized quantum circuit with repetions of $e^{i\\theta} ZZ$ and $e^{i\\theta X}$ layers as $U(\\rm{\\theta})$. And the objective to be minimized is this task is $\\mathcal{L}(\\rm{\\theta})=\\langle 0^n\\vert U(\\theta)^\\dagger H U(\\theta)\\vert 0^n\\rangle$. The Hamiltonian is from TFIM as $H = \\sum_{i} Z_iZ_{i+1} -\\sum_i X_i$."
+    "基本上,我们训练一个参数化的量子电路,其线路结构为重复的 $e^{i\\theta ZZ}$ 和 $e^{i\\theta X}$ 层的 $U(\\rm{\\theta})$。而这个任务中要最小化的目标是 $\\mathcal{L}(\\rm{\\theta})=\\langle 0^n\\vert U(\\theta)^\\dagger HU(\\theta)\\vert 0^n \\rangle$。哈密顿量来自 TFIM,为 $H = \\sum_{i} Z_iZ_{i+1} -\\sum_i X_i$。"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Setup"
+    "## 设置"
    ]
   },
   {
@@ -55,8 +55,8 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "To enable automatical differentiation support, we should set the tensorcircuit backend beyond the default one \"numpy\".\n",
-    "And we can also set the high precision complex128 for the simulation."
+    "为了启用自动微分支持,我们应该将 TensorCircuit 设置为非 “numpy” 后端。\n",
+    "而且我们还可以设置高精度 complex128 进行模拟。"
    ]
   },
   {
@@ -112,7 +112,7 @@
    }
   ],
   "source": [
-    "# zz gate matrix to be utilized\n",
+    "# 要使用的 zz 门矩阵\n",
     "zz = np.kron(tc.gates._z_matrix, tc.gates._z_matrix)\n",
     "print(zz)"
    ]
@@ -121,14 +121,14 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## higher level API"
+    "## 更高层的 API"
    ]
   },
   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "We first design the Hamiltonian energy expectation function with the input as quantum circuit."
+    "我们首先设计了以量子电路为输入的哈密顿能量期望函数。"
    ]
   },
   {
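The energy expectation function mentioned here can be sketched as follows; the function name, couplings, and the periodic boundary are our assumptions, while c.expectation is the standard tc.Circuit API:

    def tfim_energy(c, j=1.0, h=-1.0):
        # <H> = j * sum_i <Z_i Z_{i+1}> + h * sum_i <X_i>
        e = 0.0
        n = c._nqubits
        for i in range(n):
            e += h * c.expectation((tc.gates.x(), [i]))
        for i in range(n):
            e += j * c.expectation((tc.gates.z(), [i]), (tc.gates.z(), [(i + 1) % n]))
        return tc.backend.real(e)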
@@ -153,7 +153,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "Now we make the quantum function with $\\rm{\\theta}$ as input and energy expectation $\\mathcal{L}$ as output."
+    "现在我们以 $\\rm{\\theta}$ 作为输入;并将能量期望 $\\mathcal{L}$ 作为输出来制作量子函数。"
    ]
   },
   {
@@ -166,7 +166,7 @@
     "    c = tc.Circuit(n)\n",
     "    paramc = tc.backend.cast(\n",
     "        param, tc.dtypestr\n",
-    "    )  # We assume the input param with dtype float64\n",
+    "    )  # 我们假设输入参数的 dtype 为 float64\n",
     "    for i in range(n):\n",
     "        c.H(i)\n",
     "    for j in range(nlayers):\n",
@@ -182,8 +182,8 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "To train the parameterized circuit, we should utilize the gradient information $\\frac{\\partial \\mathcal{L}}{\\partial \\rm{\\theta}}$ with gradient descent.\n",
-    "We also use ``jit`` to wrap the value and grad function for a substantial speed up. Note how (1, 2) args of ``vqe_tfim`` is labelled as static since they are just integers for qubit number and layer number instead of tensors."
+    "为了训练参数化电路,我们应该通过梯度下降来利用梯度信息 $\\frac{\\partial \\mathcal{L}}{\\partial \\rm{\\theta}}$。\n",
+    "我们还使用 ``jit`` 来包装 value 和 grad 函数以显著加快速度。注意 ``vqe_tfim`` 的 (1, 2) args 是如何被标记为静态的,因为它们只是量子比特数和层数的整数,而不是张量。"
    ]
   },
   {
@@ -252,9 +252,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### batched VQE example\n",
+    "### 批处理 VQE 示例\n",
     "\n",
-    "We can even run a batched version of VQE optimization, namely, we simutaneously optimize parameterized circuit for different random initializations, so that we can try best to avoid local minimum be locate the best of the converged energies."
+    "我们甚至可以运行批量版本的 VQE 优化,即我们针对不同的随机初始化同时优化参数化电路,这样我们就可以尽量避免局部最小值,从而找到收敛能量的最佳值。"
    ]
   },
   {
@@ -357,12 +357,11 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "### Different backends\n",
+    "### 不同的后端\n",
     "\n",
-    "We can change the backends at runtime without even changing one line of the code!\n",
-    "\n",
-    "However, in normal user cases, we strongly recommend the users stick to one backend in one jupyter or python scripts.\n",
-    "One can enjoy the facility provided by other backends by changing the ``set_backend`` line and running the same script again. This approach is much safer than using multiple backends in the same file unless you know the lower level details of tensorcircuit enough."
+    "我们可以在运行时更改后端,甚至无需更改一行代码!\n",
+    "但是,在普通用户情况下,我们强烈建议用户在一个 jupyter 或 python 脚本中坚持使用一个后端。\n",
+    "通过更改 ``set_backend`` 行并再次运行相同的脚本,可以享受其他后端提供的便利。这种方法比在同一个文件中使用多个后端更安全,除非你足够了解 TensorCircuit 的底层细节。"
    ]
   },
   {
@@ -371,7 +370,7 @@
    "metadata": {},
    "outputs": [],
    "source": [
-    "tc.set_backend(\"jax\")  # change to jax backend"
+    "tc.set_backend(\"jax\")  # 更改为 jax 后端"
    ]
   },
   {
@@ -469,11 +468,10 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## lower level API\n",
-    "\n",
-    "The higher level API under the namespace of ``tensorcircuit`` provides a unified framework to do linear algebra and automatic differentiation which is backend agnostic.\n",
+    "### 更低层的 API\n",
     "\n",
-    "One may also use the related APIs (ops, AD related, jit related) directly provided by tensorflow or jax, as long as one is ok to stick with one fixed backend. See tensorflow backend example below.\n"
+    "``tensorcircuit`` 命名空间下的更高级别 API 提供了一个统一的框架来进行线性代数和自动微分,这与后端无关。\n",
+    "也可以使用 TensorFlow 或 JAX 直接提供的相关 API(ops、自动微分相关、即时编译相关),只要坚持一个固定后端即可。请参阅下面的 TensorFlow 后端示例。"
    ]
   },
   {
@@ -587,4 +585,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 2
-}
+}

docs/source/tutorials/tfim_vqe_diffreph.ipynb

+10 -10
@@ -5,7 +5,7 @@
    "id": "281f452f",
    "metadata": {},
    "source": [
-    "# VQE on 1D TFIM with different Hamiltonian representation"
+    "# VQE on 1D TFIM with Different Hamiltonian Representation"
    ]
   },
   {
@@ -35,7 +35,7 @@
     "$$\n",
     "H = \\sum_{i} \\sigma_{i}^{x} \\sigma_{i+1}^{x} - \\sum_{i} \\sigma_{i}^{z},\n",
     "$$\n",
-    "where $\\sigma_{i}^{x,z}$ are Pauli matrixes of the $i$-th qubit."
+    "where $\\sigma_{i}^{x,z}$ are Pauli matrices of the $i$-th qubit."
    ]
   },
   {
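For small n this Hamiltonian can be cross-checked against a brute-force dense matrix; a sketch in plain NumPy (open boundary and all names are our assumptions):

    import numpy as np

    X = np.array([[0.0, 1.0], [1.0, 0.0]])
    Z = np.array([[1.0, 0.0], [0.0, -1.0]])
    I2 = np.eye(2)

    def kron_chain(ops):
        # Tensor a list of single-qubit operators into one 2^n x 2^n matrix.
        m = ops[0]
        for o in ops[1:]:
            m = np.kron(m, o)
        return m

    def tfim_dense(n):
        h = np.zeros((2**n, 2**n))
        for i in range(n - 1):  # sum_i sigma^x_i sigma^x_{i+1}
            h += kron_chain([X if k in (i, i + 1) else I2 for k in range(n)])
        for i in range(n):  # - sum_i sigma^z_i
            h -= kron_chain([Z if k == i else I2 for k in range(n)])
        return h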
@@ -92,7 +92,7 @@
    "id": "8a8b70bb-a312-4934-813a-035547ca8090",
    "metadata": {},
    "source": [
-    "## Parameterized quantum circuit"
+    "## Parameterized Quantum Circuits"
    ]
   },
   {
@@ -117,7 +117,7 @@
    "id": "99343a0a",
    "metadata": {},
    "source": [
-    "## Pauli-string operators"
+    "## Pauli-string Operators"
    ]
   },
   {
@@ -177,7 +177,7 @@
    "id": "80cc3f92-4d76-470d-b1c0-65c2736c6383",
    "metadata": {},
    "source": [
-    "### Main optimization loop"
+    "### Main Optimization Loop"
    ]
   },
   {
@@ -239,7 +239,7 @@
    "id": "f39f9640-e637-4b78-9519-bde9fed3e2fb",
    "metadata": {},
    "source": [
-    "## Sparse matrix, dense matrix and mpo"
+    "## Sparse Matrix, Dense Matrix, and MPO"
    ]
   },
   {
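A hedged sketch of the sparse route: each Pauli string is encoded as integer codes (0 = I, 1 = X, 2 = Y, 3 = Z) plus a weight, and tc.quantum.PauliStringSum2COO (assumed available as in recent TensorCircuit versions) assembles the COO matrix:

    import tensorcircuit as tc

    n = 6
    ls, weight = [], []
    for i in range(n - 1):  # sigma^x_i sigma^x_{i+1} terms
        s = [0] * n
        s[i], s[i + 1] = 1, 1
        ls.append(s)
        weight.append(1.0)
    for i in range(n):  # -sigma^z_i terms
        s = [0] * n
        s[i] = 3
        ls.append(s)
        weight.append(-1.0)
    hamiltonian_sparse = tc.quantum.PauliStringSum2COO(ls, weight)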
@@ -365,7 +365,7 @@
    "id": "a23af8cd-a734-448b-b248-56ead572affb",
    "metadata": {},
    "source": [
-    "### Main optimization loop"
+    "### Main Optimization Loop"
    ]
   },
   {
@@ -398,7 +398,7 @@
    "id": "949180b3-e40d-42f8-a224-721b0bc67d0b",
    "metadata": {},
    "source": [
-    "### sparse matrix, dense matrix and mpo"
+    "### Sparse Matrix, Dense Matrix, and MPO"
    ]
   },
   {
@@ -428,7 +428,7 @@
     "hamiltonian_mpo = tn.matrixproductstates.mpo.FiniteTFI(\n",
     "    Jx, Bz, dtype=dtype\n",
     ")  # matrix product operator\n",
-    "hamiltonian_mpo = quoperator_mpo(hamiltonian_mpo)  # generate quoperator from mpo"
+    "hamiltonian_mpo = quoperator_mpo(hamiltonian_mpo)  # generate QuOperator from mpo"
    ]
   },
   {
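For context, the cell above builds the same TFIM Hamiltonian as a matrix product operator; a sketch with assumed array shapes (per-bond couplings Jx of length n-1, per-site fields Bz of length n), using quoperator_mpo as imported earlier in the tutorial:

    import numpy as np
    import tensornetwork as tn

    n = 6
    Jx = np.ones(n - 1)     # XX couplings on each bond
    Bz = -1.0 * np.ones(n)  # transverse field on each site
    dtype = np.complex128
    hamiltonian_mpo = tn.matrixproductstates.mpo.FiniteTFI(Jx, Bz, dtype=dtype)
    hamiltonian_mpo = quoperator_mpo(hamiltonian_mpo)  # wrap as a QuOperator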
@@ -541,4 +541,4 @@
  },
  "nbformat": 4,
  "nbformat_minor": 5
-}
+}
