Commit 9de1501

Update classical_shadows.ipynb
1 parent 12d67fb commit 9de1501

File tree

1 file changed (+9 -9 lines)


docs/source/tutorials/classical_shadows.ipynb

+9 -9
@@ -30,7 +30,7 @@
   {
    "cell_type": "markdown",
    "source": [
-    "Let's first briefly review the classic shadows in Pauli basis. For an $n$-qubit quantum state $\\rho$, we randomly perform Pauli projection measurement on each qubit and obtain a snapshot like $\\{1,-1,-1,1\\cdots1,-1\\}$. This process is equivalent to apply a random unitary $U$ to $\\rho$ and measure in computational basis to obtain $|b\\rangle=|s_1\\cdots s_n\\rangle,\\ s_j\\in\\{0,1\\}$:\n",
+    "Let's first briefly review classical shadows in the Pauli basis. For an $n$-qubit quantum state $\\rho$, we randomly perform a Pauli projective measurement on each qubit and obtain a snapshot like $\\{1,-1,-1,1,\\cdots,1,-1\\}$. This process is equivalent to applying a random unitary $U$ to $\\rho$ and measuring in the computational basis to obtain $|b\\rangle=|s_1\\cdots s_n\\rangle,\\ s_j\\in\\{0,1\\}$:\n",
    "$$\n",
    "\\begin{equation}\n",
    " \\rho\\rightarrow U\\rho U^{\\dagger}\\xrightarrow{measure}|b\\rangle\\langle b|,\n",
@@ -54,7 +54,7 @@
    " \\rho=\\mathbb{E}\\left[\\mathcal{M}^{-1}(U^{\\dagger}|b\\rangle\\langle b|U)\\right].\n",
    "\\end{equation}\n",
    "$$\n",
-    "We call each $\\rho_i=\\mathcal{M}^{-1}(U_i^{\\dagger}|b_i\\rangle\\langle b_i|U_i)$ a shadow snapshot state and their ensemble $S(\\rho;N)=\\{\\rho_i|i=1\\cdots N\\}$ classical shadows."
+    "We call each $\\rho_i=\\mathcal{M}^{-1}(U_i^{\\dagger}|b_i\\rangle\\langle b_i|U_i)$ a shadow snapshot state and their ensemble $S(\\rho;N)=\\{\\rho_i|i=1,\\cdots,N\\}$ classical shadows."
    ],
    "metadata": {
     "collapsed": false
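The snapshot construction and inverse channel in the cells above can be checked numerically. Below is a minimal single-qubit sketch (the example state and all names are illustrative, not the notebook's code): each snapshot rotates into a uniformly random Pauli basis, samples a computational-basis outcome, and applies the single-qubit inverse channel $\mathcal{M}^{-1}(X)=3X-\text{Tr}(X)\,\mathbb{I}$; the snapshot average converges to $\rho$.

```python
import numpy as np

# Pauli-basis rotations: measuring X/Y/Z equals rotating by U, then measuring Z
I2 = np.eye(2)
Hd = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # X basis
Sdg = np.array([[1, 0], [0, -1j]])                 # S^dagger, for the Y basis
UNITARIES = [Hd, Hd @ Sdg, I2]                     # X, Y, Z

rho = np.array([[0.8, 0.3], [0.3, 0.2]])           # an example 1-qubit state
rng = np.random.default_rng(42)

def snapshot(rho, rng):
    """One shadow snapshot: M^{-1}(U^dag |b><b| U) = 3 U^dag |b><b| U - I."""
    u = UNITARIES[rng.integers(3)]
    probs = np.real(np.diag(u @ rho @ u.conj().T))  # Born probabilities
    b = rng.choice(2, p=probs / probs.sum())
    proj = np.zeros((2, 2))
    proj[b, b] = 1.0
    return 3 * u.conj().T @ proj @ u - I2

# E[snapshot] = rho, so the empirical average approaches rho
est = sum(snapshot(rho, rng) for _ in range(20000)) / 20000
print(np.round(est.real, 2))
```

Each snapshot has unit trace by construction, so the estimate is automatically normalized even though individual snapshots are not valid density matrices.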
@@ -70,7 +70,7 @@
    " \\rho&=&\\frac{1}{N}\\sum_{i=1}^{N}\\rho_i\\ .\n",
    "\\end{eqnarray}\n",
    "$$\n",
-    "For an observable Pauli string $O=\\bigotimes_{j=1}^{n}P_j,\\ P_j\\in\\{\\mathbb{I}, X, Y, Z\\}$, we can directly use $\\rho$ to calculate $\\langle O\\rangle=\\text{Tr}(O\\rho)$. In practice, we will divide the classical shadows into $K$ parts to calculate the expectation value independently and take the median to avoid the influence of outliers:\n",
+    "For an observable Pauli string $O=\\bigotimes_{j=1}^{n}P_j,\\ P_j\\in\\{\\mathbb{I}, X, Y, Z\\}$, we can directly use $\\rho$ to calculate $\\langle O\\rangle=\\text{Tr}(O\\rho)$. In practice, we will divide the classical shadows into $K$ parts to calculate the expectation values independently and take the median to avoid the influence of outliers:\n",
    "$$\n",
    "\\begin{equation}\n",
    " \\langle O\\rangle=\\text{median}\\{\\langle O_{(1)}\\rangle\\cdots\\langle O_{(K)}\\rangle\\},\n",
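The median-of-$K$-means step described in this cell can be illustrated on synthetic data (a standalone sketch with made-up numbers): a handful of corrupted shots noticeably biases the plain mean, while the median over group means stays close to the true value.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 0.6
# synthetic single-shot estimates scattered around the true expectation value
samples = rng.normal(true_value, 1.0, size=6000)
samples[:30] += 50.0                      # a few large outliers / corrupted shots

K = 10                                    # number of independent groups
means = [g.mean() for g in np.array_split(samples, K)]
median_of_means = np.median(means)

print(f"plain mean: {samples.mean():.3f}  median of means: {median_of_means:.3f}")
```

The outliers dominate at most a few of the $K$ group means, and the median discards those groups entirely, which is exactly why the estimator is robust.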
@@ -176,7 +176,7 @@
    "\n",
    "epsilon, delta = 0.1, 0.01\n",
    "N, K = shadows.shadow_bound(ps, epsilon, delta)\n",
-    "nps = N // r # number of random pauli strings\n",
+    "nps = N // r # number of randomly selected Pauli strings\n",
    "print(f\"N: {N}\\tK: {K}\\tnumber of Pauli strings: {nps}\")"
    ],
    "metadata": {
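The implementation of `shadows.shadow_bound` is not shown in this diff, but the guarantee it is presumably based on (Huang, Kueng, Preskill, 2020) has the shape sketched below for weight-$k$ Pauli observables. This function is an illustrative reimplementation under that assumption, not the library's actual code:

```python
import numpy as np

def shadow_bound_sketch(weights, epsilon, delta):
    """Illustrative (N, K) for estimating M Pauli observables to additive
    accuracy epsilon with failure probability delta.  `weights` holds the
    number of non-identity factors k of each Pauli string; in the Pauli
    basis the shadow norm of a weight-k string is 3**k."""
    M = len(weights)
    K = int(np.ceil(2 * np.log(2 * M / delta)))        # groups for the median
    per_group = int(np.ceil(34 * max(3 ** k for k in weights) / epsilon ** 2))
    return per_group * K, K                            # total snapshots, groups

# e.g. one weight-2 Pauli string with epsilon=0.1, delta=0.01
N, K = shadow_bound_sketch([2], 0.1, 0.01)
print(N, K)
```

Note the exponential factor $3^k$: the bound depends on the locality of the observables, not on the total qubit number, which is what makes Pauli-basis shadows efficient for local observables.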
@@ -226,7 +226,7 @@
   {
    "cell_type": "markdown",
    "source": [
-    "We randomly generate Pauli strings. Since the function after just-in-time compilation does not support random sampling, we need to generate all random states in advance, that is, variable `status`."
+    "We randomly generate Pauli strings. Since a just-in-time (jit) compiled function does not support random sampling, we need to generate all the random numbers in advance, i.e. the variable `status`."
    ],
    "metadata": {
     "collapsed": false
@@ -328,7 +328,7 @@
   {
    "cell_type": "markdown",
    "source": [
-    "It can be seen from the running time that every time the number of Pauli strings changes, `shadow_expec` will be recompiled, but for the same number of Pauli strings but different observables, `shadow_expec` will only be compiled once. In the end, the absolute errors given by classical shadows are much smaller than the $\\epsilon=0.1$ we set, so shadow_bound gives a very loose upper bound."
+    "As the running times show, `shadow_expec` is recompiled every time the number of Pauli strings changes, but for the same number of Pauli strings with different observables it is compiled only once. In the end, the absolute errors given by classical shadows are much smaller than the $\\epsilon=0.1$ we set, so `shadow_bound` gives a very loose upper bound."
    ],
    "metadata": {
     "collapsed": false
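The recompilation behaviour described in this cell is generic to jit frameworks: tracing happens once per input shape, so changing the number of Pauli strings changes the array shape and triggers a retrace. A tiny standalone illustration with `jax` (assuming the JAX backend; this is not the notebook's code):

```python
import jax
import jax.numpy as jnp

trace_count = 0

@jax.jit
def total(x):
    global trace_count
    trace_count += 1          # Python side effect: runs only during tracing
    return jnp.sum(x)

total(jnp.ones(3))            # compiles for shape (3,)
total(jnp.ones(3) * 2.0)      # same shape: cached, no retrace
total(jnp.ones(4))            # new shape: recompiled
print(trace_count)
```

The counter increments only when the function is traced, so it ends at 2: one compilation per distinct shape, regardless of the array values.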
@@ -552,14 +552,14 @@
   {
    "cell_type": "markdown",
    "source": [
-    "On the other hand, for the second order Renyi entropy, we have another method to calculate it in polynomial time by random measurement:\n",
+    "On the other hand, for the second-order Renyi entropy, we have another method to calculate it in polynomial time by random measurements:\n",
    "$$\n",
    "\\begin{eqnarray}\n",
    " S_2&=&-\\log\\left(\\text{Tr}(\\rho_A^2)\\right),\\\\\n",
    " \\text{Tr}(\\rho_A^2)&=&2^k\\sum_{b,b'\\in\\{0,1\\}^k}(-2)^{-H(b,b')}\\overline{P(b)P(b')},\n",
    "\\end{eqnarray}\n",
    "$$\n",
-    "where $A$ is the $k$-d reduced system, $H(b,b')$ is the Hamming distance between $b$ and $b'$, $P(b)$ is the probability for measuring $\\rho_A$ and obtaining the outcomes $b$ thus we need a larger $r$ to obtain a good enough priori probability, and the overline means the average on all random selected Pauli strings. Please refer to [Brydges, et al. (2019)](https://www.science.org/doi/full/10.1126/science.aau4963) for more details. We can use `renyi_entropy_2` to implement this method, but it is not jitable because we need to build the dictionary based on the bit strings obtained by measurement. Compared with `entropy_shadow`, it cannot filter out non-negative eigenvalues, so the accuracy is slightly worse."
+    "where $A$ is the $k$-qubit reduced subsystem, $H(b,b')$ is the Hamming distance between $b$ and $b'$, and $P(b)$ is the probability of measuring $\\rho_A$ and obtaining the outcome $b$, so we need a larger $r$ to estimate these probabilities well enough; the overline denotes the average over all randomly selected Pauli strings. Please refer to [Brydges, et al. (2019)](https://www.science.org/doi/full/10.1126/science.aau4963) for more details. We can use `renyi_entropy_2` to implement this method, but it is not jitable because we need to build the dictionary from the bit strings obtained by measurements, which is a dynamical process. Compared with `entropy_shadow`, it cannot filter out spurious negative eigenvalues, so the accuracy is slightly worse."
    ],
    "metadata": {
     "collapsed": false
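The displayed estimator can be prototyped directly from the formula. A toy sketch (the function name and input format are my own, not the library's): assuming that for each randomly chosen Pauli string the outcome probabilities $P(b)$ on the $k$ measured qubits are already estimated, we accumulate the Hamming-distance kernel and average over the strings.

```python
import numpy as np
from itertools import product

def purity_from_probs(prob_list, k):
    """Tr(rho_A^2) = 2^k * avg_U sum_{b,b'} (-2)^{-H(b,b')} P(b) P(b')."""
    bits = ["".join(p) for p in product("01", repeat=k)]
    total = 0.0
    for probs in prob_list:            # one dict {bitstring: P(b)} per unitary
        for b, bp in product(bits, bits):
            h = sum(x != y for x, y in zip(b, bp))          # Hamming distance
            total += (-2.0) ** (-h) * probs.get(b, 0.0) * probs.get(bp, 0.0)
    return 2 ** k * total / len(prob_list)

# sanity check: the pure state |0><0| measured in the Z, X and Y bases
pure = [{"0": 1.0}, {"0": 0.5, "1": 0.5}, {"0": 0.5, "1": 0.5}]
print(purity_from_probs(pure, 1))      # prints 1.0
```

With exact probabilities the averaging over the three single-qubit Pauli bases reproduces the purity exactly; with sampled frequencies the result fluctuates, which is why a larger number of shots per string is needed here.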
@@ -608,7 +608,7 @@
   {
    "cell_type": "markdown",
    "source": [
-    "We can `global_shadow_state`, `global_shadow_state1` or `global_shadow_state2` to reconstruct the density matrix. These three functions use different methods, but the results are exactly the same and all of them are jitable. In specific, `global_shadow_state` uses `kron` and is recommended, the other two use `einsum`."
+    "We can use `global_shadow_state`, `global_shadow_state1` or `global_shadow_state2` to reconstruct the density matrix. These three functions use different methods, but the results are exactly the same and all of them are jitable. Specifically, `global_shadow_state` uses `kron` and is recommended; the other two use `einsum`."
    ],
    "metadata": {
     "collapsed": false
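A `kron`-based reconstruction is natural because a global snapshot factorizes over qubits, $\hat\rho=\bigotimes_j\left(3U_j^{\dagger}|b_j\rangle\langle b_j|U_j-\mathbb{I}\right)$. A self-contained numpy sketch of this idea (not `global_shadow_state`'s actual implementation):

```python
import numpy as np
from functools import reduce

I2 = np.eye(2)
Hd = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
# rotations into each measurement basis (S^dagger then H for the Y basis)
BASIS = {"X": Hd, "Y": Hd @ np.diag([1, -1j]), "Z": I2}

def snapshot_state(pauli_string, bits):
    """Global snapshot as a kron of single-qubit inverse-channel factors."""
    factors = []
    for p, b in zip(pauli_string, bits):
        u = BASIS[p]
        proj = np.zeros((2, 2))
        proj[b, b] = 1.0
        factors.append(3 * u.conj().T @ proj @ u - I2)   # 3 U^dag|b><b|U - I
    return reduce(np.kron, factors)

snap = snapshot_state("ZX", [0, 1])
print(snap.shape, np.trace(snap).real)   # (4, 4) and unit trace
```

Because the trace of a Kronecker product is the product of the traces, every snapshot has unit trace, and averaging many such snapshots yields the reconstructed density matrix.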
