|
30 | 30 | {
|
31 | 31 | "cell_type": "markdown",
|
32 | 32 | "source": [
|
33 |
| - "Let's first briefly review the classic shadows in Pauli basis. For an $n$-qubit quantum state $\\rho$, we randomly perform Pauli projection measurement on each qubit and obtain a snapshot like $\\{1,-1,-1,1\\cdots1,-1\\}$. This process is equivalent to apply a random unitary $U$ to $\\rho$ and measure in computational basis to obtain $|b\\rangle=|s_1\\cdots s_n\\rangle,\\ s_j\\in\\{0,1\\}$:\n", |
| 33 | + "Let's first briefly review classical shadows in the Pauli basis. For an $n$-qubit quantum state $\\rho$, we randomly perform a Pauli projective measurement on each qubit and obtain a snapshot like $\\{1,-1,-1,1,\\cdots,1,-1\\}$. This process is equivalent to applying a random unitary $U$ to $\\rho$ and measuring in the computational basis to obtain $|b\\rangle=|s_1\\cdots s_n\\rangle,\ s_j\\in\\{0,1\\}$:\n", |
34 | 34 | "$$\n",
|
35 | 35 | "\\begin{equation}\n",
|
36 | 36 | " \\rho\\rightarrow U\\rho U^{\\dagger}\\xrightarrow{\\text{measure}}|b\\rangle\\langle b|,\n",
|
|
54 | 54 | " \\rho=\\mathbb{E}\\left[\\mathcal{M}^{-1}(U^{\\dagger}|b\\rangle\\langle b|U)\\right].\n",
|
55 | 55 | "\\end{equation}\n",
|
56 | 56 | "$$\n",
|
57 |
| - "We call each $\\rho_i=\\mathcal{M}^{-1}(U_i^{\\dagger}|b_i\\rangle\\langle b_i|U_i)$ a shadow snapshot state and their ensemble $S(\\rho;N)=\\{\\rho_i|i=1\\cdots N\\}$ classical shadows." |
| 57 | + "We call each $\\rho_i=\\mathcal{M}^{-1}(U_i^{\\dagger}|b_i\\rangle\\langle b_i|U_i)$ a shadow snapshot state and call their ensemble $S(\\rho;N)=\\{\\rho_i|i=1,\\cdots,N\\}$ the classical shadows."
58 | 58 | ],
|
59 | 59 | "metadata": {
|
60 | 60 | "collapsed": false
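The inverse channel above factorizes qubit-wise for random Pauli measurements, so a single snapshot state is $\bigotimes_j(3U_j^{\dagger}|b_j\rangle\langle b_j|U_j-\mathbb{I})$. A rough numpy illustration (not the notebook's own code; the helper name `snapshot_state` and the basis-rotation convention are hypothetical):

```python
import numpy as np

# Basis-rotation unitaries for X, Y, Z measurements (illustrative convention)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]])
US = {0: H, 1: H @ Sdg, 2: np.eye(2)}  # 0: X, 1: Y, 2: Z

def snapshot_state(paulis, bits):
    # M^{-1}(U^dag |b><b| U) factorizes qubit-wise for Pauli measurements:
    # rho_i = prod_j (3 U_j^dag |b_j><b_j| U_j - I)
    rho = np.array([[1.0 + 0j]])
    for p, b in zip(paulis, bits):
        U = US[p]
        ket = np.eye(2)[:, [b]]  # computational basis column vector |b_j>
        local = 3 * U.conj().T @ (ket @ ket.conj().T) @ U - np.eye(2)
        rho = np.kron(rho, local)
    return rho
```

Each snapshot has unit trace but is not positive semidefinite; only the average over many snapshots reproduces $\rho$.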
|
|
70 | 70 | " \\rho&=&\\frac{1}{N}\\sum_{i=1}^{N}\\rho_i\\ .\n",
|
71 | 71 | "\\end{eqnarray}\n",
|
72 | 72 | "$$\n",
|
73 |
| - "For an observable Pauli string $O=\\bigotimes_{j=1}^{n}P_j,\\ P_j\\in\\{\\mathbb{I}, X, Y, Z\\}$, we can directly use $\\rho$ to calculate $\\langle O\\rangle=\\text{Tr}(O\\rho)$. In practice, we will divide the classical shadows into $K$ parts to calculate the expectation value independently and take the median to avoid the influence of outliers:\n", |
| 73 | + "For an observable Pauli string $O=\\bigotimes_{j=1}^{n}P_j,\\ P_j\\in\\{\\mathbb{I}, X, Y, Z\\}$, we can directly use $\\rho$ to calculate $\\langle O\\rangle=\\text{Tr}(O\\rho)$. In practice, we will divide the classical shadows into $K$ parts to calculate the expectation values independently and take the median to avoid the influence of outliers:\n", |
74 | 74 | "$$\n",
|
75 | 75 | "\\begin{equation}\n",
|
76 | 76 | " \\langle O\\rangle=\\text{median}\\{\\langle O_{(1)}\\rangle\\cdots\\langle O_{(K)}\\rangle\\},\n",
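The median-of-means step can be sketched in a few lines of numpy (illustrative only, not the notebook's implementation):

```python
import numpy as np

def median_of_means(estimates, k):
    # split the single-shot estimates into k equal parts,
    # average each part, then take the median of the k means
    parts = np.array_split(np.asarray(estimates, dtype=float), k)
    return np.median([part.mean() for part in parts])
```

A single gross outlier can drag the plain mean far from the truth, while the median over the $K$ group means stays put.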
|
|
176 | 176 | "\n",
|
177 | 177 | "epsilon, delta = 0.1, 0.01\n",
|
178 | 178 | "N, K = shadows.shadow_bound(ps, epsilon, delta)\n",
|
179 |
| - "nps = N // r # number of random pauli strings\n", |
| 179 | + "nps = N // r # number of randomly selected Pauli strings\n", |
180 | 180 | "print(f\"N: {N}\\tK: {K}\\tnumber of Pauli strings: {nps}\")"
|
181 | 181 | ],
|
182 | 182 | "metadata": {
|
|
226 | 226 | {
|
227 | 227 | "cell_type": "markdown",
|
228 | 228 | "source": [
|
229 |
| - "We randomly generate Pauli strings. Since the function after just-in-time compilation does not support random sampling, we need to generate all random states in advance, that is, variable `status`." |
| 229 | + "We randomly generate Pauli strings. Since a just-in-time (jit) compiled function does not support random sampling inside its body, we need to generate all the random choices in advance and store them in the variable `status`."
230 | 230 | ],
|
231 | 231 | "metadata": {
|
232 | 232 | "collapsed": false
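One way to pre-generate such random choices as a plain integer array (a sketch; the actual shape and encoding of `status` in the notebook may differ):

```python
import numpy as np

n, nps = 8, 100  # qubit count and number of Pauli strings (example values)
rng = np.random.default_rng(42)
# status[i, j] in {0, 1, 2} picks X, Y or Z for qubit j in the i-th string;
# sampled once here, outside any jitted function, and passed in as data
status = rng.integers(0, 3, size=(nps, n))
```

The jitted function then consumes `status` as an ordinary argument, so no random number generation happens inside the compiled code.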
|
|
328 | 328 | {
|
329 | 329 | "cell_type": "markdown",
|
330 | 330 | "source": [
|
331 |
| - "It can be seen from the running time that every time the number of Pauli strings changes, `shadow_expec` will be recompiled, but for the same number of Pauli strings but different observables, `shadow_expec` will only be compiled once. In the end, the absolute errors given by classical shadows are much smaller than the $\\epsilon=0.1$ we set, so shadow_bound gives a very loose upper bound." |
| 331 | + "It can be seen from the running times that whenever the number of Pauli strings changes, `shadow_expec` is recompiled, whereas for the same number of Pauli strings but different observables it is compiled only once. In the end, the absolute errors given by the classical shadows are much smaller than the $\\epsilon=0.1$ we set, so `shadow_bound` gives a very loose upper bound."
332 | 332 | ],
|
333 | 333 | "metadata": {
|
334 | 334 | "collapsed": false
|
|
552 | 552 | {
|
553 | 553 | "cell_type": "markdown",
|
554 | 554 | "source": [
|
555 |
| - "On the other hand, for the second order Renyi entropy, we have another method to calculate it in polynomial time by random measurement:\n", |
| 555 | + "On the other hand, for the second order Renyi entropy, we have another method to calculate it in polynomial time by random measurements:\n", |
556 | 556 | "$$\n",
|
557 | 557 | "\\begin{eqnarray}\n",
|
558 | 558 | " S_2&=&-\\log\\left(\\text{Tr}(\\rho_A^2)\\right),\\\\\n",
|
559 | 559 | " \\text{Tr}(\\rho_A^2)&=&2^k\\sum_{b,b'\\in\\{0,1\\}^k}(-2)^{-H(b,b')}\\overline{P(b)P(b')},\n",
|
560 | 560 | "\\end{eqnarray}\n",
|
561 | 561 | "$$\n",
|
562 |
| - "where $A$ is the $k$-d reduced system, $H(b,b')$ is the Hamming distance between $b$ and $b'$, $P(b)$ is the probability for measuring $\\rho_A$ and obtaining the outcomes $b$ thus we need a larger $r$ to obtain a good enough priori probability, and the overline means the average on all random selected Pauli strings. Please refer to [Brydges, et al. (2019)](https://www.science.org/doi/full/10.1126/science.aau4963) for more details. We can use `renyi_entropy_2` to implement this method, but it is not jitable because we need to build the dictionary based on the bit strings obtained by measurement. Compared with `entropy_shadow`, it cannot filter out non-negative eigenvalues, so the accuracy is slightly worse." |
| 562 | + "where $A$ is the $k$-qubit reduced system, $H(b,b')$ is the Hamming distance between $b$ and $b'$, and $P(b)$ is the probability of obtaining outcome $b$ when measuring $\\rho_A$, so we need a larger $r$ to estimate these probabilities well enough; the overline denotes the average over all randomly selected Pauli strings. Please refer to [Brydges, et al. (2019)](https://www.science.org/doi/full/10.1126/science.aau4963) for more details. We can use `renyi_entropy_2` to implement this method, but it is not jitable because we need to build a dictionary from the bit strings obtained by the measurements, which is a dynamical process. Compared with `entropy_shadow`, it cannot filter out negative eigenvalues, so the accuracy is slightly worse."
563 | 563 | ],
|
564 | 564 | "metadata": {
|
565 | 565 | "collapsed": false
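A toy numpy sketch of the Hamming-distance estimator above, fed with exact probabilities rather than finite-shot histograms (the function name `purity_from_probs` and the input layout are hypothetical):

```python
import numpy as np

def purity_from_probs(probs):
    # Tr(rho_A^2) = 2^k sum_{b,b'} (-2)^{-H(b,b')} * mean_r[P_r(b) P_r(b')]
    # probs: shape (n_runs, 2**k), measurement probabilities of rho_A
    # in each randomly selected Pauli basis
    n_runs, dim = probs.shape
    k = int(np.log2(dim))
    idx = np.arange(dim)
    # Hamming distance between all pairs of k-bit outcome strings
    ham = np.array([[bin(b ^ bp).count("1") for bp in idx] for b in idx])
    pp_mean = probs.T @ probs / n_runs  # mean over runs of P(b) P(b')
    return 2**k * np.sum((-2.0) ** (-ham) * pp_mean)
```

For a single qubit, the three Pauli eigenbases form a projective 2-design, so averaging over exactly those three bases already reproduces the purity; with sampled bases and finite shots the estimate fluctuates around it.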
|
|
608 | 608 | {
|
609 | 609 | "cell_type": "markdown",
|
610 | 610 | "source": [
|
611 |
| - "We can `global_shadow_state`, `global_shadow_state1` or `global_shadow_state2` to reconstruct the density matrix. These three functions use different methods, but the results are exactly the same and all of them are jitable. In specific, `global_shadow_state` uses `kron` and is recommended, the other two use `einsum`." |
| 611 | + "We can use `global_shadow_state`, `global_shadow_state1` or `global_shadow_state2` to reconstruct the density matrix. These three functions use different methods, but the results are exactly the same and all of them are jitable. Specifically, `global_shadow_state` uses `kron` and is recommended; the other two use `einsum`."
612 | 612 | ],
|
613 | 613 | "metadata": {
|
614 | 614 | "collapsed": false
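The equivalence of the `kron` and `einsum` routes can be sketched for two local $2\times2$ factors (illustrative numpy, not the library's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b = rng.random((2, 2)), rng.random((2, 2))

# kron route: direct Kronecker product
rho_kron = np.kron(a, b)

# einsum route: build the 4-index tensor, then group (row, row) and
# (col, col) indices and reshape into a 4x4 matrix
rho_einsum = np.einsum("ij,kl->ikjl", a, b).reshape(4, 4)
```

Both arrays satisfy `rho[2*i + k, 2*j + l] == a[i, j] * b[k, l]`, so the two routes agree elementwise.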
|
|