Commit a97ea2b (1 parent: e52c691)

add backend/jit info and fix typo in readme

File tree: 1 file changed (+10 -7 lines changed)


examples/omeinsum_julia/README.md

@@ -7,15 +7,18 @@ We provide two solutions:
 * use subprocess to call a stand-alone julia script (**recommended**)
 * use juliacall to integrate julia script into python (seems to be more elegant, but not recommended)
 
-We highly recommend to use the first solution based on subprocess, not only due to its compatibility to julia multi-threading, but also because the experimental KaHyPar-based initialization is developed based on it.
+We highly recommend using the first solution based on subprocess, not only due to its compatibility with julia's multi-threading but also because the experimental KaHyPar-based initialization is developed based on it.
 
 ## Experiments
 
 We test contractors from OMEinsum on Google random circuits ([available online](https://datadryad.org/stash/dataset/doi:10.5061/dryad.k6t1rj8)) and compare with the cotengra contractor.
-For circuits only differ in PRNG seed number (which means with the same tensor network structure, but different tensor entries), we choose the one with the largest seed. For example, we benchmark `circuit_n12_m14_s9_e6_pEFGH.qsim`, but skip
+We choose the one with the largest seed for circuits that only differ in PRNG seed number (which means with the same tensor network structure but different tensor entries). For example, we benchmark `circuit_n12_m14_s9_e6_pEFGH.qsim` but skip
 circuits like `circuit_n12_m14_s0_e6_pEFGH.qsim`.
 We list experimental results in [benchmark_results.csv](benchmark_results.csv).
-All experiments are done with a 32GB CPU machine with 16 cores.
+All experiments are done with
+1. a 32GB CPU machine with 16 cores
+2. TensorCircuit with TensorFlow backend
+3. without using jit
 
 
 Specifically, we test the following three methods:
@@ -66,7 +69,7 @@ c.expectation_ps(z=[0], reuse=False)
 ```
 
 Both OMEimsum and cotengra are able to optimize a weighted average of `log10[FLOPs]`, `log2[SIZE]` and `log2[WRITE]`.
-However, OMEimsum and cotengra have different weight coefficient, which makes fair comparison difficult.
+However, OMEimsum and cotengra have different weight coefficients, which makes fair comparison difficult.
 Thus we force each method to purely optimized `FLOPs`, but we do collect all contraction information in the table, including
 `log10[FLOPs]`, `log2[SIZE]`, `log2[WRITE]`, `PathFindingTime`, `WallClockTime`.
 
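The weighted objective described in the hunk above can be sketched as follows. This is a minimal illustration only: `weighted_cost` and its weight coefficients are hypothetical, not the actual internals of OMEinsum or cotengra (whose differing weights are exactly why the benchmark fixes the objective to `FLOPs`):

```python
import math

# Hypothetical weights: setting w_size = w_write = 0 recovers the
# pure-FLOPs objective that both methods are forced to optimize.
def weighted_cost(flops, size, write, w_flops=1.0, w_size=0.0, w_write=0.0):
    return (w_flops * math.log10(flops)
            + w_size * math.log2(size)
            + w_write * math.log2(write))

print(weighted_cost(1e10, 2**20, 2**24))  # pure-FLOPs objective: 10.0
```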

@@ -89,19 +92,19 @@ This solution calls a stand-alone julia script [omeinsum.jl](omeinsum.jl) for te
 #### How to run
 
 Run
-`JULIA_NUM_THREADS=N python omeinsum_contractor_subprocess.py`. The env variable `JULIA_NUM_THREADS=N` will be passed to the julia script, so that you can enjoy the accelaration brought by julia multi-threading.
+`JULIA_NUM_THREADS=N python omeinsum_contractor_subprocess.py`. The env variable `JULIA_NUM_THREADS=N` will be passed to the julia script, so that you can enjoy the acceleration brought by julia multi-threading.
 
 
 #### KaHyPar initialization
 
 The choice of initial status plays an important role in simulated annealing.
 In a [discussion with the author of OMEinsum](https://github.com/TensorBFS/OMEinsumContractionOrders.jl/issues/35), we
-found that there was a way to run TreeSA with initialzier other than greedy or random. We demo how KaHyPar can be used to produce the initial status of simulated annealing. Although we haven't seen significant improvement by using KaHyPar initialization, we believe it is a interesting topic to explore.
+found that there was a way to run TreeSA with initializer other than greedy or random. We demo how KaHyPar can produce the initial status of simulated annealing. Although we have not seen significant improvement by using KaHyPar initialization, we believe it is an interesting topic to explore.
 
 ### JuliaCall solution (Not Recommended)
 
 JuliaCall seems to be a more elegant solution because all related code are integrated into a single python script.
-However, in order to use julia multi-threading in juliacall, we have to turn off julia GC at the risk of OOM. See see [this issue](https://github.com/cjdoris/PythonCall.jl/issues/219) for more details.
+However, in order to use julia multi-threading in juliacall, we have to turn off julia GC at the risk of OOM. See [this issue](https://github.com/cjdoris/PythonCall.jl/issues/219) for more details.
 
 
 #### Setup
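The subprocess approach in the hunk above relies on forwarding `JULIA_NUM_THREADS` from the parent environment into the child process. A minimal sketch of that mechanism is below; a real run would invoke the `julia` binary on `omeinsum.jl`, but a Python child stands in here so the snippet runs without a julia installation:

```python
import os
import subprocess
import sys

# Forward JULIA_NUM_THREADS to the child process. For the real workflow
# the command would be something like ["julia", "omeinsum.jl", ...].
env = dict(os.environ, JULIA_NUM_THREADS="4")
child = [sys.executable, "-c",
         "import os; print(os.environ['JULIA_NUM_THREADS'])"]
result = subprocess.run(child, env=env, capture_output=True,
                        text=True, check=True)
print(result.stdout.strip())  # prints: 4
```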
