docker/README.md (+5 -1)
@@ -16,10 +16,14 @@ Run the docker container by the following command:
```bash
sudo docker run -it --network host --gpus all tensorcircuit

-# if one also wants mount local source code, also add args `-v "$(pwd)":/app`
+# if one also wants to mount local source code, also add args `-v "$(pwd)":/app`
+
+# using tensorcircuit/tensorcircuit to run the prebuilt docker image from dockerhub

# for old dockerfile with no runtime env setting
# sudo docker run -it --network host -e LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-11.0/targets/x86_64-linux/lib -e PYTHONPATH=/app -v "$(pwd)":/app --gpus all tensorcircuit
```

`export TF_CPP_MIN_LOG_LEVEL=3` may be necessary since jax surprisingly frequently complains about a ptxas version problem. And `export CUDA_VISIBLE_DEVICES=-1` if you want to test only on CPU.
+
+The built docker has no tensorcircuit installed but is left with a tensorcircuit source code dir. So one can `python setup.py develop` to install tensorcircuit locally (where one can also mount the tensorcircuit codebase from the host) or `pip install tensorcircuit` within the running docker.
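Putting the pieces above together, a minimal sketch of a GPU run that uses the prebuilt dockerhub image and mounts the local source tree (assuming the command is issued from the tensorcircuit repository root):

```bash
# run the prebuilt image from dockerhub with GPU access and host networking,
# mounting the current checkout at /app inside the container
sudo docker run -it --network host --gpus all -v "$(pwd)":/app tensorcircuit/tensorcircuit
```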
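The environment tweaks mentioned above can be exported inside the running container, or passed at launch time with `-e`; a rough sketch:

```bash
# inside the container: quiet the ptxas/TF warnings; optionally hide GPUs for a CPU-only test
export TF_CPP_MIN_LOG_LEVEL=3
export CUDA_VISIBLE_DEVICES=-1  # only when testing on CPU

# or set them when starting the container (same -e flags as the old-dockerfile example above)
# sudo docker run -it --network host --gpus all -e TF_CPP_MIN_LOG_LEVEL=3 tensorcircuit/tensorcircuit
```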
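Inside the container, either install from the bundled (or mounted) source tree or pull the released package; a sketch assuming the source dir sits at `/app`, as in the mount example above:

```bash
# editable install from the source dir shipped in (or mounted into) the image
cd /app            # assumption: the tensorcircuit source tree lives at /app
python setup.py develop

# alternatively, install the published package
# pip install tensorcircuit
```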