# Infinite Sky Fork of Deep HRNet 2D Pose Estimation

## Installation Instructions
1. If the `build_stack.sh` script in the `bootstrap` repository has been run, this repository should already be cloned. The pre-trained model must still be added manually (this may be automated in the future, but is not currently); the following steps cover downloading it.
2. The model is located at [this link](https://drive.google.com/drive/folders/1PufGmj1jHq3HSHr23Vne7UqQ2AETOgY4). Download the `models.zip` file and extract it into the main directory of this repo (e.g. `deep-high-resolution-net.pytorch/models/pytorch` should be a valid path; see the sketch after this list for a quick check). Once the model is in place, detection can run.
3. Other dependencies for this repository are listed in `requirements.txt` and should be installed by the `build_env.sh` script, also located in the `bootstrap` repository.
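
As a quick sanity check that `models.zip` landed in the right place, here is a minimal sketch; the check itself is not part of this repo, and the expected path is taken from step 2 above:

```python
from pathlib import Path

# Expected location after extracting models.zip into the repo root
# (path taken from step 2 above).
model_dir = Path("deep-high-resolution-net.pytorch/models/pytorch")

if model_dir.is_dir():
    print(f"Model directory found: {model_dir}")
else:
    raise FileNotFoundError(
        f"{model_dir} not found; extract models.zip into the repo root first."
    )
```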
## Inputs & Outputs
- The main entry point currently used by this repository is `demo.py`, located under the `demo/` directory.

- It takes the input video path (currently restricted to the `.avi` format, though this can be extended for flexibility).

- The demo outputs both a video with the detected keypoints overlaid and a NumPy data file containing the joint positions at all key frames.

- Post-processing of these data points requires the correct fps of the video, which is currently entered manually in the prototype config file; automatic detection is planned for a later addition. A sketch of this post-processing step follows this list.
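
A minimal sketch of that post-processing step. The output filename (`output_joints.npy`) and the array layout (`(num_frames, num_joints, 2)` pixel coordinates) are assumptions for illustration; this README does not specify either:

```python
import numpy as np

# Hypothetical output file and layout: a (num_frames, num_joints, 2)
# array of (x, y) pixel coordinates, one row per key frame.
joints = np.load("output_joints.npy")

# fps is currently copied by hand from the prototype config file. Automatic
# detection could later read it from the source video, e.g. with OpenCV:
#   fps = cv2.VideoCapture("input.avi").get(cv2.CAP_PROP_FPS)
fps = 30.0

# Convert key-frame indices to timestamps in seconds.
timestamps = np.arange(joints.shape[0]) / fps

print(f"{joints.shape[0]} frames, {joints.shape[1]} joints per frame")
print(f"clip spans {timestamps[-1]:.2f} s at {fps:g} fps")
```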
# Note: Everything below this point is from the `README` of the original repository that was forked, kept for reference.
# Deep High-Resolution Representation Learning for Human Pose Estimation (CVPR 2019)

## News
- [2021/04/12] Welcome to check out our recent work on bottom-up pose estimation (CVPR 2021) [HRNet-DEKR](https://github.com/HRNet/DEKR)!