diff --git a/README.md b/README.md
index 2ff1c717..316df333 100644
--- a/README.md
+++ b/README.md
@@ -1,3 +1,17 @@
+# Infinite Sky Fork of Deep HRNet 2D Pose Estimation
+
+## Installation Instructions
+- Once this repo is cloned (via the `build_stack.sh` script in the `bootstrap` repository), the pre-trained model must be added manually. This may be automated in the future, but is not currently.
+- The model is located at [this link](https://drive.google.com/drive/folders/1PufGmj1jHq3HSHr23Vne7UqQ2AETOgY4). Download the `models.zip` file and extract it into this repo's root directory (e.g., the path `deep-high-resolution-net.pytorch/models/pytorch` should exist afterwards). Once the model is in place, detection can run.
+- Other dependencies for this repository are listed in `requirements.txt` and should be installed by the `build_env.sh` script, also located in the `bootstrap` repository.
+- Everything below this section is from the `README` of the original repository that was forked, kept for reference.
+
+## Inputs & Outputs
+- The main entry point currently exposed by this repository is `demo.py`, located under the `demo/` directory.
+- It takes an input video path (currently restricted to the `.avi` format, though this could be extended for flexibility).
+- The demo outputs both a video with the detected keypoints overlaid and a NumPy data file containing the joint positions at all key frames.
+- Post-processing of these data points requires the video's correct fps, which is currently entered manually in the prototype config file. This will be detected automatically in a later revision.
+
 # Deep High-Resolution Representation Learning for Human Pose Estimation (CVPR 2019)
 ## News
 - [2021/04/12] Welcome to check out our recent work on bottom-up pose estimation (CVPR 2021) [HRNet-DEKR](https://github.com/HRNet/DEKR)!
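The installation step above can be sanity-checked from the shell. In this sketch the `mkdir -p` merely stands in for the actual extraction of `models.zip`; the directory names are taken from the README text above:

```shell
# Minimal sketch: verify the model layout the README expects.
# `mkdir -p` stands in for: unzip models.zip -d "$REPO"
REPO=deep-high-resolution-net.pytorch
mkdir -p "$REPO/models/pytorch"

if [ -d "$REPO/models/pytorch" ]; then
  echo "model directory in place"
else
  echo "missing $REPO/models/pytorch" >&2
fi
```

Running a check like this before invoking the demo catches a mis-extracted archive early, instead of failing later at model-load time.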
diff --git a/demo/demo.py b/demo/demo.py
index 1633ea39..5ce13c66 100644
--- a/demo/demo.py
+++ b/demo/demo.py
@@ -192,8 +192,6 @@ def parse_args():
     # general
     parser.add_argument('--cfg', type=str, default='demo/inference-config.yaml')
     parser.add_argument('--video', type=str)
-    parser.add_argument('--webcam',action='store_true')
-    parser.add_argument('--image',type=str)
     parser.add_argument('--write',action='store_true')
     parser.add_argument('--showFps',action='store_true')
     parser.add_argument('--output_dir',type=str, default='/')
@@ -234,7 +232,11 @@ def main():
     if cfg.TEST.MODEL_FILE:
         print('=> loading model from {}'.format(cfg.TEST.MODEL_FILE))
-        pose_model.load_state_dict(torch.load(cfg.TEST.MODEL_FILE), strict=False)
+        if torch.cuda.is_available():
+            pose_model.load_state_dict(torch.load(cfg.TEST.MODEL_FILE), strict=False)
+        else:
+            pose_model.load_state_dict(torch.load(cfg.TEST.MODEL_FILE, map_location='cpu'), strict=False)
+
     else:
         print('expected model defined in config at TEST.MODEL_FILE')
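The CUDA branch added in this diff could also be expressed by always passing `map_location` to `torch.load`. A minimal sketch of that alternative, with the device choice factored into a plain helper so it can be exercised without a GPU (the usage comment reuses `pose_model` and `cfg` from the surrounding `demo.py`; the helper name is hypothetical):

```python
def load_kwargs(cuda_available: bool) -> dict:
    """Keyword arguments for torch.load, mirroring the branch in the diff:
    keep the default device mapping when CUDA is available,
    otherwise remap all tensors onto the CPU."""
    return {} if cuda_available else {"map_location": "cpu"}

# Hypothetical usage inside demo.py's main():
#   state = torch.load(cfg.TEST.MODEL_FILE, **load_kwargs(torch.cuda.is_available()))
#   pose_model.load_state_dict(state, strict=False)
```

Collapsing the two `load_state_dict` calls into one this way avoids duplicating the call and keeps the device decision in a single testable place.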