
Comparing changes

Choose two branches to see what's changed or to start a new pull request.
base repository: saif-source/deep-high-resolution-net.pytorch
base: master
head repository: leoxiaobin/deep-high-resolution-net.pytorch
compare: master
  • 5 commits
  • 7 files changed
  • 3 contributors

Commits on Jul 15, 2020

  1. Update README.md

    leoxiaobin authored Jul 15, 2020

    Verified: created on GitHub.com and signed with GitHub's verified signature (the key has since expired).
    SHA: ba50a82

Commits on Apr 12, 2021

  1. Add the HRNet-DEKR link

    welleast authored Apr 12, 2021

    SHA: 1ee551d

Commits on May 19, 2021

  1. SHA: 00616df
  2. update demo/README.md

    CrystalSixone authored and leoxiaobin committed May 19, 2021
    SHA: ce8f362

Commits on Dec 14, 2022

  1. Update README.md

    Add implementation of timm and modelscope.
    leoxiaobin authored Dec 14, 2022

    SHA: 6f69e46
Showing with 424 additions and 8 deletions.
  1. +8 −0 README.md
  2. +44 −6 demo/README.md
  3. +27 −0 demo/_init_paths.py
  4. +343 −0 demo/demo.py
  5. +2 −2 demo/inference-config.yaml
  6. BIN demo/inference_6.jpg
  7. BIN demo/inference_7.jpg
8 changes: 8 additions & 0 deletions README.md
@@ -1,5 +1,7 @@
# Deep High-Resolution Representation Learning for Human Pose Estimation (CVPR 2019)
## News
- [2021/04/12] Welcome to check out our recent work on bottom-up pose estimation (CVPR 2021) [HRNet-DEKR](https://github.com/HRNet/DEKR)!
- [2020/07/05] [A very nice blog](https://towardsdatascience.com/overview-of-human-pose-estimation-neural-networks-hrnet-higherhrnet-architectures-and-faq-1954b2f8b249) from Towards Data Science introducing HRNet and HigherHRNet for human pose estimation.
- [2020/03/13] A longer version is accepted by TPAMI: [Deep High-Resolution Representation Learning for Visual Recognition](https://arxiv.org/pdf/1908.07919.pdf). It includes more HRNet applications, and the codes are available: [semantic segmentation](https://github.com/HRNet/HRNet-Semantic-Segmentation), [object detection](https://github.com/HRNet/HRNet-Object-Detection), [facial landmark detection](https://github.com/HRNet/HRNet-Facial-Landmark-Detection), and [image classification](https://github.com/HRNet/HRNet-Image-Classification).
- [2020/02/01] We have added demo code for HRNet. Thanks [Alex Simes](https://github.com/alex9311).
- Visualization code for showing the pose estimation results. Thanks Depu!
@@ -240,6 +242,12 @@ python visualization/plot_coco.py \
### Other applications
Many other dense prediction tasks, such as segmentation, face alignment, and object detection, have benefited from HRNet. More information can be found at [High-Resolution Networks](https://github.com/HRNet).

### Other implementations
[mmpose](https://github.com/open-mmlab/mmpose)<br/>
[ModelScope (in Chinese)](https://modelscope.cn/models/damo/cv_hrnetv2w32_body-2d-keypoints_image/summary)<br/>
[timm](https://huggingface.co/docs/timm/main/en/models/hrnet)


### Citation
If you use our code or models in your research, please cite with:
```
50 changes: 44 additions & 6 deletions demo/README.md
@@ -5,12 +5,13 @@ Running inference with deep-high-resolution-net.pytorch without using Docker.
## Prep
1. Download the researchers' pretrained pose estimator from [google drive](https://drive.google.com/drive/folders/1hOTihvbyIxsm5ygDpbUuJ7O_tzv4oXjC?usp=sharing) to this directory under `models/`
2. Put the video file you'd like to infer on in this directory under `videos`
-3. build the docker container in this directory with `./build-docker.sh` (this can take time because it involves compiling opencv)
-4. update the `inference-config.yaml` file to reflect the number of GPUs you have available
+3. (OPTIONAL) build the docker container in this directory with `./build-docker.sh` (this can take time because it involves compiling opencv)
+4. update the `inference-config.yaml` file to reflect the number of GPUs you have available and which trained model you want to use.
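Step 4 edits two fields of `inference-config.yaml`. A minimal sketch of the relevant keys, assuming the repository's usual config layout (the model path here is a placeholder, not a real file in the repo):

```yaml
GPUS: (0,)          # tuple of GPU ids available on your machine
TEST:
  MODEL_FILE: models/pose_hrnet_w32_256x192.pth   # placeholder: path to the downloaded weights
```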

## Running the Model
### 1. Running on the video
```
-python inference.py --cfg inference-config.yaml \
+python demo/inference.py --cfg demo/inference-config.yaml \
--videoFile ../../multi_people.mp4 \
--writeBoxFrames \
--outputDir output \
@@ -23,9 +24,9 @@ Even with usage of a GPU (GTX 1080 in my case), person detection will take nearly **0.06 sec**, and pose estimation will
take nearly **0.07 sec**. In total, inference time per frame will be **0.13 sec**, roughly 8 fps. So if you need real-time (fps >= 20)
pose estimation, you should try another approach.
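The per-frame budget quoted above works out as follows (numbers taken from the paragraph; hardware and timings are the author's, not measured here):

```python
detection = 0.06   # sec per frame, person detector (GTX 1080, per the text)
pose = 0.07        # sec per frame, pose estimation
total = detection + pose       # end-to-end latency per frame
fps = 1.0 / total              # frames per second the pipeline can sustain
print(round(total, 2), round(fps, 1))  # prints: 0.13 7.7
```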

-## Result
+**===Result===**

-Some output image is as:
+Some output images are as:

![1 person](inference_1.jpg)
Fig: 1 person inference
@@ -34,4 +35,41 @@ Fig: 1 person inference
Fig: 3 person inference

![3 person](inference_5.jpg)
Fig: 3 person inference

### 2. Demo with more common functions
Remember to update `TEST.MODEL_FILE` in `demo/inference-config.yaml` according to your model path.

`demo.py` provides the following functions:

- use `--webcam` when the input is a real-time camera.
- use `--video [video-path]` when the input is a video.
- use `--image [image-path]` when the input is an image.
- use `--write` to save the image, camera or video result.
- use `--showFps` to show the fps (this fps includes the detection part).
- draw connections between joints.
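The flags listed above could be wired up with `argparse` roughly as follows. This is a hypothetical reconstruction for illustration, not the actual `demo.py` code, and the defaults are assumptions:

```python
import argparse

def build_parser():
    # Hypothetical reconstruction of demo.py's flag handling;
    # option names mirror the README list, defaults are assumed.
    parser = argparse.ArgumentParser(description='HRNet demo')
    parser.add_argument('--webcam', action='store_true',
                        help='use a real-time camera as input')
    parser.add_argument('--video', type=str, default=None,
                        help='path to an input video')
    parser.add_argument('--image', type=str, default=None,
                        help='path to an input image')
    parser.add_argument('--write', action='store_true',
                        help='save the annotated image/camera/video result')
    parser.add_argument('--showFps', action='store_true',
                        help='overlay fps (including the detection part)')
    return parser

# mirrors example (2) below: video input with fps overlay and saved output
args = build_parser().parse_args(['--video', 'test.mp4', '--showFps', '--write'])
print(args.video, args.showFps, args.write)  # prints: test.mp4 True True
```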

#### (1) the input is a real-time camera
```shell
python demo/demo.py --webcam --showFps --write
```

#### (2) the input is a video
```shell
python demo/demo.py --video test.mp4 --showFps --write
```
#### (3) the input is an image

```shell
python demo/demo.py --image test.jpg --showFps --write
```

**===Result===**

![show_fps](inference_6.jpg)

Fig: show fps

![multi-people](inference_7.jpg)

Fig: multi-people
27 changes: 27 additions & 0 deletions demo/_init_paths.py
@@ -0,0 +1,27 @@
# ------------------------------------------------------------------------------
# pose.pytorch
# Copyright (c) 2018-present Microsoft
# Licensed under The Apache-2.0 License [see LICENSE for details]
# Written by Bin Xiao (Bin.Xiao@microsoft.com)
# ------------------------------------------------------------------------------

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import os.path as osp
import sys


def add_path(path):
    if path not in sys.path:
        sys.path.insert(0, path)


this_dir = osp.dirname(__file__)

lib_path = osp.join(this_dir, '..', 'lib')
add_path(lib_path)

mm_path = osp.join(this_dir, '..', 'lib/poseeval/py-motmetrics')
add_path(mm_path)
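The helper above is the heart of `_init_paths.py`: it prepends a directory to `sys.path` exactly once, so demo scripts can `import _init_paths` and then import packages from `lib/` directly. A minimal standalone check of that behavior (the directory name is a made-up example, not a real repo path):

```python
import sys

def add_path(path):
    # same helper as in _init_paths.py: prepend once, skip duplicates
    if path not in sys.path:
        sys.path.insert(0, path)

add_path('/tmp/hrnet-lib-demo')   # hypothetical path for illustration
add_path('/tmp/hrnet-lib-demo')   # second call is a no-op
print(sys.path[0])                # prints: /tmp/hrnet-lib-demo
```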