The object detection output for each frame will be in `test_json_out/` and in COCO format. The visualization frames will be in `test_vis_out/`. The ROI features will be in `test_box_feat_out/`. Remove `--visualize --vis_path test_vis_out` and `--get_box_feat --box_feat_path test_box_feat_out` if you only want the json files.
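For reference, each per-frame json follows the COCO detection convention: a list of box records with an `[x, y, width, height]` bounding box, a category id, and a score. Below is a minimal sketch of reading one such file; the file name and exact keys are assumptions based on the COCO convention, not confirmed by this repo:

```
import json

# Hypothetical file name; actual names under test_json_out/ depend on the
# video and frame numbering produced by the pipeline.
with open("test_json_out/video1_F_00000001.json") as f:
    detections = json.load(f)

# COCO-style detection records: bbox is [x, y, width, height] in pixels.
for det in detections:
    x, y, w, h = det["bbox"]
    print(det["category_id"], det["score"], (x, y, w, h))
```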
2. Run object detection & tracking on the test videos
To have the object detection output in COCO json format, add `--out_dir test_json_out`; to have the bounding box visualization, add `--visualize --vis_path test_vis_out`.
To speed it up, try `--frame_gap 8`, and the tracks between detection frames will be linearly interpolated.
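The interpolation itself is simple: box coordinates between two detected frames are blended linearly. A rough sketch of the idea (illustrative only, not the repo's actual code):

```
def interpolate_box(box_a, box_b, t):
    """Linearly interpolate two [x, y, w, h] boxes; t is in [0, 1]."""
    return [a + t * (b - a) for a, b in zip(box_a, box_b)]

# With --frame_gap 8, the box at frame 12, halfway between detections at
# frames 8 and 16, would use t = 0.5.
print(interpolate_box([100.0, 50.0, 40.0, 80.0], [120.0, 60.0, 40.0, 80.0], 0.5))
```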
The tracking results will be in `test_track_out/` and in MOTChallenge format. To visualize the tracking results:
```
$ ls $PWD/v1-val_testvideos/* > v1-val_testvideos.abs.lst
```
Now you have the tracking visualization videos for both the "Person" and "Vehicle" classes.
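Each MOTChallenge text file is a CSV with one row per box: frame, track id, left, top, width, height, confidence, plus three unused columns. A minimal sketch of grouping the output by track id, using a hypothetical file path under `test_track_out/` (the actual directory layout is not confirmed here):

```
import csv
from collections import defaultdict

# Hypothetical path; substitute an actual file from test_track_out/.
tracks = defaultdict(list)
with open("test_track_out/Person/video1.txt") as f:
    for row in csv.reader(f):
        frame, track_id = int(row[0]), int(row[1])
        left, top, width, height = map(float, row[2:6])
        tracks[track_id].append((frame, left, top, width, height))

print(len(tracks), "tracks loaded")
```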
3. You can also run inferencing with a frozen graph (see [this](SPEED.md) for instructions on how to pack the model). Change `--model_path obj_v3.pb` and add `--is_load_from_pb`. It is about 30% faster. For running on the [MEVA](http://mevadata.org/) dataset (avi videos & indoor scenes) or with [EfficientDet](https://github.com/google/automl/tree/master/efficientdet) models, see examples [here](COMMANDS.md).
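The speedup comes from the frozen graph baking the weights in as constants, so no checkpoint restore or variable initialization happens at inference time. A rough sketch of how a frozen `.pb` graph is typically loaded in TensorFlow 1.x (illustrative only; the repo handles this internally when `--is_load_from_pb` is set):

```
import tensorflow as tf

# Read the serialized GraphDef from the frozen model file.
with tf.gfile.GFile("obj_v3.pb", "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())

# Import it into a fresh graph; weights are constants, so no restore is needed.
with tf.Graph().as_default() as graph:
    tf.import_graph_def(graph_def, name="")
```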
## Models
These are the models you can use for inferencing. The original ActEV annotations can be downloaded from [here](https://next.cs.cmu.edu/data/actev-v1-drop4-yaml.tgz). I will add instructions for training and testing if requested. Click to download each model.