5 files changed: +21 -12 lines changed

File 1 of 5:
@@ -4,18 +4,18 @@ This repository contains the code and models for the following paper:
**[Peeking into the Future: Predicting Future Person Activities and Locations in Videos](https://arxiv.org/abs/1902.03748)** \
- [Junwei Liang](https://www.cs.cmu.edu/~junweil/),
+ [Junwei Liang](https://junweiliang.me/),
[Lu Jiang](http://www.lujiang.info/),
[Juan Carlos Niebles](http://www.niebles.net/),
[Alexander Hauptmann](https://www.cs.cmu.edu/~alex/),
[Li Fei-Fei](http://vision.stanford.edu/feifeili/) \
[CVPR 2019](http://cvpr2019.thecvf.com/)

- You can find more information at our [Project Page](https://next.cs.cmu.edu/).\
+ You can find more information at our [Project Page](https://precognition.team/next).\
*Please note that this is not an officially supported Google product.*

+ *[11/2022] CMU server is down. You can replace all `https://next.cs.cmu.edu` with `https://precognition.team/next/` to download necessary resources.*
- + *[02/2020] [New paper](https://next.cs.cmu.edu/multiverse/) on multi-future trajectory prediction is accepted by CVPR 2020.*
+ + *[02/2020] [New paper](https://precognition.team/next/multiverse/) on multi-future trajectory prediction is accepted by CVPR 2020.*

If you find this code useful in your research then please cite
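The 11/2022 note above tells readers to swap `https://next.cs.cmu.edu` for the Precognition lab mirror wherever it still appears. A minimal sketch of doing that in one pass over a local checkout (GNU grep/sed assumed; the searched file patterns are an assumption, adjust them to your copy):

```bash
# Sketch: rewrite remaining references to the old CMU host in docs and scripts.
# The replacement omits the trailing slash so paths like /data/... keep a single '/'.
grep -rl 'https://next.cs.cmu.edu' . --include='*.md' --include='*.sh' \
  | xargs -r sed -i 's#https://next\.cs\.cmu\.edu#https://precognition.team/next#g'
```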
File 2 of 5:
## Step 1: Prepare the data and model
We experimented on the [ActEv dataset](https://actev.nist.gov) and the
[ETH & UCY dataset](https://graphics.cs.ucy.ac.cy/research/downloads/crowd-data).
- The original ActEv annotations can be downloaded from [here](https://next.cs.cmu.edu/data/actev-v1-drop4-yaml.tgz).
+ The original ActEv annotations can be downloaded from [here](https://precognition.team/next/data/actev-v1-drop4-yaml.tgz).
*Please do obtain the data copyright and download the raw videos from their website.*
- You can download our prepared features from the [project page](https://next.cs.cmu.edu)
+ You can download our prepared features from the [project page](https://precognition.team/next/)
by running the script `bash scripts/download_prepared_data.sh`.
This will download the following data,
and will require about 31 GB of disk space:
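Since the prepared features come to roughly 31 GB, a quick pre-flight check can save a failed download; a minimal sketch (purely illustrative, not part of the repository):

```bash
# Sketch: report free space in the working directory, then run the download script.
df -h .
bash scripts/download_prepared_data.sh   # fetches ~31 GB into next-data/
```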
File 3 of 5:
## Step 1: Prepare the data and model
We experimented on the [ActEv dataset](https://actev.nist.gov) and
the [ETH & UCY dataset](https://graphics.cs.ucy.ac.cy/research/downloads/crowd-data).
- The original ActEv annotations can be downloaded from [here](https://next.cs.cmu.edu/data/actev-v1-drop4-yaml.tgz).
+ The original ActEv annotations can be downloaded from [here](https://precognition.team/next/data/actev-v1-drop4-yaml.tgz).
*Please do obtain the data copyright and download the raw videos from their website.*
- You can download our prepared features from the [project page](next.cs.cmu.edu)
+ You can download our prepared features from the [project page](https://precognition.team/next/)
by running the script `bash scripts/download_prepared_data.sh`.
This will download the following data, and will require
about 31 GB of disk space:
File 4 of 5:
mkdir -p next-data

- wget https://next.cs.cmu.edu/data/final_annos.tgz -O next-data/final_annos.tgz
- wget https://next.cs.cmu.edu/data/person_features/actev_personboxfeat.tgz -O next-data/actev_personboxfeat.tgz
- wget https://next.cs.cmu.edu/data/person_features/ethucy_personboxfeat.tgz -O next-data/ethucy_personboxfeat.tgz
+ # [02/2023] CMU server is down. Switching to our HKUST (Guangzhou) Precognition lab server
+ # wget https://next.cs.cmu.edu/data/final_annos.tgz -O next-data/final_annos.tgz
+ # wget https://next.cs.cmu.edu/data/person_features/actev_personboxfeat.tgz -O next-data/actev_personboxfeat.tgz
+ # wget https://next.cs.cmu.edu/data/person_features/ethucy_personboxfeat.tgz -O next-data/ethucy_personboxfeat.tgz
+
+ wget https://precognition.team/next/data/final_annos.tgz -O next-data/final_annos.tgz
+ wget https://precognition.team/next/data/person_features/actev_personboxfeat.tgz -O next-data/actev_personboxfeat.tgz
+ wget https://precognition.team/next/data/person_features/ethucy_personboxfeat.tgz -O next-data/ethucy_personboxfeat.tgz

# extract and delete the tar files
cd next-data
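The hunk ends just before the script's own `# extract and delete the tar files` step. For orientation, a minimal sketch of what that step amounts to for the three archives fetched above (archive names taken from the wget commands; this is a sketch, not the script's exact code):

```bash
# Sketch: unpack each downloaded archive inside next-data/ and remove the tarball.
cd next-data
for f in final_annos.tgz actev_personboxfeat.tgz ethucy_personboxfeat.tgz; do
  tar -xzf "$f" && rm "$f"
done
```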
File 5 of 5:
mkdir -p next-models

- wget https://next.cs.cmu.edu/data/pretrained_models/actev_single_model.tar -O next-models/actev_single_model.tar
- wget https://next.cs.cmu.edu/data/pretrained_models/ethucy_single_model.tar -O next-models/ethucy_single_model.tar
+ # [02/2023] CMU server is down. Switching to our HKUST (Guangzhou) Precognition lab server
+ # wget https://next.cs.cmu.edu/data/pretrained_models/actev_single_model.tar -O next-models/actev_single_model.tar
+ # wget https://next.cs.cmu.edu/data/pretrained_models/ethucy_single_model.tar -O next-models/ethucy_single_model.tar
+
+ wget https://precognition.team/next/data/pretrained_models/actev_single_model.tar -O next-models/actev_single_model.tar
+ wget https://precognition.team/next/data/pretrained_models/ethucy_single_model.tar -O next-models/ethucy_single_model.tar

# extract and delete the tar files
cd next-models
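The model script's extraction step likewise falls outside the hunk. An optional sketch for sanity-checking the two downloaded archives before extracting them (illustrative only; file names come from the wget lines above):

```bash
# Sketch: confirm both model archives are readable tar files before extraction.
for f in next-models/actev_single_model.tar next-models/ethucy_single_model.tar; do
  tar -tf "$f" > /dev/null && echo "$f OK" || echo "$f appears incomplete or corrupted"
done
```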