diff --git a/.github/FUNDING.yml b/.github/FUNDING.yml
new file mode 100644
index 00000000..87a36a6b
--- /dev/null
+++ b/.github/FUNDING.yml
@@ -0,0 +1,15 @@
+# These are supported funding model platforms
+
+github: [KaihuaTang]
+patreon: # Replace with a single Patreon username
+open_collective: # Replace with a single Open Collective username
+ko_fi: # Replace with a single Ko-fi username
+tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
+community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
+liberapay: # Replace with a single Liberapay username
+issuehunt: # Replace with a single IssueHunt username
+lfx_crowdfunding: # Replace with a single LFX Crowdfunding project-name e.g., cloud-foundry
+polar: # Replace with a single Polar username
+buy_me_a_coffee: tkhchipaomg
+thanks_dev: # Replace with a single thanks.dev username
+custom: ['https://kaihuatang.github.io/donate']
diff --git a/DATASET.md b/DATASET.md
index 60ab16ee..3b411328 100644
--- a/DATASET.md
+++ b/DATASET.md
@@ -7,3 +7,8 @@ Note that our codebase intends to support attribute-head too, so our ```VG-SGG.h
 1. Download the VG images [part1 (9 GB)](https://cs.stanford.edu/people/rak248/VG_100K_2/images.zip) [part2 (5 GB)](https://cs.stanford.edu/people/rak248/VG_100K_2/images2.zip). Extract these images into the directory `datasets/vg/VG_100K`. If you want to use another directory, please link it in `DATASETS['VG_stanford_filtered']['img_dir']` of `maskrcnn_benchmark/config/paths_catalog.py`.
 2. Download the [scene graphs](https://1drv.ms/u/s!AmRLLNf6bzcir8xf9oC3eNWlVMTRDw?e=63t7Ed) and extract them to `datasets/vg/VG-SGG-with-attri.h5`, or you can edit the path in `DATASETS['VG_stanford_filtered_with_attribute']['roidb_file']` of `maskrcnn_benchmark/config/paths_catalog.py`.
 
+### Backup Download Links
+Thanks to the sponsorship from [Catchip](https://github.com/Catchip), we now provide backup download links for VG-SGG-with-attri.h5 and other files.
+1. [Baidu](https://pan.baidu.com/s/1oyPQBDHXMQ5Tsl0jy5OzgA) Extraction code: 1234
+2. [Weiyun](https://share.weiyun.com/ViTWrFxG)
+
diff --git a/README.md b/README.md
index a61db5ed..6f8e6933 100644
--- a/README.md
+++ b/README.md
@@ -6,6 +6,8 @@
 
 Our paper [Unbiased Scene Graph Generation from Biased Training](https://arxiv.org/abs/2002.11949) has been accepted by CVPR 2020 (Oral).
 
+[Support my open source work](https://kaihuatang.github.io/donate.html)
+
 ## Recent Updates
 
 - [x] 2020.06.23 Add no graph constraint mean Recall@K (ng-mR@K) and no graph constraint Zero-Shot Recall@K (ng-zR@K) [\[link\]](METRICS.md#explanation-of-our-metrics)
@@ -71,12 +73,13 @@ After you download the [Faster R-CNN model](https://1drv.ms/u/s!AmRLLNf6bzcir8xe
 The above pretrained Faster R-CNN model achieves 38.52/26.35/28.14 mAP on the VG train/val/test sets, respectively.
 
 ## Alternate links
+Thanks to the sponsorship from [Catchip](https://github.com/Catchip). Since OneDrive links might be broken in mainland China, we also provide the following alternate links for all the pretrained models and dataset annotations:
 
-Since OneDrive links might be broken in mainland China, we also provide the following alternate links for all the pretrained models and dataset annotations using BaiduNetDisk:
-
-Link:[https://pan.baidu.com/s/1oyPQBDHXMQ5Tsl0jy5OzgA](https://pan.baidu.com/s/1oyPQBDHXMQ5Tsl0jy5OzgA)
+Link1 (Baidu): [https://pan.baidu.com/s/1oyPQBDHXMQ5Tsl0jy5OzgA](https://pan.baidu.com/s/1oyPQBDHXMQ5Tsl0jy5OzgA)
 Extraction code: 1234
+Link2 (Weiyun): [https://share.weiyun.com/ViTWrFxG](https://share.weiyun.com/ViTWrFxG)
+
 ## Faster R-CNN pre-training
 The following command can be used to train your own Faster R-CNN model:
 ```bash