
Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition, arXiv

PaddlePaddle training/validation code and pretrained models for ViP.

The official and third-party PyTorch implementations are here.

This implementation is developed by PPViT.

[Figure: ViP Model Overview]

Update

  • Update (2021-11-03): Code and weights are updated.
  • Update (2021-09-23): Code is released and ported weights are uploaded.

Model Zoo

| Model  | Acc@1 | Acc@5 | #Params | FLOPs | Image Size | Crop_pct | Interpolation | Link               |
|--------|-------|-------|---------|-------|------------|----------|---------------|--------------------|
| vip_s7 | 81.50 | 95.76 | 25.1M   | 7.0G  | 224        | 0.875    | bicubic       | google/baidu(mh9b) |
| vip_m7 | 82.75 | 96.05 | 55.3M   | 16.4G | 224        | 0.875    | bicubic       | google/baidu(hvm8) |
| vip_l7 | 83.18 | 96.37 | 87.8M   | 24.5G | 224        | 0.875    | bicubic       | google/baidu(tjvh) |

*The results are evaluated on the ImageNet2012 validation set.

Note: ViP weights are ported from here.

Notebooks

We provide a few notebooks in AI Studio to help you get started:

*(coming soon)*

Requirements

Data

The ImageNet2012 dataset is used with the following folder structure:

│imagenet/
├──train/
│  ├── n01440764
│  │   ├── n01440764_10026.JPEG
│  │   ├── n01440764_10027.JPEG
│  │   ├── ......
│  ├── ......
├──val/
│  ├── n01440764
│  │   ├── ILSVRC2012_val_00000293.JPEG
│  │   ├── ILSVRC2012_val_00002138.JPEG
│  │   ├── ......
│  ├── ......
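
The layout above is the standard class-per-folder ImageNet format, so it can be read directly with paddle.vision.datasets.DatasetFolder. Below is a minimal validation-preprocessing sketch using the Crop_pct (0.875) and bicubic interpolation from the model zoo table; it is an illustration only, and the actual transforms live in the repository's dataset code:

import paddle
from paddle.vision import transforms
from paddle.vision.datasets import DatasetFolder

# 224 / 0.875 = 256: resize the short side to 256, then center-crop to 224
val_transforms = transforms.Compose([
    transforms.Resize(256, interpolation='bicubic'),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    # standard ImageNet mean/std (assumed here, not taken from this repo)
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
val_set = DatasetFolder('/dataset/imagenet/val', transform=val_transforms)
val_loader = paddle.io.DataLoader(val_set, batch_size=16)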

Usage

To use the model with pretrained weights, download the .pdparams weight file and change the related file paths in the following Python scripts. The model config files are located in ./configs/.

For example, assume the downloaded weight file is stored in ./vip_s7.pdparams. To use the vip_s7 model in Python:

import paddle
from config import get_config
from vip import build_vip as build_model
# config files in ./configs/
config = get_config('./configs/vip_s7.yaml')
# build model
model = build_model(config)
# load pretrained weights
model_state_dict = paddle.load('./vip_s7.pdparams')
model.set_state_dict(model_state_dict)
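
After loading the weights, a quick sanity check is a forward pass in eval mode. This is a minimal sketch assuming the model takes an NCHW float tensor at the 224x224 input size listed in the model zoo table and returns 1000-way ImageNet logits:

model.eval()
x = paddle.randn([1, 3, 224, 224])  # dummy batch: 1 image, 3 channels, 224x224
with paddle.no_grad():
    logits = model(x)
print(logits.shape)  # expected: [1, 1000]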

Evaluation

To evaluate ViP model performance on ImageNet2012 with a single GPU, run the following script from the command line:

sh run_eval.sh

or

CUDA_VISIBLE_DEVICES=0 \
python main_single_gpu.py \
    -cfg='./configs/vip_s7.yaml' \
    -dataset='imagenet2012' \
    -batch_size=16 \
    -data_path='/dataset/imagenet' \
    -eval \
    -pretrained='./vip_s7'

To run evaluation using multiple GPUs:

sh run_eval_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python main_multi_gpu.py \
    -cfg='./configs/vip_s7.yaml' \
    -dataset='imagenet2012' \
    -batch_size=16 \
    -data_path='/dataset/imagenet' \
    -eval \
    -pretrained='./vip_s7'
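
To evaluate inside your own script instead of through main_single_gpu.py, the loop below shows the general shape, reusing the val_loader sketch from the Data section. This is an illustration, not the repository's evaluation code:

import paddle

acc = paddle.metric.Accuracy(topk=(1, 5))
model.eval()
with paddle.no_grad():
    for images, labels in val_loader:
        logits = model(images)
        # compute() marks which predictions are correct; update() accumulates
        correct = acc.compute(logits, labels)
        acc.update(correct)
print('top-1 / top-5:', acc.accumulate())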

Training

To train the ViP model on ImageNet2012 with a single GPU, run the following script from the command line:

sh run_train.sh

or

CUDA_VISIBLE_DEVICES=0 \
python main_single_gpu.py \
  -cfg='./configs/vip_s7.yaml' \
  -dataset='imagenet2012' \
  -batch_size=32 \
  -data_path='/dataset/imagenet'

To run training using multiple GPUs:

sh run_train_multi.sh

or

CUDA_VISIBLE_DEVICES=0,1,2,3 \
python main_multi_gpu.py \
    -cfg='./configs/vip_s7.yaml' \
    -dataset='imagenet2012' \
    -batch_size=16 \
    -data_path='/dataset/imagenet'
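
For reference, the core of a classification training step looks roughly like the sketch below. It is a simplified illustration of what the training scripts do; the optimizer and learning rate here are placeholders, and the real schedule comes from the yaml config. A train_loader is assumed, built like the val_loader sketch in the Data section but over imagenet/train:

import paddle

model.train()
criterion = paddle.nn.CrossEntropyLoss()
# placeholder optimizer/learning rate, not the repository's actual settings
optimizer = paddle.optimizer.AdamW(learning_rate=1e-3,
                                   parameters=model.parameters())

for images, labels in train_loader:  # assumed DataLoader over imagenet/train
    logits = model(images)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    optimizer.clear_grad()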

Attention Map Visualization

(coming soon)

Reference

@misc{hou2021vision,
    title={Vision Permutator: A Permutable MLP-Like Architecture for Visual Recognition},
    author={Qibin Hou and Zihang Jiang and Li Yuan and Ming-Ming Cheng and Shuicheng Yan and Jiashi Feng},
    year={2021},
    eprint={2106.12368},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}