
Commit e6b6029 (1 parent: 1a6f4a8)

Refer to semi-supervised domain adaptation extension

File tree: 1 file changed, README.md (+32 −12 lines)
````diff
@@ -1,22 +1,29 @@
 ## Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation
 
-This is the official pytorch implementation of our paper
-[Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation](https://arxiv.org/pdf/2012.10782.pdf).
+This is the official pytorch implementation of our CVPR21 paper
+[Three Ways to Improve Semantic Segmentation with Self-Supervised Depth Estimation](https://arxiv.org/pdf/2012.10782.pdf)
+and its extension to semi-supervised domain adaptation
+[Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation](https://arxiv.org/pdf/2108.12545.pdf).
 
 Training deep networks for semantic segmentation requires large amounts of labeled training data, which presents a major
-challenge in practice, as labeling segmentation masks is a highly labor-intensive process. To address this issue,
-we present a framework for semi-supervised semantic segmentation, which is enhanced by self-supervised monocular depth
-estimation from unlabeled images.
+challenge in practice, as labeling segmentation masks is a highly labor-intensive process. To address this issue, we
+present a framework for semi-supervised and domain-adaptive semantic segmentation, which is enhanced by self-supervised
+monocular depth estimation (SDE) trained only on unlabeled image sequences.
 
-In particular, we propose three key contributions:
+In particular, we propose four key contributions:
 
-1. We transfer knowledge from features learned during self-supervised depth estimation to semantic segmentation.
-2. We implement a strong data augmentation by blending images and labels using the structure of the scene.
-3. We utilize the depth feature diversity as well as the level of difficulty of learning depth in a student-teacher
-framework to select the most useful samples to be annotated for semantic segmentation.
+1. We automatically select the most useful samples to be annotated for semantic segmentation based on the correlation
+of sample diversity and difficulty between SDE and semantic segmentation.
+2. We implement a strong data augmentation by mixing images and labels using the structure of the scene.
+3. We transfer knowledge from features learned during SDE to semantic segmentation by means of transfer and
+multi-task learning.
+4. We exploit additional labeled synthetic data with Cross-Domain DepthMix and Matching Geometry Sampling to align
+synthetic and real data.
 
-We validate the proposed model on the Cityscapes dataset, where all three modules demonstrate significant performance
-gains, and we achieve state-of-the-art results for semi-supervised semantic segmentation.
+We validate the proposed model on the Cityscapes dataset, where all four contributions demonstrate significant
+performance gains, and achieve state-of-the-art results for semi-supervised semantic segmentation as well as for
+semi-supervised domain adaptation. In particular, with only 1/30 of the Cityscapes labels, our method achieves 92%
+of the fully-supervised baseline performance and even 97% when exploiting additional data from GTA.
 
 Below, you can see the qualitative results of our model trained with only 100 annotated semantic segmentation samples.
 
````
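Contribution 2 above mixes two images and their labels according to the structure of the scene: pixels that are closer to the camera in one image occlude the other image. A minimal numpy sketch of this depth-guided mixing idea, simplified from the papers' DepthMix augmentation (function and variable names are illustrative, not this repository's API; the actual method also handles pseudo-labels for unlabeled images):

```python
import numpy as np

def depth_mix(img_a, seg_a, depth_a, img_b, seg_b, depth_b):
    """Blend two samples using scene structure: pixels that are closer
    to the camera in sample A (smaller depth) are pasted onto sample B.
    All arrays share the same HxW spatial shape; images are HxWx3."""
    # Mask of pixels where A is in front of B.
    mask = depth_a < depth_b                             # (H, W) bool
    mixed_img = np.where(mask[..., None], img_a, img_b)  # broadcast over RGB
    mixed_seg = np.where(mask, seg_a, seg_b)
    return mixed_img, mixed_seg, mask
```

The depth comparison keeps the mixed scene geometrically plausible, unlike mixing with a random rectangle or class mask.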

````diff
@@ -35,6 +42,14 @@ If you find this code useful in your research, please consider citing:
   year={2021}
 }
 ```
+```
+@article{hoyer2021improving,
+  title={Improving Semi-Supervised and Domain-Adaptive Semantic Segmentation with Self-Supervised Depth Estimation},
+  author={Hoyer, Lukas and Dai, Dengxin and Wang, Qin and Chen, Yuhua and Van Gool, Luc},
+  journal={arXiv preprint arXiv:2108.12545 [cs]},
+  year={2021}
+}
+```
 
 ### Setup Environment
 
````
````diff
@@ -111,6 +126,11 @@ Table 3 is generated using experiment 210 with the config sel_{pres_method}_scra
 Be aware that running all experiments takes multiple weeks on a single GPU.
 For that reason, we have commented out all but one subset size and seed as well as minor ablations.
 
+### Run Semi-Supervised Domain Adaptation Experiments
+
+In order to run our framework extension to semi-supervised domain adaptation,
+please switch to the `ssda` branch and follow its README.md instructions.
+
 ### Framework Structure
 
 ##### Experiments and Configurations
````
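Contribution 1 in the diff above selects the most useful samples for annotation based on sample diversity and difficulty derived from SDE. As an illustrative sketch of the diversity half only, greedy farthest-point selection on per-sample feature vectors picks samples that are maximally novel with respect to the already-selected set (names and the seeding choice are hypothetical; the papers additionally weight by SDE difficulty in a student-teacher setup):

```python
import numpy as np

def select_diverse_samples(features, k):
    """Greedy farthest-point selection: repeatedly pick the sample whose
    feature vector is farthest from the already-selected set.

    features: (N, D) float array of per-sample features.
    k: number of samples to select for annotation.
    Returns the list of selected sample indices."""
    selected = [0]  # seed with the first sample (arbitrary choice)
    # Distance of every sample to its nearest selected sample.
    dist = np.linalg.norm(features - features[0], axis=1)
    while len(selected) < k:
        idx = int(np.argmax(dist))  # the most "novel" remaining sample
        selected.append(idx)
        dist = np.minimum(dist, np.linalg.norm(features - features[idx], axis=1))
    return selected
```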
