@inproceedings{domain_adapt,
author = {Jiang, Y.},
booktitle = {3rd International Symposium on Electrical, Electronics and Information Engineering (ISEEIE 2023)},
title = {Domain adaptation semantic segmentation for self-driving cars},
year = {2023},
volume = {2023},
pages = {126--130},
doi = {10.1049/icp.2023.1892}
}
As an extension of traditional deep learning classification networks, semantic segmentation assigns a class label to every pixel in the input image, grouping together pixels that share the same semantic meaning. It remains a key topic in computer vision and a vital component of self-driving systems. A common problem in real-world semantic segmentation is that researchers cannot assemble a dataset large and diverse enough to cover the wide variety of scenes encountered around the world, so the trained model generalizes poorly. In this article we show how we build a Fully Convolutional Network on top of the VGG16 deep learning model to achieve pixel-wise semantic prediction, and how we address the aforementioned limitation by training the model on video game scenes from GTA5. Because the game scenes and real-life scenes differ in style, we apply Adversarial Domain Adaptation to overcome the resulting domain shift and improve the model's performance on real-life scenes.
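The pipeline the abstract describes, a VGG16-style fully convolutional encoder with a pixel-wise classification head, plus an adversarial domain discriminator on the shared features, might be sketched in PyTorch as follows. Note this is a minimal illustration under assumed layer sizes: the encoder is a truncated VGG-like stack rather than the full pretrained VGG16, and the gradient-reversal formulation is one common way to implement adversarial domain adaptation, not necessarily the paper's exact method.

```python
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Gradient reversal layer: identity on the forward pass, negated
    (and scaled) gradient on the backward pass, so the encoder learns
    features that confuse the domain discriminator."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None


class FCN(nn.Module):
    """VGG-style encoder (truncated; the real VGG16 has 13 conv layers)
    with an FCN head: a 1x1 classifier plus learned upsampling back to
    the input resolution, giving one class score per pixel."""

    def __init__(self, num_classes=19):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),   # 1/2 resolution
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),   # 1/4 resolution
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),   # 1/8 resolution
        )
        self.classifier = nn.Conv2d(256, num_classes, 1)
        # (H/8 - 1)*8 - 2*4 + 16 = H, restoring the input spatial size
        self.upsample = nn.ConvTranspose2d(
            num_classes, num_classes, kernel_size=16, stride=8, padding=4)

    def forward(self, x):
        feats = self.encoder(x)
        return self.upsample(self.classifier(feats)), feats


class DomainDiscriminator(nn.Module):
    """Predicts source (game) vs. target (real) from encoder features;
    the reversed gradient pushes the encoder toward domain-invariance."""

    def __init__(self, in_ch=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_ch, 64), nn.ReLU(inplace=True),
            nn.Linear(64, 1),  # one logit: source vs. target domain
        )

    def forward(self, feats, lam=1.0):
        return self.net(GradReverse.apply(feats, lam))
```

In training, the segmentation loss (cross-entropy against GTA5 labels) would be computed on source-domain logits only, while the domain loss (binary cross-entropy on the discriminator logit) is computed on features from both domains; the reversal layer makes minimizing the domain loss adversarial for the encoder.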