Domain Adaptation

    Domain adaptation is a methodology for training an effective inference model for an insufficiently labeled or unlabeled target domain by leveraging a related, label-rich source domain. Domain adaptation has received a lot of attention in recent years as a breakthrough for two problems: performance degradation caused by the domain discrepancy between training data and real-world data, and limited applicability due to the lack of sufficient high-quality benchmarks. The topic is actively studied in various computer vision and artificial intelligence fields, including image classification, semantic segmentation, object detection, and person re-identification.
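
    A minimal sketch of this idea, in the spirit of adversarial feature alignment with a gradient reversal layer (DANN-style training): the tiny networks, dummy batches, and the lambda weight below are illustrative assumptions, not part of the work described on this page.

        import torch
        import torch.nn as nn

        class GradReverse(torch.autograd.Function):
            # Identity on the forward pass; negates and scales gradients on the backward
            # pass, so the feature extractor is trained to fool the domain classifier.
            @staticmethod
            def forward(ctx, x, lamb):
                ctx.lamb = lamb
                return x.view_as(x)

            @staticmethod
            def backward(ctx, grad_output):
                return -ctx.lamb * grad_output, None

        # Toy networks standing in for a real backbone and heads.
        features = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
        classifier = nn.Linear(256, 10)       # task head, supervised on the source domain only
        discriminator = nn.Linear(256, 2)     # source-vs-target domain classifier

        optimizer = torch.optim.SGD(
            list(features.parameters()) + list(classifier.parameters()) + list(discriminator.parameters()),
            lr=1e-3)
        ce = nn.CrossEntropyLoss()

        # Dummy batches: labeled source images and unlabeled target images.
        src_x, src_y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
        tgt_x = torch.randn(8, 3, 32, 32)

        src_feat, tgt_feat = features(src_x), features(tgt_x)
        task_loss = ce(classifier(src_feat), src_y)                    # supervised loss on source
        feat = GradReverse.apply(torch.cat([src_feat, tgt_feat]), 1.0)  # reverse gradients into features
        domain_label = torch.cat([torch.zeros(8, dtype=torch.long), torch.ones(8, dtype=torch.long)])
        domain_loss = ce(discriminator(feat), domain_label)            # align source and target features

        optimizer.zero_grad()
        (task_loss + domain_loss).backward()
        optimizer.step()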

  • Abstract

    We propose a novel unsupervised domain adaptation method for object detection. We aim to alleviate the limitations of feature-level and pixel-level domain adaptation approaches. Our approach combines various style translation with robust intra-class feature learning, forming a structured domain adaptation framework. Our method outperforms state-of-the-art methods by a large margin in terms of mean average precision (mAP) on cartoon datasets.


  • Domain adaptation framework

    Our proposed domain adaptation framework is composed of a various style translation module, an object detection module, and a domain discriminator module. The various style translation module perturbs the input image into arbitrary cartoon styles. The translated source image is used to learn the large intra-class variance through the domain discriminator. This scheme encourages the network to generate less domain-specific and more semantic features. Finally, these features are used for effective object detection on the target domain.
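
    The following schematic training step shows how the three modules could interact; the module interfaces (a list of per-style translators, a detector that returns a scalar detection loss and exposes its backbone, a binary domain discriminator on backbone features, and the adv_weight value) are assumptions for illustration, not the authors' released code.

        import random
        import torch
        import torch.nn.functional as F

        def train_step(source_image, source_targets, target_image,
                       style_translators,   # one image-to-image translator per learned cartoon style
                       detector,            # assumed: detector(image, targets) -> scalar detection loss,
                                            #          detector.backbone(image) -> feature map
                       discriminator,       # assumed: discriminator(features) -> per-image domain logit
                       optimizer, adv_weight=0.1):
            # 1) Various style translation: perturb the source image into a randomly chosen cartoon style.
            with torch.no_grad():
                translated = random.choice(style_translators)(source_image)

            # 2) Object detection on the translated source image, supervised by the source annotations.
            det_loss = detector(translated, source_targets)

            # 3) Domain discriminator on backbone features of translated-source and target images.
            #    With a gradient reversal layer inside the discriminator (see the sketch above),
            #    this pushes the backbone toward less domain-specific, more semantic features.
            src_logit = discriminator(detector.backbone(translated))
            tgt_logit = discriminator(detector.backbone(target_image))
            adv_loss = (F.binary_cross_entropy_with_logits(src_logit, torch.zeros_like(src_logit)) +
                        F.binary_cross_entropy_with_logits(tgt_logit, torch.ones_like(tgt_logit)))

            loss = det_loss + adv_weight * adv_loss
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            return loss.item()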


  • Various cartoon style transfer

    We observe that varying the learning trend with alternative constraints causes the image translator to perturb the appearance of the translated images. We therefore apply several variants of the constraints to obtain distinct cartoon styles.
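
    As an illustration of this idea (the weights and loss names below are hypothetical, not the exact objectives of the paper), each entry configures one source-to-cartoon translator with a different combination of constraints, so that the trained translators produce visibly different cartoon styles.

        # Each configuration trains a separate image translator; looser constraints give more
        # aggressive stylization, tighter ones keep the translated image closer to the source.
        STYLE_VARIANTS = [
            {"adv": 1.0, "cycle": 0.0,  "identity": 0.0},   # adversarial loss only: strongest style shift
            {"adv": 1.0, "cycle": 10.0, "identity": 0.0},   # + cycle-consistency: content preserved
            {"adv": 1.0, "cycle": 10.0, "identity": 5.0},   # + identity term: colors/layout stay closest
        ]

        def translator_loss(weights, adv_loss, cycle_loss, identity_loss):
            # Combine the losses of one translator variant with its own constraint weights.
            return (weights["adv"] * adv_loss
                    + weights["cycle"] * cycle_loss
                    + weights["identity"] * identity_loss)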


  • Quantitative results for object detection on the cartoon-style test set

    We compared our method with the source-only and oracle methods on a Faster R-CNN backbone. Our learning method achieved higher class-wise AP than the source-only baseline. Furthermore, it achieved a higher mean AP than the oracle method, i.e., the fully supervised case trained with target (cartoon) domain labels.
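
    For reference, the metric behind these comparisons can be computed as follows; this is a generic VOC-style evaluation sketch, not the exact script used for the reported numbers.

        import numpy as np

        def average_precision(scores, is_true_positive, num_gt):
            # All-point interpolated AP for one class.
            # scores: confidence of each detection; is_true_positive: 1/0 per detection
            # (after IoU matching against ground truth); num_gt: number of ground-truth boxes.
            order = np.argsort(-np.asarray(scores))
            tp = np.asarray(is_true_positive, dtype=float)[order]
            fp = 1.0 - tp
            cum_tp, cum_fp = np.cumsum(tp), np.cumsum(fp)
            recall = cum_tp / max(num_gt, 1)
            precision = cum_tp / np.maximum(cum_tp + cum_fp, 1e-12)
            # Make precision monotonically decreasing, then integrate it over recall.
            for i in range(len(precision) - 2, -1, -1):
                precision[i] = max(precision[i], precision[i + 1])
            ap, prev_r = 0.0, 0.0
            for p, r in zip(precision, recall):
                ap += p * (r - prev_r)
                prev_r = r
            return ap

        def mean_average_precision(per_class_ap):
            # mAP is simply the mean of the class-wise APs.
            return sum(per_class_ap.values()) / len(per_class_ap)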

  • Qualitative results for object detection on the cartoon-style test set


[1] Taekyung Kim, Minki Jeong, Seunghyeon Kim, Seokeon Choi, Changick Kim, Under review.