One paper has been accepted for publication in Expert Systems with Applications (SCIE Q1, IF = 7.5).
Ba Hung Ngo, Doanh C. Bui, Tae Jong Choi.
Semi-supervised domain adaptation (SSDA) often suffers from a bias in visual representation learning towards the source domain, caused by the significant imbalance between abundant labeled source data and limited labeled target data. Recent SSDA methods employ pseudo-labeling to address this problem, estimating labels for the unlabeled target data. However, we observe that the quantity and quality of pseudo-labels produced by previous approaches still leave room for improvement, because those approaches focus primarily on each input image’s local information without considering the structure of the training data. This paper introduces a novel method for enriching semantic information in SSDA that unifies the benefits of data augmentation, the learning behavior of Graph Convolutional Networks (GCNs), and pseudo-labeling. The proposed method, Enriching Semantic Representations (EnSR), addresses the issue of insufficient training data through data augmentation, thereby improving the model’s generalization capability and reducing the risk of overfitting. Additionally, EnSR leverages the feature aggregation properties of GCNs, enabling images to acquire more comprehensive and enriched representations. Furthermore, it employs pseudo-labeling to enlarge the labeled set by incorporating the generated labels into the original training sets. Our approach successfully achieves class-wise matching across domains, and we demonstrate that it effectively handles the bias in visual representation learning caused by the imbalance between labeled source and target data in SSDA. The experimental results show significant improvements over current state-of-the-art SSDA methods, with notable margins (e.g., +9.7% and +11.8% in the 1-shot and 3-shot settings on DomainNet, respectively).
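For readers unfamiliar with the two building blocks named in the abstract, the sketch below illustrates, in generic PyTorch, how GCN-style feature aggregation over a batch-level similarity graph and confidence-thresholded pseudo-labeling typically work. It is a minimal, hypothetical example: the class and function names (SimpleGCNLayer, make_pseudo_labels), the cosine-similarity graph, and the 0.9 confidence threshold are illustrative assumptions, not the authors’ EnSR implementation.

```python
# Hypothetical sketch (not the EnSR code): GCN-style feature aggregation over a
# mini-batch similarity graph, followed by confidence-thresholded pseudo-labeling.
import torch
import torch.nn.functional as F


class SimpleGCNLayer(torch.nn.Module):
    """One graph-convolution step: each image aggregates features from its
    neighbours via a row-normalised adjacency matrix, then projects them."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = torch.nn.Linear(in_dim, out_dim)

    def forward(self, feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (N, N) non-negative similarity graph over the batch (source + target images)
        adj = adj + torch.eye(adj.size(0), device=adj.device)  # add self-loops
        adj = adj / adj.sum(dim=1, keepdim=True)               # row-normalise
        return F.relu(self.linear(adj @ feats))                # aggregate, then project


def make_pseudo_labels(logits: torch.Tensor, threshold: float = 0.9):
    """Keep only unlabeled predictions whose softmax confidence exceeds the
    threshold; these pseudo-labeled samples can be added to the labeled set."""
    probs = F.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)
    mask = conf >= threshold
    return labels[mask], mask


# Toy usage: 8 images with 512-d backbone features, cosine-similarity graph.
feats = torch.randn(8, 512)
normed = F.normalize(feats, dim=1)
sim = (normed @ normed.T).clamp(min=0)
gcn = SimpleGCNLayer(512, 256)
enriched = gcn(feats, sim)                      # enriched (aggregated) representations
classifier = torch.nn.Linear(256, 10)
labels, mask = make_pseudo_labels(classifier(enriched))
print(labels.shape, int(mask.sum()), "confident pseudo-labels")
```

In this toy setup the similarity graph is built from the same backbone features being aggregated; the actual paper combines such aggregation with data augmentation and feeds the resulting pseudo-labels back into the training set, as described in the abstract above.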