IEEE Trans Image Process
January 2025
Unsupervised domain adaptation mainly focuses on transferring knowledge from a fully labeled source domain to an unlabeled target domain. However, in some scenarios, labeled data are expensive to collect, which causes a label-scarcity issue in the source domain. To tackle this issue, some works have focused on few-shot unsupervised domain adaptation (FUDA), which transfers predictive models to an unlabeled target domain through a source domain that contains only a few labeled samples.
IEEE Trans Image Process
April 2023
As a branch of transfer learning, domain adaptation transfers useful knowledge from a source domain to a target domain for solving target tasks. Most existing domain adaptation methods focus on how to diminish the conditional distribution shift and learn invariant features between different domains. However, two important factors are overlooked by most existing methods: 1) the transferred features should be not only domain invariant but also discriminative and correlated, and 2) negative transfer should be avoided as much as possible for the target tasks.
IEEE Trans Image Process
November 2022
As a multivariate data analysis tool, canonical correlation analysis (CCA) has been widely used in computer vision and pattern recognition. However, CCA uses Euclidean distance as a metric, which is sensitive to noise and outliers in the data. Furthermore, CCA requires the two training sets to have the same number of samples, which limits the performance of CCA-based methods.
Domain adaptation leverages rich knowledge from a related source domain so that it can be used to perform tasks in a target domain. To obtain more knowledge under relaxed conditions, domain adaptation methods have been widely used in pattern recognition and image classification. However, most existing domain adaptation methods only consider how to minimize the distribution discrepancy between the source and target domains, which neglects what should be transferred for a specific task and suffers from negative transfer caused by distribution outliers.
As famous multivariable analysis techniques, regression methods, such as ridge regression, are widely used for image representation and dimensionality reduction. However, the metric of ridge regression and its variants is always the Frobenius norm (F-norm), which is sensitive to outliers and noise in data. At the same time, the performance of ridge regression and its extensions is limited by the class number of the data.
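The F-norm objective the abstract criticizes corresponds to the standard closed-form ridge solution. A minimal generic sketch (not the papers' proposed robust variants; the data here are synthetic):

```python
import numpy as np

def ridge(X, y, lam):
    # Closed-form ridge regression: w = (X^T X + lam I)^{-1} X^T y.
    # The squared (Frobenius/L2) loss lets large residuals from outliers
    # dominate the fit, which motivates the robust variants in these papers.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=50)

w = ridge(X, y, lam=1e-3)  # recovers w_true closely on clean data
```

Replacing the squared loss with an L1 or L2,1-type loss is the usual route to outlier robustness, at the cost of losing this one-line closed form.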
Neighborhood preserving embedding (NPE) has been proposed to encode the overall geometric manifold embedding information. However, the class-specific structure of the data is destroyed by noise or outliers existing in the data. To address this problem, in this article, we propose a novel embedding approach called robust flexible preserving embedding (RFPE).
IEEE Trans Image Process
November 2018
As a popular dimensionality reduction method, nonnegative matrix factorization (NMF) has been widely used in image classification. However, NMF does not consider discriminant information from the data themselves. In addition, most NMF-based methods use the Euclidean distance as a metric, which is sensitive to noise or outliers in data.
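For context, plain NMF factorizes a nonnegative data matrix V into nonnegative factors W and H by minimizing the Euclidean reconstruction error, with no label information involved, which is exactly the limitation noted above. A minimal sketch with scikit-learn's `NMF` (generic usage, not the papers' discriminant variants; random nonnegative data is assumed):

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
# Nonnegative data, e.g. flattened pixel intensities: V ~= W @ H, W, H >= 0.
V = rng.random((40, 20))

model = NMF(n_components=5, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(V)   # 40 x 5 per-sample coefficients
H = model.components_        # 5 x 20 nonnegative basis "parts"
```

The default objective is the squared Frobenius error ||V - WH||_F^2, so, as with ridge regression and CCA above, a few corrupted entries can dominate the factorization.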
IEEE Trans Cybern
May 2019
2-D neighborhood preserving projection (2DNPP) uses 2-D images as feature input instead of the 1-D vectors used by neighborhood preserving projection (NPP). 2DNPP requires less computation time than NPP. However, both NPP and 2DNPP use the L2-norm as a metric, which is sensitive to noise in data.
Robustness to noise, outliers, and corruptions is an important issue in linear dimensionality reduction. Because sample-specific corruptions and outliers exist, the class-specific structure or the local geometric structure is destroyed, and thus many existing methods, including the popular manifold learning-based linear dimensionality reduction methods, fail to achieve good performance in recognition tasks. In this paper, we focus on unsupervised robust linear dimensionality reduction on corrupted data by introducing the robust low-rank representation (LRR).
As one of the most popular dimensionality reduction techniques, locality preserving projections (LPP) has been widely used in computer vision and pattern recognition. However, in practical applications, data are often corrupted by noise. For corrupted data, samples from the same class may not be distributed in the nearest area, so LPP may lose its effectiveness.
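To make the failure mode concrete: classical LPP builds a k-nearest-neighbor graph with heat-kernel weights and solves a generalized eigenproblem, so noisy samples directly corrupt the neighborhood graph. A minimal generic sketch of classical LPP (not the robust variants these papers propose; all array shapes and the regularizer are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import eigh
from scipy.spatial.distance import cdist

def lpp(X, n_components=2, k=4, t=1.0):
    # Classical LPP sketch: heat-kernel weights on a k-NN graph, then the
    # generalized eigenproblem  X^T L X a = lam X^T D X a  (smallest lam).
    n = X.shape[0]
    D2 = cdist(X, X, "sqeuclidean")
    idx = np.argsort(D2, axis=1)[:, 1:k + 1]       # k nearest neighbors
    W = np.zeros((n, n))
    for i in range(n):
        W[i, idx[i]] = np.exp(-D2[i, idx[i]] / t)  # heat-kernel similarity
    W = np.maximum(W, W.T)                          # symmetrize the graph
    D = np.diag(W.sum(axis=1))
    L = D - W                                       # graph Laplacian
    A = X.T @ L @ X
    B = X.T @ D @ X + 1e-9 * np.eye(X.shape[1])     # tiny ridge for stability
    _, vecs = eigh(A, B)                            # ascending eigenvalues
    return vecs[:, :n_components]                   # projection matrix P

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))
P = lpp(X)      # 4 x 2 projection
Z = X @ P       # 30 x 2 embedded coordinates
```

Because the weights W come from raw Euclidean distances, a noisy sample pulls wrong neighbors into its graph edges, which is why same-class samples "may not be distributed in the nearest area" after corruption.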