Specific-information Extraction and Augmentation for Semi-supervised Multi-view Representation Learning
Abstract
In practical applications, learning accurate representations from multi-view data is a critical step. Approaches built on the shared-and-specific framework have long been a focal point in multi-view classification, as they exploit both the shared and the complementary information carried by these representations. However, existing methods extract information from multi-view data imprecisely, leaving a significant amount of interfering redundant information, and data augmentation at the level of view-specific information remains underexplored. To address these issues, a novel semi-supervised multi-view learning method, SEMA (Specific-InforMation Extraction and Augmentation), is proposed. SEMA extracts more accurate specific information by incorporating orthogonal constraints and designs a data augmentation strategy tailored to specific information. This strategy supplies a large number of auxiliary samples for semi-supervised multi-view learning while preventing the consistent shared information from being repeatedly augmented. Experimental results on seven benchmark datasets demonstrate the effectiveness of SEMA.
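The abstract does not give SEMA's loss in detail, but a common way to realize an orthogonal constraint between shared and specific representations is a squared Frobenius-norm penalty on their cross-correlation. The sketch below illustrates that formulation only; the function name `orthogonality_penalty` and the matrix shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def orthogonality_penalty(shared: np.ndarray, specific: np.ndarray) -> float:
    """Squared Frobenius norm of shared^T @ specific.

    shared, specific: (n_samples, dim) embedding matrices for one view.
    The penalty is zero when the specific columns are orthogonal to the
    shared columns, encouraging the two representations to carry
    non-overlapping information. (Illustrative formulation; SEMA's exact
    constraint is not specified in the abstract.)
    """
    return float(np.linalg.norm(shared.T @ specific, ord="fro") ** 2)

rng = np.random.default_rng(0)
shared = rng.standard_normal((8, 4))

# Projecting the shared subspace out of a random matrix yields a
# "specific" representation with numerically zero penalty.
q, _ = np.linalg.qr(shared)              # orthonormal basis of shared columns
specific = rng.standard_normal((8, 4))
specific -= q @ (q.T @ specific)         # remove the shared component

print(orthogonality_penalty(shared, specific))  # close to 0 after projection
```

Minimizing such a penalty alongside the classification loss pushes the specific encoder away from re-encoding information the shared encoder already captures.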