Multi-view learning integrates diverse representations of the same instances to improve performance. Most existing kernel-based multi-view learning methods either rely on fusion techniques that do not enforce an explicit type of collaboration across views, or on co-regularization, which limits global collaboration. We propose AW-LSSVM, an adaptive weighted LS-SVM that promotes complementary learning through iterative global coupling, making each view focus on the samples that other views found hard in previous iterations. Experiments demonstrate that AW-LSSVM outperforms existing kernel-based multi-view methods on most datasets while keeping raw features isolated, which also makes it suitable for privacy-preserving scenarios.
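The sketch below illustrates the kind of adaptive-weight loop the abstract describes: each view trains a weighted LS-SVM, and its per-sample weights are raised on samples that the other views found hard in the previous iteration. It is a minimal illustration under stated assumptions, not the paper's algorithm: the regression-style LS-SVM system, the RBF kernel, the residual-based weight update, and the score-averaging prediction are all illustrative choices.

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between rows of X1 and X2 (assumed kernel choice)."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def weighted_lssvm_fit(K, y, weights, gamma=1.0):
    """Solve a weighted LS-SVM linear system (regression-style formulation):
        [0      1^T              ] [b    ]   [0]
        [1  K + diag(1/(gamma*v))] [alpha] = [y]
    Returns (alpha, b)."""
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * weights))
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]

def aw_lssvm_sketch(views, y, gamma=1.0, sigma=1.0, n_iter=10, eps=1e-3):
    """Illustrative adaptive-weight loop (not the paper's exact update rule):
    each view up-weights samples that the OTHER views left with large residuals
    in the previous iteration. `views` is a list of (n_samples, d_v) arrays,
    y is in {-1, +1}. Raw features of one view are never shown to another view;
    only residual magnitudes are exchanged."""
    n, m = len(y), len(views)
    kernels = [rbf_kernel(X, X, sigma) for X in views]
    weights = [np.ones(n) for _ in range(m)]
    models = [None] * m
    for _ in range(n_iter):
        residuals = []
        for v in range(m):
            alpha, b = weighted_lssvm_fit(kernels[v], y, weights[v], gamma)
            models[v] = (alpha, b)
            residuals.append(np.abs(y - (kernels[v] @ alpha + b)))
        # Global coupling: weights of view v depend on the residuals of all other views.
        for v in range(m):
            others = np.mean([residuals[u] for u in range(m) if u != v], axis=0)
            w = others / (others.mean() + eps)
            weights[v] = np.clip(w, eps, None)
    return models

def predict(models, test_kernels):
    """Average per-view decision values and take the sign (assumed fusion rule)."""
    scores = [Kt @ alpha + b for (alpha, b), Kt in zip(models, test_kernels)]
    return np.sign(np.mean(scores, axis=0))
```

Because only residual magnitudes are shared between views, each view's raw features stay isolated, which is consistent with the privacy-preserving claim in the abstract.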