Partial Label Learning (PLL) is a form of weakly supervised learning in which each training instance is assigned a set of candidate labels, exactly one of which is assumed to be the true label. In practice, however, this assumption may not hold: mistakes in the labeling process can leave the true label outside the candidate set entirely. We refer to this scenario as Unreliable Partial Label Learning (UPLL), which compounds the inherent ambiguity of partial labels with their unreliability, and under which existing methods often perform suboptimally.
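For illustration, consider a toy example (the instances and labels here are hypothetical, not drawn from any dataset):

```python
# PLL assumption: the true label ("dog") is guaranteed to be among the candidates.
pll_candidates = {"img_001": {"dog", "wolf", "fox"}}

# UPLL: a labeling mistake has dropped the true label from the candidate set.
upll_candidates = {"img_002": {"wolf", "fox"}}  # true label "dog" is missing
```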
To address this challenge, we propose the Unreliability-Robust Representation Learning (URRL) framework, which leverages unreliability-robust contrastive learning to help the model cope with unreliable partial labels. In addition, we introduce a dual strategy that combines k-nearest-neighbor (KNN)-based correction of candidate label sets with consistency-regularization-based label disambiguation (sketched below). Together, these components improve label quality and strengthen representation learning within the URRL framework.
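To make the KNN-based correction concrete, here is a minimal sketch of the neighbor-vote idea, assuming learned feature vectors and binary candidate-label masks. The function name, the use of cosine similarity, and the parameters `k` and `tau` are illustrative assumptions, not the framework's actual specification.

```python
import numpy as np

def knn_correct_candidates(features, candidate_masks, k=10, tau=0.5):
    """Hypothetical sketch: correct candidate label sets by neighbor voting.

    features: (n, d) array of learned representations.
    candidate_masks: (n, c) binary array; entry (i, j) is 1 if label j
        is currently in instance i's candidate set.
    Returns a corrected (n, c) boolean mask.
    """
    # Cosine similarity between all pairs of representations.
    normed = features / np.linalg.norm(features, axis=1, keepdims=True)
    sims = normed @ normed.T
    np.fill_diagonal(sims, -np.inf)  # exclude each instance from its own neighbors

    # Indices of the k most similar neighbors of each instance (requires k < n).
    nn_idx = np.argpartition(-sims, k, axis=1)[:, :k]

    corrected = candidate_masks.astype(bool).copy()
    for i in range(features.shape[0]):
        # Fraction of neighbors whose candidate sets contain each label.
        votes = candidate_masks[nn_idx[i]].mean(axis=0)
        # Add labels strongly supported by the neighborhood; this can restore
        # a true label that an unreliable candidate set omitted.
        corrected[i] |= votes >= tau
    return corrected
```

In the actual framework, such correction would operate on representations that evolve during training and would be paired with consistency-regularization-based disambiguation; this sketch isolates only the neighbor-vote step.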
Extensive experiments demonstrate that the proposed method outperforms state-of-the-art PLL methods on a range of datasets with varying degrees of unreliability and ambiguity. Furthermore, we provide a theoretical analysis of our approach based on the expectation-maximization (EM) algorithm. If our proposal is accepted, we commit to making the code publicly available.
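As a rough, informal illustration of the EM perspective (the notation below is assumed for exposition and is not fixed by this section): with instances $x_i$, candidate sets $S_i$, latent true labels $y_i$, and model parameters $\theta$, one alternates

```latex
\begin{align}
% E-step (assumed form): posterior over the latent true label, which the
% candidate set S_i (softly) constrains.
w_{ij}^{(t)} &= p\bigl(y_i = j \mid x_i, S_i;\ \theta^{(t)}\bigr),\\
% M-step: maximize the expected complete-data log-likelihood under these weights.
\theta^{(t+1)} &= \arg\max_{\theta} \sum_{i=1}^{n} \sum_{j=1}^{c}
    w_{ij}^{(t)} \,\log p\bigl(y_i = j \mid x_i;\ \theta\bigr).
\end{align}
```

In this reading, label disambiguation corresponds to sharpening the weights $w_{ij}$, while candidate-set correction adjusts where the E-step is allowed to place probability mass.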