Multi-task learning (MTL) has become increasingly popular in recommendation systems because it allows multiple objectives to be optimized simultaneously. However, a major challenge in MTL is negative transfer, where the performance of certain tasks deteriorates due to conflicts between tasks. Prior research has studied negative transfer by treating all samples as a whole, overlooking the heterogeneity among them. This study addresses that limitation by splitting samples according to the relative amount of positive feedback they receive across tasks. Surprisingly, existing MTL methods still exhibit negative transfer even on samples that receive comparable feedback across tasks. The authors attribute this failure to the limited capacity of shared embeddings to model diverse user preferences across tasks.
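The sample split described above can be sketched as follows. The grouping rule here, two binary feedback signals (hypothetically, click and like) with samples labeled comparable when both agree, is an illustrative assumption rather than the paper's exact protocol:

```python
# Hypothetical split of samples by relative positive feedback across two
# tasks (assumed here to be "click" and "like"); the grouping rule is an
# illustrative assumption, not the paper's exact protocol.
samples = [
    {"click": 1, "like": 1},  # comparable feedback across tasks
    {"click": 1, "like": 0},  # click-dominant
    {"click": 0, "like": 1},  # like-dominant
    {"click": 0, "like": 0},  # comparable (both negative)
]

def feedback_group(s):
    """Assign a sample to a group by its relative positive feedback."""
    if s["click"] == s["like"]:
        return "comparable"
    return "click_dominant" if s["click"] else "like_dominant"

groups = {}
for s in samples:
    groups.setdefault(feedback_group(s), []).append(s)

print(sorted(groups))  # ['click_dominant', 'comparable', 'like_dominant']
```

Negative transfer can then be measured separately within each group, which is how the study exposes degradation even on the comparable-feedback subset.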

To tackle this issue, the authors propose a new paradigm called Shared and Task-specific Embeddings (STEM), which incorporates both shared and task-specific embeddings to capture task-specific user preferences. To instantiate this paradigm, they introduce a simple model called STEM-Net, equipped with shared and task-specific embedding tables and a customized gating network with stop-gradient operations that facilitates the learning of these embeddings. Remarkably, STEM-Net outperforms the Single-Task Like model on comparable samples, achieving positive transfer.
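A minimal forward-pass sketch of the shared/task-specific embedding design may help. The table shapes, the softmax gate, and the `stop_gradient` placeholder (the identity in the forward pass; in an autograd framework it would be `detach`/`stop_gradient`, blocking gradients from one task's loss into other tasks' embeddings) are illustrative assumptions, not the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(0)
n_items, dim, n_tasks = 100, 8, 2

# One shared embedding table plus one table per task (the STEM paradigm).
shared_emb = rng.normal(size=(n_items, dim))
task_embs = [rng.normal(size=(n_items, dim)) for _ in range(n_tasks)]
# Per-task gate parameters (illustrative).
gate_ws = [rng.normal(size=dim) for _ in range(n_tasks)]

def stop_gradient(x):
    # Placeholder for an autograd framework's detach/stop_gradient:
    # identity in the forward pass; blocks gradients in the backward pass.
    return x

def tower_input(item_id, task):
    """Gated mix of shared and task-specific embeddings for one task."""
    own = task_embs[task][item_id]
    # Other tasks' embeddings are read but shielded from this task's gradients.
    others = [stop_gradient(task_embs[t][item_id])
              for t in range(n_tasks) if t != task]
    candidates = np.stack([shared_emb[item_id], own, *others])
    # Simple softmax gate over the candidate embeddings (illustrative).
    logits = candidates @ gate_ws[task]
    weights = np.exp(logits) / np.exp(logits).sum()
    return weights @ candidates  # (dim,) vector fed to this task's tower

x = tower_input(item_id=3, task=0)
print(x.shape)  # (8,)
```

The stop-gradient is the key design choice: each task-specific table is updated only by its own task's loss, while the shared table remains a common substrate, which is how the paradigm aims to avoid cross-task interference in the embeddings.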

The effectiveness of STEM-Net is demonstrated through comprehensive evaluation on three public MTL recommendation datasets, where it outperforms state-of-the-art models by a significant margin.