Cross-Domain Representation Learning with Reliable Trustworthiness (arXiv:2308.12315v1 [cs.LG])

by instadatahelp | Aug 26, 2023 | AI Blogs

As AI systems become widespread in daily life, they bring both benefits and social concerns. To ensure that AI systems are trustworthy, extensive research has been conducted to establish guidelines. Machine learning is a crucial component of AI systems, and within machine learning, representation learning is fundamental. Making representation learning trustworthy is therefore important for real-world applications, particularly in cross-domain scenarios. Guided by the principles of trustworthy AI, we develop a framework for trustworthy representation learning across domains built on four key concepts: robustness, privacy, fairness, and explainability. In this study, we present a comprehensive literature review of this research direction, beginning with a detailed explanation of the proposed framework. We then survey existing methods and relate them to the framework through the four concepts above. Finally, we conclude the survey with insights and a discussion of potential future research directions.