Graph classification has achieved remarkable results by learning graph-level representations that support effective class assignment. However, most existing methods fail to optimize representation learning and classifier training jointly, especially on long-tailed graph data, where head classes have far more samples than tail classes. This paper proposes Collaborative Multi-expert Learning (CoMe), a novel framework for long-tailed graph-level classification. To address the imbalance, CoMe develops balanced contrastive learning for representation learning and trains individual expert classifiers with hard class mining. It further incorporates gated fusion and disentangled knowledge distillation to promote collaboration among the experts. CoMe outperforms state-of-the-art baselines on seven widely-used benchmark datasets, demonstrating its superiority.
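To make the multi-expert design concrete, the sketch below shows one plausible way to fuse per-expert predictions with a learned gate in PyTorch. This is a minimal illustration under our own assumptions, not the paper's implementation: the class name `MultiExpertClassifier`, the number of experts, and the plain linear gating network are all hypothetical, and the GNN encoder that would produce the graph-level embeddings is omitted.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiExpertClassifier(nn.Module):
    """Hypothetical multi-expert head with gated fusion (not the paper's code)."""

    def __init__(self, embed_dim: int, num_classes: int, num_experts: int = 3):
        super().__init__()
        # One linear classifier head per expert; in a full pipeline, each
        # expert could be trained with its own re-weighting or
        # hard-class-mining objective before fusion.
        self.experts = nn.ModuleList(
            [nn.Linear(embed_dim, num_classes) for _ in range(num_experts)]
        )
        # Gating network: maps a graph embedding to softmax weights over experts.
        self.gate = nn.Linear(embed_dim, num_experts)

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # z: (batch, embed_dim) graph-level embeddings from some GNN encoder.
        expert_logits = torch.stack(
            [head(z) for head in self.experts], dim=1
        )  # (batch, num_experts, num_classes)
        weights = F.softmax(self.gate(z), dim=-1)  # (batch, num_experts)
        # Gated fusion: weighted sum of per-expert logits.
        return (weights.unsqueeze(-1) * expert_logits).sum(dim=1)


# Usage: fused logits for a batch of 8 graphs with 64-dim embeddings.
model = MultiExpertClassifier(embed_dim=64, num_classes=5)
z = torch.randn(8, 64)
print(model(z).shape)  # torch.Size([8, 5])
```

The gate lets the model weight experts per input, so experts specialized toward tail classes can dominate where head-class experts are unreliable; the balanced contrastive and distillation losses described above would be added on top of this forward pass.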