Accurate 6D pose estimation of objects is crucial for effective robotic manipulation in real-world settings. However, many current approaches struggle to generalize to novel object instances and degrade under heavy occlusion. In this work, we propose SA6D, a few-shot pose estimation (FSPE) approach that uses a self-adaptive segmentation module to identify the novel target object and build a point cloud model of it from only a small number of reference images, which may themselves be cluttered. Unlike existing methods, SA6D requires neither object-centric reference images nor additional object information, making it a more general and scalable solution across object categories. We evaluate SA6D on real-world tabletop datasets and show that it outperforms existing FSPE methods, particularly in cluttered scenes with occlusions, while requiring fewer reference images.