The trustworthiness of machine learning models in practical applications has recently been threatened by their vulnerability to backdoor attacks. While it is commonly believed that not everyone can be an attacker, because designing a trigger generation algorithm demands significant effort and extensive experimentation, this paper presents a more severe backdoor threat: anyone can exploit a readily accessible algorithm to carry out a silent backdoor attack. By using widely available lossy image compression tools, an attacker can effortlessly embed a trigger pattern in an image without leaving any noticeable trace, so the generated triggers appear as natural compression artifacts. The attack requires no specialized knowledge and can be executed simply by clicking the "convert" or "save as" button of a lossy image compression tool. Unlike previous work that relies on designing a trigger generator, this attack only requires poisoning the training data. In empirical tests, the proposed attack consistently achieves a 100% success rate on benchmark datasets including MNIST, CIFAR-10, GTSRB, and CelebA. Remarkably, even at a small poisoning rate (approximately 10%) in the clean-label setting, the attack still achieves a nearly 100% success rate. Furthermore, triggers generated with one lossy compression algorithm transfer to other related compression algorithms, exacerbating the severity of this backdoor threat. This research sheds light on the extensive risks of backdoor attacks in practice and emphasizes the need for practitioners to investigate similar attacks and relevant mitigation methods.
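
To make the poisoning step concrete, the sketch below illustrates one plausible way such an attack could be staged: training images are simply round-tripped through a lossy JPEG codec so that compression artifacts serve as the trigger. This is a minimal illustration under assumed settings (the quality level, poisoning rate, and helper names are hypothetical), not the paper's exact pipeline.

```python
# Minimal sketch (assumptions, not the authors' exact method): poison training
# images by saving them through a lossy JPEG codec; the compression artifacts
# act as the backdoor trigger.
import io
import random
from PIL import Image


def compress_trigger(img: Image.Image, quality: int = 30) -> Image.Image:
    """Round-trip an image through JPEG compression (the 'save as' step)."""
    buf = io.BytesIO()
    img.convert("RGB").save(buf, format="JPEG", quality=quality)  # lossy save
    buf.seek(0)
    return Image.open(buf).copy()


def poison_dataset(samples, target_label, rate=0.1, clean_label=True):
    """Compress a fraction `rate` of samples.

    In the clean-label setting only images already belonging to the target
    class are compressed and labels are left untouched; in the dirty-label
    setting labels of poisoned samples are flipped to `target_label`.
    """
    poisoned = []
    for img, label in samples:
        eligible = (label == target_label) if clean_label else True
        if eligible and random.random() < rate:
            img = compress_trigger(img)
            if not clean_label:
                label = target_label
        poisoned.append((img, label))
    return poisoned
```

At inference time, under this illustrative setup, the attacker would only need to compress a test image with the same (or a related) lossy codec to activate the backdoor, while uncompressed inputs are classified normally.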