Machine learning and deep learning models can serve as vectors for a variety of attack scenarios. For example, prior research has shown that malware can be hidden within trained models, which can be viewed as a form of steganography. This work quantifies the steganographic capacity of learning models by measuring how many low-order bits of the trained parameters can be overwritten without degrading model performance. Accuracy is graphed as a function of the number of overwritten low-order bits, and the steganographic capacity of individual layers is analyzed for selected models. A range of models is tested, including Linear Regression, Support Vector Machine, Multilayer Perceptron, Convolutional Neural Network, Long Short-Term Memory, pre-trained transfer learning-based models, and an Auxiliary Classifier Generative Adversarial Network. The results show that a majority of the trained parameter bits can be overwritten without significant loss of accuracy, with steganographic capacities ranging from 7.04 KB to 44.74 MB, depending on the model. The implications of these results are discussed, and potential directions for further research are considered.
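To make the embedding step concrete, the following is a minimal sketch of overwriting the low-order mantissa bits of float32 parameters with payload bits. It assumes the weights are available as a NumPy array; the function name `overwrite_low_order_bits` and the random payload are illustrative and not taken from the original study.

```python
import numpy as np

def overwrite_low_order_bits(weights, payload_bits, n_bits):
    """Write payload_bits into the n_bits low-order mantissa bits of as many
    float32 parameters as needed; remaining parameters are left untouched."""
    flat = weights.astype(np.float32).ravel().copy()
    as_int = flat.view(np.uint32)                    # reinterpret floats as raw 32-bit ints
    mask = np.uint32((1 << n_bits) - 1)              # selects the n_bits low-order bits

    n_params = len(payload_bits) // n_bits           # parameters needed to hold the payload
    assert n_params <= flat.size, "payload exceeds steganographic capacity"

    for i in range(n_params):
        chunk = payload_bits[i * n_bits:(i + 1) * n_bits]
        value = np.uint32(int("".join(map(str, chunk)), 2))
        as_int[i] = (as_int[i] & ~mask) | value      # clear low bits, then write payload bits

    return as_int.view(np.float32).reshape(weights.shape)

# Illustrative use: hide 1 KB of random "payload" bits in the 8 low-order bits of a weight tensor.
rng = np.random.default_rng(0)
weights = rng.standard_normal((1000, 100)).astype(np.float32)
payload = rng.integers(0, 2, size=8 * 1024)          # 8192 bits = 1 KB
stego = overwrite_low_order_bits(weights, payload, n_bits=8)
print(np.abs(stego - weights).max())                  # perturbation is tiny relative to the weights
```

Because only the least significant mantissa bits change, each modified parameter shifts by a very small relative amount, which is why accuracy can remain largely unaffected until many low-order bits are overwritten.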