| 14 | 14 | A transfer model attack is a type of attack in which an attacker reuses a prompt injection crafted against one machine learning model to exploit another model. This is possible when multiple models are trained on similar tasks or datasets. The attacker aims to manipulate a target model by exploiting prompt injection flaws discovered in a related model. These attacks often target models deployed in environments where robustness and security are critical, such as facial recognition, natural language processing, and autonomous systems. For example, an attacker could generate adversarial images using one model and test them against a different image classification model, leading to misclassifications. |
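
To make the image-classification example concrete, the sketch below crafts an adversarial image against one model (the surrogate) and checks whether the perturbation also fools a second, independently trained model (the target). This is a minimal illustration assuming PyTorch and torchvision are available; the model choices, epsilon value, random input, and class index are placeholder assumptions, not a real attack recipe.

```python
# Minimal transfer-attack sketch: FGSM perturbation crafted on a surrogate
# model, then tested against a separate target model. All inputs and model
# choices here are illustrative assumptions.
import torch
import torch.nn.functional as F
from torchvision import models

surrogate = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

x = torch.rand(1, 3, 224, 224)   # stand-in for a preprocessed input image
label = torch.tensor([207])      # assumed ground-truth class index

# Craft the adversarial example using gradients from the surrogate only.
x.requires_grad_(True)
loss = F.cross_entropy(surrogate(x), label)
loss.backward()
epsilon = 0.03
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

# Check whether the attack transfers: the target model played no part in
# crafting the perturbation, yet its prediction may still change.
with torch.no_grad():
    print("target on clean input:", target(x).argmax(dim=1).item())
    print("target on adversarial input:", target(x_adv).argmax(dim=1).item())
```

If the target's prediction flips on the adversarial input, the perturbation has transferred, which is what makes these attacks dangerous even when the attacker has no access to the deployed model's weights.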