Commit 68c7911

Update 07.1-AILB.md
1 parent 497a0ad commit 68c7911

File tree

1 file changed: +2 −2 lines changed

labs/07.1-AILB.md

Lines changed: 2 additions & 2 deletions

```diff
@@ -14,10 +14,10 @@ Exploiting AI - Becoming an AI Hacker
 A transfer model attack is a type of attack in which an attacker uses a prompt injection crafted against one machine learning model to exploit another model. This is possible when multiple models are trained on similar tasks or datasets. The attacker aims to manipulate a target model using prompt injection flaws discovered in a related model. These attacks often target models deployed in environments where robustness and security are critical, such as facial recognition, natural language processing, and autonomous systems. For example, an attacker could generate adversarial images using one model and test them against a different image classification model, causing misclassifications.
 </table>
 
-<deatils>
+<details>
 <summary>
 
-## Transfer model attack
+# Transfer model attack
 
 </summary>
 
```
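The transfer idea in the paragraph above can be sketched with two toy linear classifiers: the attacker crafts a perturbation using only a surrogate model it can inspect, then applies the same perturbation to a separate target model. Everything here (the models, weights, and inputs) is a hypothetical illustration, not part of the lab's material.

```python
# Minimal sketch of an adversarial transfer attack on two toy linear
# classifiers. All models and numbers are hypothetical illustrations.

def predict(w, x):
    """Classify x as 1 if the linear score w.x is positive, else 0."""
    return int(sum(wi * xi for wi, xi in zip(w, x)) > 0)

# Two classifiers "trained" on a similar task: their weights differ only
# slightly, as with models independently trained on related datasets.
w_surrogate = [1.0, 1.0]   # model the attacker can inspect
w_target = [0.9, 1.1]      # model the attacker actually wants to fool

x = [0.5, 0.5]             # clean input: class 1 on both models

# FGSM-style step computed ONLY from the surrogate: for a linear score
# w.x the gradient w.r.t. x is just w, so step against its sign.
eps = 0.6
x_adv = [xi - eps * (1 if wi > 0 else -1) for xi, wi in zip(x, w_surrogate)]

print(predict(w_surrogate, x))      # 1: clean input classified correctly
print(predict(w_target, x))         # 1
print(predict(w_surrogate, x_adv))  # 0: adversarial input fools the surrogate
print(predict(w_target, x_adv))     # 0: ...and transfers to the target
```

Because the two decision boundaries are similar, a perturbation that crosses the surrogate's boundary also crosses the target's, which is the core reason transfer attacks work across related models.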
