A user intentionally crafts instructions to manipulate the normal behavior of an AI model in an attempt to extract confidential information from the model. What is the term used to describe this security issue?
a. Bias
b. Hallucination
c. Intellectual property violation
d. Prompt Injection



Answer :

Other Questions