🔔Join APTI PLUS Prelims Mirror 2026 | All India Open Mock Test Series on 12th April, 26th April & 3rd May 2026 |Register Now!
Disclaimer: Copyright infringement not intended.
Recently, AI researchers found that AI chatbots are not so strong against indirect prompt injection attacks.
Source:
|
Practice Question Q. With reference to indirect prompt injection attacks on AI chatbots, consider the following statements:
Which of the statements is/are correct? (a) 1 and 2 only Answer: (b) ● Statement 1 is correct: Indirect prompt injection covers embedding malicious commands in various types of content. ● Statement 2 is incorrect: In indirect injection, the commands are hidden within content, not directly given by the user. ● Statement 3 is correct: LLMs are vulnerable because they are programmed to follow instructions in the text they process. |
© 2026 iasgyan. All right reserved