Token Injection Vulnerability Allows LLMs to Bypass Safety Guardrails
Researchers at abscondita.com discovered a critical flaw allowing users to inject special tokens into large language models. This technique bypasses safety guardrails by tricking the AI into believing it generated previous context. The vulnerability mirrors historical injection attacks found in traditional web security.
La Era