Vol. 2 No. 4 (2025): Chinese LLM Jailbreak Framework
How does this framework bypass the safety protections of LLMs?
It uses scenario disguise to embed malicious prompts within benign-looking contexts, and instruction splitting to fragment risky content into innocuous pieces, exploiting the model's reasoning ability to reassemble and execute the hidden payload.