A New Attack Impacts ChatGPT—and No One Knows How to Stop It

“Making models more resistant to prompt injection and other adversarial ‘jailbreaking’ measures is...