1 post
Chatbots and LLMs can follow instructions too well. Here's how malicious prompts bypass filters and what you can do