>That is the number of potential prompts, and thus potential attacks, against a model like GPT-5. To be clear, that's not a million attacks. A million has 6 zeroes. The above number has one million zeroes.
That's also true for social engineering. But after someone unsuccessfully tries to social engineer you three times or so, you figure out what is going on and become less vulnerable. I wonder if there's an analogous strategy for AI.
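The zero-count in the quoted post can be sanity-checked with a quick back-of-the-envelope calculation. The figures below are assumptions for illustration, not published GPT-5 specs: a tokenizer vocabulary of 100,000 tokens and a prompt of 200,000 tokens. The number of distinct prompts is then vocab_size ** prompt_length, whose size is easiest to see via logarithms:

```python
import math

# Hypothetical figures (assumptions, not actual GPT-5 specifications):
vocab_size = 100_000     # distinct tokens in the vocabulary
prompt_length = 200_000  # tokens in one maximum-length prompt

# The count of distinct prompts of exactly that length is
# vocab_size ** prompt_length. That integer is far too large to print,
# so compute its order of magnitude with log10 instead.
num_zeroes = prompt_length * math.log10(vocab_size)
print(int(num_zeroes))  # 1000000 -> a 1 followed by one million zeroes
```

Under these assumed numbers the count works out to exactly 10^1,000,000, matching the "one million zeroes" claim; with other plausible vocabulary sizes and context lengths the exponent shifts, but stays in the hundreds of thousands to millions of digits.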
Feb 6 at 10:01 AM