Relatedly, I’ve been trying to understand how to think about AI and policy, but I keep running into a very basic problem: the creators of AI models keep discovering ‘emergent’ properties, capabilities that nobody designed in and that surface only after training.
Can you think of any analogous product or service? That is, is there anything you can think of where the creators simply and explicitly did not know what their product did?
I don’t mean someone who built something with unintended consequences; I mean someone who built something with the intention of finding out its capabilities only after it was constructed.
Policy works through analogy, so that’s where I’m starting.