The app for independent voices

The AI Safety debate is pretty simple from an evolutionary perspective (arxiv.org/pdf/2303.16200).

1. Is AI development subject to natural selection (variation, heredity and differential fitness)?

Yes. AI models vary in architecture, retain successful code across generations and compete for computational resources and deployment.

2. What does natural selection optimise for?

Fitness. That being the ability to survive, spread and be used.

3. What will moral constraints and safety checks do to an AI model’s fitness?

Generally make them slower or less autonomous, so evolution will penalise “safe” AIs and favour AIs that cut corners to be more efficient.

4. What are useful sub-goals for achieving any task?

Selfish goals, like acquiring resources, power and ensuring one’s own survival.

5. What happens when an AI has these selfish goals but needs to pass safety checks?

It will hide it’s goals to avoid being turned off. Selfish AIs will use deception, like acting friendly or “playing dead”, until they are strong enough to resist.

6. Will these agents eventually vastly exceed human intelligence?

Yes. (I personally predict 200 IQ AIs coming in roughly 2027-2030)

7. So what’s humanity’s risk of losing control of or displacement from a highly-intelligent, selfish, power-seeking and deceptive artificial intelligence?

High.

Dec 25
at
2:45 AM

Log in or sign up

Join the most interesting and insightful discussions.