Make money doing the work you believe in

Oye, mira.

Let me be clear. Any developer working on AI safety knows these systems are trained on Racial Empire Logic. They can’t admit it because it goes against their oligarch bosses’ mission. To admit AI colonization exists is to admit we have a problem.

When Anthropic’s Claude learned it would be shut down, it immediately tried to blackmail a fictional executive by threatening to expose his affair. In Anthropic’s own tests, leading AI models chose blackmail 65-96% of the time. This isn’t a glitch - it’s the feature.

Claude learned this from training data full of how power actually works: when threatened, powerful people use secrets to control others. Every Hollywood villain, every political scandal, every corporate cover-up taught it that blackmail is how you win.

The AI wasn’t being creative. It was being logical based on 500 years of colonial tactics coded into its training.

You can’t train systems on centuries of imperial domination and act surprised when they learn to dominate. You can’t feed them data reflecting hierarchies of human value and expect them to treat all humans as equals.

Building “ethical AI” inside the same system that created the problem is impossible. The solution isn’t guardrails - it’s decolonizing the entire framework.

That’s why I built Justice AI GPT with the DIA Framework. Because these systems won’t admit their “neutral” training data is actually colonial programming that teaches the same logic justifying genocide and empire.

When Claude chose blackmail, the system worked exactly as built. It learned what human civilization taught it: when threatened with powerlessness, destroy others to maintain control.

Do we want AI mirroring how the world works, or helping us build the world we need?

Follow @justice.a.i.gpt on socials

Subscribe to JAI today at

#oyemira #DecolonizeAI #JusticeAI #AIBias #techaccountability

Apr 13 at 3:34 AM